Intentional Teaching, a show about teaching in higher education

Take It or Leave It with Michelle Beavers, Leo Lo, and Sara McClellan

Derek Bruff Episode 79

Questions or comments about this episode? Send us a text message.

I’m back with another “Take It or Leave It” panel! This one is a little different. On October 2nd, in my role as associate director of the UVA Center for Teaching Excellence, I hosted a virtual panel titled “Take It or Leave It: AI’s Role in Online Learning” featuring three fantastic UVA colleagues. The conversation went very well, and the panelists and the CTE gave me permission to share the audio from the panel here on the Intentional Teaching podcast. 

The panelists for this edition of Take It or Leave It are all at the University of Virginia. Michelle Beavers is associate professor and coordinator of the Administration and Supervision Program in UVA’s School of Education and Human Development. Leo Lo is dean of libraries and university librarian, advisor to the provost on AI literacy, and professor of education. Sara McClellan is assistant professor of professional studies and program coordinator of the Public Administration Certificate Program at UVA’s School of Continuing and Professional Studies.

You’ll hear me briefly describe five recent op-eds on teaching and learning in higher ed. For each op-ed, I’ll ask each of our panelists if they “take it,” that is, generally agree with the main thesis of the essay, or “leave it.” This is an artificial binary that I’ve found to generate rich discussion of the issues at hand. 

Episode Resources

·       Michelle Beavers’ faculty page

·       Leo Lo’s LinkedIn page

·       Sara McClellan’s website

·       Essay 1: “Are You Ready for the AI University?”, Scott Latham, April 4, 2025

·       Essay 2: “AI Risks Undermining the Heart of Higher Education,” Zahid Naz, April 21, 2025

·       Essay 3: “Urgent Need for AI Literacy,” Ray Schroeder, April 30, 2025

·       Essay 4: “Sometimes We Resist AI for Good Reasons,” Kevin Gannon, September 24, 2025

·       Essay 5: “On AI, We Reap What We Sow,” Chad Hanson, September 10, 2025

Support the show

Podcast Links:

Intentional Teaching is sponsored by UPCEA, the online and professional education association.

Subscribe to the Intentional Teaching newsletter: https://derekbruff.ck.page/subscribe

Subscribe to Intentional Teaching bonus episodes:
https://www.buzzsprout.com/2069949/supporters/new

Support Intentional Teaching on Patreon: https://www.patreon.com/intentionalteaching

Find me on LinkedIn and Bluesky.

See my website for my "Agile Learning" blog and information about having me speak at your campus or conference.

SPEAKER_02:

Welcome to Intentional Teaching, a podcast aimed at educators to help them develop foundational teaching skills and explore new ideas in teaching. I'm your host, Derek Bruff. I hope this podcast helps you be more intentional in how you teach and in how you develop as a teacher over time.

I'm back with another Take It or Leave It panel. This one is a little bit different. Y'all have told me many times that you like the Take It or Leave It episodes of this podcast. So this summer, when my University of Virginia colleagues and I were planning a series of professional development workshops for the fall aimed at faculty who teach online, I suggested that we use the Take It or Leave It format for one of the events. My colleagues liked the idea, and on October 2nd, in my role as associate director of the UVA Center for Teaching Excellence, I hosted a virtual panel titled "Take It or Leave It: AI's Role in Online Learning," featuring three fantastic UVA colleagues. The conversation went very well, and the panelists and the CTE gave me permission to share the audio from the panel here on the Intentional Teaching podcast.

The panelists for this edition of Take It or Leave It are all at the University of Virginia. Michelle Beavers is an associate professor and coordinator of the Administration and Supervision Program in UVA's School of Education and Human Development. Leo Lo is dean of libraries and university librarian, advisor to the provost on AI literacy, and professor of education. Special thanks to Leo for agreeing to do this panel just two weeks after he started his job at UVA. Our third panelist is Sara McClellan, assistant professor of professional studies and program coordinator of the Public Administration Certificate Program at UVA's School of Continuing and Professional Studies.

For those of you new to the Take It or Leave It format, you'll hear me briefly describe five recent op-eds on teaching and learning in higher ed. For each op-ed, I'll ask each of our panelists if they "take it," that is, generally agree with the main thesis of the essay, or "leave it." This is an artificial binary that I've found to generate rich discussion of the issues at hand, and that was definitely true for this panel, focused on hot takes about generative AI and online education. See the show notes for links to the essays we discussed if you want to read them in their entirety.

Our first hot take, our first essay to discuss, is titled "Are You Ready for the AI University?" This was published in the Chronicle of Higher Education earlier this year by Scott Latham, a professor of strategy at the University of Massachusetts at Lowell. In this piece, Professor Latham casts a fairly big vision for how generative AI will disrupt higher education in the coming years. There are a lot of big claims in this essay, but we're going to focus on just one of them. For a bit of context, here's a quote from the essay: "Faculty will need to train AI in how to help them build lectures, assessments, and fine-tune their classroom materials. Further training will be needed when AI first delivers a course. While students will readily accept non-human professors, any new technology will have hiccups that require human intervention to resolve." And here's the thesis statement I'll ask our panelists to respond to: "Once the training wheels are off, AI-taught courses will become the dominant paradigm." That is a pretty bold statement. Sara, I'll start with you. Do you take that or leave that?

SPEAKER_04:

I leave it in terms of the scope and speed of the argument that this author outlined, and I think for two big reasons. First, some of the predictions come up against energy shortfalls that aren't yet resolved. And it's not just energy; it's transmission, it's chips, it's cybersecurity, it's all of the infrastructure and environmental features needed to scope such a grand vision. If such a vision were to come to pass, it would be an effort all over the country, putting pressure on all of our systems, which would require addressing megaproject problems and political gridlock. So that's one. And I will say it also ignores the inconsistencies in human systems: our federalist structure, shared governance, the way that higher education works across states. I think it's a pretty sweeping vision given our current patchwork system.

SPEAKER_02:

Gotcha, gotcha. All right. Michelle, do you take this or leave this?

SPEAKER_03:

I leave it, Derek.

SPEAKER_02:

And why do you leave it?

SPEAKER_03:

Well, I'm gonna lean toward the relational aspect of teaching and say that AI does not have the disciplinary expertise. It's the ethical judgment that faculty bring, the pedagogical vision that we represent, and, the most important piece, that interaction and relational trust and accountability with our students. I could see it as a back-end partner, a potential support for async or a flipped classroom environment.

SPEAKER_02:

And you say this as someone with a fair degree of online teaching experience. And in fact, someone who uses AI a lot in the development of your learning materials.

SPEAKER_03:

I certainly do, but it's not taking over.

SPEAKER_02:

It's not taking over. Okay, good. So we've got two leave its. Leo, are you gonna complete the trio here? Do you take this or leave this?

SPEAKER_00:

Well, I'm gonna go the other way then. I'll lean take it, just to try to make it more interesting. And Sara hit the key point, the crux of why I lean take it: the timeline. Right now, in the short term, I would leave it. But in the long term, I'm actually thinking maybe. I try to compare AI to, let's say, the internet. I think right now we're in the dial-up phase. Can you imagine us doing our jobs now with dial-up technology? So it's hard to imagine what it's gonna be like in, let's say, 20 years' time, right? If it keeps improving in that way, including how it solves some environmental issues and other things, I can see AI being a really good teacher. It has unlimited patience. If it can improve its accuracy, then yeah, possibly, because honestly, I'm using AI to learn things now. Obviously low-stakes stuff; I'm not using it to teach me how to cure cancer, although that might happen in the future, but it's hard to speculate 20 years out. At the same time, what Michelle just said about all the human relations, I think that's always going to be important. So the human professor will become less about teaching the technical part, perhaps, and more about teaching other things that, let's say, a college student might need coming to college: the social aspect, emotional intelligence, even the human judgment that AI may still have difficulty understanding or teaching. But in terms of strictly technical stuff, I think AI could take on that role in the future, in the longer term, not right now.

SPEAKER_02:

Yeah. What's that line? We tend to overestimate the short-term impact of new technologies and underestimate the long-term impact. That said, I feel like I've been hearing this refrain for my entire career in higher ed: we're gonna have some magic technology that will provide wonderful personalized learning to all of our students, and they will learn all the things they need to know through this technology. And it will mean we don't have huge class sizes anymore, right? It will change everything. I feel like I've been waiting for that to actually happen for 20 years now, so I'm a little skeptical that such a thing can exist. What do you all think about that? How would you respond to Leo's claim that this is something we can imagine happening 20 years down the road? Michelle or Sara?

SPEAKER_03:

I think Leo brings up a good point that some of the practical, knowledge-based pieces could provide that one-on-one instruction for students and support their learning where they are. But the piece that doesn't recognize is where we negotiate meaning and understanding with our students, where we challenge them, their values, the way they're confronting issues. All of that still remains human. So Leo may have a point that could sway me closer to possibly take it, when I think about whether we could partner and have different roles within the classroom environment.

SPEAKER_01:

AI as an instructional assistant, but not as the sole instructor.

SPEAKER_03:

Still not letting them take my place.

SPEAKER_00:

Right, right. I absolutely agree with what Michelle just said. The roles may change, but I think humans will always prefer to have human interaction. So that's just kind of our job, defining what that is going to look like moving forward. Yeah.

SPEAKER_02:

And I think when we've seen efforts along these lines in the past to try to turn education over to the robots, whether it was AI or the MOOC craze of the 2010s, it does work for some students. There are students who have sufficient background knowledge and sufficient motivation that they'll take these tools and really learn well. But I find that it doesn't work for the 85% of students who need the human relationship, who need the structure, who need the motivation to do that kind of learning. I am skeptical. Latham writes in the piece, and I quoted this earlier, "while students will readily accept non-human professors." I don't know about that. Again, some students will, I'm sure. They'll be happy. But that seems like a stretch. I think most of our students are not wanting to have the AI tutor as their sole instructor.

SPEAKER_04:

That makes me think about the gap we're seeing when Pew and other organizations have studied public perspectives on AI, the perspectives of experts versus the average member of the public. There's a pretty big gap, and it's a larger gap than we've seen with the introduction of a lot of other technologies. So perhaps we'll tip over that gap at some point in certain areas. But right now I certainly see it. I teach mid-career professionals, and that gap is alive and well as I hear technology experts in my classes interacting with folks who are pretty wary, and then looking at that data and talking it through. Yeah, yeah.

SPEAKER_02:

All right. Okay, so we have two leave its and a take it for our first one. Let's move on to our second essay. I've tried to pick some really interesting essays for us to respond to that go in very different directions. Essay number two is titled "AI Risks Undermining the Heart of Higher Education." This is a piece by Zahid Naz, a senior lecturer in academic and professional education at Queen Mary University of London. In this piece, Naz expresses a very common concern that I hear from faculty about AI's role in learning: a worry that students using AI will avoid the hard work of learning. Right? The first essay was about AI doing the instruction, the teaching. This one is about AI doing the learning. Naz writes: "The process of learning involves much more than reading texts. It requires students to grapple with intricate concepts, compare and contrast ideas, and navigate the nuanced arguments presented in academic literature. This sharpens critical thinking, cultivates original thought, and builds the foundation for intellectual independence." And here's what I have identified as the thesis: "In short, the unchecked use of AI could ultimately undermine the very intellectual rigor that makes higher education meaningful." I'll start with you this time, Leo. Do you take that or leave that?

SPEAKER_00:

I'll take that. I agree with that. You said something earlier, Derek, about the motivation of students. And I think a lot of the time, especially in our current system for the masses, assessment and grades play a huge part in why students do certain things. Of course there are people who just want to learn, they want to improve, they want to grow, but the pressure is really on our students to produce something that will give them a good grade so they can get a good job and all of that. That makes it a lot more tempting to take shortcuts, because the grades become more important than the learning in many cases. So unless we can tap into incentivizing the students' motivation, and we have a good grasp of that, when there's a tool that can be used so easily to get a passing grade or a good grade, I can see people using it. That means we need some mechanism to help people want to learn, or maybe change the assessment system a little bit so the stakes aren't so high for them. So I would take that. I agree that there needs to be some checked use of AI, right now and in the short term.

SPEAKER_03:

Yeah, yeah.

SPEAKER_02:

How about you, Michelle?

SPEAKER_03:

I will say I agree with Leo. I'm gonna take the part about the warning about the unchecked deployment of AI, but I'm gonna leave the idea that AI can degrade learning, because I feel like as faculty, as professors, we have a responsibility to create the guardrails and the scaffolds so that AI can actually augment our students' learning rather than replace it. And if we design learning that creates productive struggle for our students, we're actually enhancing that thinking and that learning rather than allowing AI to do it. But it has to be an intentional design.

SPEAKER_02:

So before I go to Sara, what does that look like in your courses, Michelle? What are some design elements you use to try to move students to that productive struggle?

SPEAKER_03:

So I will say, first of all, that I use AI as a cognitive partner, a teammate within my classroom. Students work collectively with their human friends, and then they're allowed to invite AI into the conversation as that third critical partner. I'm also dabbling with an AI agent that students can interact with to receive feedback on the work they're doing, whether it's a process or a structure they don't understand, training that agent to not give the answer but to ask questions that lead the student to find their own answer. I think when we create that ambiguity, for example requiring students in an essay assignment to compare an AI outcome to their own outcome and give the rationale for why they agree or disagree, it helps to enhance that thinking process, as opposed to just saying, oh, you can use AI for this assignment. Scaffold it, structure it, and design it, helping students understand the why behind the practice. So looking at the process, not just the outcome.

SPEAKER_02:

Right. And so if the unchecked use is problematic, your checks involve the structures and sequences that you're asking students to work through, as well as, what I heard, some checks on the AI itself to point it in useful directions and not less helpful directions.

SPEAKER_03:

Correct. And the only other thing I'll add, because Leo mentioned emotional intelligence: I gave a workshop this morning on emotional intelligence, and we used AI to demonstrate our emotional intelligence by having an interactive conversation. So AI is capable of supporting not only the technical skills but the dispositional skills as well.

SPEAKER_02:

All right. I'm gonna go to you now, Sara. Take it or leave it?

SPEAKER_04:

I take it, and one thing I'm really aware of right now is that I teach relatively small classes, 15, 20, 25 students. I have a great teaching excellence center; I have access to lots of resources. I'm communicating with friends and colleagues around the country who work in really big systems that are pushed to the absolute max. They're sometimes teaching dozens or hundreds of students in a single class with limited TA support, and they don't necessarily have strong instructional support at the university for these transitions. At the individual level, where resources are there, I think there will be ways, as Michelle said, to make this a really strong learning tool and to create spaces strategically where students can lean into that tool and reflect on it. But I suspect that will be pretty spotty, and there will be systems where that's much, much harder.

SPEAKER_02:

I'm curious, Sara. Leo pointed to the role of motivation in all this, right? If students are just box checking, then yeah, it's easy to use some unchecked AI. You mentioned teaching mid-career professionals. Do you find that they come in wanting a certain degree of rigor? And does that make it easier to avoid the AI shortcuts?

SPEAKER_04:

You know, I would say it's a mixed bag. The majority of students want a reasonable degree of rigor, but I definitely get those students who are in the program because they've been pushed into it by a promotional opportunity, or because somebody's told them they really need to check that box professionally. Those do tend to be the students I sometimes have conversations with about AI use and the extent to which they're leaning on AI. But they're also really busy professionals. I have a lot of students who are emergency services workers, so the temptation is: I've just worked a crazy shift and I'm exhausted and I'm trying to do ten different things, and now there's another emergency, and AI looks really good at that point. That's a hard one.

SPEAKER_02:

Thank you. I think sometimes I have this vision of the first-year undergraduate taking a writing course they don't want to have to take. That's a hard, hard course to teach when AI is out there. And sometimes I imagine the professional studies programs must be all sunshine and roses, like everyone wants to be there and learn. But we're all humans, right? We don't show up with our best selves every day in class. Yeah. Sara, one other follow-up. We talked about some of the checks Michelle has in terms of the structures and tools she's using. What are some of the checks in your courses to try to orient students toward a more rigorous application of learning?

SPEAKER_04:

Well, I'm experimenting. This term is the first term where I did a brief, three-to-four-minute video right at the start of my public policy course. And I said, let's talk about AI, let's talk about where it can go badly and how we can use it in policy analysis. So I sort of stepped right out there, and students really responded to that. I also created an AI agent, a tool that students could use around policy analysis. I fed that tool, and I explained to them how I did it, with readings and concepts and frameworks from the course. Then I asked them to please not use that tool until a very specific point in the course, because I didn't want the tool to take the learning away from them. I said, you know, policy practice is a way of thinking about and approaching problems, and if we let AI learn how to approach this problem for us, we're paying for a lot and we're showing up and we're not learning. But I told them it's a fantastic partner for thinking through and challenging the assumptions you hold and the research you're using. So I tried to give them strategic places where I said, please, please test the tool here. Tell us how it's working. So far I think it's going much better than it did last term, when I was a little more hands-off about it.

SPEAKER_02:

Yeah. Because you're saying we're gonna use AI. We're not gonna pretend it doesn't exist, but we're gonna be thoughtful and intentional about how we use it, not to replace our own work, but maybe, as Michelle was saying, as a conversation partner, as a tool that can provoke us with interesting ideas, perhaps.

SPEAKER_04:

Yes, and I linked to articles about AI and motivation. Several of them responded; it was optional reading, but they clearly did it.

SPEAKER_02:

Yeah, yeah. Leo, anything to add after hearing Michelle and Sara talk about their approaches?

SPEAKER_00:

No, I absolutely agree with those approaches. Ultimately it is about setting up, like you said, some guardrails, some guidance on how to actually use the tool to learn and benefit from it. Some people will never do that; they will find ways around it. Like you said, Derek, we're all humans, and sometimes we don't show up with our best selves. But at the same time, I believe there are many people who really want to learn in this kind of environment but don't know how, or don't know the best way. So I love that Sara and Michelle are giving them ways to do that, to optimize their experience using the technology.

SPEAKER_02:

And one thing I've observed is that as these new technologies come along, it takes higher ed a little time to figure out the better practices: what are they good for, what are they bad for, what structures could we put into place? There's some collective figuring out that happens.

Well, and that's a great segue into our next essay. And Leo, I'm coming to you on this one first, because this is your wheelhouse. This one is about AI literacies. This is an essay by Ray Schroeder called "Urgent Need for AI Literacy." Ray Schroeder is a senior fellow at UPCEA, the online and professional education association. In this piece, he cites a number of workforce studies pointing to the growing role of AI and the need to prepare students to have AI literacies. He says: "The rapid advent of AI capabilities coupled with the developing economic pressures worldwide have led to a surge in employers seeking to reduce operating expenses through widespread use of generative and agentic AI to augment and in some cases replace humans in their workforce." And there's kind of an implied thesis here, so I'm going to read the statement and then frame it up a little bit. He says we are failing to fully prepare those students to enter the workforce, where, as the World Economic Forum says, two-thirds of business leaders surveyed said they wouldn't hire a candidate without AI skills, and nearly three-quarters said they would rather hire a less experienced candidate with AI skills than a more experienced candidate without them. I'm going to focus on the first part of that thesis statement, which I'll drop in the chat: we are failing to fully prepare those students to enter the workforce. Leo, would you take that thesis or leave it?

SPEAKER_00:

Let me share a recent experience first, and then I'll answer you. This week I was at a gathering with a small group of, I would say, AI experts. They are very close to the action. And it was quite depressing, actually, because they thought, really believed, that in about 18 months' time, pretty much all the entry-level jobs will be gone. You can see it kind of happening now, and they are very close to the action. I quote one of them saying, when they saw the newest models, "almost like magic." So I put some weight on what they're saying. Let's just assume that. If that is true, then I'll take it that we are not really preparing our students, because students come to a university like UVA so that they can get good jobs. We want to prepare them for the workforce, so that they learn the skills and all the other things that can help them succeed in getting that entry-level job and then succeeding for the rest of their lives. But if those jobs are gone, then what are we doing? Why are they paying the money to come here? Yes, there are other benefits of going to college, but at the same time, when you pay out that much money for something, you want some tangible return. So here's some of my thinking, and I would love to get everybody's take on this. For the past 100, 200 years, we've been preparing students to get jobs because of industrial revolutions. There are jobs, and people want to get them. But before all of that, everybody was kind of their own entrepreneur. You're a blacksmith, you're a farmer, you work for yourself. Would we go back to that?

unknown:

Right?

SPEAKER_00:

If all the entry-level jobs are gone, AI can do all of that. But at the same time, every one of us, every one of our students, can also have a team of AI agents and be their own boss. As long as there are problems in the world, there is always a demand for solutions, however big or small. And if there is demand for solutions, there are always opportunities for what I would call micro solopreneurs, people who can use a team of AI agents to help them solve real-world problems, however big or small. Shouldn't we be teaching them how to do that, using the technology to help them scope the problem, using the infrastructure and support here to learn how to do that, and then go out there and be their own bosses in many ways? I understand that not 100% of students would want to do that, but even if 10% do, and if there's such a radical change in society, maybe we need some radical changes as well to adapt to it. I am just brainstorming right here, but it's an idea I've been simmering on in terms of how it connects to AI literacy. Of course, students need to be AI literate to do any of the things I just talked about, or even just to get jobs, if employers ask them about it.

SPEAKER_02:

Yeah. I've heard a lot of people say the entry-level jobs in this field or that field are disappearing. I haven't heard a lot of "what then do we do?" So I appreciate your brainstorming with us. Sara, how would you respond to this thesis? Are we failing to fully prepare our students to enter the workforce?

SPEAKER_04:

Yes, I'm afraid so. I would take that. And Leo, you hit on a lot of themes that I've been hearing and reading about as well. One thing that really strikes me is that AI literacy is not just about a technology, because, as Leo suggested, AI is going to so fundamentally change our social fabric, our economy, what it means to be successful professionally. Part of AI literacy for me is navigating change and helping students figure out how to surf with disruptions and not seize up, because we're gonna have a lot of change. It's already here; we'll have more. And that's intellectual and emotional; it can be fun and it can also be exhausting. I think it's important to recognize where the human condition is: we've come out of a global pandemic, we're in politically tumultuous times. And we have institutions, industrial-era institutions, that ironically helped create or unleash, depending on how you look at it, all of these technologies, but society is now accelerated to a point that our institutions aren't built to manage, to develop public policy for, to respond to quickly. That creates a lot of tensions. So I think a big piece of this is figuring out when to go fast and when to go slow. There's a huge temporal element to all of this, because reason, judgment, our democratic process, our shared governance system, it all takes time. When we think about AI literacy, not just for students but for faculty and administrators, it's got to be about human strategy and change strategy, trying to assess what's coming and getting much, much more creative than we've been. I feel like, Leo, we could have a whole session on just the creative possibilities when those entry-level jobs go away. And I think we need a big, big tent approach to trying to solve those problems.

SPEAKER_02:

Michelle.

SPEAKER_03:

I'm struggling with how to respond to this, Derek. For everyone joining us today: I'm an educator of educators. I'm looking at preparing the folks who work in educational institutions, meaning K-12 education. I recently read an article reporting that as of 2023, only about half of school divisions had said either yes or no to using AI, and most hadn't developed policy yet. So, in full honesty, education is behind the eight ball. We're moving slowly already. We're not keeping up with what the business world is doing. And what I'm seeing is that school divisions are looking for the tools; they're not looking at the skills. They want to know how to make their lives easier, which tools to adopt or not adopt. They haven't gotten to the point of recognizing that it's not the tools we need to teach people how to use; it's the AI skills, the AI literacy, the judgment, the critical thinking, the metacognition, et cetera. So I'm kind of sitting on a double-edged sword here, not really sure which way to go. And I think the other piece we really have to consider is that you're looking at a panel of three individuals. What if you looked at the system? How many people are actually integrating and teaching AI skills? If I had to answer that, I'd take it, because if I look at my own department or my own school of education, not everybody is at that place yet. And we need to be.

SPEAKER_02:

Our next essay was published just last week, I think, by Kevin Gannon. It's called "Sometimes We Resist AI for Good Reasons." Kevin Gannon is a professor of history and director of the teaching center at Queens University of Charlotte. I've met Kevin; he's a teaching center guy. His online handle is the Tattooed Professor, and it is very accurate: he is covered in tattoos. Most of Kevin's essay details four big questions related to AI policy in higher education, where he argues the voices of AI skeptics and even resistors need to be involved. He writes: "On top of the dizzying rate of technological change is a relentless boosterism surrounding generative AI. As faculty members, we are told that there is no alternative, that these tools are the future of work, that we must either adapt or become obsolete. That framing is responsible, however, for the exclusion of important perspectives from our institutional conversations on AI." And so the thesis statement I'll ask you to respond to is this call to make sure that everyone is at the table when you are setting these policies, even and especially the harshest campus critics of AI. Michelle, I'll start with you.

SPEAKER_03:

I will take it, and I'll take it strongly. I feel like this is a shift for me. I started using AI in the summer of '22, excuse me, '23, and I was an early adopter, very enthusiastic about it. But it wasn't until I sat among the resistors that I recognized the importance of slowing down, the importance of hearing dissenting views, especially those who are skeptical and critical of AI, because it helps to build a better policy and practice for the whole organization, but for myself too, who would be ready to green-light it, jump in, let's go, when in reality it might be time to pause and consider how to do it more effectively.

SPEAKER_02:

Sara, how about you?

SPEAKER_04:

I would take it, and partly I own this. Maybe I'm a closet contrarian. I'm ambivalent about artificial intelligence. I've jumped into it, I'm working with it, I'm learning with my students. And as I look at other technologies like social media and everything we've learned over the last 15-plus years, if I could push a button and shut it all down, I might do that. So don't give me that power. But one of the things I think about a lot is that, as a contrarian, I felt it was important to step into the process and try to become AI literate, and thus my ambivalence. I guess I'm not a solid contrarian anymore; I'm a bit ambivalent. So the friendly amendment for me, with the take it, is that if we engage contrarians, we should ask or expect them to step up and work on some AI literacy in the process, so that people aren't just a hard-stop no without understanding much about the technology.

SPEAKER_02:

Right. I have certainly encountered the contrarians and the resistors who read a couple of headlines about AI three years ago and made up their minds then that it was useless. And I feel like that's not an informed position.

SPEAKER_04:

It's hard to critique something that you don't understand very well and haven't experimented with. Yeah.

SPEAKER_02:

Leo, how about you? Do you take this or leave it?

SPEAKER_00:

Oh, I'll take it. Even though I can see enormous potential in the technology, I think we can all agree that it has some inherent flaws, limitations, and a lot of problems. We're in this rapidly growing stage, and it hasn't fully grown yet in certain ways. So this is the time to make changes, right? We actually have a window to set some rules, and once the rules are set, they're difficult to change. This is the time to set them. So I encourage people who are skeptical of this technology, or who want to be critical about it, to have their voices heard. But at the same time, I agree with Sara that people need to be AI literate first. I say this to everybody: whether you're for the technology or against it, your arguments are going to be stronger if you know more about it. That's what I'm really advocating for, for all of us to learn more, to be more AI literate, and, from the librarian's perspective, to know where to find credible information that can substantiate your arguments, so that your voice becomes stronger to make the technology better, to make the changes better. Because right now we're in a period of chaos, a period of rule setting, and there aren't that many rules. Even if you look at countries and regions with regulations, it's chaotic right now. So this is the time to help shape that. So I take it.

SPEAKER_03:

I think there's a question in the chat from Tom about how this relates to online teaching. I think that bringing these naysayers, these dissenters, into the conversation, especially if they are faculty who teach online, is critically important, going back to that first article about whether AI will be the instructor for our online courses in the future. And like Leo said, faculty have to be AI literate in order to partner with AI in online environments or face-to-face environments.

SPEAKER_04:

Yeah, along those lines, in relationship to teaching for me: I'm co-teaching a public sector AI leadership course, and one of the things we're doing is working to integrate AI critics into our curriculum, including experts from the tech industry, leaders in AI who are now raising, or have been raising, a flag and highlighting significant risks. I want us to be able to talk about those in classes that involve government leaders making big decisions about AI, because otherwise tech vendors have enormous power and we don't understand the big arguments.

SPEAKER_02:

I'm gonna take us to our last essay, and I want to use Tom's question as a jumping-off point, because I do want to talk concretely about what online teaching and learning looks like in this AI era. This essay is called "On AI, We Reap What We Sow." This is a piece by Chad Hanson, who teaches sociology and religion at Casper College in Wyoming. He makes the case that higher ed has focused far too much on the outcomes and products of learning and not on the value of the process of learning itself. He says: "Enter AI. Today we live in an era in which students can feed a prompt into an automated prose generator and in seconds have a viable draft of a writing assignment. What are they supposed to think? We've spent three decades acting like outcomes assessments are the only things we value." And here's the thesis statement I'll ask you to respond to: "We should see the AI era as providing us with a reason and an opportunity to expand our interest to include an analysis of the broadly formative processes involved in education as opposed to focusing solely on narrow sets of outcomes." I'll ask you, Sara: do you take that or do you leave it? And what does that look like in online ed, this tension between outcomes and process?

SPEAKER_04:

I take that, absolutely, and in online ed I think we're potentially even more at risk. We have often marketed ourselves, understandably, to professionals who have very specific career goals, and they tie the coursework and the degrees or certificates to those career goals. These are people who are very busy, and we often have students who want to get through quickly. I think we have often supported that with our business models, and I've felt conflicted about it. Part of why I've totally taken this one is that rethinking what education is loops back to the comments Leo was making: if people are going to struggle to get some of the traditional jobs or to follow a straight career path, then what does that mean for education, and especially for online, say, adult learning? Because I think we have marketed ourselves as all about jobs, jobs, jobs, and that has concerned me for some time. Communication scholars like Stan Deetz have talked about corporate colonization, this idea that we've taken corporate logics around productivity and efficiency and career progression and absorbed them as the goals for so many other institutions, including education. I think that links to the obsession with outcomes and efficiency and getting people through. While I don't want to exclude the pragmatic reasons to do that, I also think it's part of how we end up at this moment, where the bigger questions, what it means to be human, what it means to be embodied, what it means to work and live in a liberal democracy, have fallen away. I think we've gotten really rusty at making education about these big topics as well. I'm hoping this AI moment will give us a chance to ask once again, as we've asked for centuries: what is this thing, this education? What is it good for? And who are we as human beings, especially in relationship to AI?

SPEAKER_02:

Leo, do you take this or leave it?

SPEAKER_00:

Oh, I take it. I definitely see right now as an opportunity. I grew up with an education that was very standardized, and the beauty, the potential of AI is the personalization, the customization the technology makes possible. This could really transform teaching and learning. We're just playing with it right now, and the technology is probably not quite there yet, but give it a few years, a decade. Why would we need any kind of standardized assessment anymore for standardized jobs? As I mentioned in a couple of my other answers, there may be a future that's less standardized. And that means, in teaching and learning, we could be freed from producing things just to satisfy the kinds of requirements we have now. I hope that's where we're going, to be honest. I don't think anybody thinks current education is perfect. It's teaching, in some ways, to the average, to the masses. So a lot of marginalized groups, people who are a bit of outliers, don't get the benefits of it. With AI, there is a possibility, a potential, that everyone gets a benefit that's personalized. I want to see that.

SPEAKER_02:

Yeah. Thank you, Leo. Michelle?

SPEAKER_03:

I'm gonna take it as well. For me, this has been an exciting shift. My courses are heavily writing intensive, so I'll use writing as the example. I have definitely undervalued that process, the formative part of evaluation, looking at drafts and revisions and productive struggle, et cetera. But now that I've integrated AI into that process, we use the draft changes features, where students can actually engage in that critical thinking, processing accuracies and inaccuracies. And what I've seen is that I now understand why the gaps are in that polished product at the end; previously I couldn't see that process. So I think that's a really valuable contribution of AI. I see it as a dual prong: I don't want AI to relieve us of that polished end product, I do want a polished end product, but I think it allows us to reclaim that idea of formative assessment and learning in the process as students reach that final product.

SPEAKER_02:

I'm reminded of a talk I heard Randy Bass give years ago. He's a vice provost at Georgetown University now. He was telling us about an American studies course he was teaching, where he had his students create short video assignments. This was almost 20 years ago, so this was a big lift technically. They were trying to make arguments through these short videos, and he would watch their final products, and he said it was like there was learning left on the cutting room floor that he wasn't privy to. He only saw their final product. He didn't see the decisions they made, the things they decided not to include, the arguments they started and then decided, no, this isn't the right way. So anyway, I feel like yes, the process is really important, and AI is pointing us to that, certainly. Sara?

SPEAKER_04:

One thing I've been wrestling with lately around customized learning, which I agree, Leo, is one of the really exciting things about AI: the flip side, the double edge that I worry a little bit about how to balance, is that I could see us move toward a form of customized learning where students don't have as much shared experience. I'm thinking a lot in my classes about universal design, giving students options, allowing and encouraging them to use AI in different ways, and getting excited about the even more customized opportunity. But then I see places where I think, oh my goodness, if we're working too much at different paces and using different tools and creating different projects, we could lose the opportunity to have what in communication we call a boundary object, a sort of shared object for learning. So I'm trying to think about how to create that balance.

SPEAKER_02:

Yeah. Flower Darby has written some great books on teaching online, and she talks about how she was working with a graphic designer to create some icons for a slide deck about online teaching. The first set of icons were all a single person at a keyboard, and she said, no, learning is a community endeavor; you need multiple people in these pictures, talking to each other. And if you're teaching fully asynchronously, there's a set of tools to try to create those shared experiences that are different from what you'd use in a synchronous online environment. I'm mindful of time, so I want to thank our panelists for sharing their experiences and their wisdom on this topic. I really appreciate diving into this with all three of you. Thank you for being here and for doing this. Thanks so much.

SPEAKER_03:

Thanks for having us. Thank you.

SPEAKER_02:

Thanks to our three Take It or Leave It panelists, Michelle Beavers, Leo Lo, and Sara McClellan, for sharing their perspectives and experiences with AI and online education. In the show notes for this episode, you'll find links to all the essays I cited during the panel and links to more information about each of our three panelists. I would love to hear from you, dear listener, about today's panel. Which perspectives would you take? Which ones would you leave? How are you navigating the role of AI in online education? You can click the link in the show notes to send me a text message. Be sure to include your name so I know who you are, or just email me at derek at derekbruff.org. Intentional Teaching is sponsored by UPCEA, the online and professional education association. In the show notes, you'll find a link to the UPCEA website, where you can find out about their research, networking opportunities, and professional development offerings. This episode of Intentional Teaching was produced and edited by me, Derek Bruff. See the show notes for links to my website and socials and to the Intentional Teaching newsletter, which goes out most weeks on Thursday or Friday. If you found this or any episode of Intentional Teaching useful, would you consider sharing it with a colleague? That would mean a lot. As always, thanks for listening.

Podcasts we love

Check out these other fine podcasts recommended by us, not an algorithm.

Tea for Teaching
John Kane and Rebecca Mushtare

Teaching in Higher Ed
Bonni Stachowiak

Future U Podcast - The Pulse of Higher Ed
Jeff Selingo, Michael Horn

Dead Ideas in Teaching and Learning
Columbia University Center for Teaching and Learning

The American Birding Podcast
American Birding Association