
Intentional Teaching
Intentional Teaching is a podcast aimed at educators to help them develop foundational teaching skills and explore new ideas in teaching. Hosted by educator and author Derek Bruff, the podcast features interviews with educators throughout higher ed.
Intentional Teaching is sponsored by UPCEA, the online and professional education association.
What Can We Expect from the "AI University"? (Bonus)
This episode is only available to subscribers.
Questions or comments about this episode? Send us a text message.
Recently on the show I hosted a Take It or Leave It panel with my colleagues Betsy Barre, Bryan Dewsbury, and Emily Donahoe. One piece we discussed was “Are You Ready for the AI University?” by Scott Latham, professor of strategy at the University of Massachusetts Lowell, published in April 2025. In this bonus episode, I'm sharing a few more of my thoughts about the article and the provocative predictions it makes.
Podcast Links:
Subscribe to the Intentional Teaching newsletter: https://derekbruff.ck.page/subscribe
Subscribe to Intentional Teaching bonus episodes:
https://www.buzzsprout.com/2069949/supporters/new
Support Intentional Teaching on Patreon: https://www.patreon.com/intentionalteaching
Find me on LinkedIn and Bluesky.
See my website for my "Agile Learning" blog and information about having me speak at your campus or conference.
Recently on the show, I hosted a Take It or Leave It panel with my colleagues Betsy Barre, Bryan Dewsbury, and Emily Donahoe. We reviewed several op-eds on teaching and learning in higher ed and decided, for each one, whether we would take it or leave it. One of the pieces we discussed was "Are You Ready for the AI University?" by Scott Latham, professor of strategy at the University of Massachusetts Lowell, published in April 2025. In this bonus episode, I'd like to share a few additional thoughts on Latham's article that I didn't get a chance to make during our panel discussion.

One of Latham's most provocative predictions is that once the training wheels are off, as he says, AI-taught courses will become the dominant paradigm in higher ed. On the panel, I was a "leave it" on this, and I still am. This feels like the MOOC hype cycle all over again. You remember those massive open online courses that were going to transform higher education? They turned out to be useful to a subset of learners, mostly older, self-directed students with clear goals, but they never really transformed undergraduate education. Degree programs largely look the same now as they did pre-MOOC, and I don't think AI will change that either. A fully developed AI-taught course might help a small group of highly motivated students, but most undergraduates still need human teachers for motivation and support.

Latham poses a hypothetical choice. He writes, when students are given a choice between an AI-taught virtual class with a high degree of accessibility and personalization, or a brick-and-mortar human-taught class at the same time every week with little or no flexibility, which will fill up first? He expects the AI course to win. I'm not convinced. I think most students would choose the human-taught option, and if they need flexibility, perhaps an online human-taught course that offers it. The dichotomy Latham proposes here is, I think, a bit false. There are hybrid models that combine AI's strengths with the social learning and mentoring that humans provide, and I think those will be a better model for the future of higher ed teaching and learning.

Latham also predicts that there will not be as many faculty in ten years as there are today. I will take that prediction, but not because of AI. I think other pressures are already shrinking the professoriate. And if that trend continues, I'd actually like to see something more like what Juan Gutierrez talked about back in episode four of Intentional Teaching. He's a department chair in mathematics at the University of Texas at San Antonio. He analyzed student success in various service and entry-level mathematics courses and identified a couple of large courses, taught by a couple of adjuncts, that didn't have great success rates. Juan's approach was not to swap out those adjuncts or overhaul the course structure. He just started paying them more. With a higher salary, those instructors didn't need to take as many other side gigs and could focus more on their students, and that actually moved the needle at UT San Antonio for those math courses. The labor question in higher ed is certainly an important one, and I know AI will have some impact on it, but I think there are larger forces at work that we need to pay attention to and perhaps harness for fairer working conditions for faculty and better student success.

Latham writes, will there still be human-led instruction in some places?
Of course, we still have record shops and drive-in movies. That's a good line, but I don't think human-to-human instruction is going to go the way of the drive-in movie theater. Outside of perhaps a few specialized areas, humans teaching humans will remain essential.

Latham makes a number of provocative predictions in this article, as I said. But I have to ask: is this essay, overall, useful for thinking through the future role of AI in higher ed? I think I'm going to leave it on that, actually. There are helpful parts, and I'll get to those, but there's an underlying assumption in this essay that AI is more capable than even expert humans, not just in teaching but in other domains. I just don't see that, not yet, and it's hard to imagine it happening. Latham goes from there to argue that if AI is better than experts at a variety of things, it will replace those experts at doing those things. In fact, I think we're going to see a different model, more like the co-intelligence that Ethan Mollick writes about, where experts use AI to enhance their work or do it faster. I think human plus AI will often exceed what a human alone can do, and will definitely exceed what an AI alone can do.

I'll share a quick example. I recently played around with the Deep Research tool that ChatGPT provides. It's an interesting tool. Instead of going back and forth with the chatbot, trying to coach it into giving you what you want, Deep Research, as I understand it, does its own coaching. You give it a task, and it queries itself in various ways to try to accomplish that task. In my case, it worked for about ten minutes, searching the web and analyzing what it found. I had asked it to put together some resources on cognitive load theory for a writing project. I wanted a summary of some of the research on cognitive load theory, connections to a few particular domains, and actual resources to back everything up. It took ten minutes, which is a long time in ChatGPT world, but it came up with a really fantastic report that I found very helpful. It directed me to resources and ideas that I probably could have found eventually myself, but ChatGPT did in about ten minutes what might have taken me four to six hours. What it did was impressive, but it wasn't necessarily better than what I could have done; it was just faster. That's why I feel the co-intelligence model that Mollick writes about is a better way to think about the future of AI in the university.

So on balance, I think this essay rests on an assumption about the power of AI that I don't really believe. But what can we take from the essay that is useful? Latham writes at one point, if your interest in AI doesn't extend beyond cheating, you're missing the bigger picture. I would agree with that, though cheating is still an important conversation to have. If students are using AI to get out of the hard work of learning or to fake their way through assessments, that's problematic. But I think a lot about Jim Lang's book Cheating Lessons, from about a decade ago. He surveyed the research on academic integrity and cheating and
came away with the notion that cheating is not typically a moral failure. It's more contextual in nature. If a student is in a course where high stakes ride on the outcome of an assignment, where there are no opportunities to practice, get feedback, or try again, or where they're not particularly motivated or interested in the assignment, all of those things make them more likely to take a shortcut, maybe using AI to get out of doing some of the work. But Lang argued that a lot of that we can design for. We're not going to eliminate all the cheating in our courses, but we can ask: How can we lower the stakes? How can we give students chances to practice and get feedback, and multiple opportunities to show us what they know as they move toward mastery? How can we help them see themselves in the courses we're teaching? How can we design assignments that are more authentic and relevant to our students? There's a lot of good that comes from a course designed that way. It's going to be more motivating to students. It's going to reduce cheating, and it may help solve the AI cheating problem to some degree. But more importantly, it's going to be a valuable learning experience for our students. That's the direction we need to be thinking in, along with what role AI might play in all of that.

At one point in the essay, Latham writes, do professors really think that AI can't narrate and flip through PowerPoints as well as a human instructor? Well, it's true that if all you're doing as a teacher is narrating and flipping through PowerPoints, then perhaps you will be replaced by AI. But teaching involves a lot more than that, and if you're designing courses with authentic assignments and real attention to student motivation and social learning, then you're not just flipping through PowerPoints in your class.

My takeaway is this: What role will AI play in the teaching we do in the future, in our fields and disciplines? Latham writes, how does AI alter how we teach engineering or pursue basic science? AI's impact will vary widely across disciplines, but all will be affected. I think that's right, and those are good questions. We need to be thinking about the roles that AI is increasingly playing in the fields we are preparing our students to enter, whether academic or professional, and about the competencies our students need in order to use AI effectively and ethically in those areas. A lot of work has been done at the course level by individual faculty thinking through useful roles for AI in their particular courses, but what I'd like to see more of is work at the curriculum or program level: when our students graduate some number of years from now, what AI competencies will they need? That's hard to predict. The AI competencies they needed last year are different from the ones they need this year, and I'm sure they'll be different next year. But if we can start to think about how we are preparing students to use AI ethically and effectively in their next steps, we can decide where to build those competencies into our curricula and programs.
And frankly, I think about some of the challenges here. Maybe I'm teaching a bunch of sophomores and I'm a green light on AI in my course; I allow them to use AI in a lot of ways. But that wasn't true for their instructor last year, and it won't be true for their instructor the following year. That causes all kinds of stress for students as they try to navigate different AI policies. If we had a more coherent set of AI competencies that we were developing at a program or curricular level, we could have a more intentional sequencing of courses, where perhaps students move from red-light AI policies to green-light AI policies over time, or vice versa. I don't know the right answer for all of that, but I do think we need to be thinking at the curricular or program level, not just the course level, when it comes to responding to AI.

That's it for this bonus episode of Intentional Teaching. This was a new format, just me reflecting on a conversation, so let me know what you think. And thanks, as always, for listening and supporting the show.