Intentional Teaching

Developing AI Literacy with Alex Ambrose

Episode 72


Today on the podcast, we’ll get a window into how AI is affecting the teaching and learning landscape at one university, the University of Notre Dame in Indiana. My guest today is Alex Ambrose, professor of the practice and director of the Lab for AI in Teaching and Learning at the Kaneb Center for Teaching Excellence at Notre Dame.

Alex discusses Notre Dame’s recent decision to adopt Google Gemini campuswide, surveys of Notre Dame students and faculty about their changing views of generative AI, and the need for higher ed to do a better job teaching AI literacy than we did teaching digital literacy a decade ago. Plus, we hear about a really interesting project in the Notre Dame physics department using AI to provide feedback on handwritten student work on physics problems.

Episode Resources

Alex Ambrose’s website

“Navigating AI’s Evolving Role in Teaching and Learning,” with Jim Lang and Alex Ambrose, Designed for Learning podcast

“What Is AI Literacy? Competencies and Design Considerations,” Duri Long & Brian Magerko, Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems

“Assessing and Developing Generative AI Literacy in Instructors,” Alex Ambrose, Si Chen, & Xiuxui Tang, University of Central Florida 2025 Teaching & Learning with AI Conference

“Student Perspectives on Generative AI: Usage, Ethics, and Institutional Support in the Humanities,” Xiuxui Tang et al., 2025 Midwest Scholarship of Teaching and Learning Conference

“Leveraging AI for Rubric Scoring and Feedback: Evaluating Generative AI’s Role in Academic Assessment,” Xiuxui Tang et al., University of Central Florida 2025 Teaching & Learning with AI Conference

Anthropic’s AI Fluency course, https://www.anthropic.com/ai-fluency 

“Validity of peer grading using Calibrated Peer Review in a guided-inquiry, conceptual physics course,” Price, Goldberg, Robinson, & McKean, Physical Review Physics Education Research


Podcast Links:

Intentional Teaching is sponsored by UPCEA, the online and professional education association.

Subscribe to the Intentional Teaching newsletter: https://derekbruff.ck.page/subscribe

Subscribe to Intentional Teaching bonus episodes:
https://www.buzzsprout.com/2069949/supporters/new

Support Intentional Teaching on Patreon: https://www.patreon.com/intentionalteaching

Find me on LinkedIn and Bluesky.

See my website for my "Agile Learning" blog and information about having me speak at your campus or conference.

Derek Bruff:

Welcome to Intentional Teaching, a podcast aimed at educators to help them develop foundational teaching skills and explore new ideas in teaching. I'm your host, Derek Bruff. I hope this podcast helps you be more intentional in how you teach and in how you develop as a teacher over time.

Derek Bruff:

Just last week, as I record this, Instructure, the company behind the learning management system Canvas, announced a new partnership with OpenAI, the company behind the AI chatbot ChatGPT. My social media feeds were still seeing hot takes on that news when I learned that OpenAI had also released a new study mode for ChatGPT that's supposed to offer students some version of Socratic tutoring instead of just answering their questions, perhaps incorrectly. It's hard to keep up with all the new AI developments and all the ways that higher ed is responding to these developments, but I will continue doing what I can here on the podcast.

Derek Bruff:

Today on the show, we'll get a window into how AI is affecting the teaching and learning landscape at one university, the University of Notre Dame in Indiana. My guest today is Alex Ambrose, Professor of the Practice and Director of the Lab for AI in Teaching and Learning at the Kaneb Center for Teaching Excellence at Notre Dame, one of my favorite teaching centers. I heard him on the Kaneb Center's podcast earlier this year talking with Jim Lang about recent experiments in AI at Notre Dame, and I just had to have Alex on Intentional Teaching to ask him some more questions.

Derek Bruff:

In our conversation, Alex discusses Notre Dame's recent decision to adopt Google Gemini campus-wide. He also talks about surveys of Notre Dame students and faculty about their changing views of Gen AI, and the need for higher ed to do a better job teaching AI literacy than we did teaching digital literacy a decade ago. Plus, we hear about a really interesting project in the Notre Dame physics department using AI to provide feedback on handwritten student work on physics problems.

Derek Bruff:

Before we go to the interview, I'd like to remind listeners that I send out an email newsletter most weeks on Thursday or Friday, and I'd love to have you subscribe. You can do so by visiting DerekBruff.org or following the link in the show notes.

Derek Bruff:

Now my conversation with Alex Ambrose.

Derek Bruff:

Hi, Alex. Welcome to the Intentional Teaching Podcast. I'm glad to have you on today and to talk with you about all things AI and teaching at Notre Dame. Thanks for being here.

Alex Ambrose:

Thank you, Derek. Real excited to be here and looking forward to this conversation.

Derek Bruff:

Me too. Me too. We'll start with my usual opening question. Can you tell us about a time when you realized you wanted to be an educator?

Alex Ambrose:

Yeah, thank you for this question. I was thinking about it for a little bit, and I go back to the early 90s, when I had just gotten my lifeguard certification. I started my first job being a lifeguard at this quarry that hit a spring and turned into a really nice swimming hole, with a condo development built around it. This was still the era of 80s free-range kids, out there without a lot of parenting. And I decided to start a little side business to make some money and also make things a little safer, so I did swim lessons. One of my first students, I think she had to have been about five, going into kindergarten, and she was small for her age. On the first day of lessons, as the first student in the group lessons, we got her to swim all the way out to the dock doing tickle, T, push, the elementary backstroke, climb up, jump off the dock, and then swim all the way back. And just to see her face, how proud she was, conquering that fear, developing that life skill, knowing she could do this on her own. She was my first student, and I think after that I got hooked on that kind of impact we can hopefully make on our students.

Derek Bruff:

Yeah. I love that story. And, you know, you've taught in many other contexts since then, as I understand it.

Alex Ambrose:

That's right, yes. My origins were as an elementary school teacher in Detroit, and I made the shift to higher ed around 2008 or 2009. Yeah.

Derek Bruff:

Yeah. And, you know, I enjoy those light bulb moments. They sometimes look different in higher ed than in the swimming pool, but that feeling of helping someone find something within themselves they didn't know they could do, that's really powerful.

Alex Ambrose:

Exactly.

Derek Bruff:

Yeah. Well, what do you do at Notre Dame? How do you describe your job to people?

Alex Ambrose:

Yeah, I'm one of those people who doesn't really like boxes and labels and titles, and I'm constantly changing my role. This latest one is a little more complicated. To somebody from the outside, I would say, you know, I study and I learn artificial intelligence, that ChatGPT thing, so that I can better train and teach professors and students to enhance but not replace their teaching and learning. That's the definition I use with non-academics, my neighbors. But when we're on campus or at conferences, I'll say, you know, I'm at our center for teaching and learning, and I've just been named the shepherd of the Lab for AI in Teaching and Learning. We're really focused on three goals: advancing AI literacy, exploring AI-driven innovation, and fostering AI collaboration across the campus.

Derek Bruff:

Okay, okay. Yeah, it's a little more specific. Sure. But it still has lots of moving pieces, I imagine.

Alex Ambrose:

Constantly changing too, yes, yes.

Derek Bruff:

Yeah. So I think we'll probably get into all three aspects of your work. But I understand that Notre Dame has recently made Google Gemini available to all faculty and students. Gemini is one of the many AI-powered large language model chatbots now available. How did that come about? How did Notre Dame get to that point? And maybe why Gemini? Other institutions have made other choices, and I'm curious about that.

Alex Ambrose:

Yeah, yeah. This is a thing I'm really, really proud of our institution for, making that call and making that bold move. I mean, we were one of the first couple dozen large institutions to make the decision to, as they say, flip on the Google Gemini switch. Because we're already a Google Apps school, the infrastructure and the licensing were all there, so it wasn't really any more money. It was just a matter of letting go of that switch to unleash it. So from a cost standpoint, it was very feasible. But I was very proud of the conversations that went on about whether to do this or not. At the core, the university made a commitment, and it was about AI access. We believe in giving every student, faculty member, and staff member the equitable right and the opportunity to discover, authorize, and use these AI tools on a safe platform, to try to close those digital divides and those access gaps, and to really make the platform and tool a non-issue so we can start moving forward. The second part is that once we have this common universal tool across campus, there are no real barriers to start developing AI literacy at scale. Getting a little bit into the details between ChatGPT and others: again, because we were a Google Apps for Education school, we already trusted Google with our data privacy for all our Gmail and Google Docs. So that was another big part of the decision, making sure we were going to partner with somebody that's going to really protect our faculty and student data. So yeah, that's a little bit of the background. And it was just turned on, I think, in March of this past spring.

Derek Bruff:

Okay, okay. And how are faculty and students reacting to this new opportunity that they all have?

Alex Ambrose:

Yeah, actually, we were anticipating a big reaction, and it really didn't happen. Life continued to move on. I was expecting a little more concern and pushback. But I think it's the point we're at right now. With OpenAI releasing ChatGPT two, three years ago, we've kind of turned the corner, and everybody's realizing these tools are here to stay. They're not going away. And there's a bit of consensus that we're moving forward, starting to figure things out and deploying and innovating with these tools.

Derek Bruff:

So you've said literacy a few times now. What do you mean when you say AI literacy?

Alex Ambrose:

Yeah, so I like to ground things in the literature. My favorite article right now is Long and Magerko's "What Is AI Literacy? Competencies and Design Considerations." They put out one of my favorite definitions: a set of competencies that enables individuals to critically evaluate AI technologies, communicate and collaborate effectively with AI, and use AI as a tool online, at home, and in the workplace. So the skills, the knowledge, the dispositions for them to use it in all facets of life.

Alex Ambrose:

Again, we did this giant Notre Dame Teaching Well with AI Academy, and we really wanted to ground that academy in developing these AI literacies. So we found another great article that did a really nice meta-analysis of the different core competencies and skills of AI literacy, and we used it to boil them down to eight or ten: knowledge of the general AI models, knowing the capabilities and limitations of those models, the skill to use the tools, the ability to assess their outputs, skill in prompting the generative AI models, knowledge of the ethical implications and the legal aspects, and so on. So those are the eight to ten very specific core literacies we worked with.

Alex Ambrose:

And we were lucky to have a postdoc on the team who helped us create a pre-Academy self-assessment of AI literacy, so we did a little measurement there. Then we went through the Academy, meeting once a month for five months, did a follow-up post-assessment, and measured the delta. Good news to report: we had gains on every single one of those AI literacy competencies, some definitely more than others. I can share a link to a presentation I just did that has more of this data for people who want to dig into it. But the biggest gains, what faculty grew in the most and appreciated the most, were prompt engineering and continuous learning, knowing what they don't know and now knowing how to improve. Prompt engineering was the biggest, and I've seen this with students too: once you understand how these models work and how to talk to them in the smartest way, you really can get the smartest, best, and most ethical output. So those were some of the biggest gains.

Alex Ambrose:

On the lower end, the more modest gains were in detecting AI-generated content. That's a big concern with faculty; they want more training or tools to understand how to evaluate that. And the last one, which I'm kind of glad was one of the lowest gains, is understanding AI limitations. We're still looking at the qualitative data, the reflections and the learning logs, but I think that's another case of: before, they were unconsciously incompetent, and now they're consciously incompetent. They know what they don't know. So, again, we could not have gotten better data coming out of this AI Academy. Like you said, you're pushing me on what AI literacy is and what those competencies are, and that's the approach we've taken with faculty.

Alex Ambrose:

Now, on the student side, we also did another pretty big survey this past spring semester. Again, I can share the poster we did. We surveyed about 450 students, and they all came from two groups: a second-semester, first-year writing and rhetoric course, and anybody taking a Romance language course, Spanish or French. What we found on usage, perceptions, and literacy is that 87% of the students have used generative AI tools, which is not too surprising, but it was a big realization that 13% have not touched them.

Derek Bruff:

I was thinking that number would be higher.

Alex Ambrose:

Yes, us too, yeah. So there could be a little bit of a gap. We also saw, interestingly, that the students were using it for more personal applications than academic ones, and we saw a recent report from Harvard Business School that found something similar. The other shocking thing was, when we asked them where they're learning about AI, number one was social media and friends. And at the bottom of the list, and I don't want to misquote my stats here, two percent of them were learning about it from the university. So that was a big reality check: as a higher ed institution, they're really not getting a lot of this from us. We asked a follow-up question about what they want from the university to help them understand what it means to be literate and make sure they're using it ethically. They want clear guidelines and standards, the academic code, and assessment expectations. This survey was done right before we released Gemini, so they also wanted a university-provided tool, and we did that. And they wanted videos, resources, and things like that. So, again, these are some of the things that have been driving our strategy moving forward: where we saw the students were as of this past spring, and how we can help them move along as well.

Derek Bruff:

So I have lots of follow-up questions. I'll start with this. I was hearing from one of my colleagues at the University of Virginia this morning who had worked with some students to do interviews; they actually set up a table on the quad and talked to passersby. And one of the themes he reported, and I'm wondering if you see a version of this in your data as well, is that students, at least when you ask them what they want from their education, don't want to outsource it to an AI tool. They'd actually like to develop the skills themselves, the skills of the domains and the disciplines. They're not actively seeking shortcuts. Some of them may still find those shortcuts under certain pressures, but are you seeing that as well, that there's still a kind of core student who says, yeah, I would actually like to learn stuff while I'm at college?

Alex Ambrose:

Yeah. Yeah, I mean, this is anecdotal, though some of the survey questions get at it. But thinking about your question: we did an undergraduate panel on AI, and there's this one young man who said something that still sticks with me. He said, you know, professors, if you're not using AI, that's fine, but tell me why you're not using it. And professors, if you are using AI, okay, tell me why and how you're using it. They want that transparency. Generally, a lot of these students are making big commitments to really gain these skills, and they don't want to cheat themselves, but they also want to be aware of how these tools are impacting particular domains and careers and what they might think about. So I'm actually pretty impressed with the students, too, to your sentiment. I don't think on the whole they're looking at it as a crutch, but more as a tool.

Derek Bruff:

The way I see it is, I do hear from a lot of students who want this literacy. They want these competencies, and they're looking for guidance from their instructors and from their universities. I think, as a whole, universities should be providing that for students. The impact of these tools on the jobs our students are going into varies, but it's not insignificant. And I think there are some faculty who can be thoughtful skeptics and say, not in my course, right? For what I'm doing here, this is not going to be helpful. But I do think that at a curriculum level, this is something that makes sense to help our students with.

Alex Ambrose:

Yeah, yeah. As you mentioned, I think of our computer science department here. I've talked to some people over there, and as a department, they made the decision: no AI for the first two years. Let's just make sure they learn to code. And then, the way the curriculum is set up, in the last two years, once they know how to code, we can start thinking about how to develop their practice, how to become more effective and efficient with AI, and what that means. So I think there are going to be a lot of thoughtful integration conversations. My hope is that not everybody just does nothing, or does the same surface-level intro to AI. There's a funny curriculum-mapping conversation I like. My background is in assessment, and I remember working on accreditation with physicists: what if every class just did the basic intro to quantum theory, and as students went up through the four years, they never really got past that basic intro to quantum physics? But if you look at it as a spiral curriculum: okay, we're making a decision that some of these courses in the first year are at least going to address it at this level, in the second year we'll go further, and by the fourth year it'll be fluent. I think we're going to have to have some of those conversations as programs: when, if, how, and where that skill development, those AI fluencies, are going to make sense.

Derek Bruff:

Yeah, so I'm going to jump to the question I put at the end of our list.

Alex Ambrose:

Sure, sure, let's do it.

Derek Bruff:

I think it's relevant. On the Designed for Learning podcast from Notre Dame, you indicated, and you're not the first one I've heard say this, but I think it's a provocative statement, that maybe a decade ago there was a need for higher ed to teach our students better digital and information literacy, and we didn't do a good job of it.

Alex Ambrose:

Yeah.

Derek Bruff:

And so I'm curious... I'm paraphrasing, so feel free to state your argument in your own words, but why do you say that? And are there any lessons to be learned from the last decade and a half of trying to teach digital literacy for the current AI moment?

Alex Ambrose:

All right, yeah, let's go there. Again, we might be switching from the pedagogical to the more philosophical and geopolitical, but in 2005, I was an infantry officer in Baghdad during the first free elections, the Green Thumb thing. So I've been in a collapsed capital of a country; I've seen democracy fail, or try to take hold. Fast forward to the 2016, 2020, and 2024 elections. Those elections really scared me: what we did as a society, as a country, the inability to have discourse, the lack of critical thinking, of fact-checking and researching, the misinformation campaigns, the Twitter files. You go through it, and our country has been really fractured and divided and polarized. So as a citizen, as a combat veteran, as a father of two teenage daughters, I get worried if we don't participate in this AI race.

Alex Ambrose:

You know, back then, in 2009 and 2010, I was working on my PhD in education technology. We saw the 2009 Iranian Green Movement, the Twitter revolution. We thought the Great Firewall of China was going to collapse. How were these governments going to hold up against free-flowing social media information? But the optimism didn't pan out; it turned out very pessimistic. Some of these governments found ways to use the technology for deeper control. Through my own personal experiences going around the world and seeing these things, I get worried that if America does not win and stay on top of this AI race... the West and the world need us to lead it, for freedom and for human flourishing. And you're pushing me here; this is where I sometimes get a little philosophical and geopolitical, like I said.

Alex Ambrose:

And I make a call to faculty to say, you know, I understand the moral panic. I understand what this is: a pretty big identity shift and a change to our practice, our core profession, our ways of knowing and thinking and training. But we have to start moving towards practical solutions. Yes, we have to have these big AI ethics conversations and talk about access, but we also need to start moving into AI literacy. Yes, AI policy on cheating is important, but let's also talk about AI pedagogy. Yes, AI can be harmful, a crutch and a weapon for learning, but it can also be a lever and a tool. So that's really what you're getting at; I'm making a call to our faculty to help us out of that. Maybe the last 15 or 20 years were a bit of a digital dark ages. And again, I was in the middle of it. I was trying to teach digital literacy 10, 15 years ago: hey, let's use blogs, wikis, and podcasts; let's teach students WYSIWYG editors, how to hyperlink, how to fact-check Wikipedia, how not to cyberbully in the comments. And given what happened with Twitter and Facebook and the elections since then, I don't know if we did what we needed to do as higher ed institutions to provide our public the tools, the ways of knowing, to get through this. So that's where that comment from my interview with Jim Lang is coming from. I used to be a techno-optimist; I'm getting a little techno-pessimist. I'm really calling for help, especially from the traditional AI skeptics. We need your help to figure this out.

Derek Bruff:

Yeah. So I have a theory, thinking about what didn't happen 10 or 15 years ago in higher ed. Because I knew individual faculty, and especially librarians, who were very passionate about teaching digital and information literacy and were really good at it. But I feel like it was always these little pockets of people who were passionate about this.

Alex Ambrose:

I think that's right.

Derek Bruff:

And I wonder if our mistake was not shifting the curriculum fast enough, not saying we need a systematic approach to helping our students grapple with these new technologies, as opposed to depending on the goodwill of the one-off faculty member or librarian.

Alex Ambrose:

Yeah.

Derek Bruff:

And I'm wondering if we're in a similar position now where if we don't have a curricular response to AI and AI literacy, that we're going to end up with similarly distressing outcomes.

Alex Ambrose:

I take it. I believe it. I think you're onto something there. You're right. We can't just count on those few early adopters, those few innovators. A student can go there four years and maybe get one or two of those professors. But I think there's something larger that collectively, across departments, across colleges, across the university, we could work towards. We're still in the early stages of figuring out what those principles, those literacies, those fluencies are about. But I think you're right. And that's why, again, I'm proud of our institution for starting down this path: providing the tool, creating this lab, trying to create more support for faculty and students to take on this challenge.

Derek Bruff:

Yeah, because I remember back in the day, if the faculty don't have the right literacies, they're not in a position to teach them to students, right? I remember lively debates about Wikipedia and its role in teaching, and we had to have those debates as a faculty first before we could decide, okay, here's how we're going to help our students navigate this. Let's get a little bit more concrete, okay? Because you mentioned innovations in pedagogy,

Alex Ambrose:

Mm-hmm.

Derek Bruff:

thinking about where AI fits in our pedagogy. Let's say I'm interested in doing a scholarship of teaching and learning project, and I'd really like to get a better grip on how my students are interacting with AI, how AI is affecting their learning. Maybe I haven't made up my mind yet about what I want to do, or maybe I have a theory and I'm going to try a particular intervention. What advice would you give to a faculty member who wants to get a better handle on AI and learning in their own teaching?

Alex Ambrose:

Yeah, I love this question. Thank you. I have a series of questions I usually ask. There's the "if" question: if technology, or specifically AI, is the solution, then what's the actual problem? What are we trying to solve? Let's step back and have a discussion about that. Then we get to the what and the why: what kind of impact are you looking to have on student learning, and why? And then maybe we can find something in the literature, some learning theories, some practices in the discipline-based education journals, to say, hey, you're not the first person to think about using polls in the classroom; let's see how others have done it. Maybe there's something you can build on there. And then the how. Again, we mentioned Jim Lang; it's great that he's with us, and I have his book on my desk here. I usually tell faculty what Jim says: don't redo the whole course, every assignment, just one small step. What's one assignment we can change? What's one instructional strategy we're going to try? So I say, let's bring it down. Don't redo your whole course syllabus or change all your course goals. Let's stick with one, and then we'll figure out where and when that makes sense. Those are the basic steps I talk through with faculty to get them to start thinking like scholar-practitioners and start using the literature. Then our lab, our undergraduate research assistants, our postdocs, can help them get familiar with the literature, develop a survey, or come up with a methodology that might help them collect a little data. And it's fun to see them apply research to their own teaching, not just their fields.

Derek Bruff:

Yeah, yeah. Okay, so that's the framework. Can you share an example or two of some projects at Notre Dame that have started this exploration?

Alex Ambrose:

Sure, sure. I think you want to talk a little bit about the physics one, so we can go through that. This was about a year ago, when GPT-4o came out, the omni model, which let you upload pictures and had stronger reasoning and computational abilities. One of our physics professors had been working on a large curricular transformation, a three-year project to rethink intro physics for engineers. This professor took a picture of his first exam and gave it to ChatGPT, and ChatGPT got a 91%, showing its work and making human-like mathematical errors on these highly integrative, complex word problems. So that created a lot of discussion and thought about what this means for teaching physics, our homework, our assessments.

Alex Ambrose:

One of the lines along which we decided to address the problem was: how can we make small-stakes, low-stakes formative assessments, where there's no reason to cheat, that support the higher-stakes summative assessments, the exams? And how can we get better at being clear and transparent about the learning goals we're trying to assess, creating rubrics that really pinpoint whether students know these things? Not a gotcha checklist that takes partial points off of this for that, but if a student is going to get something wrong, how can we use a rubric to really diagnose it and give feedback? But, you know, it takes time and effort to make these rubrics, and there are a lot of challenges in making grading and feedback consistent.

Alex Ambrose:

So we created a very detailed rubric. We had four professors in physics score some student samples with it, and they had a range, you know, 18 out of 20 or 17 out of 20. Then we gave that same rubric to ChatGPT, and ChatGPT was right within the variance of those professors. So that was a big takeaway: whoa, if we make a good rubric, that's actually a good prompt, and that's actually being transparent, and that's actually allowing ChatGPT to go in there and do all that heavy lifting and all that written feedback that a human can then spot check, evaluate, credential, tweak, and pass on. So that was our problem, and our solution was to see if AI could do it. We did inter-rater reliability, and we found, again, I can share this in the show notes too, that ChatGPT does just as well as four different human physics professors in scoring these in a reliable and accurate way.
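A minimal sketch of the agreement check Alex describes, with made-up numbers: compare the AI's rubric score against the spread of the four human graders' scores. The scores here are hypothetical, not the study's data.

```python
# Hypothetical version of the inter-rater check: does the AI's rubric score
# fall within the spread of the four professors' scores? Numbers are made up.
from statistics import mean, stdev

human_scores = [18, 17, 18, 17]  # four professors scoring the same sample, out of 20
ai_score = 17.5                  # ChatGPT's score using the same rubric

mu, sigma = mean(human_scores), stdev(human_scores)
print(f"human mean = {mu:.2f}, sd = {sigma:.2f}")
print(f"AI within the human range: {min(human_scores) <= ai_score <= max(human_scores)}")
```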

Derek Bruff:

And just for clarity, you're looking at handwritten physics work by students, right?

Alex Ambrose:

Taking a picture of all of it, yes. You're in math, I know, so you'll appreciate this: ChatGPT can read all that handwriting, read all those equations, look at the rubric, and diagnose their computation and their errors. It was absolutely mind-blowing. So that was an example. And we didn't just do this little innovation; we tested it, we grounded it, we studied it, and that gave the faculty more confidence to start moving forward. Just two weeks ago, I met with that same group, and now we're looking at: well, if AI can take the exam, and AI can grade the exam, we want to make a bunch more of these better assessments, but they take a lot of our summer. So can AI create exams and rubrics and question banks? We use something called assessment blueprints for mapping out the learning objectives, building the assessments, and generating the rubrics and exams in LaTeX. And now we're exploring whether AI can actually assist in rubric and exam creation to make assessment construction more efficient, which is one of the hardest and most time-consuming things. So, again, that's a little example of how we do applied research, SoTL, with AI here.
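As a rough illustration of the workflow Alex describes, not Notre Dame's actual implementation, here is a minimal sketch using OpenAI's Python SDK to send a photo of handwritten work along with a rubric and ask for rubric-aligned scores and feedback. The rubric text, file name, and model choice are all assumptions.

```python
# Sketch of rubric-based feedback on a photo of handwritten work, using
# OpenAI's Python SDK. The rubric, file name, and model are illustrative.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

RUBRIC = """Score each criterion from 0 to 5 and explain your reasoning:
1. Correct free-body diagram
2. Correct application of Newton's second law
3. Algebra and unit handling
4. Final numerical answer"""

# Encode the photo of the student's handwritten solution (hypothetical file).
with open("student_work.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Grade this handwritten physics solution using the rubric "
                     f"below. Give a score and diagnostic feedback per criterion.\n\n{RUBRIC}"},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```

The design point Alex makes carries over directly: the rubric itself does double duty as the prompt, so the clearer the rubric, the more targeted the feedback a human grader then spot checks.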

Derek Bruff:

So what changes, then, in that physics course, knowing that AI can use a well-structured rubric to provide feedback that is remarkably similar to expert feedback? What does that mean for that course going forward?

Alex Ambrose:

Great question. Yeah. So this is a three-year course redesign that's meant to launch next spring, so we haven't deployed it yet; we're all in the testing phases. We're looking at a couple of things. A lot of faculty are still very careful and cautious about how much we can and should outsource or assist with the grading and scoring. We know there are a lot of ethics involved in that, and we've got to keep the human in the loop if we're going to do something like that. But what we're looking at is, if we have these rubrics... because again, most of the students are getting it right, very few are getting it all wrong, and then there's that chunk of students, you know, with math, who are just making silly mistakes. And those take the most time to grade, right? I know you know that.

Derek Bruff:

A score of two out of 10 is very quick to assess. A score of nine out of 10 is also quick to assess.

Alex Ambrose:

Yes, it's those fives, sixes, and sevens, and good feedback and assessment has to figure out exactly what happened. Was it just a silly calculation mistake, or was there a concept they did not get, and that's what broke down? So we're looking at whether we can run reports on these rubrics and create what we call student profile reports after the exam or quiz: hey, these are the objectives you know, and these are the ones you don't; if you don't know these, go get spun up on those areas. So that's one area we're thinking about: could it give better feedback and reporting? And, you know, given my background in assessment, can the instructor get a nice just-in-time report after the first quiz, with all the rubrics, that says, hey, 90% of my class knows these four objectives, but these two are the ones most students are struggling with, so I'm going to add a little mini lesson next class? This stuff takes so long the old-fashioned way; could AI-assisted assessment help here?
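A minimal sketch of the kind of just-in-time class report Alex imagines, assuming rubric scores have already been collected per objective; the objectives, scores, mastery cutoff, and flagging threshold are all hypothetical.

```python
# Hypothetical just-in-time report: which learning objectives did the class
# master on the quiz, and which need a mini lesson? Data and cutoffs are made up.
from collections import defaultdict

# Per-student rubric scores: objective -> points earned out of 5.
quiz_scores = [
    {"free-body diagrams": 5, "Newton's second law": 4, "units": 2},
    {"free-body diagrams": 4, "Newton's second law": 5, "units": 3},
    {"free-body diagrams": 5, "Newton's second law": 3, "units": 2},
]

MASTERY = 4       # 4 or 5 out of 5 counts as mastering the objective
THRESHOLD = 0.8   # flag objectives mastered by fewer than 80% of students

mastered = defaultdict(int)
for student in quiz_scores:
    for objective, points in student.items():
        mastered[objective] += points >= MASTERY

for objective, count in mastered.items():
    rate = count / len(quiz_scores)
    flag = "" if rate >= THRESHOLD else "  <- plan a mini lesson"
    print(f"{objective}: {rate:.0%} mastery{flag}")
```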

Derek Bruff:

You said something a second ago about creating the exams being the hard part. What I've always told faculty is that there's a lot of effort involved either way: either you write good multiple-choice questions that assess what you really want, and then the grading is a breeze, or you write open-ended questions, and the evaluation is the hard part, right?

Alex Ambrose:

That's exactly it.

Derek Bruff:

So what I'm hearing is that this might give the benefit of an open-ended question, which allows for different types of questions, different types of explorations by students. You get the benefit of the open-ended format, but you also get the kind of batch processing that comes with multiple choice, because the AI is the one looking for the patterns in student mistakes.

Alex Ambrose:

Well said. Absolutely right. That's exactly it. We know the constructed response is going to give a better assessment, but it takes longer to make and longer to grade. So can we do better assessments faster with AI than with traditional selected response?

Derek Bruff:

Yeah. Yeah. Well, and I know for some of the faculty listening to this, if you've got 12 students, this doesn't sound very exciting, but if you've got 175 students, there are real challenges of scale. Well, thank you, Alex. This has been really great. Thanks for sharing a little window into the world of AI on your campus right now. You've given us a lot to think about. Thank you so much.

Alex Ambrose:

Thank you, Derek. It's an absolute pleasure.


Derek Bruff:

That was Alex Ambrose, Professor of the Practice and Director of the Lab for AI in Teaching and Learning at the Kaneb Center for Teaching Excellence at the University of Notre Dame.

Derek Bruff:

I keep thinking about that physics project Alex mentioned. When I talk to faculty about the idea of generative AI providing feedback to students on their work, there's often some skepticism, as you might expect. I'll point out that there is emerging research, like the project at Notre Dame, indicating that when an AI tool is provided clear criteria for feedback, it can actually do a pretty good job giving that feedback. I'm reminded of the skepticism I heard a decade ago about having students peer review each other's work in those massive online courses. It's true that expert feedback is generally going to be better than peer feedback, but there are a number of studies, mostly looking at the approach called calibrated peer review, showing that when four or five students give feedback on a peer's work, their average response is right in line with expert feedback. We seem to be getting similar results when it comes to AI feedback, but calibration continues to be a key element.
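A toy illustration of that calibrated peer review claim, with simulated numbers rather than study data: several noisy peer scores, averaged, land close to the expert's score.

```python
# Toy illustration of the calibrated peer review result: the average of
# several noisy peer scores lands close to the expert score. Simulated data.
import random

random.seed(42)
expert_score = 8.0  # expert's score out of 10 (hypothetical)
peer_scores = [expert_score + random.gauss(0, 1.5) for _ in range(5)]  # five peers

average = sum(peer_scores) / len(peer_scores)
print(f"peer scores:  {[round(s, 1) for s in peer_scores]}")
print(f"peer average: {average:.2f}  vs expert: {expert_score}")
```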

Derek Bruff:

See the show notes for a link to one of those calibrated peer review studies, along with links to the many studies and projects that Alex mentioned in his interview. And thanks again to Alex for taking the time to update us on the state of AI at Notre Dame.

Derek Bruff:

Intentional Teaching is sponsored by UPCEA, the online and professional education association. In the show notes, you'll find a link to the UPCEA website, where you can find out about their research, networking opportunities, and professional development offerings. This episode of Intentional Teaching was produced and edited by me, Derek Bruff. See the show notes for links to my website and socials, and to the Intentional Teaching newsletter, which goes out most weeks on Thursday or Friday. If you found this or any episode of Intentional Teaching useful, would you consider sharing it with a colleague? That would mean a lot. As always, thanks for listening.
