Intentional Teaching

AI-Integrated Assignments with Kiera Allison, Jamie Jirout, Spyros Simotas, & Jun Wang

Derek Bruff · Episode 66

Questions or comments about this episode? Send us a text message.

On the podcast today, I talk with four University of Virginia faculty who are serving this year as Faculty AI Guides. This provost-funded program has enlisted 51 faculty to explore potential uses of generative AI in their teaching and to share what they learn with colleagues in their departments and schools. Back in January, we invited the Faculty AI Guides to share assignments from their fall courses that thoughtfully integrated AI to support student learning. I put some of these assignments in a collection on the UVA Teaching Hub website (see the link below), and on this episode of the podcast, I talk with four of the Faculty AI Guides who contributed assignments.

Kiera Allison is an assistant professor of management communication, Jamie Jirout is an associate professor of education, Spyros Simotas is an assistant professor of French, and Jun Wang is a lecturer in Chinese. In our conversation, the four Faculty AI Guides talk about their motivations for being in the program, what they have learned about AI and teaching through their experiments, how they respond to concerns about students outsourcing their learning to AI, and what’s next for their use of AI in teaching.

Episode Resources

·       Faculty AI Guides website

·       “Integrating AI into Assignments to Support Student Learning,” UVA Teaching Hub

·       “Red Lights, Green Lights, and AI-Integrated Assignments,” Derek Bruff, March 4, 2025

·       AI Needs You: How We Can Change AI’s Future and Save Our Own, Verity Harding, Princeton University Press, 2024

·       “How to Encourage Students to Write without AI,” Beth McMurtrie, Chronicle of Higher Education, February 13, 2025

·       “AI Podcast 1.0: Rise of the Machines,” Planet Money, May 26, 2023

·       “Comparing the Quality of Human and ChatGPT Feedback on Students’ Writing,” Jacob Steiss et al., Learning and Instruction, June 2024

·       “Exquisite AI Corpse,” Maria Dikcis, AI Pedagogy Project, metaLAB at Harvard

Support the show

Podcast Links:

Intentional Teaching is sponsored by UPCEA, the online and professional education association.

Subscribe to the Intentional Teaching newsletter: https://derekbruff.ck.page/subscribe

Subscribe to Intentional Teaching bonus episodes:
https://www.buzzsprout.com/2069949/supporters/new

Support Intentional Teaching on Patreon: https://www.patreon.com/intentionalteaching

Find me on LinkedIn and Bluesky.

See my website for my "Agile Learning" blog and information about having me speak at your campus or conference.

Derek Bruff:

One of my roles at the University of Virginia this year has been supporting the university's Faculty AI Guides program. The provost's office has funded 51 faculty to explore potential uses of generative AI in their own teaching, and to share what they learn with colleagues in their departments and schools through informal consultations, as well as presentations, workshops, and lunch and learns. The faculty AI guides have learned a lot this year, and they have helped the university move toward the goal of having every instructor make informed, intentional choices about the proper role of generative AI in their teaching.

Derek Bruff:

Back in January, we invited the faculty AI guides to share assignments from their fall courses that thoughtfully integrated AI to support student learning. Several of the guides were willing to share their assignments publicly, so I put them together in a collection on the UVA Teaching Hub website. One of the most frequent requests I received from faculty this year was for concrete examples of AI-integrated assignments, and I was glad to have a collection I could point people to. See the show notes for a link to this assignment collection.

Derek Bruff:

On the podcast today, I talked with four of the faculty AI guides who contributed assignments to that collection. Kiera Allison is an assistant professor of management communication. Jamie Jirout is an associate professor of education. Spyros Simotas is an assistant professor of French and Jun Wang is a lecturer in Chinese. They each took different approaches to potential roles for AI in their courses. For instance, Kiera and Jun shared green light assignments that invited students to explore any and all ways that AI might be useful and to document and reflect on their explorations. Spyros, on the other hand, took what I call a red light then green light approach to his assignment, asking students to complete the first half of the writing assignment without AI, and then using AI in targeted ways for the second half.

Derek Bruff:

This episode is longer than typical here on Intentional Teaching. It's also the first time I've had four guests on one show. In our conversation, the four faculty AI guides talk about their motivations for being in the program, what they have learned about AI and teaching through their experiments, how they respond to concerns about students outsourcing their learning to AI, and what's next for their use of AI in teaching.

Derek Bruff:

Thank you, Faculty AI Guides, for being on Intentional Teaching today. I'm very excited to talk with you and to learn more about your work in the classroom and with faculty at our institution, UVA. Thanks for being here. I'm going to start with a question I'd like all of you to answer, and that is, what motivated you to apply to be a Faculty AI Guide this year? And I'll start with Jamie. Jamie?

Jamie Jirout:

Sure. My motivation was a little bit selfish in that I was just really curious about AI and how it could be used for teaching. And I saw this as an opportunity to kind of hold myself accountable to learning. And also I love trying new things and then sharing what happens in my own courses. So it seemed like a great opportunity where that was kind of the idea: us learning and then trying things out and sharing what we're learning, and helping collaborate with other faculty to get them also excited and integrating AI in their own teaching.

Derek Bruff:

Nice, nice. I gather you're someone who often talks to colleagues about teaching.

Jamie Jirout:

Yes, we have groups within the School of Education where we meet informally every month to talk about teaching, as kind of a general professional learning community. And we also have more specific topical professional learning communities around things like educating for character. And then some of my research also involves figuring out how to teach in ways that create a classroom climate conducive to promoting things like curiosity, questioning, and intellectual humility. So I am really excited and interested in thinking about how we design our courses and the pedagogical methods that we use to teach. So again, yeah, I also wanted to know more about AI and learn how to integrate it into my teaching. I feel responsible for preparing our students to go into the world of AI. So that's what motivated me.

Derek Bruff:

Yeah, yeah. Spyros, what about you?

Spyros Simotas:

Yeah, first of all, thanks for having me on your podcast. My motivation... It was an amazing professional development opportunity that I didn't want to miss, because I tend to agree with all these people who, when they talk about AI, say that this will be one of the most consequential technologies of our time. I was listening to a podcast by Ezra Klein yesterday, and he was saying that there's going to be a before and an after AI, like we're living in that sort of moment right now, historically. I tend to agree with that position. And then there was also a book that I read by Verity Harding, and her title is AI Needs You. Basically, she argues that in order to build the AI future that we want, all of us need to be concerned and involved. So I'm both concerned and I want to be more involved in what's happening in education around AI. As Jamie said, I wanted to learn more, and I wanted to help other people, my colleagues, learn a little bit more too.

Derek Bruff:

Yeah. You didn't want to sit on the sidelines. That's great. That's great. Jun, how about you?

Jun Wang:

Yeah. Personally, I already use AI in my daily life. I found it very helpful. Back in the summer, right before I saw the call for Faculty AI Guides, I was involved in a lawsuit where I was trying to write a statement. It was a small claims court case, I think. I didn't want to hire a lawyer, so I just used AI to write the statement, and it looked very good, and I was all right. So I felt like, oh, it saved a lot of money for me, and it was super helpful. So yeah, basically I already use AI in my daily life, but how to implement it more in teaching, that is my question. I wrote this in my application. I said, I believe AI is just a tool, but as human beings, we need some training and to be equipped with some education theories or whatever to use it well. Now, with the training from the CTE, I feel like I know a lot, and I'm excited to use this tool in my teaching. And as the call said, there would be learning opportunities for us, so I also wanted to learn. I want to use it to improve my teaching and also to help my fellow colleagues. And also, I feel like a lot of times ideas happen because we talk a lot, we discuss with our peers, and that inspires us to have better ideas. So I really wanted to have this opportunity to collaborate with others and just have a channel to talk.

Derek Bruff:

Right. I love that. I love that. You've also spoken to the mission of my podcast a little bit. Kiera, what about you? What brought you to this program?

Kiera Allison:

Honestly, it felt like an extension of work I was already doing, so that was an easy leap. I'm part of three different scholarship of teaching and learning projects, held in part by the Center for Teaching Excellence, based on trying to understand how both students and faculty are responding to AI and adapting to it. I spent most of last summer designing and filming an online asynchronous course on communicating with generative AI. So I've been deep in the space for a while, but it felt like an opportunity as well. I mean, I'm looking back historically and wondering when else we've had faculty come together across so many departments to try and understand one thing. It feels like a moonshot moment, and that's wonderful. We don't usually collaborate to that extent. We usually don't get that much disciplinary intelligence focused on one thing, which of course helps us understand that one thing, which is AI, but also just is a coming together of different thought styles. So that was exciting to me. As is the disruption. I mean, they say don't let a good crisis go to waste. Don't let a good disruption go to waste. We tend to accept the status quo until the status quo is no longer sustainable. I think COVID was a miniature version of that, you know, where under duress of a pandemic, we thought deeply about what was necessary and what was possible and what needed to change. And then we got past the pandemic and, you know, started to reconstruct the status quo that existed before. With AI, that status quo seems less viable, and we're seeing a larger scale rehaul or overhaul in the face of this technology, not all of which is good, but some of which I think is necessary. And I really enjoy thinking through change, the kind of radical paradigm shift that we're experiencing in the face of this technology.

Derek Bruff:

Yeah, I think it's shone a light on some teaching practices that were never great to begin with and are now unsustainable in a more obvious way. So I want to focus on the assignments that you've been doing with your students, because a lot of your exploration has been the activities, the assignments, the ways that you're weaving AI into your courses. And you've taken some different approaches, and we have a few of your assignments shared on the UVA Teaching Hub. So I want to start with Kiera and Jun, because both of the assignments you have up on the Teaching Hub are what I might call a green light assignment, where you're encouraging students to explore AI in pretty much any part of the assignment and see where it's useful and where it's not useful. And so I'm curious, and I'll start with you, Kiera. What have you and your students learned through this type of assignment about the role of AI in the courses that you teach?

Kiera Allison:

Yeah, so I should preface my response by saying that by the time we'd done that assignment, which was the “do something impossible with AI” assignment, we had done several other smaller-scale, scaffolded AI-incorporating tasks, so that students had a sense of the range of AI engagements that they could use, and we had also done some work looking at what kinds of prompts were effective and why. So there was a lot of training that went into the green light version, which was: now you have a sense of what AI can do, and you have a sense of what you can do, and you're going to put those two things together. I think that in that green light context, what students learn very quickly is what AI is actually capable of and what it's not capable of. And in that context, they learn how to pivot and adapt. So they can try a thing and see if AI helps them achieve it. And if it doesn't, they pivot and try something else. So it was, I think, a good way for students to get a feel for the technology in the context of what they were learning, which specifically was persuasion, how to be persuasive, and also to understand what AI could do to fill out their capabilities. So they're learning about AI and they're also learning about themselves and how those two agents can converge to do something hopefully bigger than either of them could do alone.

Derek Bruff:

Yeah, I love that, because I do think that in using AI well in a particular domain, there are a few kinds of knowledge. There's the domain knowledge: do I understand this task, right? A persuasive communications task. There's the AI knowledge: do I know how to use these tools to produce certain types of results? But then also there's the self-knowledge of what skills do I bring? When do I lean into one or the other? And I don't think students get that self-knowledge without some practice, having a chance to figure that out. So I love that.

Kiera Allison:

I will add that I think that piece, deciding what they knew and what they were capable of in advance and then deciding how AI was going to play into that, was actually the most important part of it, because they had to establish their own goals and the metrics by which they would measure whether those goals had been achieved. Some of them described that as the hardest part. They did think about, what am I capable of, and what's out of my reach that I want to be able to do, and how will I know that I was successful? When they do that, they're participating in what we do as instructors in terms of defining the discipline, defining excellence within it. That aspect of learning is almost tangential to AI. Yeah.

Derek Bruff:

Yeah. Yeah. Still vital, regardless of the tools that we're using. Jun, what about you? What have you and your students learned about AI in your teaching context through these types of assignments?

Jun Wang:

So my assignment asks students to use AI to help with their debate about a theory. In my guidelines, I wanted my students to practice four aspects of using AI. The first is that it can serve as their tutor to deepen their understanding of the basic concepts. So I ask my students to do that: if you feel like you need to know more about it, just ask AI to explain the theory. Another one is I want them to use AI for brainstorming. Like I said, you actually need to come up with one thesis statement yourself, and for the others, if you feel like you lack ideas, feel free to use AI to generate some, but you're going to evaluate them by yourself. Another function is information gathering, because I ask students to give specific examples from different languages. They're, like, at most trilingual; we cannot speak so many languages. Their debate needs to be supported by examples, so I said, okay, even if you only speak English, think of some examples in English, and you can also use AI to ask whether there are similar examples in other languages. And the fourth thing I want them to do is cross-checking. I said, for every piece of information that's generated by AI, you always Google it, like in Google Scholar. I give them the prompts, like, oh, please cite exactly the year, author, and journal, and then you need to go back to Google to double-check. So yeah, those are the four things.

And what did I learn? I was very surprised that my students, all of them, loved this, because I did a survey after this assignment. The last question I asked was, in the future, do you want to see more assignments incorporating AI? And all 17 of them said, yeah, I would love to. And then, I can't remember the number, but some of the students mentioned that they really appreciated that in this course we use AI in a very positive way, that we're open to AI, because in other courses, they say, the professor really forbids using it, has a very strict policy about it. So they felt happy that in this course they could use it, experience it. And what I learned from the students in the survey: I asked them how frequently they use AI, and almost all of them said every day. Yeah, I mean, the fact is they already use it. I also asked what they use AI for, and they said for tutoring and understanding and to practice questions. So in general, they feel like this is very helpful, and I'm happy with it, yeah.

So this assignment is green light because it's not really a final research thing. If it were a final research project, of course, it should be their own ideas. But this one is more about learning and gathering information, to know more, to know the knowledge. So in this one, I don't really... I saw the question for Spyros, like, oh, at first it's red light. Yeah, if I do some writing assignment, of course I will do the same. First, you have to have your original draft. You cannot rely on AI. But for this assignment, the nature is to learn more. Of course, you can Google it to learn more, or just save time and streamline all of the information on one platform, easily gather information and learn, and just verify the information. So yeah, I think for learning, it's super good.

Derek Bruff:

Yeah. Well, and I think that there's sometimes a discourse around AI that students are going to just use it to write their final paper. And sometimes that can happen, certainly. But the research and writing process is not just a kind of spit-something-out-and-it's-there, right? Like there's a process behind it. I had James Lang on the podcast a couple of years ago, and he talked about unbundling the skills that are associated with a particular assignment and not thinking of the assignment as one giant whole, but realizing there are lots of skills and steps, and that AI might play different roles with different skills and steps. And you mentioned Spyros, and so I'm going to go to Spyros next. Your assignment that you shared was more of a red light then green light approach. So you asked students not to use AI in the first stages of the writing process, and then to use it in some more intentional ways in the second half of the process. Why did you go in that direction? And how did your students respond to that kind of structure for an assignment?

Spyros Simotas:

Yeah, I must say, to add a little bit of context, this was not just one assignment but a sort of methodology that would apply to all the written assignments for that particular class. And the design of this methodology was very much inspired by the writing seminar, which is a seminar that happens every spring at UVA and is organized by Writing Across the Curriculum. So basically, in a nutshell, what we know about writing is that students need to have a model of the text that they will produce. They go into a writing process that includes a draft, revising, going back to the draft, revising again, some sort of, you know, cycle of revisions and editing. And because the writing that I'm asking my students to do is mostly autobiographical, they don't need to consult AI before that. AI doesn't know where they're coming from, their experiences in life. So AI basically cannot write their draft for them. And in addition to that, they're learning a foreign language, they're learning French. So it is important for them to develop some skills and try out some things. We've talked a lot in the AI Guides realm about wanting some productive friction. So that red light first part, if you want, was the productive friction that I wanted the assignment to have. Plus, I should also mention that all the stages of the writing process were happening in class. The planning and the draft would happen in class, and students would upload their handwritten notes to Canvas. And then the next day or the next couple of days, they would type it out. They would identify for themselves weaknesses that might be improved. And then they would use the AI as sort of an editor to help them. And experience in teaching world languages, and research, has shown that this is where students need most of the help when they write in a world language. It's not about generating their draft automatically, but about double-checking everything, especially with a language such as French that is very particular, let's say, in its grammar. So what I liked, and I think what my students also liked, about this way of approaching the written assignments was the moment when, as a class, we discussed the before and the after, if you like: what you wrote and what AI gave you as a suggestion, as a correction. All throughout last semester, I was looking forward to the day when we would have that conversation. It was my favorite day in class, because we've had so many unexpected conversations about grammar, but also about storytelling. Usually when we give feedback to students in French, we mostly focus on grammar, and I saw that taking a backseat with this process. At least for the second half of the semester, we mostly focused on storytelling. How do you tell a compelling story? You have the facts. How do you put them together to tell a story that will capture your audience's attention?

Derek Bruff:

Do you think that because you focus the AI use on the grammar and the kind of copy editing roles, did that allow students to kind of let go of the stress around that and then focus more on the storytelling?

Spyros Simotas:

I would say yes, but also my own stress about correcting their grammar, right, was lifted too. So I felt that I didn't really have to worry about that very much. So when a student came to my office to, like, workshop her idea for a piece of writing, she showed me her handwritten notes. And before, like in a world before AI, I would start, you know, underlining all the grammatical mistakes and going over them one by one instead of talking about the narrative, what she wants to say in this story. Instead I said, look, I think that you can fix those things with AI. So let's talk about what you want to say and how you will structure that to be compelling. And yeah, I felt liberated. I don't know how my students felt; I felt very much liberated by that process, and I hope my students felt the same way. I haven't really asked, but a significant number of my students in that class continue with me this semester. So I hope that they saw some sort of benefit there.

Derek Bruff:

Yeah. Yeah. I want to go to Jamie now. And Jamie, you actually shared several different assignments up on the UVA Teaching Hub, and they each have a kind of different character to them. I feel like all of them have a bit of AI, a bit of course content, and a bit of the students' own lives, trying to make connections to your students' lives. How do you think about what needs to go into an AI-integrated assignment? Like, what were some of the design choices that you were thinking through as you were experimenting with these assignments?

Jamie Jirout:

Yeah, I think, you know, I already had my class designed before AI. And so when thinking about where it could be integrated, I was thinking, where are students doing things, or where am I doing things, that aren't adding to their learning but are more just, you know, regular tasks? My class is designed pretty intentionally based on, you know, the science of learning and motivation. I use specifications grading, so there are no points, and I was trying to be really thoughtful about my learning objectives. At the beginning of the semester, I talked to my students about what those learning objectives are and why they are what they are, right? It's not very content focused, but more kind of application focused, because students are going to use what they're learning in all different ways, depending on where they go in their futures and what careers they enter. So there's a lot of intentional discussion early on about why are you taking this class? What do you hope to get out of it? And then clear alignment that I explain for all the different course assignments that they do. Why are we doing this? What did I intend for you to learn by doing this? And also, how are you able to monitor your own learning and know whether or not you need to do additional reading or come to office hours for help or whatever you need to do to achieve your goals? So it starts out in that kind of mindset that's not about AI, but more just about their learning in the course and taking ownership over what they hope to get from it.

Then when I integrate AI, we try to think about where it is useful to take some of the burden of assignments away from students, in ways that don't interfere with the specific goals of that assignment, and then also what it can provide that we didn't have before. So for example, my writing-intensive course has 30 students, and my other course can have up to 50 students. It's hard to give a lot of really detailed feedback week to week on students' understanding of the materials, and I think that AI provided a really great opportunity where students could get immediate, detailed feedback at the exact level they are by being quizzed in a meaningful, interactive way about that week's topic. So when students are engaging in that Feynman-style review with the AI platform, they're getting that kind of feedback and support that I couldn't give, because the goal there is just to have students be aware of whether or not they understand, where there might be gaps or weaknesses in their understanding, where their strengths are. And so they get that by interacting with the tutor. In other weeks, when I talk about research methods, the goal is for students to be able to read research and understand what they can take away from it to then be able to apply it. So it's not a methods class where they need to design research, but I also want them to feel engaged and interested in the topic. So AI provides a way to have a platform create a case study, an example of research methodology about studying something they want to know about, and then they can critique that, and that's where the skills are applied that I want them to learn. So there's a lot of discussion not about when it's okay and not okay to use AI, or even how to use AI, but more about what should you be getting from this assignment, and having the students do the kind of self-reflection of, am I able to do that in the way that I'm using AI?
And I give a lot of freedom, because I think I don't know all the best ways of using AI, and I expect students are going to come up with new creative ways of using it that I can learn from. I also have them fill out a quick transparency form every time they turn anything in about how they used it. And I have them also reflect on how much of the final product they feel was from their own thinking and ideas and how much was informed by AI. And I'm not assessing that. I don't have expectations where I'm not going to count it if it's too much AI or whatever; it's more for their own purposes, again, having that autonomy in their own learning and deciding, what am I okay with? What do I want to get from this class and from my effort? Am I okay if I feel like I rely too much on AI? Or, you know, what are the opportunities to make my work better by using AI? So it's not that I say, here's how you should or should not use AI or what's okay and what's not okay, but more explaining and discussing why we are using AI and why we are doing these assignments, so that the students can monitor their own learning in that way and use it as they see appropriate. I think what I've been surprised about is that they're not using it as much as I would have expected, and even as much as I think could be helpful and useful. So I try also to give examples of the ways that I use it. There are some great tools out there that I think students could really benefit from using more. And then I also give some examples of where you maybe don't want to use AI, and where it actually ends up being not worth the time and can actually detract from your learning. So I think, yeah, I don't know if that answers the question clearly.

Derek Bruff:

Yeah, it does. It also explains something I observed in your assignments, which is that you often have very detailed prompts for students to use. And I gather that's because you want them to do very specific things that are aligned with your learning objectives and developing their metacognition. And so you've already kind of scoped out, here's a way you can get an AI tool to play that particular role. So you're giving them a little more structure around the prompting than perhaps in a more kind of green light approach.

Jamie Jirout:

Yes. And I think that was mostly driven by my early attempts where I didn't give as good of prompts or as much scaffolding, and it didn't work. Like, there was a lot of variability in how well it worked for different students. The first times I did it, I started with very vague prompts, and every week when we would do the activities, we would have a short discussion at the beginning of the class: How did it go? What ideas do you have for making it better and refining it? So the prompts I have now are built off of a semester of experience and feedback and co-design with students in my previous class. And then this semester, where I'm getting a lot of feedback on other ways students are using it in that transparency form, I think that will probably inform things I do in the future. Yeah, because I think I'm still learning, and it's always this balance of how much can you teach beyond your course content? And AI prompting is not something right now that I feel like I can fit into my class, even though I think it is very important and I encourage students to learn it. So yes, I try to provide models and examples, but also the freedom to change and kind of go off on their own if they want to for different assignments.

Derek Bruff:

So we've been pretty positive so far. We have noted that AI has its flaws, but we've been generally talking about kind of the explorations that we're doing. I hear from a lot of faculty who are very worried about AI. I mean, there are worries around academic integrity, but I think underlying a lot of those concerns are worries that students are going to lean too much on AI to do the hard work of learning, and they're not going to develop the skills and knowledge that we're hoping, that they're going to kind of outsource that to AI. How would you respond to a faculty member who has that kind of concern, that AI is really going to kind of shortcut or short-circuit students' learning?

Jun Wang:

So the way I talk about it is to let them know the fact: whether you're worried or not, a lot of students are already using it. And even before AI was out there, for my language courses, some students would just use Google Translate to do the work. So if you are talking about over-reliance on tools, on tools that you don't want them to use, I mean, it was already there, even before AI. So I don't think it's AI's fault. I think, as educators, we need to guide our students in how to use it well, to let them know what part you can outsource and what part you have to do by yourself, because especially for language learning, it's learning by doing. If you don't do it, you cannot really learn it, you cannot really internalize it. So yeah, it will jeopardize your learning, not really help you. Yeah. So that's what I would say.

Jamie Jirout:

I just wanted to add that my students have the same concerns. They worry about our society in general kind of losing critical thinking skills and creativity and relying too much on AI. And I think it's a very valid fear for all of us to have, and it's important that we're thinking about it and proactively considering how we're using it and how we can avoid that happening. I think some of the reasons students don't use it are out of these fears, of both not learning and kind of over-relying, and also the repercussions if they don't know how their faculty instructors feel about it. I know students have been accused of using AI when they haven't, and so now they're very against using it. There are always going to be students who, for whatever reason, take a shortcut, you know, not necessarily because they want to cheat or whatever; they have a lot going on, and that might happen. But I think as long as we're all considering the potential risks and being aware of them, we can as a society think about how we're using it and try to avoid things like students not actually learning, relying too much on AI for different things, and losing kind of the human benefit of tasks that we might be automating. So I think it's a very valid fear. And I think everything comes with risk, and it's how we consider and manage that risk that matters.

Spyros Simotas:

I would just like to jump in and say I totally agree with what you both just said, Jun and Jamie. And I would also like to add that we don't know. We don't know if they will learn. I mean, it's an experiment that has been happening for two years, so we don't really know. That makes it more of a valid concern, basically, for all of us. We don't know, in the long run, what the repercussions of that will be. So, yeah, I totally agree.

Derek Bruff:

Kiera, anything to add?

Kiera Allison:

I notice that these conversations about skill development and job viability, and basically every question regarding what humans need to do and what they will be motivated to do in a world where AI can do it, assume a substitutive model of AI-human interaction. If AI can do it, therefore humans won't. Or if AI can do it better, then humans will be out of a job. That is one model, and it is viable, but it is not the only model. Complementarity, amplification, is another way we should be thinking. And that is not all on the students slash employees to figure out. If you have a large corporation saying we're going to lay off 70% of our workforce, and we'll determine which 70% that will be based on seeing how well our human workers compete with each other and with AI, my response to that is the corporation needs better projects. If you're only asking for the same work when you have, like, twice the toolbox and your goals are the same, that's a limitation of imagination. So we have, collectively, I think as a society, to expand so that we're not substituting, we're adding. I think our students are helpful, right? And working with us to think about, what can you do now? What can you do that's important? That's my high-level response. At a lower level, I think don't underestimate the jealousy that students feel for their own skill set. They do not want to find out that there are machines that can replace them. And if I do an assignment and ask students to do a little bit first before they engage with AI, they are very critical of what AI is adding or not adding to it. But I'm inspired by an article recently published in the Chronicle of Higher Ed profiling a writing instructor who teaches large-enrollment writing classes and was seeing a lot of AI use. Her response was not to prohibit AI; it was to re-tailor the assignments in a way that focused on what students wanted to develop. What are your three goals? Putting it back in the students' court, asking them, what is important to you to be able to accomplish? And grading the assignments on those criteria: how successful were you at developing that skill? That got rid of the AI use. So put it back into, I guess, allowing students to realize that they are capable of developing skills, and allowing them to define what skills are important to them to develop. It matters less whether AI can do it or not. I want to be able to write whether or not AI can also write. It's important to me as a human being to be able to do this thing.

Derek Bruff:

Yeah. Early on in this ChatGPT era, there was a Planet Money podcast where they were actually using AI tools to make their own podcast, and it was a fun experiment. But the topic of the podcast was the transition from human-operated phone switchboards to machine-operated phone switchboards. And that bit of technology did seem to be substitution. So I think there is a lot of assumption that AI will function that way, but it doesn't have to, as you point out, and it may not, right? Since we are kind of in the middle of this great transition, Spyros, we get to hopefully figure that out and help decide what roles we want it to play. We're coming up on time, but I want to look ahead just a little bit. So one last question, and we'll make this kind of a lightning round, so let's try to keep our answers brief. The assignments that we shared on the Teaching Hub were all from the fall. I'm wondering, what are you excited about in your teaching this spring in terms of AI experiments? And I'll start with Jun.

Jun Wang:

Okay, cool, yeah. So, inspired by Spyros, I did the writing revision in my class. They need to give me a form, to make a thoughtful decision: your original work, AI's suggestion, whether you will adopt or reject it, and the reasoning for your decision. And I asked the students' opinions, because for this assignment I had them compare my correction versus AI's correction. And they all said they love my correction, and I thought, oh, I'm happy that I won't be replaced by AI for some time. The funny thing is, they like my correction because my correction is very direct; they don't really have to think twice. They don't have the habit of revising their own work, but with the AI as kind of a writing tutor, they have to go back and read it again and again, and then think through the AI's explanation, the information; they need to absorb it and make an informed decision. So it actually takes much longer for them to do than the essay writing, where they usually just write it and then submit it. And actually, what motivated me to do this writing revision is that their writing was full of typos and grammar mistakes that I've talked about again and again. I was like, guys, please, let's do this kind of practice, and it can also reduce my work a little bit. So I think, for the shape of our future, we can think about how AI can cultivate better study habits for our students, and incorporate this tool to help them learn, because spending a longer time with AI is actually spending a longer time on learning, on thinking deeply. So it's not really a shortcut, but we need clear guidelines on this to help them cultivate their study habits, yeah.

Derek Bruff:

Thank you, Jun. And I know you've got to run to go teach, so thank you for being here. Lots there about feedback, student feedback. I'll have to put a link in the show notes to an article I read recently about AI versus human-generated feedback. Spyros, what are you excited about in your spring experimentations?

Spyros Simotas:

Yes. So this semester I teach a completely different course that is not so heavy on writing. It has some writing assignments, but it's not so heavy on writing. And I was inspired by an assignment from Adrienne Ghaly, who's also an AI Guide and teaches in the English department here at UVA, and also by an assignment that I found on the AI Pedagogy Project by metaLAB at Harvard. That assignment was by Maria Dikcis; it was called "Exquisite AI Corpse." Basically, the idea of both these assignments was to use AI to co-write something, most of the time a creative work. So part of it will be written by the student, part of it will be written by the AI. The way I structured this assignment was, again, no AI in the beginning. Come up with your plan. They had to write a fairy tale, because this is what we read at the beginning of the semester. So I asked them to come up with a plan, to have a hero, a situation, a challenge, blah, blah, blah, all that. And then give all that to AI along with their first paragraph, and ask it to write the second paragraph. And then they would write the third paragraph, and then the AI. So something like that.

Derek Bruff:

So very structured co-creation. Yeah.

Spyros Simotas:

So I'm curious now to see... First of all, I asked my students their reaction to that sort of assignment, and they were not sure about the ending. So they asked the AI for ideas, but the ideas that they got from the AI were not very creative. They were expecting something more out of left field, which makes me also doubt the argument about AI being a creative tool. I haven't seen that yet. It's a very good editor, but I'm not sure if it's a creative tool. The creativity, I think, comes from the person who steers it, who directs it. You give it orders, and then it can combine unrelated things, no problem. But you have to think of those two unrelated things to be combined, so.

Derek Bruff:

Okay, I have lots of thoughts about that, but I wanna get to Kiera and Jamie really quick. Kiera, what new thing are you trying this spring?

Kiera Allison:

I'll tell you what's new, and I'll tell you what I'm excited about. What's new: I've been doing more structured prompting, where the AI masters the rubric and the assignment, walks students one step at a time through their draft, and asks probing questions to help them improve. It's a work in progress. I don't have access to the best tutors in the world, but you can do a lot just with basic prompting. So I've been playing with that. The thing that I'm excited about, though it's very imperfect at this point, and it's also the thing I'm most intrigued to develop, is an AI that can read what the student is reading, look at the student's preliminary analysis, and take them back to passages that it thinks the student should revisit. It's a good tool where I want students to do textual analysis in a course where they're not really trained to do that. So my students did a version of that on their earnings call memos. And it's like, be a close reader, even though you've not been trained one minute in your life to do close reading. And so having an AI prompt that would say, okay, given what I've learned, what I think I've inferred about this call, what do I need to read again? It's exciting. I mean, to have a thing that can just help students slow down and pay more attention to a relevant passage from whatever the artifact is, that's exciting to me.

Derek Bruff:

Yeah, I like that too. A very targeted reading assistant of sorts. How about you, Jamie?

Jamie Jirout:

Yeah. Something new that I'm doing that I think will be really interesting to look at after the fact is students' reflections and reporting on their use of AI across the semester, and looking at how that changes with more experience and practice, just the ways that they're using it and what I can learn from that. But what I'm really excited for is... I'm always super impressed by the end-of-semester projects that students complete. And with experience using AI throughout the semester and time to think about creative ways of using it, I am excited to see if it enhances or takes their projects to the next level, and just what that looks like when they have free rein and how they might end up using it.

Derek Bruff:

Yeah, yeah, yeah. Getting back to what Kiera said about what can we do now that we have this tool? How does it expand our horizons? That's great. I love it. Well, thank you all for being here. I appreciate you taking this time today and sharing with the podcast audience. This has been really great. Thanks for being here.

Spyros Simotas:

Thanks for having me.

Kiera Allison:

Thanks, Derek.

Derek Bruff:

That was Kiera Allison, assistant professor of management communication, Jamie Jirout, associate professor of education, Spyros Simotas, assistant professor of French, and Jun Wang, lecturer in Chinese. All four are faculty at the University of Virginia, and all four are 2024-2025 faculty AI guides at UVA, exploring uses of AI in their own teaching and sharing what they learn with colleagues in their departments and schools. Thanks to all four of my guests for coming on to talk about their AI-integrated assignments. And thanks also for sharing those assignments on the UVA Teaching Hub. In the show notes, you'll find a link to that assignment collection as well as to a blog post I wrote analyzing those assignments. You'll also find a link to more information about the Faculty AI Guides program and to some of the articles, books, and podcasts that the guides and I mentioned during our conversation.

Derek Bruff:

Intentional Teaching is sponsored by UPCEA, the online and professional education association. In the show notes, you'll find a link to the UPCEA website, where you can find out about their research, networking opportunities, and professional development offerings. This episode of Intentional Teaching was produced and edited by me, Derek Bruff. See the show notes for links to my website and socials, and to the Intentional Teaching newsletter, which goes out most weeks on Thursday or Friday. If you found this or any episode of Intentional Teaching useful, would you consider sharing it with a colleague? That would mean a lot. As always, thanks for listening.
