Intentional Teaching

AI's Impact on Learning with Marc Watkins

Episode 40

Questions or comments about this episode? Send us a text message.

Worried about your students asking ChatGPT to write their essays for them? That's so 2023. Generative AI technology is changing fast, and now these tools have the potential to disrupt many different aspects of learning, from reading to notetaking to feedback. 

To help us explore those changes, this episode features a conversation with Marc Watkins, lecturer in writing and rhetoric and academic innovation fellow at the University of Mississippi. Marc's blog, Rhetorica, is a must read, and his workshops on teaching and AI for UM faculty have been incredibly helpful.

Marc has a new series on his blog called “Beyond ChatGPT” that explores the many ways that generative AI is affecting learning—far beyond the now-typical use of having ChatGPT write an essay on behalf of a student—and we talked about those changes to the learning landscape. 

Episode Resources

Rhetorica, Marc Watkins’ blog, https://marcwatkins.substack.com/

Marc Watkins’ website, https://marcwatkins.org/ 

GPT-4o demo reels, https://openai.com/index/hello-gpt-4o/

Scarlett Johansson and OpenAI, https://www.npr.org/2024/05/20/1252495087/openai-pulls-ai-voice-that-was-compared-to-scarlett-johansson-in-the-movie-her 

Explainpaper, https://www.explainpaper.com/

Student Notetaking for Recall and Understanding, https://derekbruff.org/?p=2848 

Why Use Sketchnotes in the Classroom?, https://derekbruff.org/?p=2902 

Mike Sharples’ May 2022 essay on AI in teaching and learning, https://blogs.lse.ac.uk/impactofsocialsciences/2022/05/17/new-ai-tools-that-can-write-student-essays-require-educators-to-rethink-teaching-and-assessment/ 

AnswersAi, “School on Easy Mode,” https://answersai.com/ 

DevinAI, “The First AI Software Engineer,” https://www.cognition.ai/blog/introducing-devin 

Podcast Links:

Intentional Teaching is sponsored by UPCEA, the online and professional education association.

Subscribe to the Intentional Teaching newsletter: https://derekbruff.ck.page/subscribe

Support Intentional Teaching on Patreon: https://www.patreon.com/intentionalteaching

Find me on LinkedIn and Bluesky.

See my website for my "Agile Learning" blog and information about having me speak at your campus or conference.

Derek Bruff:

... Welcome to Intentional Teaching, a podcast aimed at educators to help them develop foundational teaching skills and explore new ideas in teaching. I'm your host, Derek Bruff. I hope this podcast helps you be more intentional in how you teach and in how you develop as a teacher over time. Way back on the second episode of this podcast, I talked with Robert Cummings from the University of Mississippi's Department of Writing and Rhetoric about the ways that generative AI might affect writing and the teaching of writing. That episode aired November 15th, 2022, two whole weeks before ChatGPT was released to the public. How did Robert have anything to say about AI and writing before ChatGPT was released? It's because he and his colleagues in writing and rhetoric had been experimenting with earlier versions of generative AI technologies for more than a year. As a result of those early experiments, DWR faculty have been consistently ahead of the curve when it comes to teaching and learning and AI. I'm very excited to have another one of those DWR faculty on the podcast today. Marc Watkins is a lecturer in writing and rhetoric and an academic innovation fellow at the University of Mississippi. I've been wanting to have Marc on the podcast for a while now, because his blog on AI and education, Rhetorica, is a must-read, and because his AI workshops at the University of Mississippi have shown that (a) he knows more about AI and teaching than almost anyone, and (b) he has a gift for helping faculty and other instructors think deeply and critically about the roles AI can play in teaching and learning. Marc has a new series on his blog called Beyond ChatGPT that explores the many ways that generative AI is affecting learning, far beyond the now-typical use of having ChatGPT write an essay on behalf of a student. Marc and I sat down virtually to talk about those changes to the learning landscape. Marc, welcome to the podcast. I'm really glad to have you here and to talk about AI with you on tape. Thanks for being here.

Marc Watkins:

Thanks for having me, Derek. I'm excited to sit down, talk about AI with a person who's not going to be freaked out by it so much.

Derek Bruff:

I try not to be.

Marc Watkins:

[... ] have it too.

Derek Bruff:

Yeah. Yeah. Well, before we jump into that, I'm going to ask my usual opening question. Can you tell us about a time when you realized you wanted to be an educator?

Marc Watkins:

Right. So the thing about being an educator for me is that my mother and father are both schoolteachers, and I, of course, rebelled very hard against that idea and never thought about becoming a schoolteacher myself. But when I went to college, I was a nontraditional student. I went to night school, first at a community college in central Missouri and later at Central Missouri State University. And I really became enamored with this idea of how teachers work with adult learners and how they reach them. So that's basically how I became interested in teaching, too. I could probably talk for hours about this, but, you know, a short little segue like that is probably helpful. Yeah.

Derek Bruff:

Okay. Okay. Well, thank you. So let's talk about this series that you're doing on your newsletter right now, Beyond ChatGPT. What led you to start that series?

Marc Watkins:

Well, I started exploring back in February how students are being marketed different types of generative AI tools on social media. I'm currently in the state of Mississippi, and Mississippi has banned TikTok on state devices and networks. So that means I can't pull up TikTok on my university computer or on the university Wi-Fi service here. But students don't even worry about that; they just pull it up on the data plans on their phones. So there's already a bit of a divide between the way students are being exposed to these different AI tools and how aware faculty are of them, and I wanted to explore that. And, you know, I went into it thinking it was just going to be about using AI to solve essay questions for you, to write for you. And it's much deeper than that. So that's really what the series became: to move beyond just ChatGPT and text generation software and to start thinking about how this is actually impacting learning as a whole, in terms of reading, critical notetaking, research, tutoring, feedback. It's almost to the point where any type of human endeavor that we can do as teachers can be mimicked by generative AI. And that's what I want people to be aware of. When we talk with people about generative AI, and I know you've spoken a lot with people about this as well, they usually associate it directly with ChatGPT. I think it's very helpful for faculty to move beyond that and start thinking about this as really a transformative technology. I hate using that word because it's very hyped, but it is transformative in terms of all these different use cases to start thinking about.

Derek Bruff:

Well, and I think something I heard you say many, many times over the last year is that it's not just the tool ChatGPT. There are lots of other generative AI technologies that are going to be increasingly embedded in all types of things that we do. And I think that was really important, especially in 2023, to say it's Microsoft Copilot, it's Google Bard, it's all these other things. It's not just one tool. But now what I'm hearing you say is that it's not just text generation. It's not just, can the tool answer an essay question, but there's a whole different set of functions involved in the learning process where these tools have a role to play, whether it's a positive or a negative one. They can touch on lots of things.

Marc Watkins:

Absolutely. We're exiting the period of large language models, and now we're in the period of large multimodal models, and multimodality does far more than just text. It does speech, audio, music. It can do images and video. We even have examples of avatars that are able to scan your face and your actual mannerisms and mimic all of that as well. That is what OpenAI recently announced, and Google recently announced, with their new projects. For OpenAI, it was an upgrade to GPT-4, which is going to be called GPT-4o, the "o" for omni, and they are committed to releasing that as the upgrade to the free version of ChatGPT. So when the fall comes around, you will have a free multimodal model with streaming audio and talking, too. I would really advise anyone to just look at some of the demos that OpenAI did that day. There's hype, of course. But having someone talk to a voice that sounds, in this case, like Scarlett Johansson, which was part of the demo, and which they're navigating a lot of justifiably harsh feedback for because they didn't ask her consent, if you have that in your classroom, on your phones, actually on and scanning things, that's going to have some pretty major downstream effects for learning that we hadn't really thought about. And of course, some of this is going to be positive. We'd be wrong to be completely black and white and say that this is just going to destroy education. It's not. But what we do want is to think about the critical adoption of this technology and, as you would say from your own work, the intentional use of technology in the classroom, so it's actually useful in that way. And unfortunately, we're just not really seeing that right now, in part because it's no one's real fault in terms of the people who are adopting it, because they're basically on the receiving end of a firehose of all these different technical features. Yeah.

Derek Bruff:

Well, and let me do a quick firehose check, because I was traveling last week and the week before, and things change. But as we record this in late May 2024, the GPT-4o multimodal piece, is that available in the free version or is it just in the paid version right now?

Marc Watkins:

So this is what makes this even more complicated and ridiculous. It's available in both the paid version and the free version, but the multimodal features of streaming audio and video have not been turned on yet for either. They were there for the demo, where you could see this sort of streaming voice.

Derek Bruff:

And that's the demo where you're holding the phone up, kind of walking around the room, and it's telling you what it sees. Right.

Marc Watkins:

Yeah, exactly. The two of them are having an actual conversation with the AI-generated voice as well. And that is something that supposedly will be turned on during the summer, because we are recording right now in late May, and in theory it will be available to some degree to free users. They're not saying how much bandwidth you get with the free version. My guess is that with streaming audio and video it will probably be cut off within a few minutes, and we'll probably have to pay to play after that. But yeah, it's going to be a very different fall.

Derek Bruff:

Okay. Well, yeah, and I remember last summer, in 2023, somewhere around June or July, faculty started realizing, oh, my assignments this fall might need to be rethought because of this text generation capability. And now what I'm hearing is that this is a summer where we think about not just those written assignments but all the assignments, as well as other parts of the learning process, that need to be rethought. So let me pull out one of those, because you've written about AI and reading. How might an AI reading assistant be useful in learning, and how might it be perhaps problematic?

Marc Watkins:

Well, this was something that we tested here in the Department of Writing and Rhetoric at the University of Mississippi way back in, I think, the fall of 2022. To give you an idea of the landscape right before ChatGPT was released: if you were on Twitter at that point in time, you had my condolences, because it was crazy. OpenAI had just released the API so that builders could actually build with this, and we were following some grad students in California who were building a tool called Explainpaper. It was just two grad students, and they had a very genuine use case for building the tool: to help them parse through the dozens upon dozens of research papers they were focused on. One of the developers was a non-native English speaker as well, and he talked about how difficult it was for him to read. So they developed Explainpaper, which just uses OpenAI's API, and it would automatically summarize whatever digital document you uploaded at your reading level. Different versions of that quickly followed, like another app called SciSpace, which does the same thing in different languages you can switch between. The idea behind it is like an amazing universal translator: you can have the AI scan anything, automatically summarize it, and let you interact with it at your reading level. So any student with a learning disability that affects reading, or any student studying English as a second or third or fourth language, would be interested in that, or any language for that matter. And of course, anyone who is just unprepared to actually go through the reading process. We tested this, and it was amazing. The students who had those issues all reported that it was an amazing thing. And then the problem happened almost immediately, when everyone else in the class who did not have those issues said, this is great, I never have to read ever again. And I said, oh, no, wait, wait, wait. What do you mean? And they said, you know, we don't have to go through these 17-, 18-page PDFs you provide us for class; we can just load this up and it becomes CliffsNotes on demand. I said, well, wait, we don't want that to happen. What would happen if I, as a teacher, just took your essays and turned them into CliffsNotes on demand? And they said, you can't do that; that would be part of your job. But we don't have any framework right now for this, for either teachers or students using it. So, you know, I took that moment in class to slow things down and talk with them about this, and I really started advocating for students to think about how they want to use these tools and how they want people who are in positions of power to be using these tools as well. And that's where this reading example became a bit of a touchstone for a lot of folks, because you would not normally think about text generation technology being used to read for you, or to offload the process of reading. But it's there.

Derek Bruff:

Right. And so you were digging into that in 2022, right?

Marc Watkins:

Mm hmm. Yep.

Derek Bruff:

One of the fun things about being at the University of Mississippi these past couple of years is that our writing and rhetoric faculty have been ahead of the curve with generative AI. A lot of us have gotten to learn from you, from your experiments, even before ChatGPT was released to the public. So what advice would you give instructors now who want to have students use these tools thoughtfully? Because now, I think Explainpaper is still around, but there are lots of ways you could give a PDF to an AI tool and have it summarize it for you, or outline it for you, or give you the three most important takeaways, or ask it what's unique about this argument. And, you know, your mileage will vary with its accuracy, but I find that often when you give ChatGPT or a similar tool something to read, it has something to work from, and so its output is actually much more reliable than if you asked a more general question where it's just relying on its training.

Marc Watkins:

It is, yeah. It really does ground the model, not in the training data, but in the context that you upload to it. And it has been, in my experience, very accurate compared to just going to the actual base model. As for what advice to give faculty, this is kind of what the whole series is about. Right now I'm just laying out the field, and, you know, there are a lot of authors just now publishing books out of the ChatGPT era who are thinking about this, and they're all finding what I'm finding, what you're finding too, which is that what we have for advice is a moving target; it shifts constantly. I think what I'd like to explore the next time we bring our faculty together for some development is ways we can build intentional friction into the learning process. The one thing about these tools, whether it's for reading assistance, for notetaking, or for writing, is that they offer the user a frictionless experience, right? It's the least amount of work you have to do to get to an answer. Whether it has the answer correct or not, you're still going to be provided one by the technology. And learning, if we do it properly, is about friction. That's not a bad thing. It's not a negative thing. It is going to be challenging to some degree. We don't want to make this so ridiculously hard that we change what it means to learn or write, or raise the bar to some superhuman degree to try to keep up with ChatGPT. We don't want that. We want to think about some intentional strategies that worked in the past that can slow this process down. For reading, that probably means looking at close reading exercises, being able to annotate either with pen and paper or using some digital tools. Perusall is very good for social annotation; Hypothesis is also really good. Slowing some things down, integrating that into the reading process to make it more active for the students. Just like we would scaffold a writing assignment or anything else, we might want to think about scaffolding some of these skills as well. And I think that's very true with reading. We've seen it also with the notetaking stuff: these tools can record and transcribe a lecture for you, and a lot of the marketing from social media influencers says, just lean back, never listen to lecture again, the AI will do it for you. We know from dozens upon dozens of papers and years of research that plain lecture is usually the worst way to actually impart knowledge to people. So maybe this will be the push certain people need to start thinking about not just lecturing during that time but breaking things up. But of course, when we start peeling this back, it's like an onion; there are different layers to it. Well, if we're going to change lecture, you have to look at the physical spaces a lot of our classes take place in. If we're actually teaching in person in these giant lecture halls, sometimes the chairs and desks are bolted to the floor. You can't move around in there, so it limits what you can actually do in that regard. But yeah, it is definitely going to cause us to rethink more than just assessments, but learning itself.

Derek Bruff:

Right. And you know, there were these moments in the last couple of years when the advent of ChatGPT and similar tools as text generators kind of shone a light on some forms of assessment we'd been using for a while that were perhaps problematic anyway. They weren't ideal, and now that the robot can do it in a heartbeat, we really do need to move away from some of those forms of assessment. And so I think this is also shining a light on other things that maybe we had taken for granted, right? Students famously don't want to do the reading, right?

Marc Watkins:

Yes, famously.

Derek Bruff:

Maybe because in some cases they don't know how. They don't have the college-level reading skills they need to tackle these hard assignments that we give them. And so, yeah, maybe we need to be scaffolding that. Maybe we need to be teaching those reading skills, and things like close reading exercises and annotation are ways that we can actually do that. Because, you know, a lot of students take shortcuts when they don't feel competent at the thing they're being asked to do, right?

Marc Watkins:

Yeah, absolutely.

Derek Bruff:

If they feel like they have the skills to do it, they're much more likely to spend the time and do it. And, you know, combine that with all the stories we've heard from folks coming out of COVID and the kind of study skill gaps that seem to be showing up in our college students. I think now more than ever, we need to be thinking about how to teach skills like reading and notetaking.

Marc Watkins:

Yeah, I think we are coming to a foundational sort of, I don't want to say reckoning, because that's a loaded word, but it is kind of a reckoning at this point, right? Because we really are going to have to start thinking about how we teach and why we teach and what we're actually teaching our students, beyond the actual disciplines and content. And I don't know if all of our faculty are going to be there for that. I think you and I have taken kind of a deep dive into this and we've kept a pulse on it. But for the vast majority of faculty I've spoken to, and you can probably speak to this too, their eyes just sort of glaze over when you bring this up, because, you know, they've played around with the free version of ChatGPT, it's not very impressive to them, and it doesn't seem to be a threat to what they teach or how they teach. And this series really is designed to persuade them to reevaluate that position. When you can really sit down with people and work with them for a few hours, even a few days at a time, with these tools, they can start seeing the possibilities of the ways the tech can actually help students but also potentially deskill them. You do see that light bulb go on for a lot of faculty members, who'll say, oh wow, the way I've done things before is probably not going to work for very much longer.

Derek Bruff:

Yeah. Well, let's talk a little bit more about notetaking, because I went through a period several years ago where I was fascinated with notetaking practices and the limited research there is on notetaking and how to do it well. So it's a similar question: how is AI going to be disruptive to notetaking, and how might, you know, a thoughtful student use AI well as part of their notetaking process?

Marc Watkins:

Well, it's a really great question. A lot of the models that exist now are these general foundation models that can do lots of things. Back in, I think, the fall of 2022, OpenAI announced Whisper, which was their transcription model. You've probably been exposed to this: if you've been on Zoom and turned on the automatic transcription, that's natural language processing. What this technology does is take your voice and automatically transcribe it. The plus this gives you is that it then takes that transcription and loads it directly into the context window of one of these large language models or large multimodal models, so you can record your voice and then ChatGPT or whatever model you're using can summarize it for you. A lot of apps exist for this. I think one of them is called TurboLearn; I don't know why they call it TurboLearn, I guess to turbocharge your learning or something. It exists, and it is advertising to students to let AI listen for you and synthesize the notes, which is the really big thing, because it's not just about listening. It's about taking apart what is being said to you, choosing what's important to record, and putting it in conversation with your own ideas and with what your teacher has said beforehand. And they're marketing this as: AI will be able to do that for you, to record and synthesize it, and then it will make flashcards for you to teach you the material as well. So you do the least amount of work possible; we're going to do it for you. I mean, it's fascinating: take all of the different things we've taught human beings how to do to record and process information, and it's now automated. When I came across this, I thought, oh no, that's not what we want our students to do, right? But just like with reading, it could help people who actually need it. And, you know, if I'm at a conference and I'm going between different sessions, I would love to have this tool available for conference-goers. You could go up to a big-screen TV, like the boards that show flights going to Europe, but instead showing transcriptions of everyone's conference sessions, and you could say, yeah, I would love to go see that person and see what's going on there. Or at the end of a conference, have that available for you to go through and think about. So it can create these really great learning opportunities. It's just not being adopted in a critical way that's going to help students, at least from what we're seeing in the marketing. We just want faculty to be aware of it, and to consider ways they can slow that down and consider intentional notetaking as well. And like you said before, there's very little evidence about how we do this well, right? We have all these different people who talk about notetaking and how it can be interesting, and some people will even say it could be an art form, and I agree with them. It's just that how you teach that skill is another thing we're going to have to start thinking about.

Derek Bruff:

Well, and it's interesting, you mentioned the kind of synthesis and then you mentioned having flashcards generated from your notes. Because, you know, if I have to sit through a lecture and I have notes from it, and then I can test myself on that information, that's a good form of learning, right? There's a kind of retrieval practice built into that. And so the ability to create flashcards based on any body of information is, I think, a potentially useful tool for students. But I do wonder about the synthesis piece, because what research I've seen on notetaking seems to indicate that when you're taking notes, there is a kind of Goldilocks zone. If you just transcribe everything the teacher is saying, you're not actively processing it in the moment, and so it's good for reference later, but it's not helping you learn during class, right? And if you just write down two or three key words, you've done too much synthesis, right? But that discernment of knowing what's important, what's less important, what are the structures, what are the relationships, what are the big ideas, and using that to decide what to write down and what not to write down, if you do that kind of active synthesis during the notetaking, that's been shown to lead to better learning results, right? So that's where I really worry: if there's a shortcut for that, if the tool is doing that part for the students, that's much more problematic, I think.

Marc Watkins:

Yeah, I think so too. And the whole theme of it is basically: do the least amount of work possible to get this information into your head for the assessment that's coming up, whether that's a midterm or some other sort of test, and then you don't have to think about it ever again, because you never went through the synthesis part. You just looked at the flashcards that something else came up with. And I think this points to a much greater theme we're looking at, which is that we don't really know what AI's impact is going to be on learning as a whole. We don't know what its impact is on writing right now as it is, and that's especially true if we start letting these systems do these different skill processes for us. It's also possibly true if we let it do more advanced feedback. You know, I write, and we speak to each other right now as human beings. What is going to change in our dynamic with each other, or just in our communication, if we're now writing knowing AI is going to be offering the feedback, or AI is going to be listening to us? I don't have an answer to that. I don't think anyone does. But boy, we sure are moving fast into that arena right now.

Derek Bruff:

Yeah. Yeah. Well, and like you say, if you're using Zoom, you've probably seen some of these tools in action, right? I have a couple of colleagues I Zoom with occasionally, and they use the automatic captioning and they have a tool that will generate action items. It'll listen to our conversation and then identify your action items and my action items, and it'll summarize the meeting for me. And, you know, as a working professional who is really invested in the work that I'm doing, that is a super useful tool, right?

Marc Watkins:

Absolutely.

Derek Bruff:

I will be able to vet its output. It will help me remember things I need to remember. It will clarify things I might have missed. But if I'm a student who either doesn't have the skills yet to make good use of that tool, or isn't particularly motivated in the situation I'm in right then, it just opens up all kinds of problematic doors.

Marc Watkins:

Really does.

Derek Bruff:

Yeah. So you mentioned feedback, and one of your posts was about AI and feedback. This, I think, feels a little more abstract or meta somehow. But I mean, I think one of the fears people have is that, you know, a student will have an assignment produced by AI and give it to the instructor, who will then have another AI give the student feedback. And you've basically refined this assignment with no human involvement whatsoever, right? It's just the AIs talking to each other. So is that the kind of thing you're worried about when you think about AI and feedback?

Marc Watkins:

Yeah. I mean, it's not even my idea; that's how crazy this is. There was a researcher in the UK, Mike Sharples. He was actually getting ready to retire when ChatGPT was launched, and now he's not retiring. He wrote an opinion piece way back in, I think, May or June of 2022, well before ChatGPT was released, about AI and education and what to do moving forward. And one of the lines in there that stuck with me, that I still remember, is that students will use AI to produce assessments and teachers will use AI to grade the assessments, so what is gained and what is valuable in that process? He was thinking about that well before all this came up, and it's something I've been thinking about as well. We tested an AI feedback system, and it was bespoke; you could create your own prompts for your students. And what we found, especially for young first-year writing students who did not really have the groundwork for the feedback, is that they really trusted this authoritative type of voice that was saying, look, this is good, this is great, you're doing a good job, without understanding that it's just generated feedback. The other part is that they didn't understand that no matter what you ask the AI, it's always going to give you something to work on, because it's predictive, right? It's not reasoning. It's not saying, this is good, you've done enough. So we had some situations where students were cueing the system over and over again, saying, okay, I fixed that part, what about this part? I fixed that part. And there's nothing broken with your writing; that's not how this works. Your writing is supposed to be improving as you go through it. It's not about a task list of fixing these different problems, which is what the AI feedback was basically doing. And that was text-in, text-out feedback. Now with GPT-4o, the new model, it's going to be audio, and it's going to be programmable to anything you want. They had Scarlett Johansson's voice, or I'm sorry, Sky's voice; they're saying it wasn't Scarlett Johansson, but it sounded a lot like her. You could program that to be a male voice. You could program that for a political perspective, too. If you did not want a perspective from a conservative teacher or a liberal teacher, you could program that into the voice too. So there are major downstream impacts on how we learn and how we teach. And we've had these discussions, too, especially around student evaluations of female teachers and how students are not honest or fair in their feedback or evaluation of them because they are female. Would you then have students who did not like a female teacher program the AI just to have a male voice talk to them, in some way, shape, or form? Those are just some of the things that are just now coming onto people's radar and that I'm concerned about, and I wish I had more answers than I do now. But that's just kind of where we are.

Derek Bruff:

Yeah. Well, so are you teaching this fall?

Marc Watkins:

I am teaching this fall. Yep. I'm teaching a digital media studies class.

Derek Bruff:

All right, so what questions are you going to try to answer this fall through your teaching? Do you have any experiments lined up or studies that you're planning?

Marc Watkins:

We have quite a few lined up. I think the biggest one is going to be how effective we can be in advocating for our students to slow things down and be a little bit more critical in their engagement with the tools. We've had some good engagement with that, and one of the things we'll test is student reflection on this process. The thing that's always concerned me about this is that it's just sort of siloed in my own teaching. When I talk to my students and they reflect, they're pretty honest: I know I can't use the tools like this in your class, I'm very aware of that, thank you for pointing that out, but I use it all the time in my math class, it helps me with these different questions. And we've gone over how this is a terrible thing to use on math questions because it only gets them right about 60% of the time. But when you talk with the students, they're like, hey, 60% in a class that's curved is actually like a B minus or C plus, and I'm like, that's not what we want to try to do here.

Derek Bruff:

This is hurting my brain.

Marc Watkins:

Yeah. Yep.

Derek Bruff:

And my soul.

Marc Watkins:

I know, and that's...

Derek Bruff:

As a math instructor.

Marc Watkins:

Yeah, right. Well, I had no idea that they were using this. One of the apps they're using a lot is called AnswersAI. AnswersAI runs on your desktop, your laptop, or your cell phone, and it takes a screenshot of a mathematical problem and loads that screenshot into the AI. And the AI produces an answer, not necessarily the right answer, but an answer, and it gives you the step-by-step process of how it came to that answer too. That has been something that a lot of folks I've talked to in STEM have just not realized was available to students. But students, oh boy, they realize it fairly rapidly, and they go to it. And of course, you know, math has been a discipline that's dealt with different versions of machine learning and AI for almost a decade now; Stephen Wolfram has been a leading voice in that for a very long time. So when I talked a little bit with our math faculty, they were a little dismissive about ChatGPT because they'd been dealing with this for years, but they hadn't realized that it's able to do this, to work right from a screenshot of these things.

Derek Bruff:

Right. So I think, you know, we've had things like Wolfram Alpha for a while that were free to use, easy to use, and readily accessible to students, but that still required the students to translate a math problem into something the tool could take. They didn't have to do a lot of translation for Wolfram Alpha; it's pretty flexible in terms of its input. But if you give students a complicated word problem, they're going to have to get to the point of setting up a calculation that they can hand off to the calculator, and it can be a pretty sophisticated calculation. With the multimodal AI tools, now you're looking at taking a photo of a handwritten problem, right, and maybe even producing something that looks like a handwritten solution. It's that multimodal piece that takes even that small translation step out of the process, where you can just point it at the problem you're given and get something to go on. And, you know, the thing is that the large language models by themselves are not very good at math, but when you couple them with the ability to query something like Wolfram Alpha, then if you can have the multimodal model take a picture of the word problem and turn that into something calculable, you're going to get pretty good answers from that. And that is a kind of game changer for math.

Marc Watkins:

It is, yeah. It's a game changer, and it's how developers are now thinking about using these tools to, you know, completely offload learning in that situation. And that's something we have to think about. It also gets us into this new phase of tools that are almost agentic in some ways. If you can have an LLM use a different tool, in this case Wolfram Alpha or a calculator or something else, to get past a problem it can't do on its own, it's not becoming intelligent or sentient in any way, right, but it's becoming much more of a challenge to talk to people about it as just a tool. It's becoming much more of a challenge to deal with, too, because it is almost like another thing, another entity that can complete a task. And there are some really great examples of this out there for coding, like Cognition Labs' Devin. I think it's very worthwhile watching the demo of that, because it can code a website, use five or six different tools including a web browser, and use strategic reasoning and long-term planning to come up with a response. And, you know, if it ever gets that good for things beyond those tasks, imagine what that's like for students, with their cameras almost always on, with their audio too, and an AI giving them, you know, positive feedback and also evaluating them at all points in time. It really is going to change how education functions. And, you know, maybe that will be for good. But for me it's a little bit too alarming how quickly this is happening.

Derek Bruff:

So I've got one more question for you, and I ask this a little bit facetiously. Why can't you just pick a side? Shouldn't you be all in on AI or totally opposed to it? Aren't those the two camps that we're supposed to pick from?

Marc Watkins:

Those are definitely the camps. And the problem with choosing either of those camps is that you silo yourself into one side or the other. I think the best way of dealing with AI right now, as a faculty member or just as a user of it, is to adopt the persona of a curious skeptic. Yes, these tools, I'm going to play around with them, I'm going to see what they're useful for, but I'm always going to be skeptical and think about whether I want to adopt them. That's really the approach we want people to take with whatever technology it is. And, unfortunately, we're on a podcast so your audience can't see this, but I'm holding up my cell phone right now. The reason this has become such a challenge for us is that we have become, for lack of a better word, programmed to adopt whatever new feature or new app comes along. I've got probably 60 or 70 apps on my phone, each with a contract I've signed with whatever company, and I haven't read a single one of them. I don't think anyone does. We really just need to start thinking about being more intentional about the technology we're using, how we actually integrate it into our lives, and what it's giving us. In this case, mostly it's saving us time or helping us with a task. But we might want to be thoughtful about what we might be losing in that process too. And just getting the chance to slow things down with students is, I think, going to be really helpful. So no camps for me; curious skeptic is the way I would go.

Derek Bruff:

I love it. I love it. Well, that's a good place to end it, especially with the slowing down piece, because, you know, we can't slow down too much. We can't assume nothing is happening, and we can't put our heads in the sand. But even as the technology changes rapidly, I think in higher education we've got to be critical about this and we've got to be thoughtful about it. And so it's been challenging the past couple of years, right? There is a too-slow mode. But as you say, when it comes to learning, there are some really useful friction points, and we don't want to lose those along the way.

Marc Watkins:

Absolutely. Absolutely. Thank you so much, Derek. I really appreciate this.

Derek Bruff:

Yeah, thanks for coming on, Marc. This has been great. That was Marc Watkins, lecturer in writing and rhetoric and academic innovation fellow at the University of Mississippi. Thanks to Marc for coming on the podcast and giving us all so much to think about. I rely on Marc to keep me up to speed on what's coming in the world of AI and education, and I'm really glad to share his insights with my podcast audience. Please see the show notes for links to Marc's website and blog, as well as to many of the articles and tools he mentioned in our conversation. What are you worried about as you think about AI's role in teaching and learning this coming fall? I'd love to find out where your headspace is right now, in the summer of 2024. You can click the link in the show notes to send me a text message; be sure to include your name, since that won't come through automatically. Or you can just email me, Derek at Derek Bruff dot org. Intentional Teaching is sponsored by UPCEA, the online and professional education association. In the show notes, you'll find a link to the UPCEA website, where you can find out about their research, networking opportunities, and professional development offerings. This episode of Intentional Teaching was produced and edited by me, Derek Bruff. See the show notes for links to my website, the Intentional Teaching newsletter, and my Patreon, where you can help support the show for just a few bucks a month. If you found this or any episode of Intentional Teaching useful, would you consider sharing it with a colleague? That would mean a lot. As always, thanks for listening.


Podcasts we love

Check out these other fine podcasts recommended by us, not an algorithm.


Tea for Teaching

John Kane and Rebecca Mushtare

Teaching in Higher Ed

Bonni Stachowiak

Future U Podcast - The Pulse of Higher Ed

Jeff Selingo, Michael Horn

Dead Ideas in Teaching and Learning

Columbia University Center for Teaching and Learning