In her 2022 book Remembering and Forgetting in the Age of Technology, Michelle D. Miller writes about the "moral panics" that often happen in response to new technologies. In his 2013 book Cheating Lessons: Learning from Academic Dishonesty, James M. Lang argues that the best way to reduce cheating is through better course design. What do these authors have to say about teaching in an age of generative AI tools like ChatGPT? Lots!
I asked Jim and Michelle on the podcast to discuss generative AI from their different perspectives, and the three of us had a wide-ranging conversation about how faculty and other instructors might respond to these new tools. Michelle is a professor of psychological sciences at Northern Arizona University and a prolific writer and speaker on teaching and learning in higher ed. Jim is a former professor of English at Assumption College and also a prolific writer and speaker on teaching and learning in higher ed. In the conversation, they raise some important questions for educators to consider this summer as we retool courses and assignments for the fall to account for AI technology.
James M. Lang's website, https://www.jamesmlang.com/
Michelle D. Miller's website, https://www.michellemillerphd.com/
Cheating Lessons: Learning from Academic Dishonesty, https://amzn.to/3q8Lxr8
Remembering and Forgetting in the Age of Technology, https://amzn.to/431YEcj
Support Intentional Teaching on Patreon: https://www.patreon.com/intentionalteaching
Find me on LinkedIn, Bluesky, and Mastodon, among other places.
See my website for my "Agile Learning" blog and information about having me speak at your campus or conference.
Derek Bruff 0:28
Welcome to Intentional Teaching, a podcast aimed at educators to help them develop foundational teaching skills and explore new ideas and teaching. I'm your host, Derek Bruff. I hope this podcast helps you be more intentional in how you teach and in how you develop as a teacher over time.
This spring, I've been reading a book called Remembering and Forgetting in the Age of Technology by Michelle D. Miller. Miller is a professor of psychological sciences at Northern Arizona University. And the book considers some of the claims that have been made about digital technologies, particularly the Internet, and how they are or are not changing our brains. One of her early chapters talks about the idea of a moral panic, a response to new technologies marked by hyperbole and hysteria. As I was reading this chapter, which was published in 2022, I couldn't help but see parallel after parallel to the current response in higher education and beyond to new generative A.I. technologies like ChatGPT, a conversation that hadn't started up in its current fervor when Michelle wrote her book.
And just as I was thinking I should totally have Michelle on the podcast to talk about this, I remembered that James Lang also wrote a book years ago that has new resonance in 2023. Jim's 2013 book, Cheating Lessons: Learning from Academic Dishonesty, features a review of research on cheating, plagiarism, and other forms of academic dishonesty. Jim's takeaway from that review was that the best way to minimize academic dishonesty was to remove the incentives for cheating through good course design. When we lower assessment stakes and build in opportunities for revision and foster intrinsic motivation, that's going to make it less likely that students will want to cheat. And isn't the worry about cheating with ChatGPT a big part of the current moral panic over A.I. technologies in higher ed?
I emailed Jim and Michelle to see if they would be up for a conversation about generative A.I. from their different perspectives. They were happy to say yes. Jim and Michelle, as listeners may know, work together regularly since they are the co-editors of the Teaching and Learning in Higher Education series of books from West Virginia University Press. Full disclosure: my book, Intentional Tech, is in that series, and Jim was the editor for my book. Jim is also a former professor of English at Assumption College, where he also directed the D'Amour Center for Teaching Excellence. Jim and Michelle have quite the academic bios. See the show notes for links to their websites to learn more. The three of us had a wide-ranging conversation about ChatGPT and other A.I. tools and how faculty and others in higher education might respond to these new tools.
One note on the audio quality: both Michelle and Jim had terrible luck with technology the day we were scheduled to record this conversation. Jim ended up recording on his phone, Michelle recorded from her car, and we had to use Zoom instead of my usual recording tool. The audio isn't up to my usual standards, but the quality of the discussion will make it worth a few glitches.
Thank you, Jim and Michelle, for being on the Intentional Teaching podcast. I'm very excited to have this conversation with both of you here today. Welcome.
James Lang 3:33
Michelle Miller 3:34
Great to be here.
Derek Bruff 3:37
I know you two know each other, and I know both of you. We all either write or edit, or both, for the West Virginia University Press series on teaching and learning in higher education. Before we get into the main topic, which is artificial intelligence and how it's affecting higher ed right now, I have a spin on my usual introductory question for you two today. Can you tell us about a time when you realized you wanted to write books about teaching for teachers?
James Lang 4:11
That's a very interesting question. I have to think about that.
I would say it was probably when I read Ken Bain's book, What the Best College Teachers Do. I had been reading literature about teaching and learning because I started my career at the Searle Center for Teaching Excellence at Northwestern, under Ken's tutelage. And so I had read a lot of books and articles about teaching and learning, and I felt the information was great, but the writing was not always great. It was very grounded in academic traditions of citing research. And, you know, there are good reasons to write that way, but it wasn't always fun to read. It wasn't enjoyable. And when I read Ken's book, I felt like I was being taken on an intellectual journey. And I thought, if I'm going to write about this stuff, this is the kind of book I would want to write. I wanted to try to write a book that was useful but also pleasurable to read. And so it was probably reading that book that showed me that it was possible to write interesting books about teaching and learning in higher education. Yeah.
Derek Bruff 5:28
Yeah. What about you, Michelle?
Michelle Miller 5:32
Oh, well, I could. Yeah. I have a pretty distinct memory coming into focus here. It started when I finally was able to begin teaching the graduate teaching practicum in my department. And this is a long time ago, and I had a hiatus of maybe five or six years where I was out of the department for a while. But that first semester that I got to sit with our graduate students and start putting together these materials, this would have been, my goodness, probably around 2008 or 2009, and Jim will know for sure if I've got the year off, because his book also plays a role in this, his book On Course. So there was really only one book available that had a psychology, social sciences bent. I'm not going to call it out. It was a good book at the time, but here too, the writing was okay, but there was so much of just, oh, I do this and my students like it. And, you know, nothing about the teacher's positionality or his identity, anything like that. And we would kind of scrape the bottom of the barrel to find some empirical studies to support it. Like, oh my gosh, can I possibly piece together just a few things for students?
And so I did the efficient, intelligent thing and decided I was going to write it while I was teaching. I was going to write my own teaching manual, and I think I called it Miller's Notes. You know, nothing self-centered about that. So in a way, it was more of, I do this and my students like it. I also had a running feature in it called Low Points in the History of Teaching, parts one, two, three, and I forget how many. So, like, mistakes and screw-ups I made. Every week I would write another chapter at the last second for my students. And of course, at the end of it I go, oh, I'll publish this. And then I started looking around, and I discovered Jim's book On Course. And I'm like, well, it looks like this already exists. I can't really match that.
But I will say I was able to pull out one of the central chapters, which was about memory and how that was totally misconstrued by so many people in the teaching space. And I turned that into an article by taking out a huge amount of the snark and the really snippy comments that I had made, turned it into something much more classy, and published it in a teaching journal, College Teaching. And Jim found that, and we found each other, and so, yeah, that set me on my way. Miller's Notes, fortunately, I'm really, really glad I wasn't able to publish, because, oh my gosh, I look back now and I'm like, oh wow, that was really off track. But yeah, that's really where I got started. So it was really pleasant to kind of have that all come back, I think.
James Lang 8:35
I would love to see Miller's notes at some point.
Derek Bruff 8:40
Well, thank you for sharing that. And I want to talk about a couple of your books here today in the context of the conversations that higher education is having around generative artificial intelligence tools in particular. ChatGPT is the big one, but there are plenty of others now: Bing, Google's Bard, and the image generation tools like DALL-E and Midjourney. And so, Michelle, I'm going to start with you. I've been reading your new book, your 2022 book, Remembering and Forgetting in the Age of Technology, and early in the book, you take on some common narratives about new digital technologies and the human brain, right?
These arguments we've heard that the Internet is making us smarter or, more often, that the Internet is making us stupider, right? Are we in higher education in the middle of a moral panic about generative AI right now?
Michelle Miller 9:37
I'm going to say pretty much a big yes on that. And it's true, I've written a lot about moral panic. It seems to be kind of a well that I return to in a few of the things that I write. Okay, not just, what are some of the less fruitful narratives and assumptions that we pull up about technology, but also, where do those come from? I mean, I'm a psychologist. I always have to try and find what's underneath the narrative and the sub-narrative and the sub-sub-narrative and ask, where is this coming from? And so I think that really is a powerful concept: why do certain ideas, technologies, and changes make us nervous, either in the education community more locally or in the wider society that we're in right now? So I think we are.
And at the risk of possibly missing out on some big trends, which I think are important, I have to say a lot of my first reaction to a lot of this is, let's take a breath. And, you know, I'm not a historian. As psychologists, we tend to be very much in the moment, not very historical in our way. But boy, I can't help but look back and say, all right, we've kind of been in this territory before. Also, in cognitive science, there's this idea of neural networks, you know, networks that you train on examples, and then there's a hidden layer that does something you're not quite sure what, but it makes these connections so that the network can generate new responses. We have been playing with that since I was in grad school; actually, it goes back a little further than that. And it was this kind of specialty thing for a long time. Obviously, we've now hit a brand new level in terms of its power, its usability, and a lot of other things. But to me, I don't have this reaction of, where did this all of a sudden come from? Because this has been building for a while.
Derek Bruff 11:51
Yeah. Jim, that was Michelle's question, but I'm curious if you have thoughts on the moral panic that we're in and what that means for higher ed.
James Lang 12:02
I mean, these sort of waves of new things come along, for sure. And I felt the same way about AI when I first heard about ChatGPT. You know, I had the same reactions that many people do, though I wish more of us had the reaction that Michelle's talking about, just to step back and take a breath. But my first thought, like many people's, was, well, this is a little scary. How is it going to change what we do? And unlike a lot of people, I think we do a lot of things right in higher education, and I'm never a believer that we have to change everything. So initially this looks like something that might change everything. But at the same time, as you live with it for a while, and it's been six months now, people are starting to think more about it, to consider both its challenges and opportunities, to see those things brought to life in specific examples and in some of the ways that people are using it productively, and to find ways to make sure that we're not using it to replace things that are important to do. I think we're going to move our way through this and find positive ways to relate to it in education, and I think we will get there. But, you know, definitely we're in sort of panic mode a little bit. I think it's getting a little bit better at this point, though, as people are starting to figure out what it means.
Michelle Miller 13:45
To follow on to that, I mean, you mentioned all the things we already do right in higher education, and contextualizing this not just in terms of a technology. What also really struck me right away, and started to set off my moral panic detectors, was how much it's being talked about from outside. You know, higher education is being talked about by folks who are really not inside of it, and there are some real tells to that. I noticed right away that people would be sending me articles like, well, higher education is going to really have to rethink what it does; it's going to have to revamp how it approaches education. And I'm going, wait a minute, you can't revamp something that was never vamped in the first place. I'm not necessarily criticizing this, but we don't have a top-down way or mechanism at the vast majority of institutions to say, oh, wait a minute, we've got to change our teaching approaches because ChatGPT is a thing. This happens locally, in different classes. It's as if there's a switch you could flip or a memo you could send out that would say, okay, teachers, be sure you do this. As folks who interact with so many dynamic faculty, faculty doing fantastic things, we know it's not like there's any centralization. So to the extent that we do change things, that's going to be through faculty-to-faculty innovation, in classrooms, on the ground. So I'm also like, I don't think you really know how we do anything in higher education if you think it's going to be, oh, well, now we're going to just do it differently. It doesn't work that way.
James Lang 15:34
That's a great point. That is a great point.
Derek Bruff 15:36
Yeah. Sweeping statements, especially from outsiders, are a sign of the moral panic, right? There's some hyperbole happening that's not informed, and it's not helpful.
And also, I mean, you make the point that we've done some of this before, right? Like, the Internet and search changed our students' access to information, and that necessitated a change in how we teach. But it didn't happen overnight. It was something that faculty and librarians experimented with and played with. And I've been trying to dig up some fun quotes about Wikipedia when it hit the scene ten or fifteen years ago and how people initially reacted to it. And yeah, so, you know, all of this has happened before; all of this will happen again, to some degree. But the particularities are a little bit different. On that note, Jim, I've been thinking a lot about your book Cheating Lessons, which is now a decade old, by the way. I couldn't believe that when I read the copyright date.
And you know, in that book you reviewed the research on academic dishonesty, which is what a lot of the initial conversations around ChatGPT and these tools were about, right? Like, our students are going to use these to cheat. You know, I'm asking my students essay questions, and students are just going to have the robot write it for them. But what you found through your research was that a way to disincentivize cheating is through good course design, right? Lowering stakes, fostering intrinsic motivation, supporting students' self-efficacy, these types of things. So for faculty who are maybe still a little bit worried about students cheating with ChatGPT, is the solution better course design? Is that still true here?
James Lang 17:23
A little bit. I'm going to give a different sort of pathway here in terms of what we maybe need to do at this point. I'm going to actually step back for a second and think about a definition of education, which comes from John Dewey. In John Dewey's last book, he was responding to criticisms of the laboratory school he had created, which was very hands-on and active and all that stuff. And so in that final book, he defines education as growth. Education provides opportunities for people to grow. And you can grow in your career, in your service, as a person, as a learner, right? So education is about growth.
So let's say we're trying to look at something new, like a tool like this one. The first thing we would do is to say, okay, where are the places this is going to promote growth, and where are the places it's going to limit growth? And we can see this in AI, right? There are opportunities to grow as learners, as a society, in terms of careers, all that stuff. We're already seeing that: professionals are using ChatGPT to improve their productivity. And students can use it to push their research further, to respond to something, to develop their skills for revision. There are also ways in which it might limit growth, by taking over skills that are actually important for me to learn and master. So now take that framework: what does it mean practically? It means that those of us who are teachers have to unbundle our assessments. We need to think about unbundling our assessment strategy in a course, and also within specific assessments: which parts of this assessment are going to promote growth that's important in this discipline, for this student, in this course? So, for example, a research paper is doing lots of different things. It's supporting me in developing my research skills, my skills at evaluating sources, my writing skills, my revision skills. Maybe there's a presentation at the end; that's my speaking skills, right? So now I have to think, okay, which of those parts is really important? As a teacher, I'm asking about my students: which of those parts is really going to help my students develop things that will help them further in life? And with the other parts, maybe I can recognize that, going forward, it's going to be better for them to have mastery over this tool. And so, I'm going to give a very academic answer: it's going to totally depend on the context. We can't just say this tool is going to destroy these kinds of things. It's never going to be like that. Some of these things we still need to hang onto.
One thing that I'm concerned about, and I understand why people are saying this, and it might be the right thing to do in certain contexts, is people saying, you know, ChatGPT can give me my outline. I'll just give it a prompt, it'll give me an outline, and then I'll write the paper from that. Well, outlining is a thing that is important for us to be able to do as humans, to be able to organize a knowledge field within my own brain. I'm developing skills that I can then apply in other places. So I'm concerned that it's a quick reaction to say, yes, ChatGPT can do this. But should it be doing those things, instead of doing other things, maybe like gathering my initial research? It's great for that, and maybe even for giving me abstracts of the things that I want to use.
So that, for me, is kind of a different approach here. When we identify the things we recognize as skills that an intelligent, productive human brain is going to continue to grow for the rest of its life, when we've identified those things, we say, okay, now how are we going to teach those? We probably need to start bringing more things into the classroom for in-class practice. We have to give more scaffolding and feedback, and then we might get to some of the things I talk about in Cheating Lessons. But we have to step back first and say, okay, we've got to start unbundling things and see what's important for a human to do.
Okay, almost done here. But the last thing I'll say, talking to you as a math person: here's an analogy about calculators, which I don't really like that much. You can use a calculator once you've developed numeracy skills, right? They're not using calculators in kindergarten, because kids have to go through the work of developing numeracy skills. And we have to do that process with all the skills that are important to us, that are important to our disciplines. Okay, that's it. I'm going to shut up now.
Derek Bruff 22:32
Michelle, you want to add to that?
Michelle Miller 22:34
Oh my gosh, yeah, this unbundling concept. I mean, that's frankly so exciting. That is exactly what we need to do. And I think that perhaps we in these higher education circles are sometimes a little too prone to say, silver lining, everything has a silver lining, there's always a great opportunity for growth. I mean, that's a great flaw to have if we're going to have one. But at the risk of following in exactly those tracks, I'm thinking, too, about what we couple that unbundling with. Not just, well, we do a paper because you do a paper, you turn in the paper the last week, and I grade your paper, you know. What do we couple this with? I don't have as good a term for it, but this kind of nascent, non-adversarial, we're-on-the-same-side, what-are-your-goals, how-do-I-get-you-there approach to how we frame this enterprise. Not, you owe me a paper and I'm going to give you points for it, which traditionally, I mean, that's kind of where we're at with a lot of this: this is what I came up with, right? And what I love about Cheating Lessons is it does identify and put a fine point on the dynamic that leads a student to say, you know what, I'm going to just buy this paper in the middle of the night because it's due tomorrow. That's what this is all about. It's this transaction.
So if we couple this with the idea of, well, sure, there are 20 different things you could do to come up with some serviceable paper at the end of the term, but why are you here, and how am I going to help you get to the next thing? And I know for some this is just going to come across as, well, cheating just hurts yourself, or whatever. But seriously, if this is linked now to, you want to learn to write a literature review in an APA-style paper. Why do you want that? Because here's the next thing you're going to do, here's the dream that you're going to achieve, and we're going to workshop that. And yeah, there's a great paper at the end of the term, super, you can put that in your portfolio or whatever, but it's about what you establish along the way. This is different. Maybe we've been a long time coming toward this point, and we've needed to for a long time. But what if we had those two things together? That could be powerful.
James Lang 25:00
Yeah. Can I say one other thing, Derek, which Michelle just hit on? I think what we might have to start thinking about is the feedback that we give, because AI can give feedback, right? But what it can't do is know my specific students, what they are trying to achieve, what their potential is, and what their experiences have been, the way we can as humans. So our feedback maybe needs to be more directed to each individual in our classes. And of course, we should have been doing that all along, right? So, you know, I completely agree with Michelle. We don't want to just jump to, this is great, let's embrace it. We have to be more mindful than that. Essentially, there are some good things here and some things to be careful about. But sometimes I think it is pushing us to do things that we probably should have been doing already.
Derek Bruff 25:54
Yeah, I had a great conversation earlier this week, it'll be on the podcast at some point, with Robert Talbert and David Clark, who have a new book out, Grading for Growth. And I asked them, why is higher education talking about alternatives to traditional grading? And Robert said, well, you know, that had been brewing for a while, but the COVID pandemic really kind of revealed some problems that had been there all along.
And I think there's a similar flavor here with ChatGPT. When I think about some of the concerns that were expressed last November when this came out, you know, here's an essay question, I want my students to write three or four paragraphs in response to this prompt, and ChatGPT can do it for my students. That may not have been a really great assessment tool to begin with, right? And we were kind of coasting on a set of assumptions about how we do assessment and why we do assessment and what's good assessment. And this may have just kind of pulled back the curtain to say, you know what, that's maybe not a useful piece of assessment anyway, right? What part of those unbundled skills is that helping students to develop? Is it helping students to show off what they can do?
But I can also imagine a context where students have that understanding: here are the skills I'm building, here are the areas I'm growing in, here's why I'm doing this, because it ties into my personal or professional goals. Then you give them a little assignment like that, and they don't want to short-circuit it, right? They know why they're being asked this question and what they're trying to represent, or what moves they're trying to make in the writing. And that's where I think the Cheating Lessons idea comes in very fully: if they've got that real motivation to get some meaning out of this activity, then there's far less motivation to just have ChatGPT write it for them.
So, Michelle, your book also has a really wonderful chapter on how memory works, at least what we know of how memory works based on current neuroscience. Based on that, and on the thinking that you've done around other kinds of digital technologies, do you see some areas where ChatGPT and similar tools would be really helpful for student learning, or maybe some areas where we should kind of shy away from them? And your answer may be like Jim's: it depends on the context. But I'm curious if there are some examples that occur to you of places where this might actually help in the learning process, or places where it's going to be a short circuit that's not helpful.
Michelle Miller 28:32
Oh, gosh. Well, you know, when I think about the connections between the ChatGPT issue and some of those larger themes about memory and so on in that book, I think about the offloading part. One of the things that really inspired me to dig deeper and conceive of that book was this running theme of, our minds and our brains are super efficient, right? They aren't just racking up loads of information because it's out there and maybe you can use it someday. They're hardcore minimalists, not hoarders. They only want what is useful right now. And so it will be interesting to see, and for me to talk about with my psychology students, what kind of offloading, or task sharing in a way, is going to develop around this organically. And I mean, it's pretty cool that we do now seem to share our memories with things like Google, and we really internalize that at a pretty deep level. How that happens, though, is more of a meta-conversation I wanted from the book too. And it's not that offloading undermines your memory across the board, but it does have effects. I mean, that's why your brain and mind do it: Google has that now, so I don't have to worry about it, and that frees up capacity and other things. So there are these sort of subtle undermining, undercutting effects that happen.
So I think that's one of the things. Now, getting to more general ideas about teaching, I think about having students really experiment with it and say, okay, let's put this in, or ask it to do this thing. How is this different from what you would say? What's the tip-off that this wasn't generated by a human being? And then, what's surprising and novel and interesting about it?
And I guess, just to be very, very practical, when I think of things like teaching in my discipline, I mean, there is writing, and writing for thinking is part of it, but we also have some pretty formulaic ways of, here is how you express this stuff. I mean, APA style, right? Which we love to malign, but it has a function, which is to take cognitive load off me as the reader, because I know exactly what is where, what is coming next. There's room for creativity, but it's that way for a reason. So I think if I were teaching a research methods course in the fall with this in mind, I would also be experimenting to ask, are there parts of the paper where, I mean, I'm already telling students this is not the place for creativity, like the results section, or how you write up a participants section? I mean, writing an APA-style results section, I've done that a lot and it's a character-building exercise, but I don't know that it has to stay that way. Me too, I would like to be able to plug in my results and say, okay, you arrange the degrees of freedom and introduce this and have it flow. So I think there may be a place for those parts. And to get to that point, students will have already had to do so much: to design an experiment, to review the statistics, to say what they mean, and so on. So that's kind of from the biggest, I guess, top level to the more granular, how would this affect my syllabus. Thoughts on that?
Derek Bruff 32:13
Yeah, and I think about Jim's point about calculators, right? Ideally, we don't just hand a kid a calculator so that they're just pressing buttons without knowing what they're doing. Ideally, they have a sense of: I have a problem, I know how to get information from that problem into my calculator, press the right sequence of buttons to get the answer I want, and interpret that answer. It's not a black box; it's a very intentional tool. And I can imagine something like that for APA formatting, right? Students can't just blindly trust the tool to spit out things that are formatted rightly or have the right sequence of topics in them. They need to have some familiarity with that genre. But once you have that familiarity, it's fine to have some tools that shortcut that, that speed up some of that process. And in mathematics, it eliminates mistakes, right? If I'm going to multiply or divide four-digit numbers, I can do that by hand, but it's going to take me a while, and I'm much more likely to make a mistake than my calculator is.
James Lang 33:19
I think part of what's being discussed here is the scaffolding of ChatGPT onto skills — where will it actually help us grow? For example, initial learning might require practicing a skill that seems rote to us, but at the same time it's helping me develop the ability to revise. I'm seeing people saying — there was an article in Inside Higher Ed just yesterday about letting ChatGPT write students' first drafts, because revising writing is more important. Revising is important, but you still have to understand how writing works before you can revise, and you're not going to get to that point until you've done some writing yourself. So maybe ChatGPT is less helpful to a first-year composition student than it would be to a senior seminar, or to Michelle's graduate students, who know the skills and have developed them. They can be more aware of how to use ChatGPT, as opposed to a first-year composition student who maybe didn't get a great education in high school and still needs to develop the skills and understand how writing works before they can start using it productively.
And this is moving into more speculative territory for me, and I think for all of us, essentially. So let's think about an image that ChatGPT is going to give me — say I want a serene image of a beach, right? And there can be some bathers there, and some birds and some clouds. Okay, it can do that for me, and give me multiple versions, and I can change things, all that stuff. But at the same time, as someone who wants that image, I want an image of a beach that's meaningful to me, that reminds me of a day when I was there and what that day was like. So this is another thing I think we have to consider: ChatGPT can give me answers and useful information, all that stuff. But do I want that information to come in this abstract way from a non-human, or is there something important for me about knowing that it came from a human brain? I think we're going to have to ask: is that important?
Here's an example I was recently thinking about. I watch baseball, and one of the teams I watch is the Cubs, and I like one of the announcers for the Cubs. I just like to hear what he's been up to; the little asides, the way his mind works, are kind of interesting. Now, a baseball game could probably be called by an AI pretty well, and it would give me interesting stats and all that. But part of the reason I like to watch the game — there's a lot of downtime in baseball, so the announcers have to just riff all the time — part of it is that I have a relationship with that guy. On my part, anyway; he has no relationship with me. But still, there's a part of that process where it's important for me to have a connection with him as a human. And I think both from the teacher to the learner, and from the learner to the teacher, it matters for it to be grounded in a human body with specific life experiences. Is that important for me as a learner, or do I just want to get information? And again, we're at the cusp of — exactly, yeah, you just shrugged your shoulders, and that's kind of where we are. We have to think about this, and over the next few years we're going to have to decide: does it matter?
Derek Bruff 37:08
When does it matter? When does it not matter?
James Lang 37:10
Right, Exactly. Exactly. Yeah.
Derek Bruff 37:14
Well, thank you so much, both of you. This has been a delightful conversation. We have covered a lot of ground, but also thought practically about what this means for our teaching and what we might do in the fall with whatever courses we're teaching. So thank you for taking some time today to explore these issues with me. I really appreciate it.
Michelle Miller 37:30
You as well.
James Lang 37:32
Always a good conversation with Michelle Miller.
Derek Bruff 37:36
That was Michelle D. Miller, professor of psychological sciences at Northern Arizona University, and James Lang, writer and editor and former professor of English at Assumption College. Thanks to both of them for taking the time to come on the podcast and share their perspectives on generative AI and teaching, and for weathering the technical challenges we faced that day. They raise some really important questions for faculty and other instructors to consider this summer as we make plans for fall courses.
Over on Patreon, I usually share a bonus clip from my podcast interviews for my supporters there. The clips are often pretty short, but the one for this episode is longer than usual. Jim and Michelle and I went on a seven-minute riff about the changing nature of creativity in an age of generative AI. It's really great, and you can listen once you sign up as an intentional educator on Patreon.
This episode of Intentional Teaching was produced and edited by me, Derek Bruff. See the show notes for links to my website, the sign-up form for the Intentional Teaching newsletter, and my Patreon, which helps support the show. For just a few bucks a month, you get access to the occasional bonus episode, Patreon-only teaching resources, the archive of past newsletters, and a community of intentional educators. As always, thanks for listening.