Intentional Teaching

Study Hall with Lance Eaton, Michelle D. Miller, and David Nelson

Derek Bruff Episode 73

Questions or comments about this episode? Send us a text message.

Today on the podcast, I'm excited to try out a new format. I'm calling it "Study Hall" since we're gathered together to discuss some interesting teaching and learning studies, with this edition's studies exploring the intersection of generative AI and education.

The panelists for this edition of study hall are Lance Eaton, senior associate director of AI in teaching and learning at Northeastern University; Michelle D. Miller, professor of psychological sciences at Northern Arizona University; and David Nelson, associate director at the Center for Instructional Excellence at Purdue University.

Episode Resources

Grinschgl, S., & Neubauer, A. C. (2022). Supporting cognition with modern technology: Distributed cognition today and in an AI-enhanced future. Frontiers in Artificial Intelligence, 5(July), 1–6. https://doi.org/10.3389/frai.2022.908261

Sun, Y., & Wang, T. (2025). Be friendly, not friends: How LLM sycophancy shapes user trust. arXiv preprint. https://arxiv.org/abs/2502.10844

Darvishi, A., Khosravi, H., Sadiq, S., Gašević, D., & Siemens, G. (2024). Impact of AI assistance on student agency. Computers & Education, 210, 104967. https://doi.org/10.1016/j.compedu.2023.104967

Lance Eaton’s blog, AI + Education = Simplified, https://aiedusimplified.substack.com/ 

Michelle Miller’s newsletter, R3, https://michellemillerphd.substack.com/

Dave Nelson’s LinkedIn page, https://www.linkedin.com/in/dave-nelson-8698b94a/

 

Support the show

Podcast Links:

Intentional Teaching is sponsored by UPCEA, the online and professional education association.

Subscribe to the Intentional Teaching newsletter: https://derekbruff.ck.page/subscribe

Subscribe to Intentional Teaching bonus episodes:
https://www.buzzsprout.com/2069949/supporters/new

Support Intentional Teaching on Patreon: https://www.patreon.com/intentionalteaching

Find me on LinkedIn and Bluesky.

See my website for my "Agile Learning" blog and information about having me speak at your campus or conference.

SPEAKER_02:

Welcome to Intentional Teaching, a podcast aimed at educators to help them develop foundational teaching skills and explore new ideas in teaching. I'm your host, Derek Bruff. I hope this podcast helps you be more intentional in how you teach and in how you develop as a teacher over time. If I sound a little different today, that's because I am recording this intro on my front steps. My family and I are moving, and the house is a bit chaotic right now, so I am recording in whatever semi-quiet space I can find. Today on the podcast, I'm excited to try out a brand new format. Once again, I've been inspired by the American Birding Association podcast. The ABA podcast uses a format they call This Month in Birding, where host Nate Swick invites three great guests to discuss recent studies or news articles from the world of ornithology. I learn a lot listening to these episodes, and I thought I would try the format out here on my podcast. Doing something called This Month in the Scholarship of Teaching and Learning sounded a little ambitious to me. There's no way I can do this monthly, so I'm calling this format Study Hall, since we've gathered together to discuss some interesting teaching and learning studies. My panelists for this first edition of Study Hall are all colleagues of mine in the field of educational development who do a great job finding and sharing educational research that's both interesting and practical. Lance Eaton is a Senior Associate Director of AI in Teaching and Learning at Northeastern University and author of a great blog exploring the intersection of AI and education. Michelle D. Miller is a Professor of Psychological Sciences at Northern Arizona University and author of multiple fantastic books applying psychology to teaching and learning. David Nelson is Associate Director at the Center for Instructional Excellence at Purdue University, where he's been supporting a variety of teaching initiatives for 17 years. For this first edition of Study Hall, we're focusing on scholarly articles that have something to say about generative AI and education. I'll mention the authors and titles right now, and you can find full citations in the episode show notes. First up is a literature review titled "Supporting Cognition with Modern Technology: Distributed Cognition Today and in an AI-Enhanced Future." That's by Grinschgl and Neubauer. Second is "Be Friendly, Not Friends: How LLM Sycophancy Shapes User Trust" by Sun and Wang. And the third article is "Impact of AI Assistance on Student Agency" by Darvishi et al. Welcome to the first Study Hall panel here on the Intentional Teaching Podcast. I'm very excited to have Michelle, Dave, and Lance with us today to dive into some really interesting articles. Thanks to the three of you for being here today. I'm excited to have this conversation. So we're going to start first with this article by Grinschgl and Neubauer, maybe. We were having some discussions about how to pronounce these names. But this is an article that Michelle is going to tell us about. Michelle, do you want to tell us about this article and share some of your thoughts on it?

SPEAKER_00:

Oh, hey there. You know, I love a good review and wrap-up kind of article. And I know that the empirical articles are meat and potatoes sometimes. But I came back to this one. I'd read it a while back, and it was really on my mind, because it's really all about a concept that's turned out to be useful in some other contexts as we're looking at understanding, really, the psychology of how we interact with technology. And that's something that maybe we've danced around a little bit in education, or maybe it's a little bit grandiose. But people who make it through anything that I write know that I love to keep coming back to those kinds of bigger questions, like what goes on in our minds when we pick up a smartphone, or now when we enter into a conversation with an LLM, which our students are going to be doing and we're all doing more frequently. So, looking at this offloading concept, cognitive offloading, because I think that is going to be an important framework going forward. So what they're doing in this brief review article, of course, since it is a review, is coming back to some of the research and trying to synthesize what we know and what we don't know. They really do start with running down some of the basics of when people are most likely to, as we call it, cognitively offload. And just to really think about this: we are hearing the term cognitive offloading a lot these days, but I really encourage people to not think of it just as, oh, I used tech to do something, or I chose not to do something and put it on tech instead of doing it by hand or by brain, however you want to think about it. And it's not just the same as saying, well, did you offload it to people? There are a lot more granular processes going on when we truly share our thinking and our remembering, and now reasoning and conversation, with a machine. So that's what they're talking about: some of the basics of when we're more likely to do that. Not a lot of surprises initially in what they talk about in this article. We do it when we think it's going to be easy and fast, right? So if that tech will respond to us instantaneously, then we're right there. It turns out that we pretty much don't stick around and wait to say, oh, do I want to put this calendar invite in here? Do I want to try to remember it myself? So that's a big one. When we think the tech is going to do a better job than we will, of course. Age, too. No surprise there. We're starting to see these trends where younger people have more trust in AI and technology. And that trust factor is something they also unpacked in this article: trust in technology. And it's kind of neat to think about this as a continuum. This can be self-driving cars, for example. That's another big technology that's coming out, and some of us are like, yeah, that sounds great, and some of us go, that makes my skin crawl and sounds very, very dangerous. And so is that all that different than, oh, hey, do I want this AI to summarize this article for me? And do I trust it to do that? And they say there's some suggestion out there that people with certain personality characteristics who are more trusting of other human beings are going to be more trusting of something like an LLM.
But they say that it's definitely not... a clear thing in the literature just yet, but they say that this individual differences thing is going to be big. And that's what I like too about this: it's kind of mapping out a little bit of the future. What are some areas we need to delve into? What are some frameworks that we already have that have already proven to be useful in learning about technologies? Getting offloading right, again, not falling into some of the misinterpretations that are out there, and saying, we're going to see some differences. There are going to be these sorts of trusts and trust-nots out there, and that's what we need to know more about. So those are some of the highlights there. And I guess last of all, they do use one of my favorite illustrations of offloading and how it's not always a universally negative thing. Because that's what I worry about: people say offloading means that we are sort of atrophying cognitively or something like that. And it's not that simple. But yeah, it's true. Our intuitions about, say, using GPS to navigate are correct in that, yeah, if I use GPS to navigate around a particular city or region, I'm not going to form a mental map of that. However, it doesn't mean that my ability to make mental maps in general is degraded. So that's some of what I mean when I say there are some subtleties about offloading, and those are the sorts of processes we may see as we increasingly use AI and our students do as well.

SPEAKER_02:

Okay, all right. So I definitely am interested in this idea that offloading has kind of pros and cons to it. Well, Dave, what was your reaction to this article? What jumped out to you?

SPEAKER_01:

Similar to Michelle, I found it very helpful as a review of what was happening, and trust is a significant notion in student engagement, in how they are likely or unlikely to use tools. I think that's such an enormous mediating factor. And again, I liked also how they approached the Big Five traits, which tend to get slapped all over the place, and in this case really had no difference. So it seems to be much less about personality than your aggregate experiences. I found that really, really interesting. I think one of the things that I was wishing for was a clearer definition of trust from the authors. They kind of went through a litany of six other people saying trust is important or influential in decision making, but they didn't really define it. And I think it's assumed or presumed that it's like a binary. But I think having a better definition of this would be helpful. The one that my students really appreciated in my class last semester was C. Thi Nguyen's "Trust as an Unquestioning Attitude." And he goes very much into the epistemology of technology and trust, the point at which we distrust reliability and what that does to us. And my students started thinking a lot differently about the ways they were interacting with technology after reading it and discussing it. So I'm curious to know how we can best take the conclusions that are in here, which are really interesting and helpful, and then kind of apply them or extend them to students who we know are likely to use some of these tools in an uncritical way.

SPEAKER_02:

Yeah. Yes, I think with a lot of the trust we put in technology, however you define it, we don't think hard about it often, right? It's a kind of intuition that we have. I can imagine having some better definitions of that might empower students to think a little more critically about how they trust, when they trust, what they trust. Lance, what was your take on this article?

SPEAKER_03:

I'm going to jump on the trust discussion real quickly and then back up. I appreciated that there wasn't a clear definition of trust, because, I guess to me, I keep thinking about, just in different conversations, how trust is contextual, and we can mean different things with what trust can be. And so, I don't know, for me that was relevant, because a good example is Google. Apparently I trust Google with so much of my life, and also I don't trust Google, right? Like, I have lived in the Google sphere, from my blog to my email to all these different things, and also I feel very uncomfortable about that. And so there's an interesting question of, do I trust? So, I mean, I liked the large overview of cognitive offloading. I appreciated that it wasn't treated as a threat. There are concerns about it, but it's not the threat that I see being invoked. And, you know, I think that's part of what came up in the, I know they didn't use this word, but the brain rot article from, you know, MIT. What I appreciate about it is something I grapple with, just big picture, around education, which is that we increasingly live in a more and more complex world, a world where we have to cognitively offload, we have to in order to survive. And I don't think we've yet really had good conversations about, so what do we... I guess, I'm around a lot of educators, also in the K-12 sector, and I hear about, oh, we're not doing this, or we should be doing more of that. And I'm always wondering about, but what comes off the plate? The things that we have to learn as we get to a more complex society, with an education system that's often structured around the agrarian society, what do we have to give up? Or what do we decide is no longer important? And handwriting is a good example of that for me: there's cognitive offloading by moving over to typing. And also, okay, if we're going to spend all that time on handwriting, then what don't we get to that is for the world that we live in? So I think, for me, the way that the article leans into that offloading, that there are some good reasons to be doing it, and that's some of what our goals are as we figure out AI, for me it was like, yes, we have to recognize that as these tools come into being, we are going to trade off things, and ideally we're doing that intentionally.

SPEAKER_00:

Yeah, that's such a thought-provoking thing, and something that needs to be laid out and made transparent the way you have. It's another thing we've been kind of dancing around in education. And I wonder whether we're seeing almost a weaponization, or something like it, of desirable difficulty. I mean, it's great that that idea of desirable difficulty got out there, that sometimes friction is good, effort is good, failure is good, all that. Right? But just because handwriting is harder, does it mean I'm benefiting from it more? And I think that can take people down some roads that are not productive. Like, you know, we hear about assignments from time to time where we say, well, I'm going to go in and have students highlight things in different colors because, you know, the brain, and here's a student, so they're working really hard. But does that produce the development of the skills for the complex world that Lance is talking about?

UNKNOWN:

Yeah.

SPEAKER_03:

Let me just jump in on the handwriting piece. I can't let it go myself. For me, educationally, I can mark a clear time when I went from being perceived as a mediocre student to being perceived as, like, a B student. And it happened, this was in middle school, when my teachers found out I had a computer and I could print, and I was no longer submitting handwritten assignments, but they allowed me to type it up. And because I write left-handedly, because my hand isn't steady, all of those things, that created a barrier to how even my work was being understood. And so, again, it's not universal, but there's something about that: by giving me that option, I mean, there's an accessibility conversation there and whatnot, but just having that avenue did change my trajectory, educationally speaking. And so that's part of what I think about: there are times these technologies have helped me be perceived differently than I would have been otherwise. And we know it's happening in the other direction too, but I think that's something we sometimes miss.

SPEAKER_02:

Yeah. I'm curious what you all think makes offloading via AI concerning, maybe in ways that differ from other types of offloading.

SPEAKER_00:

Well, I mean... I'll pick the low-hanging fruit here. And it kind of does tie back into that thread of, when are we going for product and when are we going for process? You know, if I'm unlikely to get better at something, like with me and my spatial navigation, again, it's so bad, I don't think throwing away my GPS makes much sense, but it lets me go out in the world and do things. It gives me the product, gives me access to that. So, yeah. But AI, that's what has walloped us in education, right? Here's something that can complete products that really do tie into some conceptual processes that we know need to be rehearsed and refined. So, again, that's the low-hanging fruit. I don't know what you all think about that.

SPEAKER_01:

Yeah. So I think the difference that I have with the analogy of offloading to a GPS is that this is a very specific task that we are doing to try to get from one point to another. This is not us learning the ambiance of a city or learning how to navigate it in the future, and that is something that we could do without technology much better. I think this is part of a broad question of how you introduce technology alongside the necessary foundational knowledge, the disciplinary knowledge, that will allow you to use the tools and the lenses of a specific discipline to evaluate information. And I worry that because of the cognitive offloading that AI poses, there are not going to be those opportunities for feedback for a student, for them to think, I actually may not have the best grasp on this, and to do that kind of deeper thinking. But a lot of it depends on what you're trying to do. If you're trying to make sure that everybody is at a similar level and you're spending all your time on tick-and-flick assignments, then you may not need to go as slowly as we would want you to for evaluating a source of information critically or writing a persuasive essay or conducting a lab to demonstrate a particular principle. So I worry about the cognitive offloading, and I worry about it particularly because of its dynamism and the way that it can respond so readily as a way of affirming what students are doing. So part of it depends on what the task is. And I would posit that the four of us have significant lived experiences with plenty of those feedback routes where we say, this is too hard for me, I'm not going there, but we have the professional corpus of knowledge that we needed to do the things that we wanted to do. And I'm very confident that most teenagers and people in their early 20s do not have that significant level of development. So how are we going to get them to think through it? That's where the offloading worries me.

SPEAKER_03:

Yeah, I mean... Where this fits with AI is, I think, a real challenge, both because, unlike other technologies that were a little bit more gradually deployed, it's kind of all here now. And that makes it much harder to figure out the good use cases, because it feels like almost everybody is using it now, all at once. And then I think to the other part about trust, and again, this is back to David's point: more often, adults will have developed a more layered approach to trust in technology and the entities behind it. Again, not saying youth don't, but the amount, the scale, is, I think, different. And so it's the offloading combined with the trust: the trust that it will always be there, the trust that it's giving the right answers, the trust, as we'll step into with David's piece, of just how it treats us, the relational dynamics. And so I think that's where it does create problems sometimes, problems that take the revamping of much of education, K through university, to address: how do we rethink and build more of that metacognitive approach to learning, versus, again, the, I love that, was it tick and flick? Was that right? That approach, or just, you know, the more overwhelmingly structured ways that education is dispersed in large pockets of the country and world today.

SPEAKER_02:

Yeah. Well, I'm going to move us on to that next article. I will add one inflection, I think.

I might think about expertise within a domain, as opposed to age or life experience, as a kind of predictive piece here. The deeper I am in a particular domain, the easier it is for me, certainly, to assess the output of an AI, right? And so I think it plays into that trust piece, and knowing when to use my own cognitive processes and when to put the training wheels on, or the GPS on, I guess.

SPEAKER_01:

Yeah, I will take the correction. I agree with you completely. It's a broad generalization, but I think part of it is the presumption that because I know the three of you and I know the work that you have produced, I have a pretty good proxy for some of your knowledge. And yeah, thinking about how we perceive an LLM and how they are designed around us, that was one of the reasons I was interested in exploring this article. So this is Sun and Wang, Be Friendly, Not Friends: How LLM Sycophancy Shapes User Trust. And I want to just provide a small context as to why I am looking at this particular topic. I taught a course in spring, which was Learning at Purdue in the AI Era, and the students were discussing topics each week with the goal of creating a deliverable set of guidelines for their peers. So we had six groups, and they came up with different kinds of guidelines, and they were reading scholarly articles and preprints and all kinds of things. And one of the things I got to do is pilot a syllabus bot. This is a bot that went into our learning management system, and I fed it all of my data, so it controlled for retrieval-augmented generation, and it gave me system-level prompt control, so I could basically set what its attitude was. And I made two bots, the Ted Lasso bot and the Severus Snape bot. And I'm happy to give you the commands if you want to put them in the show notes, but basically Ted Lasso was overly positive. He had to reinforce how great they were, how great it was to be at Purdue, how they could do anything. And Severus Snape was caustic and refused to answer a question directly the first time and would always redirect them back to the material. I had them ask questions, and I have great transcripts of what they came up with, but they liked the Lasso bot's tone, though it really didn't do anything for them. It could not give them strong answers. Snape was much better at strong answers, but it really rubbed them the wrong way, even though they knew this was a fictional character and they had all this context in advance. So I'm interested in that interplay, just like Michelle is, with the psychology and the motivation of how we are crafting the environment in a way that might increase the likelihood that students are engaging with material the way that the discipline wants them to, or in a way that would promote more critical thinking. And then part of this I'm also bringing up because there were a couple of preprints coming out that I can reference and have in the show notes, but basically one of them posited that the models that were more empathetic were less reliable on standard measures. And the other one showed how sycophancy was baked into these models, and if you started doing small things like first-person prompts rather than third-person, you would get higher degrees of sycophancy. So I think this is a feature, not a bug, meaning we're not going to engineer our way out of this. So I was interested in the interplay of sycophancy and trust. And so we have an experiment from our two authors. Basically, they have several different hypotheses, but they're essentially asking the research question of how sycophancy and friendliness jointly affect users' trust. That's the study. And they got 224 participants, they call them students, but they were solicited online through a response service, and they put them into two-by-two groups. So they tuned models.
And basically, it was sycophancy with high friendliness, sycophancy with low friendliness, no sycophancy with high friendliness, and no sycophancy with low friendliness.

SPEAKER_02:

And let me jump in just for a second, because it took me a second to understand what they meant by sycophancy.

SPEAKER_01:

Yes.

SPEAKER_02:

Could you give us a version of their definition?

SPEAKER_01:

Yeah, the best definition they have is aligning the output in response to the user's dialogue so that it more closely agrees with the user, regardless of the actual factual nature of the response. So it's that kind of thing: companies have an incentive to make the bots make you feel good about using them, and they have done that. But I think in the training, it's hardwired in.

UNKNOWN:

Yeah.

SPEAKER_02:

And so what did Sun and Wang find? What was their main result?

SPEAKER_01:

Yeah, so they found that if a bot was friendly, then sycophancy was seen as inauthentic. They thought it was inauthentic right away. If the bot was unfriendly, sycophancy was deemed more authentic. But most interestingly, that combination didn't alter their beliefs about the topic. They were asking the students to go through a dialogue on automated driving, because they said it's kind of an evenly perceived thing, we're about 50-50 on pro or con, and it didn't modify their beliefs. And just by itself, sycophancy doesn't make the students think that the bot is authentic; it's that friendliness and then the sycophancy that could modify the perceptions of authenticity, which for a lot of my students, when they went through this exercise, was a litmus test: I will engage with this, I will not engage with this. And so, you know, they have some recommendations for designing AI agents, but I would encourage people to read the paper to find those out.

SPEAKER_02:

Yeah, yeah. Lance, I'll go to you. What stood out to you from this article?

SPEAKER_03:

So I'm going to be the honest student here and say this is one I didn't get as detailed into. There's a lot there. I did find that question of sycophancy and trust, or friendliness, right? Or authenticity. It brought up some big questions about even what it means in our courses. The translation for me was that it moved into online asynchronous courses, where all you have is text, and it led me to think about how, historically, students have even perceived that as being a bot. I remember working with faculty in the 2010s, and the question would sometimes be, is this a robot, as part of an asynchronous course? And so this is where it led me: to think about that human element, and how and where it shows up, and how it shows up in text, particularly in these contexts. Because, I don't know, we see in our classrooms that there continues to be this different type of digital divide, a divide of how much you engage with a live person versus how much you engage with a mediated person. And sometimes, you know, we get close to it in virtual spaces, but in a lot of these asynchronous spaces... So it kind of led me in that direction, or at least it started making me think about that. And when I start to think about some of the large, massive higher ed institutions that are all online and have this very cookie-cutter structure, I don't know, it led me into darker places, I should be clear.

UNKNOWN:

Right.

SPEAKER_03:

Like, how something like this could be... The findings are helpful in thinking about how to build it well, but the findings are also interesting if you want to build that, air quotes, authentic relationship or authentic interaction with an AI, without it actually being authentic, for the benefit of profit.

SPEAKER_02:

And that's in education. You wanted to perhaps replace a Lance with a robot Lance. How would you design that robot so students were still trusting of this new AI instructor?

SPEAKER_03:

Yeah.

SPEAKER_02:

I don't know that we want to help people do that well.

UNKNOWN:

Yeah.

SPEAKER_02:

Yeah. Michelle, what about you? What are your thoughts on this article?

SPEAKER_00:

Oh, gosh. Well, because of Ted Lasso, I feel this gives me an opening. I have to say, when I thought of that combination, that interesting combination of low friendliness or likability and high sycophancy, The Big Lebowski, fans of this finest of all contributions to cinema, I'm like, oh my gosh, that's what happens in the film. There's a character who's not very nice at all, but he kind of is the yes man to the character, and that's why all these things blow up in the plot. But that really does lead me back to what struck me as I went over this, which is that there's some real continuity here to classical social psychology. And I like where it went with the psychology of it. I mean, superficially, I think most of us would look at it like, okay, either the bot is being nice or it's not. It's like, no, there are these very distinct underlying threads, and we're going to have to start thinking about these in a much more precise way as we try to construct things like even the simplest of chatbots and have them be for good and not for sort of evil, as you're talking about. And I think about, in particular, attribution theory. It's on my mind because I'm teaching intro again this coming semester. As we interact with other human beings in social contexts, we're not just taking actions at face value and out of context. We're actually running these very sophisticated, almost algorithms, looking at actions and context to say, why was this done? Like, even looking online, if you look on my Goodreads and I gave a book two stars, right? Is that good or bad? You go back and you say, well, does she usually give these a lot of stars? Or if we're walking down the hallway and we have those experiences where I pass my colleague and maybe I give you a sort of a glare or I don't smile at you, you're going to start thinking, oh, well, she never smiles at anybody, so that's okay. Or, she's so friendly usually, what's going on? And we may be doing a similar calculus of, okay, well, it made a statement, it's agreeing with me, what does that really mean? And it is really a testament, too, to how much we are projecting human-like characteristics, and call it unsettling, call it natural, I don't know. But here, too, I wonder if there are going to be some interesting individual differences. Like, we know that there are some individual differences. I uncovered some in research I did a while back. Nothing huge, but people who are high in something called suggestibility, it's a weird term, it's basically being able to project yourself into a sensory experience, or forget that you're watching a movie and really get into it as if it were real, those people respond a little bit differently in educational VR. So that's another road I would like for us to go down. And I guess, last of all, maybe just to hammer home: they did come at this with different kinds of positions, like what do you think about EVs or what do you think about this particular issue, and people didn't generally get persuaded. Sycophancy had these important effects, but changing your views on something you already had an opinion on was really not one of them.
So correct me if I'm wrong, but I think that's something that the authors talk about, and kudos to them for pointing that out: yeah, just because all this cool stuff is going on as a function of sycophancy doesn't necessarily mean I wake up and now I have a totally new view on the world. And I think that's going to be a problem. It's, you know, one of those superficial threats that probably we don't like: okay, it's just going to program people to think whatever. No, it's not as simple as that.

SPEAKER_02:

Yeah. And it was this notion of sycophancy that I was really keyed into. Yeah, it's not about persuading. I think my worry is more about our students, and maybe the general public, and how they react to AI chatbots, and what trust they put into chatbots that are telling them things that are not true. This is part of the sycophancy, that it's going to start to tell you you're right, even if the facts go against that. If I'm going to design a chatbot that people trust, I would rather it be neutral on the facts and friendly, right? Than grumpy and a sycophant, right? Both of those were categories where the users put trust in the chatbots, but I'm worried about the chatbots telling things to people that just aren't true.

SPEAKER_03:

It's what we have the rest of the internet for.

SPEAKER_02:

That's true. All right. So we're going to take this squarely into the educational space with this last study that we're going to discuss. Lance, this is yours to summarize for the group.

SPEAKER_03:

Sure. So the paper I was looking at was Impact of AI Assistance on Student Agency. It's by Ali Darvishi, Hassan Khosravi, Shazia Sadiq, Dragan, and I did not look up the pronunciation, Gašević, and then also George Siemens. And, you know, this study is looking at AI's role around student agency and self-regulated learning within peer review feedback. I appreciated that they tried to emphasize that what they're looking at is how AI is used in this particular area, because it's very easy to start to read some of the findings and, as we've seen with many other studies, extrapolate from there to all these other places. What they did is they had this peer review program that over 1,600 undergraduates across 10 courses participated in. In that study, they had four different experimental groups. They had the experimental group that, as they were giving feedback in this peer review process, would get prompts and support from an AI; a group that had no AI, just self-monitoring; a group that had basically checklists; and a group that would have both. I should back up. The first phase was that they all received AI prompts. And then in the second phase, they moved into this breaking up into groups to see what would happen, what the changes in feedback quality and such would be. What they found was that students without that AI support produced shorter, more repetitive, and less relevant feedback, indicating a lack of internalizing from the prior AI assistance. So for the first four weeks they were using it, and then after that, without it, they were just producing lower-quality feedback. The checklist group, in the second half, phase two, had partial support, and so there was better performance, but not as strong as when they were just using the AI. The hybrid group didn't outperform the AI-only group, possibly, and this was I thought one of the interesting things, due to cognitive overload and support redundancy. So that's a piece I really like; that's another thing we're navigating, that we want to give people both, and also, might that contribute to just feeling like, oh, there's now even more? And so, some of the things I thought were interesting from this: thinking about how and where AI is introduced, either as a scaffold, or how we scaffold off AI in certain contexts, kind of both scaffolding on and off. And then recognizing that too many supports, and this makes sense, especially when we think about UDL and how we might structure stuff, we don't want so many things, because there's just decision fatigue or cognitive overload. And then also thinking within this, which is kind of back to the core question of, how and where does this support agency for students? How does this help them really figure out their level of, as I mentioned, self-regulation in their learning? So I thought it was a really good piece. I'm curious about the tool that they use, which is called RiPPLE. I'd love to dig in and play around with that myself, just as a tool for exploring peer review feedback. But that's what we've got.

SPEAKER_02:

Thanks, Lance. I'll throw it over to Dave. What was your reaction to the study?

SPEAKER_01:

Yeah, so I had used this before. I'm a co-author on a draft about Purdue's own writing feedback bot experiment, and we cited Darvishi. This has been a pretty resonant message, and I think there are many other disciplinary studies that point to the same sort of thing: when you give the AI and then take it away, the students falter, and it becomes a crutch. So I think that is pretty well demonstrated. The thing I'm curious about, I've got two questions for this one. I think they were using SBERT, which I think in 2020, when they were doing this study, was fairly state-of-the-art, but it's not a frontier model now. And I believe this is one of the challenges we have with articles: if your premise is dependent on a model from several generations ago, does the premise still hold after a while? And I don't know, but I've not delved into this kind of model. And then the thing about peer feedback was interesting, because my experience with peer feedback scholarship at Purdue, and that of several of my colleagues, is that students don't necessarily view peer review as an authentic task. And so the AI is a way for them to outsource the task. They outsource it, and then all of a sudden they're asked to do it on their own. I'm wondering how motivating that is and whether there are any confounding factors in it. So does that influence their perceptions and actions? But again, I think the framework of what they did, how they did the four groups, that is a wonderful model for any localized disciplinary study where you're looking at AI aiding and then AI taken away versus self-monitoring.

SPEAKER_02:

That's a good point about the students' views on the task that they were given. That was my critique of the MIT study that you mentioned earlier: participants were brought in to write SAT essays, and that just felt like a very inauthentic task that any sensible person would outsource in that environment. And so I do think that's an important thing to explore in this. Michelle, thoughts on this article?

SPEAKER_00:

Oh, yeah. You know, I'm glad I read it, but boy, complicated thoughts and feelings here. I mean, the point is well taken, and I love this that y'all have articulated so well so far: just because you scaffold something and then take the scaffold away doesn't mean that the learning is just going to stick with students. I mean, that's the way I pictured it in my mind, but now I'm questioning a lot of scaffolding practices, not just ones that I might have with AI. So yeah, once again, maybe transparency, and what would happen if we deliberately said, okay, this is changing, now do this differently, and then did some kind of check to say, did that actually take with them? I think that's important, especially as the newer tools make scaffolding something we can do much more easily; it's going to be everywhere, and we have all these new options. The biggest issue I have with this article is that concept of agency. I read it as, okay, the quality of the work they're doing in the context of peer review, yeah, absolutely, and that could have been plenty for this article. But then we have to say students' agency over learning, and I didn't make that leap, I really didn't. And you know, that's an interpretation thing we can get into, but the wider problem, as perhaps the study gets out there, is that dynamic again, that people are going to look at this and read the headline and go, oh, students are going to turn into these agency-less zombies because they used ChatGPT. And that's a great point, as Dave said: people are going to assume that we're talking about the ChatGPT they're familiar with, and no, it's this one specialized tool that isn't that at all. So that's the headline that gives me the nightmare: okay, AI destroys agency. And it is a dense technical article. That's not a criticism; this should be a dense technical article. But I can see people's eyes glazing over when they see the four groups and the thousands of data points and go, oh, well, they used AI and now they can't do things on their own. And that's not what they're saying. There's this great educational message that could get lost in that.

UNKNOWN:

Right.

SPEAKER_03:

I'll jump on that, because that was, for me, some of the technical stuff. I should also clarify: I've often said math and science are important, but they're not my strong suits. Once you get started talking about the quantitative stuff, it does take a lot of time for me to process. It's just not something I am naturally good at, and I've been practicing on and off for years, and I don't know that it's ever going to be something I settle into. So I did find that to be my own little bit of a struggle through it. And to that point, I'm going to suggest this, and this was part of our pre-conversation, because I think it's important: I do increasingly think, and abstracts are not it, that there needs to be a reasonable cheat sheet with research. And in this case, we're talking about AI research and AI in education research. Something I had mentioned in our previous conversations is that back in January 2023, I started to collect articles around generative AI in education. I had several Google search alerts, and every time articles popped up, I started to download them, at least the ones that I could. There's a handful I couldn't get to because of access. But I probably stopped after about a year and a half, and I already had like 2,000 articles. Now, again, plenty of those articles are probably not great, or mediocre, or what have you. But just now, I went to Google Scholar and searched the quoted phrase "generative AI" along with "education," and just from 2023, they have 26,000 articles.

UNKNOWN:

Wow.

SPEAKER_03:

Right. It is in many ways awful, because there's just so much to get through and sort through and find. And the practicality, the applicability, of this stuff is going to get lost, or, you know, to your point, Michelle, are you going to dig into that, or are you going to miss the really useful "here's what I do with it"? And I don't know that SoTL folks as a whole, or educational scholars as a whole, have the means to unpack just those 26,000 articles alone, never mind anything else.

SPEAKER_02:

That's a lot of podcast episodes to cover all of that. That's a lot. Yeah. Well, our conversation today about that first article has gotten me thinking about what we mean by trust. What are the ways we think about or make decisions about which technologies we trust, and what does that mean? This article got me thinking about, if students get AI help with something and then you take the AI away and the learning wasn't sticky, well, what about me? When I help students, is that learning sticky? I don't know. How do we know these things? How do we understand these things? How can we figure that out? And that's why I think getting into the details of these studies is really important, because it sheds light on these nuances. Even if sometimes the practical applications aren't immediately obvious, it gives me a lens to think about what's happening in my own teaching a little bit more concretely and deeply. Well, thank you all. That's a good place to end it, I think. There are certainly more articles and more we could say about each of these, but I enjoyed today's conversation, and I hope our listeners did too. And I want to thank our panelists for being here today. Thank you all.

SPEAKER_00:

This is a great opportunity. Thank you so much.

SPEAKER_02:

Thank you. Really appreciate the conversation. Thanks so much to the panelists for this first edition of Study Hall. Lance Eaton is the Senior Associate Director of AI in Teaching and Learning at Northeastern University. Michelle D. Miller is a Professor of Psychological Sciences at Northern Arizona University. And David Nelson is Associate Director at the Center for Instructional Excellence at Purdue University. I invited each of them to be part of this episode because each of them regularly shares interesting articles on AI and education on their various platforms. See the show notes for links to Lance's blog, Michelle's newsletter, and Dave's LinkedIn page. Now it's your turn. What are your thoughts on the studies we discussed today? What do you take away from our discussion that you can use in your teaching? And how do you like this new format? Would you want to hear more Study Hall episodes in the future? You can contact me by email at derek at derekbruff.org or click the link in the show notes to send me a text message with your thoughts. Just be sure to include your name if you use the text message option. Intentional Teaching is sponsored by UPCEA, the online and professional education association. In the show notes, you'll find a link to the UPCEA website, where you can find out about their research, networking opportunities, and professional development offerings. This episode of Intentional Teaching was produced and edited by me, Derek Bruff. See the show notes for links to my website and socials, and to the Intentional Teaching newsletter, which goes out most weeks on Thursday or Friday. If you found this or any episode of Intentional Teaching useful, would you consider sharing it with a colleague? That would mean a lot. As always, thanks for listening.


Podcasts we love

Check out these other fine podcasts recommended by us, not an algorithm.

Tea for Teaching
John Kane and Rebecca Mushtare

Teaching in Higher Ed
Bonni Stachowiak

Future U Podcast - The Pulse of Higher Ed
Jeff Selingo, Michael Horn

Dead Ideas in Teaching and Learning
Columbia University Center for Teaching and Learning

First Player Token
Derek Bruff

The American Birding Podcast
American Birding Association