
Intentional Teaching
Intentional Teaching is a podcast aimed at educators to help them develop foundational teaching skills and explore new ideas in teaching. Hosted by educator and author Derek Bruff, the podcast features interviews with educators throughout higher ed.
Intentional Teaching is sponsored by UPCEA, the online and professional education association.
Peer and AI Review of Student Writing with Marit MacArthur and Anna Mills
Questions or comments about this episode? Send us a text message.
Today on the podcast, we learn about one initiative that offers a path forward for AI and writing instruction. It’s called the PAIRR Project, where PAIRR stands for peer and AI review and reflection. This approach takes the well-established peer review pedagogy used in writing instruction and adds a layer of AI-generated feedback on student writing. PAIRR has been developed and tested by dozens of faculty at public colleges and universities in California, and I’m excited to have two of those faculty on the podcast today to tell us about it.
Marit MacArthur is a continuing lecturer in writing at the University of California at Davis and one of the principal investigators on the PAIRR Project. Anna Mills teaches writing at College of Marin, a community college, and brings her experience with open educational resources to the project. Marit and Anna and I talk about student voice, AI literacy, metacognition, the importance of prompt testing, linguistic justice, and more.
Episode Resources
The PAIRR Packet, https://pairr.short.gy/packet
The PAIRR Project, https://writing.ucdavis.edu/pairr
Marit MacArthur’s faculty page
“Peer and AI Review + Reflection (PAIRR): A Human-Centered Approach to Formative Assessments,” Lisa Sperber, Marit MacArthur, Sophia Minnillo, Nicholas Stillman, and Carl Whithaus, Computers and Composition, June 2025
“Comparing the Quality of Human and ChatGPT Feedback of Students’ Writing,” Jacob Steiss et al., Learning and Instruction, June 2024
“What Past Education Technology Failures Can Teach Us about the Future of AI in Schools,” Justin Reich, The Conversation, October 2025
Podcast Links:
Subscribe to the Intentional Teaching newsletter: https://derekbruff.ck.page/subscribe
Subscribe to Intentional Teaching bonus episodes:
https://www.buzzsprout.com/2069949/supporters/new
Support Intentional Teaching on Patreon: https://www.patreon.com/intentionalteaching
Find me on LinkedIn and Bluesky.
See my website for my "Agile Learning" blog and information about having me speak at your campus or conference.
Welcome to Intentional Teaching, a podcast aimed at educators to help them develop foundational teaching skills and explore new ideas in teaching. I'm your host, Derek Bruff.
Derek Bruff:Writing instructors have been on the front lines of generative AI in education since ChatGPT was released in late 2022. The ability of students to use AI as a ghostwriter has led not only to hard questions about academic integrity, but some deep discussions in the field about what it means to teach writing. The pressure to respond to generative AI has led to a lot of innovation in writing instruction, and I think it's wise for those of us in other parts of Higher Ed to look to the field to see what they've figured out.
Derek Bruff:Today on the podcast, we're going to learn about one initiative that offers a path forward for AI and writing instruction. It's called the PAIRR Project, where PAIRR has two R's and stands for Peer and AI Review and Reflection. This approach takes the well-established peer review pedagogy used in writing instruction and adds a layer of AI-generated feedback on student writing. PAIRR has been developed and tested by dozens of faculty at public colleges and universities in California, and I'm excited to have two of those faculty on the podcast today to tell us about it.
Derek Bruff:Marit MacArthur is a continuing lecturer in writing at the University of California, Davis, and one of the principal investigators on the PAIRR project. Anna Mills teaches writing at College of Marin, a community college, and brings her experience with open educational resources to the project. As they share about the PAIRR approach and the impact it's having on student writing, please know that Anna and the team have shared so many resources from the project that other instructors can adapt and use for free. See the show notes for links to these resources.
Derek Bruff:Marit and Anna and I talk about student voice, AI literacy, metacognition, the importance of prompt testing, linguistic justice, and more. Here's our conversation.
Derek Bruff:Anna and Marit, thank you so much for joining the Intentional Teaching Podcast. I'm glad to have you on the show today, and I'm glad to talk about this PAIRR project you have cooking. Thanks for being here.
Marit MacArthur:Thank you.
Anna Mills:Thanks for having us.
Derek Bruff:Before we talk about AI and all those things, um, I'm gonna ask my usual opening question, which is can each of you tell us about a time when you realized you wanted to be an educator? And uh I'll start with Anna.
Anna Mills:I love this question and I spent a lot of time thinking about it. Um I think it would be in my first year in college, in my modernism class, when my professor actually seemed interested in the things I was reflecting on in response to the reading. Um you know, it wasn't for a grade, it was really like, how is this meaningful to me in my life and how do I want to push it further? And it was like he actually thought it was interesting. And so there was this intrinsic meaning and actual meeting of humans there that just felt so satisfying. And that's something I could see that I would want to do as a teacher too.
Derek Bruff:Okay. You enjoyed being on the student side of that, and you could imagine yourself being on the teacher side of that. Yeah, yes. Yeah. Oh, that's lovely. Yeah, that human connection when we were invested in each other. I love that. How about you, Marit?
Marit MacArthur:Yeah, so for me it was a lot later. Uh I was in graduate school, getting a PhD in English, and I had gone to graduate school because I liked reading and writing, and I didn't want to work. So it was just kind of a deferral. And I honestly had no idea that I might end up a professor. Like I really didn't, which is kind of amazing because both of my parents did some work in grad school. Anyway, at that point you're in a lot of seminars, and I found, I mean, you know, I kept myself from talking all of the time, but I just started to feel like, I have a lot to contribute, I think, or I'm feeling that I have a fair amount to contribute. And I was also TAing at the same time, and I was listening to professors give their lectures, and it started to feel like, okay, I think this might be a natural transition. I think it's time for me to be the one running the class, not because I wanted to take over or didn't respect my professors. I did, I learned a lot from them, but I was like, I think I am ready. I think I have plenty to say, and I need to transition from the student role to being a teacher.
Derek Bruff:Wow. Yeah. I think I probably had a similar moment where I figured out I was able to explain. I'm a mathematician by training, so my field is very different. But I found these moments where I was able to explain a hard concept to someone and the light bulb went off, and I was like, oh, I have something here that I can offer.
Marit MacArthur:Yeah. Yeah.
Derek Bruff:No. Well, um, let's flash forward to today. And I want to talk about the PAIRR project. And I want to start fairly concretely so we can kind of maybe work through an example or two. But what does the PAIRR process look like in a writing-focused course? What is the PAIRR process and maybe what would it look like for a particular assignment?
Marit MacArthur:Sure. So the PAIRR process itself is pretty simple. It has five steps, but most of them, maybe four of the five, most people are already doing when they teach writing. So step one, students write a draft. Step two, they go through peer review. Step three is new. That is the point when they get AI feedback. And it's not just throwing your draft in any old chatbot with no privacy protections and no guidance, just like, how can I make this better? It's using criteria for that assignment, assigning the chatbot an appropriate role for giving the feedback. And ideally, it's a closed system where the student's writing is not shared as training data, and if they are using a chatbot in the wild, the privacy protections are on, right? So draft, peer review, AI review. And then step four: before they revise, they reflect on and compare both the peer and AI feedback. And we feel that this is a really crucial step for building AI literacy. You know, I'm sure we'll talk more about what that is, but in order to collaborate with AI tools, you need to critically assess the output rather than just blindly trust it or blindly implement it, right? And then they revise. So those are the five steps. The fifth step is revision. I would say, though, that there's a really important step zero that I'm sure a lot of people are doing in different contexts, which is having students read, reflect on, and discuss readings about AI to kind of give them some background on how these tools work, what are some of their serious limitations and possible benefits and opportunities they provide. And that conversation is really crucial before they start using PAIRR. And basically, in my classes, for the students who opt in to participating, we do the PAIRR process for every major assignment that they write.
And then the students who don't want AI feedback, which is completely legitimate and fine, a lot of us are asking to get one additional form of human feedback, and they still need to reflect on and compare the different feedback they got before they revise. So that's it.
Derek Bruff:Okay. Okay. Um, I already have questions, but uh Anna, is there anything you want to add to that?
Anna Mills:Um no, I think that's a great description. And you know, we'll kind of get into the philosophy of that a little bit more. Um but it essentially means you're adding one process assignment where they're reflecting, and they're reflecting not just on the AI feedback, but on the comparison with the peer feedback. And I feel like that's the heart of it.
Derek Bruff:And is that something students write up and turn in as part of the process?
Anna Mills:Yes.
Derek Bruff:Okay. This notion that we should ask students to reflect on the feedback they get from AI and essentially decide, do I take this, do I leave this, right? Is this helpful, is this not helpful, how is this working? I hear a lot of folks have some element of that in their assignments because AI is new and different, and we're all trying to figure it out, right? Like we don't even know the answers to that sometimes. Um but it sounds like you're also asking them to reflect on their peer feedback.
Marit MacArthur:Yeah.
Derek Bruff:Or for the students who opt out of the AI, some other human source of feedback. And so I'm curious, is that reflection on the peer feedback something that is traditionally done in writing instruction?
Marit MacArthur:Yes, at least in my experience. And what about you, Anna?
Anna Mills:Yeah, I think it's definitely seen as a best practice to encourage sort of metacognitive reflection all along the way. So thinking about, where am I in my drafting process, what do I think about this feedback I got from peers, and then sometimes a reflection on how I revised that they turn in when they turn in the final draft. Um, so it's understood that awareness of their process and their strategies and their reactions is an important part of teaching writing, because that's part of teaching more flexible thinking processes and growth in how they write. Yeah.
Derek Bruff:Well, and I like that, because sometimes I found that when I am using a new technology or I'm asking my students to do a new type of assignment, right? The first time I had my students make a podcast, I had to be a lot more intentional about the scaffolding and the structure and the process, because I had never given a podcast assignment and most of them had never made a podcast episode. And so I couldn't just assume that they were approaching it with a useful tool set. And then after I did that a couple of times, I thought, you know what, all of my assignments need scaffolding and structure and process. Right? Not just the new, not just the novel ones, because they're often novel for students. Um, so what about instructor feedback? Is that part of this process? Does that come later in the sequence?
Marit MacArthur:I think that depends on the instructor. So some instructors do give feedback on drafts, formative feedback, and some instructors don't until the you know evaluation or credit phase. So that kind of depends.
Anna Mills:But I think the idea is that we're not taking anything away, that we are emphasizing the value of both the peer and the instructor feedback in a human-centered writing process. That's what gives the writing meaning, is that you have a human audience that you're communicating with. Um so the AI feedback really is supplemental to that. And so in my mind, it's kind of writing teachers trying to take the helm and say, let's put AI in its place within this human-centered writing process, and sort of keep the values that we have and invite AI in where it can supplement. Um in not an authoritative role, but in a role of, you know, sparking student thinking and authorial decisions. It's not the oracle. We want them to question it.
Derek Bruff:Yeah. I say that a lot about AI. It's not an oracle. It puts words together in interesting ways, but that doesn't mean it knows things. Um because I'm also thinking, I had on the podcast a couple of months ago Matthew Clemson, who teaches biochemistry at the University of Sydney. So very different field. He was using a tool to create a custom chat bot that would answer student questions about biochemistry. Um, and I asked him, Are you worried that it's gonna get things wrong? And he said, Well, yeah, it's gonna get stuff wrong. I get stuff wrong sometimes. Like I want all of my students to always be having a little bit of skepticism about the information they receive and maybe develop some tools for vetting biochemical information wherever the source is. Um and so I like positioning AI as it is one of these sources of feedback. It is not more authoritative than others. Although sometimes I think our students see authority in the AI that perhaps it doesn't have. Yeah.
Marit MacArthur:Yeah, and I should say that the reflection questions are really important, you know, and we do remind them AI can be wrong. Uh, it can make mistakes, you should not automatically trust it. And you need to think about whether the feedback both from peers and AI aligns with your purpose and audience and so on. So uh from the very beginning, we encourage them to look at it skeptically.
Anna Mills:I mean, we even have sample kind of like phrases that they can use to chat back to push it and to say, you know, that's not resonating, or I'm not sure I agree with that. Um, you know, can you help me explore my ambivalence about it? You know, so that, you know, yeah, we want them to feel like the authority on what it is they want to communicate and um you know, in relation to peers and teachers and AI. Um, so I think giving them examples of that, um, really trying to encourage that in concrete ways. Um we hope that builds a certain kind of AI literacy as well.
Derek Bruff:Yeah, yeah. Well, I'm I'm thinking of someone else I had on the podcast, one of my University of Virginia colleagues, um, uh Spyros Samotas, who was having his students get feedback on their writing from an AI, and he found that they weren't going back and forth with the AI as much as he would like. They weren't pushing back. Um, and so he actually it's one of the reasons he's developing a custom chat bot so that the AI would behave a little bit differently and try to engage in more conversation and less, you know, question answering. Right.
Anna Mills:Um yeah, we're now requiring two follow-up chat responses as part of the assessment. Yeah.
Derek Bruff:Okay. You can't just say, give me this feedback, and then walk away; we're gonna interact with the feedback. So what's the origin of this project? Where and how did it get started?
Marit MacArthur:So it did start at UC Davis. My good friend and colleague Lisa Sperber and I were thinking about doing something together, partly because we were in a large writing program at that point, the university writing program, and people were freaking out. And, you know, it was really like, what can we do? And I had a kind of unusual perspective because I had worked very intimately with a large language model for speech recognition for the last 10 years in my voice studies research, and just kind of understood what was going on under the hood from doing that. It was an open source tool. Anyway, so I guess I wasn't as bowled over by LLMs the way some people were who were less familiar with them. Lisa was reading something about pairing peer and AI review, and we had this really crucial conversation, I remember, because we were thinking about AI and the writing process in different ways and kind of experimenting in our classes. And we were like, okay, when should they get the AI feedback? And I felt very strongly that they should get peer feedback first. Um, and it doesn't always have to be that way, but I'll tell you my rationale for why, and then I'll tell you how it got so big, because you also asked that. I feel that commercial AI is designed for experts, not for novices and students. And an expert can collaborate with AI in their field very effectively as long as they're not in a hurry, right? You can assess the output, see if it's doing what you want, and then you can tweak it and so on. And that requires high-level reading and writing skills and editing skills and expertise in your field. Our students are developing expertise; they don't have it already.
So if they've tried to write something, they've seen other examples of that genre that you provide in the class, and then also seen their peers try to write that thing and gotten feedback from their peers on their attempts to write that thing, at that point, I feel like they're in a stronger position to assess AI output and AI feedback. Um, and they can push back against it a little bit more. And in our first study from 2024, which was published in Computers and Composition in 2025, we coded a subsample of the reflections, because we had 654 students; I think it was 131 students. 25% of those reflections pushed back against the AI feedback in some way, were skeptical about it, or said it didn't align with their goals. And whenever I saw that, it made my heart sing. I was like, that is developing AI literacy, you know, to not just trust it. Anyway, so we came up with that sequence. We were also, with this larger group, curating readings and developing AI policies. And we just felt that it really wasn't fair that big tech had just dropped this like a bomb on schools, like, okay, guys, I know you're really busy teaching and everything.
Derek Bruff:In late November, no less.
Marit MacArthur:Yeah. And just figure it out. And we're like, this isn't fair. Educators really need support. Um, and we had done a lot of work in our writing across the curriculum program, which has since been defunded, but anyway. So we had a lot of faculty contacts, and we reached out and found some professors who teach large writing-intensive courses across the curriculum. So we chose three in STEM fields and then seven writing courses, lower division and upper division, composition and professional writing. And we ended up working with 654 students. That was our pilot project, which is big. And we really didn't have any funding to support it; we just did it. And then the California Education Learning Lab put out these calls for some large grants to fund work around AI and education. And I was like, I think we should go for one. And I think some other members of our team were like, oh my God, that looks like a lot of work. And you know, everybody helped, but it was a lot of work to be the lead writer for that. But we got it, and now we are working with four community colleges and three California State University campuses, and then more students at UC Davis. The impetus for broadening it was, you know, we had some pretty good results from our initial study, and we again felt that educators need more support around this issue. And if we have a good model that we want to keep studying and adapting for different contexts and adapting over time, because we're in a different moment with AI now, let's find some funding to do that.
Derek Bruff:Yeah. When did you get involved, Anna?
Anna Mills:Um, first I saw them give a presentation and I was like, I want to be on that team. Um, because I was already working with AI feedback, but more on my own and with a nonprofit ed tech company that was developing an infrastructure for inviting students to engage with AI feedback. And I was sort of a volunteer pedagogical advisor on that. So I had been working on this, and I felt like it was a very similar approach and values. And I love the idea of working on it with colleagues and sharing it more broadly. Um, because I have a background in open educational resources, and my impulse is always, let's put it out there. People might be interested, they might adapt it, they might try it. And so we kind of added in this piece where we're working intensively with faculty at these different campuses, but we're also sharing the materials publicly. And we did a public webinar, and we're sort of inviting broader discussion of this approach, if people want to try it or adapt it. And that's really exciting to me, because there's been such a warm response and curiosity about it. Um, it does feel like we're meeting a need. People want something concrete, something they can try, a form of guidance, and they want to be experimenting with it on their own terms as well. So I feel like we're supporting that.
Marit MacArthur:And I should say that it was Lisa Sperber who suggested inviting Anna to join our team, which was brilliant. I just didn't know Anna yet, but now I know how brilliant she is, and how generous and helpful. So, you know, we kind of developed the team from the different campuses through our professional networks, and it's a great team.
Derek Bruff:Yeah. So not all faculty that I talk to about AI and writing are curious and interested. Many are skeptical and resistant. And so what would you say to someone who really maybe doesn't believe AI has a place in this process? Why bring AI into this? Why not just stick with some tried and true peer review and reflection methods that are already in the field?
Marit MacArthur:Why not just stick with peer review?
Derek Bruff:Yeah.
Marit MacArthur:So I think a lot about equity, and that was a concern from the start. First I want to separate academic integrity from AI literacy. There is quite naturally a ton of concern about academic integrity and, you know, cheating with AI, and I have thoughts about that. But if we do not develop students' AI literacy, we really risk deepening the digital divide, right? So I was like, okay, AI is impacting writing instruction. Let's find a way to integrate AI in a way that builds AI literacy. That was really important. And the other thing in terms of equity: we do think we should keep peer review, right? But if there's a way that you can integrate AI in the writing process and build AI literacy and increase support in the writing process, that to me does seem like a win-win. So peer review is great. In an ideal world, and this is something that Sal Khan has been all excited about with Khanmigo, everybody could have a one-on-one tutor, right? And it's like, oh, Khanmigo can do that. But it's the human relationship with a tutor or an instructor, or a helpful peer, that is really motivational. I'm not saying that chatbots can't be motivational and helpful, but the fantasy that somehow we can just get rid of human relationships and use AI for writing support is just nuts. And eventually, who would write them letters of recommendation? Like, it's just crazy. Um so equity is a big motivation. I'm in a relatively privileged position. Most of the time I have two courses of 25 students each. Academic integrity isn't a big issue in those courses, nor is being overwhelmed by having to give feedback. If I gave all of my students formative feedback all the time, if I only have 50 or 75 students, I could do it.
It would be a ton of time, but I could figure out a way to do it. When you're a community college instructor or a CSU instructor with, whatever, 150 students a quarter in three or five different sections, you cannot give them formative feedback, or not all the time, right? And students really benefit from feedback on their drafts. And yes, you can teach peers to do that, and they can benefit a great deal from giving and getting peer review feedback. Nevertheless, they are students. And there is research that we relied on out of UC Irvine that studied AI feedback and human feedback on writing, and it found that human feedback is always better, except when criteria are used; then AI feedback is comparable to human feedback. So if it's a free and safe source of additional formative feedback in the writing process, why not provide that to students, as long as it's not replacing humans? And if it has the plus as well of building AI literacy, great. And then also, and we have brought this in more in the second and third years, linguistic justice and equity is a good reason to bring it in as well. Because if students are using AI, and I'm not trying to be some techno-determinist where it's like it's everywhere, but at least some students are using AI, and if they're using it as a ghostwriter, well, it's erasing their voices. They never develop their voices. So integrating it in this way, where it's not writing for them, but it's giving them feedback on their writing and they can push back against it, I think that creates a little bit more possibility that they might still retain and develop their own voices instead of just using it as a ghostwriter. So I have some other thoughts, but I babbled. So let Anna weigh in here too.
Anna Mills:Yeah. Um I think that it is important to push back against the idea that it should be everywhere in the writing classroom, and to recognize that it doesn't write the way that we do, that it's not an authority, it's not a true audience. But that doesn't mean that it can't stimulate the writer's thinking and even stimulate how they try to connect with humans. And I think what we've tried to do as we've worked on the feedback prompt is to shape it so that it complements thinking about, what will humans get out of my piece? It sometimes even says, readers might wonder this, or readers might be confused about this, kind of directing their attention to those human readers, and also asking questions about their purpose, about what they care about, what they're thinking. Um so that it supports the human author's development, their confidence, their questioning of their own thinking. So it's playing a very different role from what people associate with AI. It's not replacing that thinking. And we want to train students to use it in that way as part of AI literacy. So providing another model for what AI can be and how you can engage with it in the writing process, you know, it sort of channels that interest in AI in a better direction. That's part of the thinking too.
Marit MacArthur:And I would also say it's closer to how experts would use AI in the workplace, because you're working on a project, and if you bring in AI, you're gonna be like, well, I was using it this way, I'm not really sure, it might be helpful in this. You're talking about it with other people and pushing back against it and collaborating with your colleagues. I would say it's fairly realistic workplace preparation, even though I don't want everything to be, you know, professional.
Derek Bruff:Yeah. And, Anna, I heard you give a presentation recently where you used the phrase ethical tutor, right? A tutor who's not doing the work for the tutee, but is asking really useful questions to help the tutee develop. And so if students come in thinking AI is a ghostwriter, this gives them a different mental model for what role AI could play, as one of those ethical tutors. Um, I want to ask, because I gather it's taken a while to get a prompt together that will coax an AI chatbot to behave in these ways. Could you say a little bit about the effort required to do that?
Anna Mills:Fascinating and challenging and exciting. Um, and I think Marit and I were kind of leading that effort, which was an intensive collaborative effort among probably eight faculty over almost a six-month period, where we were testing prompts, trying out different versions. We each had a different feedback prompt, and then we were testing it and rating the results and meeting to talk about it, and really thinking deeply about our pedagogical philosophies as we did that, and our priorities, and about linguistic justice, with the input of linguistic justice and equity consultants. And then we figured out that we couldn't have one prompt to rule them all, as we had sort of intended to; we still disagreed. And so there was this moment where we said, okay, now we're gonna have multiple prompts and we're gonna let faculty decide. Um, and that's actually good for faculty AI literacy and sense of agency. And then there was some competition between our prompts. We kept refining them, revising them; they actually got more similar to each other in the process. So it was systematic, but it was also really about faculty talking about teaching together. It wasn't as much about prompt engineering. We brought in some prompt engineering principles, but it wasn't so much a technical process as a deeply collaborative discussion process, I would say. And I'm really happy with how far the results have come for our prompts. I keep thinking, okay, we got a better model, we got a better prompt, and we really are seeing the fruits of that. I don't know, Marit, how you're feeling about it now or what you would add.
Marit MacArthur:No, I am feeling good about it. And what I would emphasize is that it's been a really important collaborative process. And the reason you might need different prompts is that you're teaching different students at different levels to do different things. There are two pieces here. One is the feedback prompt you give the chatbot, and the other is the criteria, partly because of that early research, and ongoing research, James Purdy's doing some of this, about the importance of criteria driving the feedback. So I think I'm a little bit more obsessed with the criteria, partly because I had an experience like you did, Derek, with scaffolding your podcast assignment, where I was like, oh, I really need to make my expectations more explicit in my rubrics or criteria. And that has improved the feedback that we get. But there's the other problem, the character or so-called personality of chatbots. If they're super sycophantic, making them even nicer and gentler, I was finding that they could sometimes be encouraging of areas of a draft that were pretty weak. And I was like, ah, I'm trying to help my students get into law school or medical school. They need some honest, compassionate feedback when a draft is weak and needs a lot of development. But I also remember, when I was teaching first-year writing in the spring quarter at CSU Bakersfield, sometimes there were students who had failed the class previously, a writing and research course. And I just felt like half of the work was convincing them that they belonged in the classroom. And so the encouragement was, I would say, more important.
Whereas when you're a senior, a 4.0 political science major at UC Davis, you're like, I'm great, everything's gonna be fine, even though it's scary applying to law school. So students need different things, and the quality and approach of the feedback they get can be tweaked by the feedback prompt, and it can also be tweaked by the criteria. And those two elements are really important because you're talking to a computer program; it doesn't know your context.
Derek Bruff:Right. Right.
Anna Mills:You can give it the context as part of the prompt. And so that's also been really helpful, because the students have that concern about AI feedback, that, oh, it doesn't understand the assignment or the context. But when I've looked at the feedback, I'll tell them, well, we gave it the assignment and the criteria. And actually, the feedback it's giving you, I would agree with. And it does fit the purpose of the assignment. So I can reassure them of that. And the system can do that pretty well.
Derek Bruff:I want to circle back to one term that you both used that I don't always hear as part of the conversations around AI literacy and AI ethics, and that's linguistic justice. Could you say a little bit more about what you mean by that and why it's important in the writing context?
Anna Mills:Yeah, I think there's a lot of discussion and interest around this among teachers of writing, and concern about the sense that standard English is superior, and that that ends up sort of squashing students' authentic voices and making them feel like the classroom is a hierarchy that reinforces outside power structures. And there's been a lot of concern that AI would only reinforce that with language, because it is trained more on standard English from richer countries. And by default, if you just say revise this, it will take out all the African American Vernacular English, right? By default. So there's reason for that concern. And we wanted to see if the feedback could work against that concern as well, if it could support a more authentic voice and more varieties of English. So we gave it explicit directions to do that. And we tested it on some writers like Gloria Anzaldúa and Vershawn Ashanti Young, who choose to write in a non-standard English. And we found that it could be pretty affirming of that, and talk about how it was a rhetorical choice to reach certain audiences. So, I think again, it was a way to try to turn the expectations of AI around in how we use it, in how we direct it, and to have it work against these kinds of power dynamics that we don't want to perpetuate.
Marit MacArthur:Yeah, everything she said. I mean, I said some stuff about this before. I think that students discovering and developing their voices is not just a froufrou creative writing thing. It's like the way we would talk about a new interest of ours to a friend or partner, someone you're really comfortable with. It's freeing. You're relaxed, you're not worried about how you sound, and you're gonna get out not just enthusiasm but some rich detail, and it kind of helps you explore this new direction. Whereas if you're in a straitjacket of standard English and afraid somebody's gonna correct your punctuation, you may not even explore it in the same way or have the same expansive thoughts. And so there has to be a safe way for students to use language comfortably and freely. And I think Anna, in one of our linguistic justice and equity workshops, talked about giving students access to the language of power and then giving them choices. Like, okay, we talk a lot about code switching, but with code meshing, you may be able to bring more of your voice into your professional or academic writing, and not just get away with it, but maybe change things so that it's not flagged as incorrect or awkward or whatever. Having students assume, okay, I have to put on the academic English straitjacket every time I write anything in school, I don't think that's helpful.
And, you know, everything Anna said about how maybe we can use AI to help empower students to explore all of this, instead of just, okay, I'm gonna use it as a ghostwriter and I'm gonna erase my voice and sound like corporate America.
Derek Bruff:Right. I think that's interesting: you're trying to free the students to write in their own voice, and you're trying to put guardrails around the chatbot so that it says more specific things, right? You're putting it in more of a straitjacket so that it behaves in the ways you want it to.
Marit MacArthur:Well, yeah, because otherwise, you have to be very directive with chatbots, right? They don't know the context or your purpose unless you spell it out, because they're designed to sound pretty sycophantic and corporate. But that's not their only capability.
Derek Bruff:Yeah, yeah. Because you are often working against the choices that other designers have made in crafting how these chatbots work.
Marit MacArthur:Exactly. Yeah, and you're working against hegemonic training data.
Derek Bruff:Yeah, yeah. Well, I know there's a lot more we could say about the PAIRR Project, but you did mention that it's OER. So, Anna, where can listeners go to find out more about this project and access some of these great resources?
Anna Mills:We have a packet, which is a Google Doc with a lot of different tabs, and those tabs tell you our prompt and the readings we assign and the reflection questions and how to scaffold the follow-up chat, examples of feedback. It's basically all in there, and maybe we can put that in the show notes. We also invite you to join a listserv. You can share ideas about PAIRR on our Padlet. And we hope to do further public webinars; there's a recording of our webinar and the slides for that. So we invite you to join the larger community of discussion around it if you're doing anything with feedback, or are interested in it, or want to push back. We also have posts on social media, and we have a Substack. We'd love you to engage with that. So we're offering lots of different options, but I would say start with our packet and see what you think, and get in touch with us.
Derek Bruff:Yeah. Absolutely. And I'll put all those links in the show notes to make them easy to access. Thank you both for being here and for sharing about this. This has been really fascinating, and I can't wait to share it with my listeners. Thanks for doing this.
Anna Mills:Thank you so much for having us and for the wonderful questions.
Derek Bruff:That was Marit MacArthur, a continuing lecturer in writing at UC Davis, and Anna Mills, an English instructor at College of Marin. Thanks to both of them for taking time to come on the show and share about the PAIRR Project. As Anna mentioned, there are lots of ways to learn more about the project and get involved. See the show notes for lots of links.
Derek Bruff:Justin Reich recently published a piece in The Conversation comparing education's current response to generative AI with its response a generation ago to web searching. Most of what we taught students in the early 2000s about evaluating the credibility of a website, like trusting a website with a .org or .edu address, or one with an about page, or one that was updated recently, was perhaps well-intentioned, but turned out to be very bad advice. We just didn't know that until years later, when rigorous peer-reviewed studies started hitting education journals and we learned that other techniques, like lateral reading, were far more effective at evaluating online claims. Reich argues that we're in a similar spot now with generative AI, where we're mostly just guessing how education should respond. He writes, quote, there is a better approach than making overconfident guesses: rigorously testing new practices and strategies, and only widely advocating for the ones that have robust evidence of effectiveness. End quote. It might be years before we have the kind of research that Reich is talking about, but I think the PAIRR Project is a big step in that direction.
Derek Bruff:Intentional Teaching is sponsored by UPCEA, the online and professional education association. In the show notes, you’ll find a link to the UPCEA website, where you can find out about their research, networking opportunities, and professional development offerings.
Derek Bruff:This episode of Intentional Teaching was produced and edited by me, Derek Bruff.
Derek Bruff:If you found this or any episode of Intentional Teaching useful, would you consider sharing it with a colleague? That would mean a lot. As always, thanks for listening.