
Intentional Teaching
Intentional Teaching is a podcast aimed at educators to help them develop foundational teaching skills and explore new ideas in teaching. Hosted by educator and author Derek Bruff, the podcast features interviews with educators throughout higher ed.
Intentional Teaching is sponsored by UPCEA, the online and professional education association.
AI Teaching Fellows with Christopher McVey and Neeza Singh
Questions or comments about this episode? Send us a text message.
Christopher McVey is a master lecturer in the writing program at Boston University. Neeza Singh is a senior at BU majoring in data science. Last year, the two were partnered through the BU writing program's AI Affiliate Fellowship program, giving Neeza a role in Christopher's class supporting both Christopher and his students in responsible and effective use of generative AI in writing.
On this episode, I talk with Chris and Neeza about this innovative, AI-focused students-as-partners program. They discuss Neeza's role in Chris's writing course, how her work as an AI affiliate benefited both Chris and his students, and the potential for this kind of program to work in other disciplines. Chris and Neeza have lots to say about the role of AI in learning.
Episode Resources
· Christopher McVey’s faculty page
· Undergraduate AI Writing Affiliate Fellowship, Boston University Writing Program
· Syllabus for The Philosophy and Ethics of Artificial Intelligence
· AI Mini-Games for Peer Review, an activity by Neeza Singh and Christopher McVey
· The Case for Slowing Down, by Christopher McVey
· AI-Enhanced Learning with Pary Fassihi, Intentional Teaching episode 35
Podcast Links:
Subscribe to the Intentional Teaching newsletter: https://derekbruff.ck.page/subscribe
Support Intentional Teaching on Patreon: https://www.patreon.com/intentionalteaching
Find me on LinkedIn and Bluesky.
See my website for my "Agile Learning" blog and information about having me speak at your campus or conference.
Derek Bruff (00:05):
Welcome to Intentional Teaching, a podcast aimed at educators to help them develop foundational teaching skills and explore new ideas in teaching. I'm your host, Derek Bruff. I hope this podcast helps you be more intentional in how you teach and in how you develop as a teacher over time. Last year on the podcast, I talked with Pary Fassihi about the ways she was exploring and integrating the use of generative AI in the writing courses she teaches at Boston University. During that interview, Pary mentioned an AI affiliate program running out of the writing program at Boston. This program involved matching undergraduate students, the AI affiliates, with writing instructors, giving the AI affiliate a role in supporting both the instructor and their students in responsible and effective use of AI in writing. I was intrigued by this idea. I've seen a lot of different kinds of initiatives in the last two years to help faculty wrap their heads around AI's impact on teaching and learning.
(01:03):
But this was the first time I had heard about a students-as-partners approach like this. I asked Pary to connect me with an instructor and their AI affiliate so I could learn more. Pary introduced me to Christopher McVey, master lecturer in the writing program at Boston University. In the spring of 2024, Chris was matched with an AI affiliate, Neeza Singh, a data science major with a minor in anthropology who's currently a senior. I had a great conversation with Chris and Neeza back in December of 2024. Sorry, Chris and Neeza, for taking a few months to get this on the air. We talked about Neeza's role in Chris's writing course, how her work as an AI affiliate benefited both Chris and his students, and the potential for this kind of students-as-partners program focused on AI to work in other disciplines.
(01:52):
Chris and Neeza, thank you so much for coming on the podcast today. I'm excited to hear from you and get to know about your experiences with AI and teaching and learning. So thanks for being here.
Neeza Singh (02:02):
Great to be here. Yeah.
Derek Bruff (02:04):
So let me start by asking, what is an AI writing affiliate, and how does this role differ from other instructional roles that may be associated with a course, like a teaching assistant or a learning assistant?
Christopher McVey (02:18):
So the AI affiliate role was imagined as something that would be like a teaching assistant, but wouldn't have any of the annoying, painful work of a teaching assistant, in that the affiliates are not involved in grading, they're not involved in assessment or proctoring of exams. The main role of the affiliate was to help instructors understand how to integrate AI in their teaching, in their assignments, and to help the students learn how to use AI responsibly and well in completing those assignments. So it's not a content-driven role in the sense that they're not tutoring the content of the class. It's more of a tools mentorship role in which they're mentoring both the instructor and the students in how to use AI tools to complete the coursework.
Derek Bruff (03:11):
And so how did each of you come to be involved with this program at Boston?
Christopher McVey (03:16):
I taught a class in fall of '23 that was AI philosophy and ethics, and Neeza was a student in that class, and I was experimenting with allowing students to use generative AI in their submitted work. But then that semester we also got a funding grant for the following year to have ChatGPT Plus licenses for all students in these piloted classes, as well as to provide funding for AI affiliates. And so after that fall, I encouraged Neeza to apply for that role because she was one of my best students, and she did, and I was fortunate enough to have her as my affiliate in the spring.
Neeza Singh (04:02):
Yeah, it was one of my favorite classes actually. I mean, I took it because the university requires a writing class, but I chose that topic of class specifically, the AI ethics, because I'm a data science major, so a lot of my coursework is the technological background behind how AI is created, but not as much application and daily use. How do you test it in the real world, and the ethics that come with it? So this class was super interesting for me. I've always loved writing, kind of a good blend of STEM and humanities, if I were to self-describe. And I guess that showed in my work output, because I actually enjoyed what I was doing. That whole semester I was using AI in the writing class, and I know Professor McVey encouraged it, but not in the sense where we're talking about how you're using it from the very beginning of the writing process, more of just like, okay, you can use it to revise and edit and reword your sentences. So when this role came out, I was like, oh, perfect. I already know some tips and tricks I can pass along. And teaching in general has always been a passion for me. I've been in leadership roles, student government, and it also helps me get better at the subject. Whenever there's an exam, I'll be teaching the content to my friends, essentially, and they'll be doing the same thing, because that's just how you learn, through interaction. I learned a lot about AI and using ChatGPT in academia, way more by teaching in that class.
Christopher McVey (05:44):
I mean, I felt a little bit ahead of the game in the sense that I had been thinking about AI's implications for writing for a little while, but I didn't have an accurate understanding of how students were thinking about it and how students were using it. And then also, my background is in literary studies and composition and rhetoric, and Neeza's background is in data science and CS. And so she had two forms of knowledge that I didn't, and the first was just that student orientation toward it, how students are using it, how students are thinking about it. And the second is just the mechanics of how it works. And I think that led to her being more literate in its use in terms of effective prompting and so on, and creatively engaging with it in ways that I couldn't. We always got annoyed at our parents because they didn't seem to understand how to use technology in the ways that we did, and we weren't afraid to push buttons, but they were. And I really saw myself on the other side of the table, because I'd see Neeza interacting with ChatGPT in ways that I'd never think to. And so she's been really helpful in both those regards.
Derek Bruff (07:04):
Neeza, can you say a little bit more about your academic background and your experience with AI as you were going into this experience?
Neeza Singh (07:13):
Yeah, I mean, I started college as a CS major, so it wasn't directly related to AI. I hated it. So then I switched to data science because AI was the big new thing, and it was like we learn about all the CS concepts, but with an added emphasis on data. I took this class with the professor at the beginning of my sophomore year, so fall of sophomore year. Before that, all I knew was what an algorithm looks like, zeros and ones, binary numbers. I was just thinking mathematically about it. And ChatGPT was also very fresh then. So even the people with the background had a big learning curve too, because we're used to just typing things into Google and searching for them. But ChatGPT is supposed to mimic human interaction, so you have to keep interacting with it to get what you want.
(08:08):
So in the beginning, all these students were copying and pasting the essay prompts and just copying and pasting what ChatGPT outputs, and there was no back and forth. So I was kind of exposed to that for the first time in that class. I was learning about it too, and I was comparing the quality of the essays I wrote when I was a completely independently thinking human being without ChatGPT, and I'm like, I'm better than what ChatGPT is outputting. How do I get it to do that, so I'm saving time? So really, the only way you can get accustomed to this is by using it so much.
Derek Bruff (08:45):
Maybe let's talk a little bit about some of the work that you did in that spring course in terms of the assignments and the activities that it sounds like you kind of co-designed them. Can you share an example or two and then we'll talk a little bit more about the co-design process, but what did it look like in practice with the students in the spring?
Neeza Singh (09:03):
Yeah, so I was really lucky. I had full freedom in designing the 30-minute lecture I would give, and my professor would always be receptive to it. But the goal was to stay within the field of using AI for writing and not necessarily integrate what we were learning about in the content. But since the content of the class was AI-focused, a lot of the activities I did were more broad experimenting with AI. So when it came to the writing process, I would have presentations on different prompt engineering ways to get what you want, starting at the first stage of writing, finding sources. Here's a lecture one week where I'm just talking about how to use it to find sources, when to be cautious when it's giving you sources, that kind of thing. And then another whole topic was using AI to actually read and understand the material even before you get to that stage, which you wouldn't expect, because ChatGPT is where people go to get answers, not the tool that allows you to come up with those questions.
(10:13):
You know what I'm saying? So yeah, a lot of the instruction was about every single step of the writing process, and then a lot of live demos. Some weeks I would just have ChatGPT open, the screen projected, and I would be typing different ways they could get what they wanted out of their essay writing process. Sometimes I would preload the conversations to show them a compare and contrast: this is what happens when you don't use the prompt engineering techniques, look at the difference in its output. And then they would have time to experiment themselves. So it was very open-ended. I had some classes where I did actual activities with it. I think a really popular one was the AI mini-games, which I used for our peer review class. And just personally, I always hated peer reviews. I always dreaded them. So I was like, there has to be a way to make this
Derek Bruff (11:10):
More fun. Why is that? Did you not like getting feedback or giving feedback?
Neeza Singh (11:14):
No, it's not that. I just think there wasn't enough motivation for both partners involved to give deep, constructive criticism. It wasn't about the criticism part. It was actually that they would end up not giving you enough criticism, just based on how social dynamics are.
Derek Bruff (11:32):
Sure. Yeah.
Neeza Singh (11:34):
I was introducing AI as a third partner to kind of guide that, so they're not just sitting there like, okay, peer review, talk about your essays. It's kind of awkward being like, oh, I didn't like that you wrote this. So I was like, okay, this would be a great class to use AI in. So an example of an AI mini-game would be, I had a devil's advocate mini-game. You would use AI, put in each of your essays, and ask it to come up with potential counterarguments and rebuttals, and your partner would be doing that for theirs. So your partner would be giving you their feedback, but then AI would be kind of matching it or not matching it, so you think further. So there was kind of a buffer there. And I think another one was the elevator pitch.
(12:23):
So a tool that I really like to use after I write an essay is asking ChatGPT to summarize my essay in 50 words, so I know that I hit all the points I was intending to. You can do the same thing comparing it to the rubric of the assignment or the essay prompt. So kind of working backwards, making sure that you're in the audience's eye as the writer. That's the kind of stuff I encouraged, and it allowed for more interaction. I remember that class was really loud that one day.
Derek Bruff (12:59):
Yeah, I was wondering about that, because I've taught writing classes in the past. I'm a mathematician by training, but I taught a writing seminar in the math department for many years at Vanderbilt. And the peer review process can be a bit like pulling teeth. Students are either not engaged, thinking, I'm having enough trouble wrapping my mind around my own writing, now I have to figure yours out and try to offer constructive criticism, that's a pretty big cognitive load, or they're just hesitant to say anything too critical of a peer, being polite and nice people. And so, by bringing in ChatGPT with these targeted prompting strategies that were not just "what's wrong with this essay" or "make it better," but were really focused on particular aspects of the writing process, do you think that kind of catalyzed some better conversations among the peers?
Neeza Singh (13:52):
Yeah. So I'm going to bring up one of the activities. With that elevator pitch activity, the way we made it interactive was that each partner used ChatGPT to condense their partner's essay. So then you would be giving the elevator pitch to the writer, and they would tell you if it aligns or not. So it kind of takes the blame off the person. It's like, oh, I'm not criticizing you, ChatGPT is. You're still taking into account their own personal comments, but then there's an added activity you can do together that catalyzes more conversation based on what an external party said. So you're still involved in your partner's essay, but your spokesperson is ChatGPT.
Derek Bruff (14:35):
Right? You're not having to do all the heavy lifting yourself. And I would imagine on some level, just having that third voice in there gives you more to play with to talk about.
Neeza Singh (14:45):
Yeah, because ChatGPT, once you ask it a question, it goes on and on and on. So it gave them a lot of content.
Derek Bruff (14:53):
Yeah.
Christopher McVey (14:54):
Could I share an observation?
Derek Bruff (14:56):
Please do.
Christopher McVey (15:00):
So Neeza came to class once a week. The class met three times a week; she would come every Friday. And a lot of the work of changing the class from the fall to the spring, when we had the affiliate, was clearing out the schedule and finding a lot of time. And it was so challenging for me, because I really had to drop a lot of content, not just one thing here or one thing there, but a lot of content, to make space for this time to play around with ChatGPT and to think about ChatGPT and writing. And the kinds of activities that Neeza created or co-created with me take a lot of time. They can be messy, they can be playful, and you're not always really sure of exactly how they're going to end or where you're going to end up.
(15:51):
That sort of pushed against my usual inclinations in teaching, which is to have everything mapped out, to have a very discrete lesson plan with a clear beginning and end. I had to open up space to be uncomfortable with not being quite sure exactly how things were going to work or where they were going to go. But it was such a joy to see the students playing with the technology and thinking about it and learning from it. And in many ways, it was less efficient, in the sense that some of these tasks, like a traditional peer review, could probably be done more efficiently just by doing it the traditional way. But bringing in something like ChatGPT or AI and trying out different things, some of the games that Neeza has described, I think changed everyone's orientation to it and made us feel playful. Right. I'd say for me, it was one of the most exciting semesters I've ever taught in my 10 years of teaching at BU.
Derek Bruff (16:58):
Wow. Neeza, how would you describe your relationship with the students in the class?
Neeza Singh (17:03):
I think it doesn't even have to do with the generation gap between professor and student. It was more that since I was using this already and all students are using it, I was able to relate to them about what they would want to know, basically. So coming into that class, I didn't really have a plan. I didn't make a syllabus or anything, in my head or even in a document. I was just like, okay, what is something I did in this class that helped me find the best sources, that helped me read the best way? So it was mainly just personal experience. And I guess me being a student with the students made it easier for them, or more reliable, because I'm a student using this. I was actually younger than some of the students in class standing. So it was just the relatability factor, I feel.
(18:02):
And I held office hours, but a lot of students would just email me questions throughout the week too, about, oh, how can I use ChatGPT for this purpose? Not even related to writing. But there's no taboo when you're talking to another student about it. When you're talking to a professor about it, it's like, oh, I don't want him to know my exact process of how I'm using AI to finesse his class. Even though Professor Chris is super cool. He's always been super cool, especially in my class too, when we didn't have an AI affiliate. But generally, in the overall university, professors aren't cool about it. So having not only the professor have an AI affiliate, but letting her take the stage and be her own teacher in a way, I think took that stress out of the students' minds about using AI and let them ask questions about it openly.
Derek Bruff (18:58):
Yeah. Chris, do you also see that difference between how your students talk to you about AI versus what you're hearing from your peers who may not have had the same kind of context to make that okay?
Christopher McVey (19:10):
Yeah, I would say there's a lot of stigma around AI use, and students are reluctant to be transparent about how they're using it because they're worried that their professors will regard them differently or disregard them as a result. When ChatGPT was first publicly released, there was this sort of obsession with the crisis of assessment. If we use writing as one of the primary tools to assess student learning, what would ChatGPT or AI mean for the role of writing in how we assess whether or not students learn? But that kind of took all the oxygen out of the room, and it sort of prevented us from thinking about how we could actually use AI to improve learning.
(20:08):
But the result of that was that we, as a whole, took a very hard-line approach toward AI, and a lot of places moved to complete bans, and it was very much about policing AI use, and that's kind of what began the stigma. And so it's understandable that students are reluctant to be transparent about their use, but the problem with that is that they don't use it well, and they don't take the time to learn how to use it responsibly, and they want to. I really believe that students are thirsty, are desperate, to know how to use these tools well, but they face this challenge of what it might mean to be transparent about that use. And if they feel like they can't be transparent, then they're not going to ask the questions we want them to ask. So it's a hurdle that I think we're still facing, but I think having affiliates like Neeza and having classes that try to de-stigmatize it and really help students think closely and carefully about what the tools do well and what they don't do well, I think that's the best path forward.
Neeza Singh (21:25):
Yeah, I think it was common that there would be a stigma on it, because yes, you can very much cheat. Just like in my parents' generation, in math class it was forbidden to use a calculator, but now everybody uses a calculator in math class. It's just that the way we do math has been steered toward more higher-order, application-based thinking. Similar to what the AI affiliate program is doing, saying, okay, you can use ChatGPT for the nitty-gritty grammar, sentence structure, whatever, but being able to use it takes that extra thinking step, to know what answer you want it to give you. So education steers around this technology in every generation. The way that we learn things changes, and there are going to be cheaters inevitably, but there's no point in not embracing it for the good purposes earlier rather than later, when it's so taboo it's like you can't touch it.
Christopher McVey (22:30):
At the beginning of the class, the first major writing assignment students do is write a position paper on whether or not they believe students should be allowed to use ChatGPT for their submitted coursework. And I joke with students that when I grade their essays, I'm going to use ChatGPT to grade them, and then we all laugh and I say, no, I won't. Actually, I take this as the most important part of my job and what's meaningful to me as a teacher. But imagine a situation in which students use AI to write their essays and professors use AI to grade them. Then why do any of us need to get up in the morning? And everyone kind of laughs, but I think it's a joke with a purpose of really forcing us to ask why we're all here and what we want to get out of this experience together.
(23:30):
Students pay a very high tuition to attend BU, and they want to learn. They want to learn how to be better writers. They don't want to take shortcuts. The class allows them to use generative AI for up to 50% of their submitted coursework. And never have I seen a student submit 50% of their essay as generated by AI. The number is much closer to about 20%, I would estimate. And we know that because we ask them to highlight any AI that they generated, and they know there's no penalty for using a higher amount or a lower amount. And when we ask students, why did you use AI in this way, and why didn't you use AI, the response is almost always, because I like my voice better than the way that AI writes, and I can use AI to write a mediocre essay, but I don't want to write a mediocre essay. I want to write my essay. And so I think students really want to learn and do the right thing. And that's part of what's been so fun about teaching these classes.
Derek Bruff (24:39):
Neeza, can you maybe share a couple more examples of uses of AI that the students were surprised at? You mentioned them wanting to use it more thoughtfully and more effectively. What are some uses that the students didn't come up with on their own, but you were able to say, here's a thing that you might not have thought of that's really useful?
Neeza Singh (25:02):
So in those live sessions where I would just be talking to ChatGPT with the screen shared, just the way they worded the questions, I would tell them about one unique thing, role-playing, basically asking the AI to assume a role that you want it to portray and guide you or write, whatever the topic of their essay would be. Our big topic for the whole class was AI, but essays could be interdisciplinary. So we would be like, okay, if you were a doctor, how would you describe the ethics of AI according to this source? So kind of asking it to step outside of just being a large language model, and framing it so that you get the tone and desired output in a kind of personified way. So personifying ChatGPT, for instance. And then ChatGPT, in the Plus version, had this custom instructions tab, and they still have that, but now ChatGPT works differently, and it just remembers every single conversation you've had across chats.
(26:12):
So it becomes accustomed to your likes and dislikes, even for assignments that aren't in that classroom, for completely different purposes, it remembers. So I was teaching them to frame their questions and how they talk with the AI very strategically to get what they want out of it, especially with that custom instructions tab. You can just tell it, I'm a student, I mainly use ChatGPT to write essays, my tone is like this, in order for ChatGPT to keep reinforcing and reiterating that. Another big, kind of shocking suggestion I had that the students didn't know was importing your previous essays that you've written completely by yourself, and then asking it to mimic that structure, tone, voice, the word usage, if that's something they struggle with. So they keep most of themselves in the work they're producing, but also take way more authority and deeper thinking when it comes to talking with AI, because you really have to know what you want from it in order to use it purposefully. And that takes enough awareness and research on yourself to know. So yeah, it was kind of reverse psychology, because I'm teaching them how to basically be the AI's teacher, which is not generally the role you would see.
Christopher McVey (27:46):
Neeza taught that to me as well. I remember one class she was modeling for students how to interact with ChatGPT, and at one point the little thumbs up, thumbs down came up after the generated response, and Neeza said, oh, I'm going to hit the thumbs up because I like its response and I want it to keep giving that to me. And I thought, oh my God, I've never done that. And I don't know why I haven't. I guess I've always associated that thumbs up with social media, and I'm like, ChatGPT doesn't need me to like its response. But I realized that it does, and that part of tuning it, or getting it to work well, is to give it that sort of feedback. And I could see a lot of students noting that down. So she was really wonderful in helping the students think not just about how to interact with it, but how it works, and so how you can get it to work for you.
(28:45):
I also think the coming years will be especially interesting, because until recently, most college students discovered AI when they were in college. But now I think we're about to enter a time when students will have begun using AI in high school, and they're going to walk into their university classes with some AI proficiency, but it's not necessarily good. So we're really going to have to understand where students are in their use of AI: how have they used it, why have they used it in the way that they have, and what does it mean to use it well and responsibly at a college level? I do think we're going to be dealing with a different kind of student in the coming years, but I'm looking forward to it. I mean, I do. I really am. I think it's an exciting time to be a teacher, in that as many challenges as AI has given us, it's brought us back to some really fundamental basics about what we're doing as teachers and how we're using writing to learn.
Derek Bruff (29:55):
I'm glad you're excited. I think there are faculty who have some kind of change fatigue after the last four years, but this AI affiliate program was focused on writing courses, correct? Yes. Can you imagine a similar kind of program in other disciplines, and would that be helpful? Should every class have an AI affiliate?
Neeza Singh (30:15):
Honestly, I think yes, because I'm seeing this already in my other classes, in biology and anthropology classes, which you wouldn't really anticipate to be talking about AI. The whole first week this semester was about AI, and not just like, oh, don't use it to output all your work, but using it in the right way to get the answers that you want, as a study tool and to find sources. So there's already a shift in sectioning out class time in the beginning of the semester to address it, which I think is really cool.
Christopher McVey (30:55):
I would completely agree with Neeza. Students aren't just using AI primarily for writing. They're using it to study. They're using it to explain an idea. They're using it as a reference tool, and it's going to be increasingly relevant in discipline-specific ways. So I almost think every discipline needs to think about the implications of AI in its field and incorporate what that means into undergraduate curricula, because students want to learn about it. I know I have students who are juniors and seniors now who feel that some of the skills they learned are already obsolete, because they learned them before the general advent or spread of AI, and in order to be competitive for a job, they want to be able to talk about what AI means for whatever career they're seeking. And so I think it's something that everyone needs to pitch in on, everyone from every department. If there was ever a moment to be interdisciplinary or transdisciplinary, this is it. I think every program and department has some perspective to share. We can't put the work of teaching AI ethics on CS or DS programs; maybe that should be the responsibility of philosophy or the humanities. And I think if the humanities simply reject AI, we're not going to be able to share the perspectives that we have. And same thing for the social sciences, for anthropology, for psychology, for everything. I think this can be a really big tent, and it kind of takes all of us, and that's what makes me excited. I love that. Yeah.
Neeza Singh (32:53):
I think even now, most teachers introduce AI as just an external object they have to deal with, and the focus isn't on using AI as a tool in the learning process of the content itself, and how the way we learn information is going to change because of AI. Memorization-based homework assignments or quizzes are no longer going to be fulfilling; you can have ChatGPT open in another tab. But what I was talking about, having that higher cognition to ask ChatGPT for what you want it to give you, is honestly a more advanced step in the learning and thinking process. Because although maybe people will forget how to do grammar and spell things, at least they'll know how to think about an idea in multiple content streams, to think about the counterargument of it, because ChatGPT is good at identifying those things. So being your own devil's advocate, for instance, is one of the ways that it's going to force a different type of learning.
Derek Bruff (34:04):
I love your optimism. I'm also remembering the days when we were like, oh, we don't have to do memorization quizzes anymore, because now we have Google. And some disciplines and some courses made a shift to different types of learning objectives, certainly, and some didn't. Right?
Christopher McVey (34:21):
I think one challenge for educators is that in many ways, these products are designed to be efficiency tools, but efficiency is the opposite of learning, in the sense that learning happens in moments of friction and inefficiency. So I think it requires a little bit of practice and thinking about how to work with the tools in a way that potentially slows down or interrupts what might otherwise be a more efficient, traditional learning process. It runs kind of counter to the way the tools are designed, but I think we can do it, and they can be used in these ways, even for reading, for example, to ask AI about a concept that was difficult or that you don't understand, or to use it as a reference tool to learn more about the author and so on. We could previously do these things, I guess, with Google or with an encyclopedia, but you can do more with some of these models and go further. It'll take a little work, but I'm excited about it.
Derek Bruff (35:36):
Yeah. Well, what's next for both of you in your continued explorations? Neeza, you said you're getting to serve as an AI affiliate again, is that with Professor McVey or you have a new partner?
Neeza Singh (35:46):
No, no. I have a new partner. I'm a little bittersweet about it, but it's going to be really different because this is not an AI content class. I'm not sure what the content is yet, but I'll have to find unique ways to blend the topic of that class with AI. So I think that will be a new challenge for me, forcing me to be more interdisciplinary with it. And I mean, I've learned a lot since a year ago when I did it, so I already know some of the tricks I'll show them.
Derek Bruff (36:20):
Sure. Yeah. Well, the tools have changed a lot too in the last few years.
(36:24):
What about you, Chris? What is your spring semester going to look like?
Christopher McVey (36:28):
In the spring I'm still teaching the same AI philosophy and ethics course. Down the line, however, I'm hoping that because we have so many teachers and educators at BU who are really interested in integrating AI, maybe in the summer of 2026 we can offer some kind of week-long seminar institute for instructors, both within BU and also locally in our region, just to talk about what we've learned from teaching AI and teaching writing with AI. I think instructors are now really interested in figuring it out and learning how to do it, and I think we're building talent and experience at BU. So for me, I'm hoping to plan out what that might be and maybe begin that week-long seminar in summer 2026.
Derek Bruff (37:28):
That sounds great. Well, thank you to both of you. I do feel like in the past couple of years, the field of writing instruction has had to lead the way in making sense of AI, and I know those of us in other disciplines are very thankful for you all figuring some stuff out and helping us figure out what we need to learn as well. So thank you both for coming on and sharing your experiences. I really appreciate it.
Neeza Singh (37:51):
Thank you. I had a lot of fun.
Derek Bruff (37:52):
Thank you. Yeah, this was great. That was Christopher McVey, master lecturer in the writing program at Boston University, and Neeza Singh, a senior at BU majoring in data science. Thank you to both Chris and Neeza for taking the time to share their experiences as part of the AI affiliate initiative. And thanks to Pary Fassihi for connecting me with Chris and Neeza. In the show notes, you'll find more information about the AI affiliate program, as well as a link to that AI Mini-Games activity for peer review that Neeza mentioned. I've also linked to a recent essay by Chris titled "The Case for Slowing Down," in which he elaborates on a couple of the points he made during our interview.
(38:34):
Regular listeners know that I have a Patreon that supports the podcast and that I often share on Patreon a few excerpts from my interviews that didn't make the final cut. This time, I have a fun clip from my interview with Chris and Neeza in which Neeza teaches me something I didn't know about ChatGPT. You can find that at patreon.com/intentionalteaching or check the show notes for a link.
(38:56):
I would love to hear what you think about this kind of AI affiliate program. Do you have something similar at your institution? How do you go about fostering productive conversations between students and faculty about generative AI? In the show notes, you'll find a link to text me your thoughts. Be sure to mention your name in your text message, and thanks to my podcast host Buzzsprout for making that possible.
(39:16):
Intentional Teaching is sponsored by UPCEA, the online and professional education association. In the show notes, you'll find a link to the UPCEA website, where you can find out about their research, networking opportunities, and professional development offerings. This episode of Intentional Teaching was produced and edited by me, Derek Bruff. See the show notes for links to my website, the Intentional Teaching newsletter, and my Patreon, where you can help support the show for just a few bucks a month. If you found this or any episode of Intentional Teaching useful, would you consider sharing it with a colleague? That would mean a lot. As always, thanks for listening.