Intentional Teaching

Teaching with AI in Technical Courses with Jingjing Li

Derek Bruff Episode 54

Questions or comments about this episode? Send us a text message.

In my new job at the University of Virginia, I recently met Jingjing Li, Andersen Alumni Associate Professor of Commerce. Jingjing teaches business intelligence at both the undergraduate and Master’s levels, and her research interests include artificial intelligence and data analytics. She has conducted some very thoughtful experiments in her courses in using generative artificial intelligence to teach about machine learning in business analysis.

In our interview, we talk about her scaffolded assignments, the metaphors her students use to describe working with generative AI, and the relationships between conceptual understanding and AI literacy.

Episode Resources

·       Jingjing Li’s faculty page, https://www.commerce.virginia.edu/faculty/jl9rf

·       ChatGPT in Technical Courses, a Teaching Hub collection curated by Jingjing Li, https://teaching.virginia.edu/collections/chatgpt-in-technical-courses 

·       UVA’s Faculty AI Guides program, https://cte.virginia.edu/programs/faculty-ai-guides/ 

Podcast Links:

Intentional Teaching is sponsored by UPCEA, the online and professional education association.

Subscribe to the Intentional Teaching newsletter: https://derekbruff.ck.page/subscribe

Support Intentional Teaching on Patreon: https://www.patreon.com/intentionalteaching

Find me on LinkedIn and Bluesky.

See my website for my "Agile Learning" blog and information about having me speak at your campus or conference.

Derek Bruff (00:05):
Welcome to Intentional Teaching, a podcast aimed at educators to help them develop foundational teaching skills and explore new ideas in teaching. I'm your host, Derek Bruff. I hope this podcast helps you be more intentional in how you teach and in how you develop as a teacher over time. One of the perks of working at the Center for Teaching Excellence at the University of Virginia this year is getting to meet UVA faculty who are doing really interesting work in their teaching. Back in August, I had the chance to spend a couple of days on grounds, as they say there, working with UVA's faculty AI guides. These are faculty fellows who are experimenting with the use of generative AI in their own teaching and serving as resources for colleagues in their departments and programs who have questions about generative AI. The goal for this program is to have all UVA faculty make informed, intentional choices about their AI policies, even if those policies are red light policies that prohibit AI use.

(01:05):
During the Faculty AI Guides Institute in August, I met Jingjing Li, Andersen Alumni Associate Professor of Commerce. Jingjing teaches business intelligence at both the undergraduate and master's levels, and her research interests include artificial intelligence and data analytics. When she described her thoughtful use of generative AI in her courses, I knew I wanted to invite her on the podcast. I'm excited to share that interview today. Before I do, however, I should make one clarification: Jingjing teaches a course about the use of machine learning, which is a branch of artificial intelligence. Beginning in the fall of 2023, she started using generative AI, which is a different branch of artificial intelligence, to help her students learn to use machine learning. You, listener, probably don't teach a course on any branch of AI, but I think you'll still benefit from hearing how Jingjing uses generative AI tools like ChatGPT and Copilot to teach her course's topic. Jingjing, thank you so much for being on the podcast today. I'm excited to talk to you and get to know some of the ways that you've been experimenting with AI in your teaching. Thanks for being here.

Jingjing Li (02:16):
It's my pleasure.

Derek Bruff (02:18):
So I'll start with my usual opening question, which is this. Can you tell us about a time when you realized you wanted to be an educator?

Jingjing Li (02:26):
I think it dates back to when I was a child, because on my mom's side of the family almost all my aunts and uncles are high school teachers, and my dad used to be a university professor before he joined the private sector. So essentially, from the time I was born, they had determined I would be on this journey. But the moment I decided to be a university professor was when I came to the United States to pursue my PhD. It's a business administration PhD, and during the five-year program they require some teaching experience. I still remember vividly my first time teaching, as a first-year PhD student in my second semester. My PhD school is the University of Colorado Boulder, and I still remember the first day, standing at the podium and announcing to the entire class, "I will be your professor for this semester."

(03:48):
But actually I was 22 years old, only one year older than my senior students. I think this posed a challenge, because on the one hand I needed to maintain my authority and say, even though I'm only one year older than you, I still have knowledge to share, and you'll probably find something useful from me. On the other hand, I was still kind of a student myself, and sometimes I didn't know how to keep a distance from my students. I have to be honest that my first semester's teaching evaluations were average. Not exceptional, but not exceptionally bad either, just the average in my department. But my personality is that whenever I find a weakness, I need to overcome it. My motto is never give up.

(04:52):
So in my second year I went through the teaching excellence training at CU Boulder, and they sent people to sit in my classroom, make observations, and share feedback with me. I also sat in on my PhD advisor Kai Larsen's class for an entire semester, doing a kind of qualitative study, observing what works. So when I taught the same subject the second time, I got a very good rating. Students loved me, and I was nominated as a finalist for the teaching award for PhD students. I kept working on my pedagogical approach, and the third time I finally won the teaching award for PhD students. That's when I thought, maybe I do have a talent for teaching, for being an educator. So that's kind of the moment.

Derek Bruff (05:55):
Yeah, I love that story, in part because it sounds like at Boulder you had a good support system to develop those skills. You were able to observe your advisor and learn from them. You were able to go through this program and get advice and strategies and observation and feedback. And so I love a good graduate program that provides those types of support for the doctoral students who are planning faculty careers. That's great. Yeah.

(06:22):
Well, I know you've been teaching with and about AI as part of your courses, and I'd love to hear about some of your experiments, some of your assignments. What are some of the ways that you've tried to incorporate this changing technology, particularly generative AI in your courses?

Jingjing Li (06:42):
I really started paying attention to gen AI for teaching around 2023. That's when OpenAI suddenly released a product that wowed the entire world. But I've actually been doing research on AI for 10 or 15 years, because my research interest is natural language processing and its applications in business and societal areas. So in the fall of 2023, I finally decided I needed to change my pedagogical approach and do some pilot studies. Essentially, what I did that fall semester is a field experiment. I identified some assignments and broke each into two parts. Those are machine-learning-related assignments: essentially, I give students a dataset, and they need to build a predictive model to predict a certain outcome, such as future customer churn or wine quality.

(07:56):
And for my courses, I actually ask my students to conduct a cost-benefit analysis after they create a prediction model. A canonical example: if you are a retailer, you are going to suffer from customer churn. That essentially means a customer makes a first purchase, but after 12 months they never come back to make a second; they go to other retailers or platforms instead. There are many types of machine learning or predictive models that can address this problem. One direct approach is to leverage historical customer behavior data and build a predictive model to predict whether the customer will come back in the next 12 months. At that point, the predictive model's task is done. However, the next question is: how can you act upon that?

(08:59):
There are many ways you can act. For example, if you predict a certain customer will not come back, you have two choices. One is to aggressively market to that customer, send a lot of recommendations or emails, and hope to bring them back. The other choice is to save your marketing expenses on that customer, a cost saving, because their probability of coming back is very slim, and reallocate your marketing budget to acquire other, more promising customers. There are many different actions you can take on a predictive model's outcomes, so we conduct a cost-benefit or what-if analysis to see which action is more promising, more beneficial, or carries less risk. That's what I ask my students to do: conduct a cost-benefit analysis.
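A what-if analysis like the one Jingjing describes can be sketched as an expected-value comparison between the two actions. Everything below is illustrative: the dollar figures, the win-back rate, and the function names are assumptions for the sake of the sketch, not details from her course.

```python
# Hypothetical cost-benefit ("what-if") analysis for acting on a churn
# prediction. All figures below are illustrative assumptions.

CAMPAIGN_COST = 5.0     # cost of a win-back campaign per customer
WIN_BACK_RATE = 0.30    # assumed chance the campaign retains a would-be churner
CUSTOMER_VALUE = 100.0  # profit from a customer who makes a second purchase

def expected_value(p_churn, action):
    """Expected profit for one customer, given the model's churn probability."""
    if action == "market":
        # The campaign always costs money; it pays off only when a would-be
        # churner is won back. Non-churners return on their own.
        return (-CAMPAIGN_COST
                + p_churn * WIN_BACK_RATE * CUSTOMER_VALUE
                + (1 - p_churn) * CUSTOMER_VALUE)
    # "skip": spend nothing, keep only the customers who return anyway
    return (1 - p_churn) * CUSTOMER_VALUE

def best_action(p_churn):
    return max(("market", "skip"), key=lambda a: expected_value(p_churn, a))

print(best_action(0.9))   # likely churner: the campaign is worth its cost
print(best_action(0.05))  # loyal customer: save the marketing budget
```

Under these assumptions, marketing pays off whenever the predicted churn probability exceeds CAMPAIGN_COST / (WIN_BACK_RATE * CUSTOMER_VALUE), about 0.17 here; with different costs the break-even point moves, which is the point of the what-if exercise.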

(10:02):
Not only can you decide on actions, you can also decide which predictive model to use. We have a lot of predictive models. Some are very transparent, like logistic regression or a decision tree; you can actually derive human-readable rules from those models. Some are very opaque, such as deep learning: you only get a bunch of numbers, and you don't know how they make the magic happen. And some models are prone to bias, for example if they only focus on predictive performance but care less about equality or equity across different demographics or populations. You can fold that into the cost-benefit analysis too, because bias is also a cost to the company, through reputation damage, et cetera. So essentially I ask my students to do a comprehensive evaluation of what they have done.
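As a concrete illustration of the "transparent model" point, scikit-learn can print a fitted decision tree as readable if/else rules. The dataset here is a stand-in for illustration, not one from the course.

```python
# Transparent models in practice: a small decision tree rendered as
# human-readable rules. The iris dataset is a stand-in for illustration.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(
    data.data, data.target)

# Each line is an if/else threshold a human can read and audit.
rules = export_text(tree, feature_names=list(data.feature_names))
print(rules)
```

A deep learning model offers no equivalent of this printout, which is exactly the transparency trade-off a cost-benefit analysis can weigh.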

Derek Bruff (11:02):
How did you bring generative AI into that type of assignment?

Jingjing Li (11:06):
I actually break my teaching journey into course preparation, then teaching during the sessions, and finally post-teaching, where you grade and reflect on your teaching. During the course preparation stage, I use gen AI to give me ideas for how to organize my sessions and what interesting in-class exercises or activities I can do. I basically talk to the gen AI: I'm a professor and I want to teach this course, let's say about data preparation, how to handle missing values in your dataset and how to remove duplicate features. It's very dry content. You could definitely give a 75-minute lecture, but I want to make it interactive, so give me some ideas for how to do that. And the gen AI will give some ideas: how about a poll, or an interactive exercise where you show a dataset to the students, they identify the problems, and you foster a group discussion. I have those things in my original syllabus, but sometimes I just want to step outside the normal Jingjing and ask what else I could do differently. So that's the course preparation stage: idea generation to refine my pedagogical approach.

(12:41):
During teaching, and this is something I learned from my colleague Sarah Lebovitz, I use gen AI for between-session troubleshooting. Essentially, when you are teaching, sometimes a student asks a question you are not quite sure how to answer, and the classic response is, "I will get back to you." What I found is that gen AI is usually very good at understanding my question, the context, and my intention, better than Google. With Google, you have to specify queries, and it doesn't tolerate longer queries, but those longer queries are what embed the contextual information. Gen AI is very good at that. However, gen AI usually gives you very routine, mundane, boring answers. For example, I ask gen AI, can you give me some examples or cases about machine learning biases and their potential damage?

(13:48):
And it always gives the same answer: there's hiring bias, and some companies that have adopted AI. Every time you ask, it gives you the same answer, hiring bias, plus a few other things. I feel like gen AI gives me a good established baseline. It's not perfect, but it's at least 70 out of 100, and that saves me a lot of time in pinpointing the answer. So that's between-session troubleshooting. I also embed gen AI into my students' learning, but I'll talk about that later. For post-teaching: in my class I also have some writing tasks, where I assign some readings and students write their reflections for me. Sometimes I ask the same question to gen AI and let it tell me the answer, so I know, okay, this is a typical gen AI answer. Then when I grade a student's essay I can compare and contrast. I know it's not a perfect benchmark, but at least I know what the average gen AI answer is. Usually my students give much deeper, better, more personalized answers than that. But when I'm not sure, it gives me a grading baseline, the minimum for my grading. So that's how I use gen AI in the post-teaching phase.

Derek Bruff (15:19):
Okay. Alright. Yeah. Well, thanks for sharing that, right? It's not just how we have our students use AI; there are also a lot of applications for us as instructors to figure out. Yeah. Okay. Let's talk about the activities then. How do you build AI into the teaching that you do with your students?

Jingjing Li (15:39):
I just introduced the learning objectives of my courses, and I was thinking about how I can use gen AI to augment students' learning. My course is fast-paced; I need to cover a lot of technical details as well as business knowledge and communication. However, I don't want students to use gen AI at the very beginning, before establishing any foundational knowledge. So first I announce my gen AI policy to my students: for every single deliverable, I will explicitly specify whether or not they can use gen AI as an external consulting tool. And I always tell my students, even when I say you can use gen AI, you need to make sure you have enough knowledge to judge the quality of its output before you use it.

(16:43):
Then I roll out my assignment plan. I usually have an assignment that gives students a specific dataset and asks them to build a predictive model, with the goal of optimizing that model. I break it into two parts. In the first part, they build the code or use the software to create the model and try a bit of parameter tuning to optimize the model's performance. For that part, I say you should not use gen AI; the only sources you can refer to are my lab notebooks and slides. The students turn in their predictive model. Then I say, let's continue the exercise, and now you can actually use gen AI to help you understand what other model parameters you could play with, because during class I can only cover two or three, but there could be ten more.

(17:55):
I also tell them they can ask gen AI to optimize their code. I intentionally didn't teach them the right functions to do the parameter tuning quickly and efficiently, so I expect them to use gen AI to find the magic function, GridSearchCV, to do that. Then the students turn in their assignments, but I also ask them to provide a debrief: one paragraph about how they used gen AI and how they allocated their time between gen-AI-facilitated work and work without gen AI in this open-ended assignment. And finally, I ask the students: if you could give gen AI a metaphor, how would you describe it? Oh, and by the way, I also ask them to turn in their conversation histories, so I can see how they prompted the gen AI and what responses they got, and I have a detailed log of what they have done.
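For listeners who haven't met it, the "magic function" is scikit-learn's GridSearchCV, which cross-validates a model over a grid of parameter values. A minimal sketch, with an illustrative dataset and parameter grid rather than the actual course assignment:

```python
# Parameter tuning with scikit-learn's GridSearchCV. The dataset and the
# parameter grid here are illustrative, not from the course assignment.
from sklearn.datasets import load_wine
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_wine(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Cross-validate every combination of these parameter values (5 folds each).
grid = GridSearchCV(
    DecisionTreeClassifier(random_state=0),
    param_grid={"max_depth": [2, 4, 6, None], "min_samples_leaf": [1, 5, 10]},
    cv=5,
)
grid.fit(X_train, y_train)

print(grid.best_params_)                     # the winning combination
print(round(grid.score(X_test, y_test), 3))  # accuracy on held-out data
```

The appeal for students is that the hand-tuning loop they did in part one collapses into one call, leaving more time for the business-side evaluation.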

(19:05):
After that, I do some qualitative analysis. I try to compare the students who benefited the most from gen AI with those who benefited relatively less. I look at the differences in their prompts, how they describe gen AI, and the metaphors. And I was shocked: there's a huge difference. This experiment was done last fall, the fall of 2023. We can talk about the findings later, but let me finish describing the process. When I finish my qualitative analysis, I share it with my students: these are good prompts, these are not-so-good prompts, this is the recommended process, and these are the metaphors. Then I let the students discuss what they learned, and I give them a second chance: okay, now we have another assignment.

(20:09):
This time I let them use gen AI, and I collect the same data. But before that, I teach them some prompt engineering techniques, like chain of thought, contrastive chain of thought, and contrastive learning, and I give them some examples of good prompts. Then they do it again. This time I found their performance significantly improved; I think almost all the students adopted some prompt engineering strategies. And then, interestingly, at the end of the semester some students told me, "I actually don't want to use gen AI." I asked why, and they said, "I found that writing a good prompt takes as much time as doing it myself, so maybe I'll just rely on my knowledge to complete the assignment." They felt writing a good prompt would take longer. So that was an interesting process.

Derek Bruff (21:19):
Yeah. Well, I'd love to hear some of the metaphors that your students used to describe working with AI on this type of assignment.

Jingjing Li (21:30):
The students who benefited more usually referred to gen AI as a close friend or family member, as a person. Some quotes I collected: one student said, "This reminds me of Professor Li, but with less humor." Another said gen AI "would be my dad when I need to know how to do my taxes." And another said, "It would be my friend Jackson. He's very knowledgeable about data; however, sometimes I catch mistakes or find better approaches. For example, I outperformed him in test accuracy on this assignment. When I'm struggling with something data-related at work, I ask him for advice. I trust him; however, I always double-check his output." Those are the students who benefited the most from gen AI.

Derek Bruff (22:33):
Wow. I'm curious, what kinds of metaphors did the other students use, the ones who didn't benefit as much? How were they thinking about it?

Jingjing Li (22:40):
Yeah, they used an object, or a generic person instead of a close friend or family member. One student said it would be "a faster and more direct Google, but it is nowhere near the quality of an actual professor or human." Another referred to a certain type of person, but without a specific name: "I envision ChatGPT as an absent-minded professor who knows a lot of the theory behind what it is teaching, but has little real-world experience with what it is teaching." And another student said it's "like a student who only listens to the lectures and reads the textbook, but does not think outside of what the teachers teach or the information fed to them."

Derek Bruff (23:39):
And so those were the students who didn't find the chatbot as helpful.

Jingjing Li (23:48):
Those are the students who didn't find the chatbot as useful, and they also didn't obtain a huge performance gain after using gen AI.

Derek Bruff (23:58):
Okay. So it wasn't just their perceptions, although it would make sense that students who found the chatbot less useful also used metaphors that described it as less useful.

(24:10):
But you also saw some performance differences between these types of students in terms of the quality of the work that they turned in.

Jingjing Li (24:17):
Yes. Going back to the metaphors, I feel those are post-use experiences: because some students didn't get a lot out of gen AI, they assigned different metaphors. In terms of performance differences, I have two types of sections. In one, students use low-code technologies, a graphical interface, to build a predictive model; they don't need to write a single line of code. In the other section, they use Python to build the predictive model. What I found is that the Python section benefited more than the low-code section. It's very interesting, because in the first part, when I prohibited gen AI use, the low-code section actually had better performance than the Python section. I think that's due to task complexity, because for the Python section it takes more time, and is more challenging, to write the code from scratch.

(25:29):
If you assume a student allocates a limited amount of time to an assignment, the Python section might spend 60% of the time writing Python code and only 40% tuning parameters to optimize performance, whereas the low-code section spends only 10% of the time on the tooling and 90% optimizing the model. After introducing gen AI, the gap between the low-code and Python sections decreased. Actually, there was no gap anymore. That's why I say the Python section benefited a lot: in the end, the two sections were equal in predictive performance.

Derek Bruff (26:12):
Because in the one case, they were essentially using tools that had the coding already built in, through the graphical user interface, and in the Python section, once they had the AI's help, they essentially had most of the code written for them, so they could also focus on the business side of the analysis.

Jingjing Li (26:34):
I think that's a really good observation. And I also looked into the debrief paragraphs and found the same pattern. Some of the Python-section students who benefited the most said, "I actually know these concepts inside out. To build a predictive model, you divide the data into training and testing sets, then you build a decision tree classifier, et cetera. However, when I try to write the code, I'm always blocked by bugs or error messages." So they said they only used gen AI to generate code, because for the well-understood concepts they already knew what to do. Therefore they could write a very clear task instruction: give me the code to divide the data into training and testing sets, create a classifier, and then report accuracy, or precision and recall.
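That "clear task instruction" workflow, split, fit, report metrics, looks something like this in scikit-learn. The dataset and the max_depth value are illustrative stand-ins, not the course's actual assignment:

```python
# The split / fit / report workflow described above, sketched with
# scikit-learn. The dataset and max_depth value are illustrative.
from sklearn.datasets import load_breast_cancer
from sklearn.metrics import accuracy_score, precision_score, recall_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
# Divide the data into training and testing sets (the larger share trains).
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)

clf = DecisionTreeClassifier(max_depth=4, random_state=42).fit(X_train, y_train)
pred = clf.predict(X_test)

print("accuracy: ", round(accuracy_score(y_test, pred), 3))
print("precision:", round(precision_score(y_test, pred), 3))
print("recall:   ", round(recall_score(y_test, pred), 3))
```

A student who understands each of these steps can name them explicitly in a prompt, which is exactly the difference Jingjing observed between the stronger and weaker prompts.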

(27:35):
That's why the Python section benefited the most. However, in the same section, the students who didn't benefit much were those who didn't know the underlying concepts. They still needed to understand the why: what is overfitting, why we divide the data into training and testing sets, how to divide them, whether the training data or the testing data should be bigger. They hadn't become clear on the machine learning concepts, so their prompts were very direct: help me build a predictive model for this dataset, without specifying the process. And sometimes they would ask, "Can you explain to me what this maximum depth parameter for the decision tree is?" Because they hadn't understood the concept during class, when gen AI gave them text, just one mode, rather than the in-class video, audio, and interaction, they of course found it less useful. If they couldn't understand a concept during class, with a lot of practice and interaction, it's going to be hard to make that concept clear by reading one paragraph of text.

(28:57):
So that's why they found gen AI less useful.

Derek Bruff (29:01):
So one of the analogies I've been making as I talk with faculty about AI is from when I was learning photography for the first time. There's a lot of conceptual understanding in photography: composition and lighting and focus and those types of things. And then there's the camera itself, with all the knobs and buttons and settings. I found that in photography, certainly, understanding the concepts helped me understand what to do with my camera, but it also worked the other way: the more I was able to play around with the settings on my camera, the more I got a sense of some of the conceptual pieces. And so it sounds like for some of your students, at least, using the gen AI did not help them refine their understanding of the concepts. Would you agree? And were there cases where the use of the AI actually did help them solidify their understanding of the concepts?

Jingjing Li (30:03):
Yeah, I would say gen AI also helped them strengthen their concepts, because that part is in the prompt. When they write the prompt, they need to understand the basic machine learning. I also tell my students that the best way to learn is to teach: imagine you have to understand all the course materials and put that into the prompt, just like you're teaching the gen AI to be a machine learning expert. When I went through all the prompts, I could see students change their prompting strategy even within the same conversation. Initially they'd say directly, "I was given a task to build a predictive model on this dataset." Very direct, showing no understanding. But then the AI provided a response: to build a predictive model, you need to go through these steps, with bullet points, bolded short phrases, and descriptions. And the students suddenly kind of understood, and said, "Okay, now I know that to build a predictive model I need to do these steps, so please generate the code." So initially they didn't understand the concept; later on they understood it better, and in the later part of the session they focused on code refinement and code generation.

(31:34):
So I saw different strategies, and in the debrief paragraphs I also saw students adopt different strategies. Some students said, "I just want to rely on my own knowledge, without gen AI. Say I spend two hours: I'll spend the first hour and a half relying on myself, and in the last 30 minutes I'll just see whether gen AI can pleasantly surprise me."

(32:05):
And some students said, "Initially I jumped into gen AI just to see what it could do, and I found it couldn't do some things very well. I was very upset, so I spent the rest of the time doing it by myself." And some students said, "I feel like gen AI and I are good collaborators, good team members. I start by doing it myself, but when I find something, I go back and forth with gen AI, and then I get good performance." So there were different processes described in the students' paragraphs, which was very fascinating.

Derek Bruff (32:45):
It is. I imagine, I mean, what I'm hearing is that their use of gen AI to help solve these problems depends partially on their understanding of how gen AI works, what its strengths are, and how to prompt it well. So there's a kind of gen-AI-specific domain knowledge that students may or may not have, or may develop over the course of an assignment. But then there's also their understanding of the fundamental material, and not just the material but also the coding skills. So a student who understands the concepts and can code very well might find ChatGPT's output a little disappointing: it's telling them stuff they already know, or stuff that's wrong. Whereas students who are kind of clueless all around may not be able to get much out of ChatGPT, even if their prompting skills are very good. Yeah.

(33:41):
Hi, this is future Derek jumping in. I've had time to think about what Jingjing shared a bit more now, and I want to clarify that for the students who didn't understand the fundamental concepts and didn't have the coding skills, some of them were in fact able to make good progress by using the AI as a kind of tutor. That however depended on them having some practical AI literacy. They didn't just take the generic explanations that the AI provided, but they used those to go back and forth with the AI to clarify their conceptual understanding.

(34:14):
It would be interesting to try to assess when that's happening. I can imagine, again, those students in the middle, who are able to learn a lot with enough coaching, if they can use the gen AI as that kind of coach. It's like if I could ask my camera, why is this picture not coming out the way I wanted? And it would tell me, have you tried this? Have you tried that? Right?

Jingjing Li (34:36):
Yeah.

(34:37):
Yes. Yeah, I agree. And I did indeed find some students who said gen AI didn't help them improve their performance, because they have very good coding skills and a good understanding of the fundamental concepts. I saw some of those students use gen AI to improve their visualizations. For example, I ask them to make plots, and the Python code for that requires multiple lines. A student will say, "I want a specific color for this bar, and then that bar, and I want the labels on the x-axis at 45 degrees." Those are the things I found the very tech-savvy students delegating to gen AI. The students in the middle have a good foundation, but they just need a little lift to get to the level where they can do the assignments very well. However, the students at the bottom don't have a solid foundation to start with, so when I look at their prompts, they mostly just say, "Give me something." They want to seek the final solution, or they just say, "Can you explain to me what this concept is about?" So I feel the students who benefit more actually use gen AI to extend their horizons, or, just like me, to go outside of what the normal Jingjing would do on this assignment and find some pleasant surprise.
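The plot polish those tech-savvy students delegated, specific bar colors and 45-degree axis labels, is only a few lines in matplotlib. The model names, accuracies, and colors below are made up for illustration:

```python
# Per-bar colors and 45-degree x-axis labels in matplotlib.
# The model names, accuracies, and colors below are made up.
import matplotlib
matplotlib.use("Agg")  # render off-screen (no display needed)
import matplotlib.pyplot as plt

models = ["logistic", "tree", "forest", "deep net"]
accuracy = [0.81, 0.78, 0.86, 0.88]

fig, ax = plt.subplots()
bars = ax.bar(models, accuracy,
              color=["#4c72b0", "#dd8452", "#55a868", "#c44e52"])
ax.set_ylabel("test accuracy")
ax.tick_params(axis="x", labelrotation=45)  # rotate the x labels 45 degrees
fig.tight_layout()
fig.savefig("model_accuracy.png")
```

Fiddly but conceptually shallow tasks like this are exactly what the students with a solid foundation chose to hand off.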

Derek Bruff (36:05):
Yeah, yeah. Well, I'm curious, you said you started these experiments just over a year ago. Are you doing things any differently in your courses this fall because of what you learned last year?

Jingjing Li (36:19):
Yeah. So what I plan to do is this. Before, I used two assignments to see this effect, but this fall I only want to use one assignment, to get students started. Then later on I want to put more effort, or more sessions, into explaining gen AI: some prompt strategies and the underlying technology. I'm also rethinking how I give students the code. My labs usually work like this: I give them the code, the notebooks, to start with, I explain it code by code during the lab, students complete the assignment based on the lab session, and I record the lab so students can replay it. Now I'm thinking that instead of just giving them the code, I can give them some prompts to get the Python code, or to get code explanations. Before, I had to write a lot of explanation around one single line of code, to tell them this is a function, this is a parameter. But maybe I can just provide a prompt and say, if you want to know more about how this function works, use this prompt with a certain version of gen AI.

(37:37):
And then just get the answer from that. Especially in a business school, we're thinking that we need to make our curriculum very connected to the real world. And I think it is fair to say that in the real world, the workplace is heavily impacted by gen AI. I've heard different versions of survey responses: some surveys say 80% of people use gen AI, some say 90%, and I feel like the remaining 10 or 20 percent are lying. So if that is the workplace we are facing, I'd better put that into my curriculum. I want them to do more practice actually using gen AI to get what they want. However, I'm still going to divide my course into a first half and a second half. The first half is no gen AI, and in the second half you can explore the gen AI.

Derek Bruff (38:36):
Yeah. Well, because as I think your data indicates, if students have a stronger foundation in the concepts without gen AI, they're actually going to get more out of using the gen AI when they do have access to it.

Jingjing Li (38:48):
Yes.

Derek Bruff (38:50):
Yeah. Yeah. Well, thank you Jingjing, we've covered a lot of ground today, and I appreciate you taking this time. I'll be thinking about these different learning objectives and how they intersect with each other for a while. So thank you for coming on the podcast and thanks for sharing.

Jingjing Li (39:05):
No problem. My pleasure. It was a very fun experience, and actually it helps me to really clarify what I have done and then also think more deeply about what I plan to do in the future. So thank you so much.

Derek Bruff (39:22):
That was Jingjing Li, Andersen Alumni Associate Professor of Commerce at the McIntire School of Commerce at the University of Virginia. Thanks to Jingjing for taking the time to talk with me and for sharing her experiments in teaching with AI. I want to do just a bit of summary here for my own understanding, if nothing else. What I heard from Jingjing was that students who lacked coding skills could compensate for that if they understood the fundamental concepts in her course, because they could use those concepts to write AI prompts that generated the code that they needed. If students lacked the coding skills and the conceptual understanding, but still had some practical AI literacy, they could use the AI as a kind of tutor to build the conceptual skills they needed to generate the code. That wasn't a given. Some students in this category floundered, but it was a possibility.

(40:12):
Finally, students who lacked coding skills and conceptual understanding and AI literacy, well, they were up a creek without a paddle. I'm curious if you've seen similar patterns in your discipline, where conceptual understanding enables students to use AI to accomplish particular tasks that might be hard for them without the AI, or where students use their AI literacy to deepen their conceptual learning. I'd love to hear from you. In the show notes, you'll find a link to send me a text message. Thanks to my podcast host Buzzsprout for making that possible. You're also welcome to email me at derek@derekbruff.org. Intentional Teaching is sponsored by UPCEA, the online and professional education association. In the show notes, you'll find a link to the UPCEA website, where you can find out about their research, networking opportunities, and professional development offerings. This episode of Intentional Teaching was produced and edited by me, Derek Bruff. See the show notes for links to my website, the Intentional Teaching newsletter, and my Patreon, where you can help support the show for just a few bucks a month. If you found this or any episode of Intentional Teaching useful, would you consider sharing it with a colleague? That would mean a lot. As always, thanks for listening.

