Wouldn't it be interesting to see an analysis of how much time you spent on active learning, right after class ended? DART is a tool created by a multidisciplinary and multi-institutional team of education researchers. DART stands for Decibel Analysis for Research in Teaching. All you have to do is record your class session with your phone and upload the recording to the DART website. DART’s machine learning algorithms will then analyze that audio and let you know how much of your class time was spent on lecturing versus active learning.
I first heard about DART a few years ago, and I’ve been wanting to learn more about it ever since. I reached out to Melinda Owens, assistant teaching professor in neurobiology at the University of California San Diego and one of the lead developers for DART, and she was excited to talk with me about DART. Melinda shares a bit about her journey into education research, the origins of DART, and how college faculty can use DART to better understand and improve their own teaching.
Melinda Owens’ faculty page, https://biology.ucsd.edu/research/faculty/mtowens.html
DART website, https://sepaldart.herokuapp.com/
“Classroom sound can be used to classify teaching practices in college science courses,” Melinda Owens et al., Proceedings of the National Academy of Sciences 114:12, https://www.pnas.org/doi/abs/10.1073/pnas.1618693114
"The Weekend" by chillmore, via Pixabay
Subscribe to the Intentional Teaching newsletter: https://derekbruff.ck.page/subscribe
Support Intentional Teaching on Patreon: https://www.patreon.com/intentionalteaching
Find me on LinkedIn, Twitter, and Mastodon, among other places.
See my website for my "Agile Learning" blog and information about having me speak at your campus or conference.
[00:00:00] Derek Bruff: Welcome to Intentional Teaching, a podcast aimed at educators to help them develop foundational teaching skills and explore new ideas in teaching. I'm your host, Derek Bruff. I hope this podcast helps you be more intentional in how you teach and in how you develop as a teacher over time.
Back in my thirties, I picked up running as a hobby. I'm not sure what motivated me other than a desire for physical fitness and a need to get out of the house, but I've really enjoyed running over the years. The hobby really took off for me, however, when I started using Runkeeper, an app on my phone I can use to track my runs.
At the end of each run, I can quickly see how long I ran, what distance I covered, and what my pace was, and I can track my runs over time, which is great for setting monthly or yearly mileage goals, which in turn keeps me motivated to get outside and log some more miles. To my knowledge, the closest thing to a Runkeeper app for teaching is something called DART, a tool created by a multidisciplinary and multi-institutional team of education researchers.
DART stands for Decibel Analysis for Research in Teaching. It's not an app, but it is an easy-to-use website. All you have to do is record your class session with your phone and upload the recording to the DART website. Machine learning algorithms will then analyze that audio and let you know how much of your class time was spent on lecturing versus active learning.
I first heard about DART a few years ago, and I've been wanting to learn more about it ever since. I reached out to Melinda Owens, assistant teaching professor in neurobiology at the University of California San Diego and one of the lead developers, and she was excited to talk with me about DART.
Melinda shares a bit about her journey into education research, the origins of DART, and how college faculty can use DART to better understand and improve their own teaching.
Thank you, Melinda, for being on our podcast here today.
[00:02:12] Melinda Owens: Thanks so much for inviting me.
[00:02:14] Derek Bruff: Glad to have you on and glad to get to know you and your work a little bit. Let's start with the really big picture, though. What is DART and what does it do?
[00:02:24] Melinda Owens: So DART stands for Decibel Analysis for Research in Teaching.
And it's meant to be a quick-and-dirty tool for estimating how much active learning is going on in a classroom. The idea is that we want to encourage folks to use active learning, and we want researchers, instructors, and other folks to know what's going on in classrooms.
We want to have a sense of how much active learning is going on, right? Because active learning is so important for helping students learn and helping them have a sense of community and be retained in STEM. So the idea is that you would audio record a class, and it could be with an audio recorder, it could be just your phone.
And you upload it to our DART website, and it applies an algorithm to estimate how much of the class time is spent in three modes. We have single voice, which is just a single person talking, usually lecture. We have multiple voice, which is multiple people talking at the same time, usually peer discussion or small group discussion.
And we have no voice, which is nobody talking. And it's those last two modalities, the multiple voice and the no voice, that, if you add them up together, are the things we usually think of as active learning. Now, this isn't perfect, right? It's not as good as a human observer, because sometimes noise is just noise, and it sounds like multiple people talking. Sometimes you have a gap not because people are thinking or writing, but just because there's a technology break or something. So it's not perfect, but it's a pretty good estimate of how much of this stuff is going on in classes.
[00:04:33] Derek Bruff: So that makes sense.
And I think this is really fascinating. I could say a lot more, but I want to ask a little bit about you first. What's your background and how did you get involved in this?
[00:04:45] Melinda Owens: I'm actually a neuroscientist by background. My undergrad's in biology and my PhD is in neuroscience.
And I actually studied the development of the visual system: how the nerve cells from your eye grow to your brain and how they know where to grow. But as I was doing my PhD, I got super interested in teaching. Actually, I'd always been interested in teaching and tutoring and things like that.
But I took a few professional development workshops, I was reading science articles about teaching, and I got really fascinated by this idea that you could approach teaching scientifically. That just really vibed with my interest in teaching and my background in science. And so after I graduated with my PhD, I decided to go into teaching.
So I was adjuncting for a while at various schools. And then I decided I wanted to get back into research, but not basic science research. My PhD work, I thought it was super cool, but it just wasn't speaking to me anymore. I wanted to do something more meaningful, so I decided to go into bio education research.
At the time, it happened that Kimberly Tanner at San Francisco State was looking for a postdoc, and I joined her lab. That was a fantastic experience. She's a fantastic mentor, and I was involved in all sorts of projects.
[00:06:29] Melinda Owens: Yeah, it was great. It was great.
And so I got involved in faculty development-type research projects, and then I also got involved with this DART project. It was really appealing to me to work on taking these kind of squishy concepts, like active learning, and boiling them down to an algorithm.
A lot of people had worked on it before I got there. Our computer scientist partner, Mike Wong, was already there, and a previous postdoc, Shannon Seidel, had done some of the initial collecting and troubleshooting and started some of the validity work.
And so I came in and we did a larger-scale validation, which I headed up, and I worked very closely with Mike and Shannon to take the more computer science aspects, the algorithm aspects, and express them to a bio education audience. And now that I'm at UC San Diego, I've been using it in my own research too.
So that all has been a really great experience. Yeah.
[00:08:01] Derek Bruff: Do you know where the idea for DART came from?
[00:08:04] Melinda Owens: Yeah, it came from an observation that Kimberly made a long time ago, and that she had been sharing for a while in her own professional development work, about how you know when to stop clicker questions.
Her observation was that when you start a clicker question, or any kind of pair discussion, it starts off loud, then it starts getting quiet, and then it starts getting loud again as people talk about off-topic things.
[00:08:41] Derek Bruff: I have observed this.
[00:08:42] Melinda Owens: Yeah, I think we all have. It works particularly well in her class, and I've adopted this because it works well: one way she signals students to start talking is to say, "It should get loud in here," with a hand signal.
[00:08:57] Derek Bruff: Okay. Yeah, I like that.
[00:09:00] Melinda Owens: And they respond well. So that's where it came from, this idea that maybe you could tell what was going on in the classroom by looking at the noise and how loud it is.
Now, that observation didn't quite pan out in exactly that way. As much as it sounds like that to our ears, if you actually look at the noise levels during a pair discussion, they don't always follow that pattern. But it turned out to be a really great tool otherwise, just to look at what kinds of things are going on at the class-session level. And you can definitely pick out a think-pair-share: it's quiet, then loud, and then it goes back to single voice. The quiet is the think, the loud is the pair, and the single person speaking is the share.
[00:10:09] Derek Bruff: Gotcha. Because during a share-out in that case, usually you're hearing from one student at a time. Maybe the instructor takes turns calling on students, but there's not two people talking at once. There's one person talking.
[00:10:19] Melinda Owens: That's right. That's right. And the other thing is that our algorithm now doesn't look only at noise. It looks at both the raw decibel level and the variation in decibel level. Because it turns out that when a single person is speaking, like I'm speaking now, there are times when sound is coming out of my mouth and times when it isn't. So if you look at it, it's highly variable.
[00:10:47] Derek Bruff: I can actually see that right now under your screen here on the recording tools. Oh yeah, I see your waveform.
[00:10:56] Melinda Owens: Yeah. But the thing is, during a pair discussion, when a whole class is speaking, all the gaps get filled in, right? Because this pair is talking, this pair is talking, that pair is talking, all over the room, and it just gets this even waveform. And it's actually the variability, more than the raw noise level, that better distinguishes between single voice and multiple voice.
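To make that intuition concrete, here is a minimal sketch of a decibel-based classifier along the lines Melinda describes: each short window of audio gets labeled by its loudness and by how much the level varies within it. The window size and all thresholds here are illustrative assumptions for the sketch, not DART's actual parameters.

```python
import numpy as np

def classify_windows(samples, rate, window_s=0.5,
                     silence_db=-40.0, variability_db=6.0):
    """Toy classifier inspired by the approach described above: label each
    window by its RMS loudness (decibels) and by how much the level varies,
    since a lone speaker pauses between phrases (high variability) while a
    whole room talking fills in the gaps (low variability). Thresholds and
    window size are made-up illustrations, not DART's real parameters."""
    win = int(rate * window_s)
    labels = []
    for start in range(0, len(samples) - win + 1, win):
        chunk = samples[start:start + win]
        # Overall RMS level of the window, in decibels relative to full scale
        rms = np.sqrt(np.mean(chunk ** 2)) + 1e-12
        level_db = 20 * np.log10(rms)
        if level_db < silence_db:
            labels.append("no voice")
            continue
        # Variability: spread of levels across ten sub-windows of this window
        sub = chunk[: (len(chunk) // 10) * 10].reshape(10, -1)
        sub_db = 20 * np.log10(np.sqrt(np.mean(sub ** 2, axis=1)) + 1e-12)
        labels.append("single voice" if np.std(sub_db) > variability_db
                      else "multiple voice")
    return labels
```

The design choice mirrors the interview: raw loudness alone separates silence from talk, but it is the window-to-window variability that tells one speaker apart from a room full of pairs.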
[00:11:26] Derek Bruff: When I do polling or clicker questions in the classroom and the students are talking in pairs, there's this kind of buzz, which is a very different flavor of noise than one person speaking, pausing between sentences, things like that. Now, you have these three categories. Say more about the three categories and what they might mean.
When you have single voice, or when you have no one talking, what are the things that might map into that category in terms of classroom activities?
[00:11:58] Melinda Owens: So we actually had folks listen to some recordings and write down what was happening. And when I say folks, I mean our undergrad researchers.
And then we compared that to what the DART assignments were. So for single voice, it's a single person speaking. For the most part, in college classrooms, that's the instructor. The instructor is the dominant voice, and they're mostly lecturing. But it doesn't necessarily have to be; our algorithm doesn't distinguish between different speakers, so it could be students speaking.
The way I think about that, though, I don't necessarily consider that active learning per se, because if one person is speaking, what are the other 300 students doing? And sometimes it could be a video, because we don't know where the sound is coming from.
So sometimes it's videos, but by and large it's a single person speaking.
[00:13:12] Derek Bruff: I guess it could be. Because you did this all in science classrooms, right?
[00:13:16] Melinda Owens: Yes. Mostly biology, actually.
[00:13:20] Derek Bruff: Yeah, mostly biology. Okay. So it could be, if I were in a humanities classroom and we had 20 students and it was a class discussion, with one student speaking after another, that might turn up as single voice.
[00:13:34] Melinda Owens: Yes, that's true. So this works best in situations where you have at least 30 students. Many of our initial test recordings were in community college classrooms of 30 or 40 students, and it was working fine. What we were finding is that with anything like 20 students or fewer, even during the pair discussions, it wasn't quite filling in the gaps enough to generate the multiple voice signal. But the thing is, when you look at STEM, there are plenty of places where you have these enormous classes. I'm at UC San Diego, and I teach intro bio. There are, I don't know, 14 sections of intro bio every year at least, and they're all around 200 students.
[00:14:31] Derek Bruff: Yeah. There's a lot of that out there.
[00:14:33] Melinda Owens: Yeah. And then going back to your other question: for multiple voice, what we found is mostly pair discussions or small group discussions, over a clicker question, over a worksheet, something like that. We found a little bit of music, or rather a video with music, sometimes, as a false positive. But for the most part it's the behavior that we want to capture, which is students talking to each other. Sometimes it's, oh, the instructor's computer broke down, so the students are chatting, and that gets counted as multiple voice. But most of the time, at least when they're supposed to be doing a task, it's the students talking to each other. And then finally there's no voice. The algorithm actually has trouble detecting this, because quiet in a classroom is rarely actually quiet. Usually somebody's mumbling something, or a door opens, something like that.
[00:15:44] Derek Bruff: Air conditioning is running.
[00:15:46] Melinda Owens: Yeah. The thing is, there is a normalization procedure, so some of that background noise does get normalized out. But when no voice is captured, it almost always is actually silent. And it's usually students writing, longer writing assignments, or just a minute paper or something. What we found, just informally, is that for their think-pair-shares, a lot of people drop the think. A lot of people do that. But long thinks also get captured as no voice.
[00:16:31] Derek Bruff: That's really interesting you mention that, because I certainly get the temptation to do that. Sometimes you feel like you need to get moving, the silence is awkward, it's a hard question and students need to talk it out anyway. But I do think we underestimate how valuable that 30 or 60 seconds is for students to get ready to engage in something. Say more: if I'm an instructor teaching a class of 30 or more students and using these modalities in my own teaching, and that seems to be your target instructor, how would I get started using DART, and what can I learn from it? How do I interpret the results? Would I likely be surprised by what I find?
[00:17:16] Melinda Owens: I think so, about the surprise. But to back up to the beginning, how would you use this? It's pretty easy to record. A lot of folks just use their phone. Some people use lecture-capture podcasting systems and things like that, but that doesn't necessarily work. In my lecture halls we have a podcasting system, but during the active learning portions I turn my mic off so I can go talk to individual students. So those recordings can't actually be used for DART, because the active learning is exactly what it's trying to analyze and capture. We've also found that cheap microphones are better. There are really nice, fancy microphones that focus on the speaker and don't pick up anything else, and that also doesn't work well for DART, because we want the room noise. Just a phone on the blackboard pointed at the class, that seems to pick everything up. And then you would go to our website; it's dart.sfsu.edu.
[00:18:26] Derek Bruff: Sure. And I'll definitely have a link to it in the show notes.
[00:18:30] Melinda Owens: Okay, cool. And you do need to register, but it's free. You just log in and upload your recording, and after a couple of minutes it'll spit out both your raw audio traces, which are really cool to see, and percentages: how long in single voice, multiple voice, and no voice. What I would say is that as an instructor, you might be surprised.
A lot of folks who were thinking, oh, I do a lot of active learning, are surprised at how low the multiple voice sometimes is. And that's because some of the things you might consider part of active learning, and that other tools like COPUS do count as active learning, are setup for what the students are actually doing.
So leading into an activity, the instructions and things like that, and also the debrief and the follow-up from the activity: that's important for the activity, but in and of itself it's single voice. So I think sometimes people are surprised by how low it is.
You have to calibrate yourself a little when you see this, but I think it's really interesting to see just how much of the time you spend talking versus how much the students spend talking. I think that is the most important thing, because when the students are talking, or they're silent and engaging, those are the times when their minds are working.
Because ultimately I can explain stuff, but they have to learn it in their own heads and process it for themselves. So I think it's really useful for that purpose. With our initial set of instructors, we did give the data back to them and asked them to reflect, and a lot of them were a little bit surprised.
These people were all in a faculty development program where they were supposed to be using these active learning techniques, so they were surprised. Beyond that, they had some really good insights about their teaching. Sometimes they knew these things already, but having it in black and white, in the form of graphs and this decibel data, really made it obvious.
People would notice things like, oh, I do activities at the beginning and end of class, but I just talk in the middle. Or things like, oh gosh, by the middle of the term it's all single voice. Or, oh, that lecture, I hate that lecture, I talk too much. DART is so easy that you could record and analyze all your lectures, and that's really great for research, because we get a really full picture. But it's also great for the instructors to get a full picture, and not just select here and there which class sessions they're going to get feedback on.
And so I think for an instructor just looking at this and thinking, where are some areas where I could add more engagement and more student talk or student thinking, DART can be really helpful.
[00:22:14] Derek Bruff: Yeah, this idea of doing it every day is interesting. I ran a teaching center for a long time, and one of the services we offered was a classroom observation service that some faculty would take us up on. We might come twice a semester if you sweet-talked us; we just didn't have enough time to go more than once, typically. And so then you just get that one class session that you get a little feedback on.
Whereas this is a different kind of feedback, certainly, but you can get it every single time you teach.
[00:22:44] Melinda Owens: And various studies have shown that you need to sample three, four, five, or more class sessions to accurately get a sense of someone's teaching, especially if they're in the middle of trying new things and changing their teaching style.
And other, more traditional ways of getting classroom feedback are people-intensive. With this, and again, the accuracy rate is in the upper eighties to low nineties, you get the data every day, and you can really see a complete picture of how people teach.
[00:23:26] Derek Bruff: And you mentioned the accuracy, and I think maybe you've implied this already, but part of the study was to use human observers to compare the results of the DART algorithm to what the humans observed in terms of how class time was being used. So when you say it's accurate, that's the kind of accuracy you're looking at.
[00:23:46] Melinda Owens: Yeah. And those were people not actually sitting in the classes, but undergrads who are very familiar with how classes work, listening to the recordings and saying what they think the activities are.
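The validation described here boils down to a simple agreement calculation: human listeners label each stretch of a recording, and you count how often the algorithm's labels match theirs. Here's a small sketch with made-up labels (the label names and data are illustrative, not from the DART study):

```python
def percent_agreement(human_labels, dart_labels):
    """Validation-style check like the one described above: compare a human
    annotator's label for each time window against the algorithm's label,
    and report the percentage of windows where they agree."""
    if len(human_labels) != len(dart_labels):
        raise ValueError("label sequences must cover the same windows")
    matches = sum(h == d for h, d in zip(human_labels, dart_labels))
    return 100.0 * matches / len(human_labels)

# Hypothetical labels for ten consecutive windows of one class recording
human = ["single", "single", "multiple", "multiple", "none",
         "single", "multiple", "single", "single", "none"]
dart = ["single", "single", "multiple", "single", "none",
        "single", "multiple", "single", "multiple", "none"]

print(f"{percent_agreement(human, dart):.0f}% agreement")  # prints "80% agreement"
```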
[00:24:01] Derek Bruff: Okay. Yeah, that makes sense. What is next for DART?
[00:24:09] Melinda Owens: A couple of things. On the algorithm side, there are some things we want to improve, and new features, though that has been a little slow going. Things like picking up no voice a little better, trying to figure out the quiet. And people have requested things like, how do you know it's the instructor talking versus a student talking? That's actually hard, because is the first person to talk always the instructor? No, right?
We checked that out. So that's one thing, a slower-going project, and we are trying to find partners and funding. Another thing is just expanding the use of it. Again, what we showed in our paper was mostly biology education.
We've had people from other disciplines approach us about using it, and we've seen that there might be some disciplinary differences. For example, we've been told that in math classrooms, sometimes people do group work very quietly; they're all huddled around a worksheet or something.
Very quiet. We want to explore those kinds of differences. People have also asked, do the room acoustics matter? People get these awesome active learning rooms, but that might affect the sound. So those are questions we want to explore. For now, we're just saying that if you are thinking of using it for research or something, do a validation on your own first.
Find a recording where multiple people are talking to each other and see if it gets picked up. And that goes for if you suspect your microphone or podcasting system might not work: just see if it does. But other than that, we're using this to investigate the relationship between active learning and classroom practices and other variables of interest.
So one of the things my colleagues at UC San Diego and I are working on is looking to see, in our intro bio series for example, what's the relationship between DART scores and equity gaps and grades? What's the relationship between DART scores and measures of student community?
So correlational work. And then also, on the other end, what's the relationship between DART scores and what people say their conceptions of teaching are? So we're using this as a tool to help understand how instructors teach and how those practices affect students.
[00:27:19] Derek Bruff: I love it. I love it.
Yeah, and part of the power is that you can roll this out pretty easily across lots of sections and lots of classes. And so you can take some of these studies where it otherwise might have been really hard to get big numbers, and you can scale them up, hopefully.
That's really cool.
[00:27:34] Melinda Owens: Yeah, in our initial study we had dozens and dozens, something like 60 instructors. And we recorded as many of their classes as we could. People just recorded themselves, or they had their TAs record, and gosh, I just ran them all, I don't know, over a long weekend or something.
[00:28:01] Derek Bruff: Yeah. The algorithm doesn't take that long. Yeah. No, that's great.
[00:28:06] Melinda Owens: No, not at all.
[00:28:08] Derek Bruff: Has your involvement in the DART project changed your own approach to teaching?
[00:28:13] Melinda Owens: Yeah, so I definitely DART myself and look at the recordings. It's definitely pushed me to engage in those activities more. UC San Diego is a really big school.
A lot of instructors are teaching this material, and I know for a fact that a lot of them cover a lot more material than me, and they pretty much lecture. So I do feel pressure, right? But I can counteract that by reminding myself that it's not how much content comes out of my mouth, it's how much the students are absorbing and learning.
And so it's helped me ground myself better in these principles of effective teaching and helped me monitor what's actually going on in my classes. Not just my idea, or my ideals, of what should be going on, but what's actually going on. And I will say, I've submitted DART traces as part of my promotion packets.
I'm not quite at tenure yet, but I included them in my fourth-year packet. I had to do some explaining about what it means, but it was taken pretty positively.
[00:29:49] Derek Bruff: Wow. That is very interesting. I'm going to have to think about the implications of that.
[00:29:56] Melinda Owens: Yeah, we don't want to use this to say that certain patterns of teaching are good and bad, right? Because there are also different ways that people teach. People who incorporate a lot of diversity, equity, inclusion, and justice material, people who have a lot of cool out-of-class activities and things like that.
That is not at all going to be captured by DART. So this isn't meant to be a ruler, to say, okay, you're good and you're bad. When we published the paper, some people were really worried that it was going to be like that. Oh, the temptation to just have a number, and it's an easy number.
That's such a temptation. But that hasn't really happened. I do believe that it can be an effective tool for someone to document their teaching style and to show how their teaching philosophy actually gets expressed in their classroom. And that's how I use it, to back up when I say, oh, I do active learning, to really demonstrate that I do.
In that sense, I think it can be a good tool for instructors, but it's not meant for anyone to say, okay, you're good and you're bad.
[00:31:23] Derek Bruff: Yeah. Yeah. It's descriptive, not prescriptive.
[00:31:28] Melinda Owens: Yeah. And it's just one piece of someone's teaching.
[00:31:33] Derek Bruff: And that's what a good teaching portfolio has, though: lots of different pieces, hopefully.
Thank you, Melinda, for being on the podcast and sharing your work. DART is super interesting, and I'm glad someone built this tool so that I can use it.
And like I said, I'll put the links in the show notes in case any of our listeners want to try it out themselves. It sounds pretty easy to get started, and that's great to know.
[00:31:54] Melinda Owens: All right. Thank you so much, and I'm happy to talk about it.
[00:31:58] Derek Bruff: That was Melinda Owens, assistant teaching professor in neurobiology at the University of California San Diego.
Melinda is one of the lead developers of DART, the Decibel Analysis for Research in Teaching tool, and the lead author on the 2017 Proceedings of the National Academy of Sciences paper about DART and its use to classify teaching practices. Melinda had, get this, 82 co-authors on that paper. She included all the faculty who worked to test drive DART in their classrooms.
I asked Melinda what it was like to write a paper with so many co-authors, and I've shared her answer over on the Intentional Teaching Patreon page as a little bonus clip from this episode. DART uses machine learning to produce its teaching practice analysis. Machine learning might not be as flashy as the text and image generators we discussed on the previous episode of this podcast with Robert Cummings, but machine learning is a kind of AI.
When I started this podcast, I didn't set out to explore the use of AI in teaching and learning, but that seems to be a theme that I can't avoid. If you know of other interesting places AI is showing up in higher ed or other quantified self tools like DART that are relevant to teaching, please let me know.
Thanks to Melinda Owens for taking the time to talk with me. See the show notes for links to more information about Melinda and her work, as well as to the DART website, and I'd love to hear what you learn from using Dart to analyze your own teaching. This episode of Intentional Teaching was produced and edited by me, Derek Bruff.
See the show notes for links to my website, the signup form for the Intentional Teaching Newsletter, which goes out most Thursdays. And my Patreon, which helps support the show. For just a few bucks a month, you get access to the occasional bonus episode, Patreon only teaching resources, the archive of past newsletters, and a community of intentional educators to chat with.
As always, thanks for listening.