87 – The Impact of AI on Alternative Grading: Talking with Dr. Robert Talbert

Description

In this episode we welcome back to the pod Dr. Robert Talbert, a mathematics professor at Grand Valley State University and author of books on Flipped Learning and Grading for Growth. Robert joins us to discuss his struggles with the increasing use of AI tools in his alternatively graded classes and how he is revising his courses to adjust. We discuss the threats and opportunities AI poses for an alternatively graded class and share some thoughts about the future of education in an age of AI.

Links

Please note – any books linked here are likely Amazon Associates links. Clicking on them and purchasing through them helps support the show. Thanks for your support!

Resources

The Center for Grading Reform – seeking to advance education in the United States by supporting effective grading reform at all levels through conferences, educational workshops, professional development, research and scholarship, influencing public policy, and community building.

The Grading Conference – an annual, online conference exploring Alternative Grading in Higher Education & K-12.

Some great resources to educate yourself about Alternative Grading:

Recommended Books on Alternative Grading:

Follow us on Bluesky, Facebook and Instagram – @thegradingpod. To leave us a comment, please go to our website: http://www.thegradingpod.com and leave a comment on this episode’s page.

If you would like to be considered to be a guest on this show, please reach out using the Contact Us form on our website, www.thegradingpod.com.

All content of this podcast and website are solely the opinions of the hosts and guests and do not necessarily represent the views of California State University Los Angeles or the Los Angeles Unified School District.

Music

Country Rock performed by Lite Saturation, licensed under an Attribution-NonCommercial-NoDerivatives 4.0 International License.

Transcript

87 – Robert_Talbert_onAI

===

Robert Talbert: I’m going to do it for myself, with my department; we’re all kind of collectively trying to figure out how to make this work for ourselves, and so far we’re making some pretty good headway. And it doesn’t mean just mitigating academic dishonesty. We want to do this, but we don’t want to make it about cheating, cheating, cheating. It’s like, wow, these are really great tools, and they give us an opportunity to maybe finally flush some of the stuff that’s been clogging our courses this whole time. Cough, cough, cough, trig substitution in Calculus 2.

Boz: Welcome to the Grading Podcast, where we’ll take a critical lens to the methods of assessing student learning, from traditional grading to alternative methods of grading. We’ll look at how grades impact our classrooms and our students’ success. I’m Robert Bosley, a high school math teacher, instructional coach, intervention specialist, and instructional designer in the Los Angeles Unified School District and with Cal State LA.

Sharona: And I’m Sharona Krinsky, a math instructor at Cal State Los Angeles, faculty coach, and instructional designer. Whether you work in higher ed or K-12, whatever your discipline is, whether you are a teacher, a coach, or an administrator, this podcast is for you. Each week, you will get the practical, detailed information you need to actually implement effective grading practices in your class and at your institution.

Boz: Hello and welcome back to the podcast. I’m Robert Bosley, one of your two co-hosts, and with me as always is Sharona Krinsky. How are you doing today, Sharona?

Sharona: Well, it’s definitely an interesting day. Those who know me personally know that I tend to be pretty busy. Today I worked on a precalculus redesign, went to a luncheon for a grant-making organization with my theater company, now I’m recording a podcast, and then I’m going to be moving in less than 24 hours. Oh, and I have a board meeting tonight. So yeah, I’ve done a little bit. How are you doing?

Boz: Right now? Glad I’m not you, basically.

Sharona: Well, I do it to myself; that’s what’s terrible. Anyway, I’m excited because we’re welcoming back to the pod one of the people who is, of course, my whole inspiration for this crazy journey I’ve been on for the last decade: Dr. Robert Talbert. Welcome back, Robert.

Robert Talbert: Thanks! It’s good to see you guys again.

Boz: So.

Sharona: Go ahead, Boz.

Boz: We’d love to have you back anytime, but we specifically asked you back because a couple of months ago you posted on your blog, Grading for Growth (which, if anyone listening hasn’t checked it out yet, either the blog or the book, you really should), about a topic that’s become really hot here in the last six months to a year: AI. So we wanted to bring you back on to talk about where that post came from and what’s happened since you wrote it.

Robert Talbert: Yeah, thanks for having me back to talk about this. It’s been a real learning experience. We’re all aware that generative AI, generative artificial intelligence, has really come on strong in the last couple of years. We all sort of use it. Recently we were asked at my university to make a list of all the AI tools that we were using, and it was just like, well, it’s all of them. Every tool is an AI tool now; everything has AI built into it.

Even your light. I just ordered a smart light bulb tonight and it’s got AI built into the light bulb, for goodness’ sakes. You can’t really get away from this. But I had been trying to get away from it until last fall. I’d hear all the hype about AI and think, I’m just going to kind of hang back and see where this thing takes me. Well, where the post you’re mentioning came from, Bosley, was a class I was teaching: an upper-level discrete math course for computer science majors. It involves a fair bit of proof writing, so it’s a pretty writing-intensive class, unlike some math classes. And I don’t have a sense of exactly how much students were using AI in that class, but it was a lot.

I’m aware of the tools, you know; I use them myself for learning, and I want students to use AI tools as things to help them learn with. I think that’s a really powerful way to think about them. So I introduce AI and we talk about it, including some of its shortcomings. These are all computer science majors, so they all know; they’ve tried using Copilot. Even their basic text editor has Copilot built into it, and it’s not great for writing code. So they know the shortcomings and that sort of thing. But still, I was getting a lot, I mean a lot, of proofs that, and I don’t mean to denigrate my students on this, just had to be AI generated.

And I feel like I’ve got a pretty good spider sense about when something is generated by AI and when it isn’t, whether it’s text or images or video or something like that. Sometimes I’ll see somebody write something, I’ll read it, and I’ll pull them off to the side and say, hey, was that AI generated? And they’re like, yeah, yeah, sorry. No, you don’t have to apologize, I was just wondering. But my spider sense was going off like crazy all the time, and it was distressing.

I taught the class in the fall more or less as I had done it recently. I mean, I always iterate on my course designs, with alternative grading and how things are set up, but I feel like I’ve gotten into a pretty good routine of what I’m going to teach, how I’m going to teach it, how I’m going to assess it, what the standards are. I’ve got a body of materials built up; I feel like I have this thing set up and ready to go. And, wow, I really ran into the rocks with the AI-generated student work, and I found it hard to get students to simply tell me what was going on with AI.

I gave out a survey at the end of the semester, a totally anonymous survey, so I can’t give you credit for filling this out because it’s anonymous, but: are you aware that this is the rule for AI usage in the class and what constitutes academic honesty or dishonesty, yes or no? And did you at any point do work on your assignments that was knowingly in violation of this policy? I got a lot of yeses and yeses. But only a few students filled out the survey, so I’m not really sure what was going on. I still do not know the full extent to which AI was used against my academic honesty rules, but I know that it was nonzero and it was fairly significant. Sorry, Sharona, go ahead and interrupt me.

Sharona: There are two points I just want to call out here, because we’ve talked about this before. Number one, one of the pillars is assessing the learning outcomes, and we talk about wanting authentic evidence of learning. So AI is going right at the heart of one of the pillars of alternative grading. And number two, and I know I’ve talked about this with you, prior to AI we saw a decrease in what we would call cheating or academic dishonesty. So this must have really hit where you were at.

Robert Talbert: Yeah, yeah. So I think there are a number of things. Again, one of the hardest things to talk about in this issue is that I don’t really know whether it was a really widespread situation or not. I really can’t tell. The AI tools themselves are so good now that their output really does look like human writing. Back in the good old days it was just people googling stuff, ripping off an induction proof from some site, and sticking their name on it. You could read it and go, man, this doesn’t sound like you at all, then copy and paste it, drop it into Google, do a reverse lookup, and it’s all good. But now, if I think that a student has used AI to generate a proof or a solution to a problem, I can only suspect, and then I can ask them: okay, just be honest with me. Did you use AI in this or didn’t you?

And some students will say yes, and they did. Some students will say no, and they did. And some students will say no, and they didn’t. So now I have to fall back on: do I really trust my students or not? Underneath all four pillars, like you just mentioned, is the zeroth pillar, the floor that the pillars sit on, and that’s trusting the students. I have to know that my students’ work is really them. It’s their voice, it’s their thoughts, flaws and all. We’ve got a whole system built up with the specifications grading approach that I take, where if you turn in something with flaws, it’s okay. You just keep iterating, and as long as you’re learning from the feedback, this is going to be a self-correcting problem eventually.

What I didn’t really bargain for was the one thing that cuts across all of the conventional wisdom about alternative grading and normalizing the idea of repetition and initial failures. Like, why would you cheat if you can just redo things? I always said this, but I think there’s an answer to that. The answer is time. Cheating saves time. This is a cost-benefit analysis that any rational person, to be fair, is going to run. Why go through the hard work of iterating through a feedback loop when you can just type the problem into ChatGPT, knowing it’s going to be good, knowing it’s going to be right, and just copy, paste, change a word here and there, and turn it in? I think that is the conundrum a lot of my students really run into.

Sharona: Well, I would actually make that statement a little bigger, because you said that cheating saves time. I’m going to argue that in a lot of cases, AI saves time, period. I have an MBA in marketing. I have to write marketing copy for the theater company, or for the university, more often than I realize sometimes. I can write marketing copy; I’m good at writing marketing copy. It’s not trivial. So I’d much rather open up an AI chat and say, hey, I need marketing copy for this program, it has these parameters, I need it to say this, and it should sound like this. Those thoughts are already in my head; I already know that part. I toss it in there, and 12 seconds later it spits out pretty darn good marketing copy, 97 percent of the way there. That would have taken me 45 minutes to write. So who cares? It’s just as good as what I would have written, for a lot less time, and with how much pressure I have on my time, why wouldn’t I? Now, my own answer is: because I can tell it’s good, right? That’s the shortcut answer. It only works because I’m already an expert. So that’s where it’s different.

Boz: You’re an expert at what you were trying to write, but you’re also pretty daggum good at writing AI prompts, and that helps a lot. And I know from us working together how many times you’ve done this on some of the grant stuff, and then you go back and revise it.

Sharona: Oh yeah.

Boz: It’s not a copy and paste. It’s: here’s something that’s gotten me, you know, 80 percent of the way through the very formulaic, repetitive writing of grant writing. Then you go back, finish it up, polish it, and make it uniquely yours and better. But someone like myself, who is not as good of a writer as you are, would not be able to use that tool as effectively as you’re using it.

Sharona: And that’s what I worry about. So I’m curious to see what you did, Robert, because I worry about our students who are not experts in what they’re learning. And is that struggle super important? So, I want to know what you did.

Robert Talbert: Yeah, so first of all, I’ve got to chime in and say absolutely, you’re right, Sharona. AI tools certainly save time. For example, I am starting to work on the second edition of my first book, Flipped Learning: A Guide for Higher Education Faculty. It was written 10 years ago, and it’s horribly, horribly out of date. So I’m starting the process of revamping the entire research review in chapter two of that book, and I’ve used some AI tools, and it is just astounding how much time they save. It’s not cutting out work, necessarily; it’s getting me to the front lines of the work I need to be doing faster. It’s like if you’re running a 5K race and you want to finish in less than 30 minutes, but they start you a hundred yards behind the start line, so you have to run a hundred yards just to get to the start line before you’re actually running the 5K. It’s all this extraneous labor you have to do before you actually run the race.

And I think AI is brilliant for taking that off my plate. And I can neither confirm nor deny that I have written entire committee reports for my university completely using ChatGPT and never felt the slightest bit of guilt about it. Because, who cares? Nobody even reads this stuff.

Sharona: Other than the environmental impact of the AI itself, right?

Robert Talbert: Yeah, well, that’s kind of up in the air. I mean, that may get mitigated; that’s a whole other thing that I don’t know about. But what it makes me think of is: what do my students see my assignments as? Do they see them as extraneous labor? Do they see them as the committee report that you’ve got to write at the end of the semester that nobody cares about and nobody’s going to read? Or do they see them as really meaningful work? And they might use AI along the way. If I give a proof problem about Hamiltonian circuits, for example, and maybe you don’t really know what a Hamiltonian circuit is, you go to ChatGPT or Perplexity or whatever and start asking questions: What is a Hamiltonian circuit? Can you give me an example of a graph that has a Hamiltonian circuit? Give me an example of a graph that doesn’t have one.

This is all labor that you’re having the computer do that gets you to the point where you can finally do the real work, which is what I’m assigning them. Or do they just see my work as the extraneous labor? That’s a question I’m really struggling with, and I think we all should probably be struggling with it more than we truly are. Now, you asked me what I’m doing currently. This class is a two-semester sequence, and the one where I kind of got crushed by AI was the second semester of it. I’m back to teaching the first semester, so it’s a lower-level course.

Basically, I don’t necessarily like what I ended up with, but I’m going to say it, because it’s what I ended up with. When I build my courses, I think of them along three axes. On one axis I have basic skills, and those are, and have always been, assessed through in-class timed quizzes. So there’s no AI issue with that. In fact, AI can be very helpful in preparing for these quizzes, because the students want more practice. It’s like, well, guess what? You have this tool that’s basically free on your computer, and you can ask it for as many practice problems as you can handle, if you know how to prompt it correctly.

So we talk about how to prompt the AI to give you good practice questions. That’s pretty helpful. So that’s not something I changed. One of the other axes is engagement: do you come to class, do you fill out the surveys I send you, do you do all this stuff? And if people are using AI to cheat on engagement tasks, I think that’s a sort of self-defeating behavior, and I don’t really police it.

But the third component is applications. So I really want my students to be able to give me evidence that they can take basic skills that they have been tested on in class and apply them to real things. And that could be a proof or it could be an application in computer science or whatever. In this first semester course, there’s no proof writing. But there’s a lot of problem solving that has to be done. I might give you a proof and you have to critique it. Or I might give you a Python function and you have to tell me what it does and that sort of thing.

What I’ve done there is to continue giving that as homework that is taken home and done individually by the students, fully aware that there is absolutely no way I can keep them from going to an AI tool to have it do the work for them. That homework is graded as engagement, so it goes into the big pot of engagement points that I use; I call them engagement credits. It’s the same pot that holds things like attending class or filling out a survey. So doing your homework is not exactly graded; it only earns engagement credits. It’s scored on completeness and effort: zero, two, or four points. If it’s a reasonably good effort, I will check it over, and if I spot problems I will point them out and you’ll get the feedback. But when you turn in the homework, you just get engagement credits for it.

Later on, I give in-class timed exams that consist of problems that come from the homework sets. I excerpt problems from the homework and give them back to you as a timed exam, two of these throughout the semester, and those are graded on specifications: you either get an Excellent, a Success, or a Retry. I have two dates set up during the semester where you get to do a retry, and there’s a third retry opportunity at the final exam session. So you have these challenging problems given to you as homework, and you do them, but that’s not the end of it. That’s only for, basically, attendance credit; that’s what it amounts to. Later on, you’re going to get an exam on it.

Now, what this leads to is, first of all, if a student uses AI to cheat on homework, it earns them a tiny, tiny bit of engagement credit that really doesn’t matter. There are so many ways to get engagement credits, it’s crazy; most people are going to get double what they need. If you just show up to class, you end up earning enough to get a B in the class. So use AI all you want on that, but eventually it’s going to be you and a piece of paper with a clock running, and you’ve got to be able to put it on paper correctly.

And you can do that one of two ways. You can either memorize literally everything that you put on your homework, or you can simply know how to reproduce it, because you have dealt with it, you’ve thought about it, you understand it. So the course grade is now set up where, without going into the weeds here, you have to have a certain number of the basic skills mastered through timed testing, and on the homework exams you have to have two Excellent scores to get an A, one Excellent and one Success to get a B, and two Successes to get a C. Anything else is a D.
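For readers who like to see grading bundles written out, the homework-exam portion of this scheme can be sketched as a small decision rule. This is our own illustration of the rules as Robert describes them, not code from his course, and it leaves out the basic-skills requirement, whose exact thresholds aren’t specified in the episode:

```python
def homework_exam_grade(marks):
    """Map the two homework-exam marks (each "Excellent", "Success",
    or "Retry") to the letter grade for this axis of the course."""
    excellents = marks.count("Excellent")
    successes = marks.count("Success")
    if excellents == 2:
        return "A"  # two Excellents
    if excellents == 1 and successes == 1:
        return "B"  # one Excellent, one Success
    if successes == 2:
        return "C"  # two Successes
    return "D"      # anything else

# For example, one Excellent plus one Retry still lands at a D:
# homework_exam_grade(["Excellent", "Retry"]) -> "D"
```

Note that under these rules a single Retry left standing caps this axis at a D, which is why the two mid-semester retry dates and the final-exam retry matter so much.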

So far we’ve had a couple of rounds of the basic skills exams and one round of the homework exams. It is a lot of exams, seemingly every couple of weeks, and I don’t care for that, because timed exams have their own set of problems, right? When the clock is running, people get into fight-or-flight mode, and some people are more prone to the flight than the fight, for sure.

But I will say that it’s not much different from the way I used to do it, in the sense that I used to have take-home homework that was graded completely by itself, no exams, but I would give basic skills exams every other week, or sometimes every week. So there was a lot of quizzing on basic skills. Now there’s less quizzing on the basic skills, and in its place are these new homework exams. So everything that’s significant toward the course grade is now assessed through a timed exam that is done in person, in class, on a piece of paper, with no technology other than a calculator. That’s it. I’ll take questions now.

I know this is absolutely not the way that I meant to teach the class. However, I will say one last thing, and then I’ll shut up for a second: it’s not terrible. Students have come to me and said, this exam is really easy if you just do your homework. It’s like, yes. Perfect. Now we’re getting somewhere. So I’m kind of warming to it, honestly. I was sure I was not going to like this setup before we started. It’s kind of old school, but giving the students the homework and plenty of feedback actually works. I don’t get as much feedback iteration as they have the appetite for, compared to before, but I think it’s kind of working.

Boz: So you said that you weren’t completely happy with it, though, or at least at the beginning you weren’t. What aspect of it, other than it being a little bit old school, were you not really happy or thrilled with?

Robert Talbert: Yeah, first of all, I should clarify that I have no problem with old school. But there are two things, really. First, with this setup I can’t give problems as challenging as I used to. You could not really expect a student to reproduce on an exam the kind of problems I used to give; those were pretty tough problems. I could show some examples, I guess. I could still give problems that difficult, at that high a level, but I’m keeping in mind that at some point in a couple of weeks they’re going to have to do this on an exam, along with a bunch of other stuff too.

And so these are somewhat shallower problems than I used to give. Secondly, I want all students to be, if I can say this, forced in some way to engage with a feedback loop on these challenging problems. They’re challenging, so you do them, and you’re not likely to get perfection, or even success, on the first try. So you have to do it again, over and over. I really want students to engage with that feedback loop. In some sense, if I give a problem that students get right on the first try, that was a failure on my part; I should have given something more challenging, at a higher level.

With the way this is set up, students could do the homework or not. They could simply skip the homework and get the engagement points through other means; they certainly could do that. And that might turn into a one-and-done situation, or maybe a two-and-done situation, if they take the homework exam, work their problem, and get a chance once or maybe twice to redo it. Sometimes I feel like the real problems that are worth students’ time are going to take more than two iterations to get through, and this setup is limited in that sense.

Sharona: So I wonder if another drawback, because this is what it would be for me: we’ve been using timed quizzes asynchronously for a number of years in the statistics class, and we are seeing an increase in the use of AI. That usually shows up as writing quality that’s better than our students would typically produce in the class, but we suspect they’re getting more sophisticated as well. The reason we haven’t switched back yet, and this is a GE course, is that we hate to give up the class time for the amount of quizzing that we do. So has that impacted you at all, the need to use so much of your precious class time?

Robert Talbert: Actually, in my case, no, I would say. Because like I said, I used to have this setup where the basic skills quizzes were super frequent. It used to be every week, 25 minutes a week. Then I moved to 50 minutes every other week, and now I’ve moved it back to basically 50 minutes every four weeks. In its place, I’m putting these homework exams and retake days and makeup days and that sort of thing. I did cut some material from the class to make room for it, I’ll say that, but it was low-hanging fruit. I’m not going to tell anybody what I cut, just to see if anybody notices. That’s what I always say: you should ask for forgiveness, not permission, on this.

So I was able to reallocate assessment time in the class to do different things with it, rather than the same kind of quiz all the time. Other people may not be able to do that, though, so I’m not here on your podcast advocating for what I’m doing; I’m just saying this is how I’m handling it. And right now the gold standard for making sure you’re getting your students’ voices, and not just an AI prompt, is oral exams, right? If we had the ability to give oral exams to our students, that would be fantastic. But virtually nobody has that opportunity, unless you’re teaching at some prep school where you’ve got five students in your class. I’ve got twenty-five in each of my sections. Look, there is no way I would do oral exams, on the regular, with fifty students. No thanks.

Sharona: I was wondering if you could do, like, one per semester per student, where they had to do some sort of an oral presentation, whether it’s a cumulative final project or something where they could really show what they know. I don’t know.

Robert Talbert: Yeah, that’s certainly an option, Sharona, and I’ll say this. You originally asked me to be on this podcast back in January to talk about that post, and I was like, well, the thing is, I’m teaching an asynchronous version of this course in May. So I said, why don’t you ask me about this in seven weeks, because by then I’m supposed to have some idea of what I’m doing in May. I will be honest, I still don’t really have much of an idea what I’m doing in May, but one of the things I’m thinking of doing with this asynchronous version is exactly what you just said: at the end of the semester, every student will sign up for a 30-minute cumulative grab-bag oral exam, with maybe one retake on it or whatever. I’ll figure something out.

But that’s definitely one thing that adds some summative assessment at the end, and you could do it on a limited basis, for sure. It would be nice if Flipgrid still existed. Flipgrid was a video tool that was super simple to use; students could record asynchronous videos. It changed its name to Flip, and then Microsoft killed it, so it no longer exists. I used to use Flipgrid to let students do alternative assessment on basic skills: they could make me a video and show me. Anymore, though, I think the AI tools are good enough that you could actually fake the video, so I don’t even know if that would work these days.

Sharona: Well, I guess what I’m hearing kind of speaks to a bigger issue that I feel like we’re facing in higher education, which is: what is the purpose of these classes? What is the purpose of all the stuff we’re trying to teach students? If everyone and anyone can merely type any problem into an AI and get a decent enough response, then what’s the purpose? I know I have my answer to that, but what do you think your answer might be?

Robert Talbert: [portions of this answer are missing from the transcript] …ed tech conference way back in… again. When MOOCs came along…

I think one of the questions we, especially in math, all need to start asking ourselves is this: we need to be open to the idea that the entire way we have structured mathematics instruction and the curriculum in higher education, a lot of it, just doesn’t make sense anymore. We are teaching a lot of stuff that is about a hundred years out of date. And again, I have no problem with being old school, but when something’s obsolete, it’s obsolete. How much of the stuff that we teach in college algebra even needs to be taught anymore? I mean, seriously. And we can’t get our backs up when we ask this question anymore. You say something like that, five years ago or even now, and the room splits in half: the old school people who want everything to stay the same versus the new school people who want everything to change. Well, now it’s just like, look, this is a real thing. These rote mechanical computations that we have based the entire undergraduate mathematics curriculum on, it seems like, and most of our gen ed courses and everything else, it just may not make sense anymore to teach this stuff. We need to be open to that being a possibility, and that’s a deep and disturbing possibility, because now you’ve got to really change things up.

When I think about what we’re here for, to finally get to Sharona’s answer after so much talking from me: I go back to my first career stop, which was at a small liberal arts college. We were very serious about the liberal arts at that institution, about forming an intellectual persona for each person. That involves human relationships. It involves broad exposure to great and beautiful things across the entire spectrum of intellectual experience: mathematics and science and literature and history and philosophy and all these things. And these are things that AIs cannot do. These are things that involve virtue, and I think virtue is something that an artificial intelligence can, by definition, never really have.

I don't know exactly what that looks like on the ground in your statistics course next Monday, Sharona, but it has to get back to recapturing the human. I think we have really dehumanized and institutionalized higher education in the last 20 years, to the point where it's much more about cranking out a product. I'm not sure what the product is; I don't know if the students are the product or the degree is the product. But it's very widgety, I would say, if that makes any sense.

Now we've put ourselves in the position where we're like the assembly line that's being replaced by robots. Back in the 80s, when I was a kid, the big disruptive technology was automated assembly lines. Now we're the assembly line. We have gotten ourselves into the position of being an assembly line, and now we're being automated, so we've got some culpability in this, and it's on us to act on it. Now, what the answers are, I have no idea. We're all kind of in this together, pushing forward, doing our thing here. But I think whenever there's disruptive technology involved, like a calculator, or like Wolfram Alpha 10 or 15 years ago, it's best used when it augments what humans do.

That's what all good technologies do. I'm sitting here looking at a video camera that allows me to have a podcast with you three time zones away, and this is an augmentation of conversation. This is a conversation that wouldn't happen if it weren't for the technology. It's not replacing the human conversation that we're having; it's making it possible. It's making it better. It's getting it out to more people so they can be part of that conversation as well. Somewhere or another, we in the math world, and in higher ed generally speaking, have got to figure out how AI is going to be that piece of technology that makes us humans better at everything.

Boz: But see, it's interesting that you're bringing that up, because this came up in our interview from a couple episodes back with Dr. Patrick Morriss. And Sharona, I know this is an ongoing discussion you're having in your position as a coordinator: of what we're teaching, especially in some of the gen ed math classes, what parts are really still necessary? We had this big discussion with Patrick about notation, about interval notation, and it brought up a discussion you've been having about trig substitutions and trig rules and doing some of these more advanced trig things by hand. And what you were saying, Robert, is interesting because, and Sharona, I'm actually going to credit you with this, I think you're the one who said this a long time ago: AI really is the biggest change to education, or it's going to be, since the birth of the internet. Graphing calculators have been around for a while; I was there when the 92 came out. I remember when you were talking about it earlier, I remember that calculator. I remember Wolfram Alpha. None of it had the amount of impact that the internet itself did. And I'm old enough that I got my first email just after I graduated high school. Literally, my mom got internet in the house like two months before I graduated high school.

Sharona: And I keep teasing him about how young he is.

Robert Talbert: It was so much fun back then too, right?

Boz: Yeah.

Sharona: What?

Robert Talbert: It was, it was so much fun then too, wasn't it? Email was exciting and cool, and now it's like, oh man, email, this sucks. We have like 14,000 unread emails in our inboxes. It's like a drag on our lives.

Boz: You open it up, you open up that great AOL: "You've got mail."

Robert Talbert: Yeah. Well, how many people even know what AOL was?

Sharona: I want to piggyback, Boz, on something you said, though. I agree; I do think AI is the biggest disruptor since the internet. The internet, to some degree (not as much as its proponents pushed), was a bit of a leveler of the playing field. We have managed to get internet access to the bulk of people, at least in the United States, especially with some of the rural broadband efforts and things like that. I'm very concerned that AI is not a leveler. AI has the potential to be incredibly inequitable, because using AI well is going to take skill. And to me, that's part of my answer to why we need higher education and why we need education in general.

Bloom, in his Learning for Mastery article, said that the kind of complex technological society that we aspire to live in, and that we do live in, requires 95 percent of its populace to have a much more advanced level of education than was needed prior to, say, the Industrial Revolution. And I would argue that for those of us committed to equitable outcomes, independent of class or race or demographics, we have an obligation to find ways to make students into critical thinkers who have the capacity to use these tools, because misuse of these tools is potentially so stunting that it's going to separate people. Like I said, being an expert in the fields I'm an expert in, and knowing where I'm not an expert, gives me a tremendous advantage in the use of AI, even more, I'd say, than a lot of my colleagues, because I have the mix of the critical thinking of the mathematics world and the business world: I've got the marketing and the language and the writing skills. I'm very concerned that my students will rely on it at the wrong time for the wrong reason. But if I'm not providing a value that I can articulate to them, of course they're going to do it.

Robert Talbert: Yeah, that reminds me, there was a great article in The Economist just recently, and I can give you that link for the show notes, about a study that found exactly what you just said: when AI is introduced into knowledge-worker situations in the private sector, it has one of two effects. For people who come to the tool with a certain level of intellectual training and certain productivity mindsets, it accelerates their performance at work. And people who come in below that particular line actually atrophy; they lose critical thinking skills. It widens the inequalities, as you say.

And so I read this article and I thought, well, there's this bifurcation line where, if you're above this line in your education and your skill level, and that's kind of what it boils down to, you're going to do great with AI. And by education I don't mean where you got your degree from or something like that; I mean how well educated you are, how well trained your mind is. If you're above a certain line, you're going to do great with AI. If not, you're going to fall below it.

I read that and I thought, well, I know what my job is now: to get every single last one of my students above that line, by hook or by crook. And to me, the rest is just sorting out the details, man. My institution's a lot like yours, Sharona. It's a pretty diverse institution. We get people from outside the United States, from all over the United States, from inner-city Detroit, from the hinterlands of the Upper Peninsula. We get all kinds here, and we have a huge and wonderful veteran student body. And so I'm thinking about these situations of inequity too. If you come to AI with an underdeveloped intellect, it's going to make it even more underdeveloped. But if you come to it with a certain frame of mind, habits, skills, knowledge, it will take you absolutely to the next level, and you can do some great things with it.

So the role of the human in higher education is to get people to the point where they are capable of doing great things with the tools they have. Maybe that's how education has been all along, and we just never phrased it that way, never really saw it so clearly, until we had a tool that could replace us if we don't.

Sharona: How much of the work of getting our students above that line is our responsibility as the instructor, versus the student's responsibility? I'm thinking of going back to the pillars. Obviously, if we don't design a grading system and a course that allows for this, then you might as well just stop. But if we have the space, because we've done the work and we have an alternatively graded course, how much of the authentic evidence of learning is it our responsibility to make sure we're getting, versus how much is it the student's responsibility to make sure they're taking advantage of it? Where's the breakdown?

Robert Talbert: Yeah. I don't know if there can be a breakdown like that, to be honest, like, oh, it's 70-30. I don't know if it's possible to answer it like that, because it's a shared responsibility. A lot of my analogies go back to music, because I'm a musician, as you guys know. If I'm taking a lesson from a music teacher and I'm trying to learn some technique, the teacher's responsibility, and our responsibility as teachers, is to create and maintain an environment where the kinds of learning activities that we want are possible and likely. Nobody can make the student do it, though. The teacher will give me, or help me get, the mental framework, and will give me perspective that I don't have as a novice, will give me good exercises to work with, will ask me to do the right things, will pay attention to what I'm doing. But it's on me to actually generate the music, to put my hands on the fretboard and make this happen and make the mistakes that the teacher can then feed off of, which gives me the feedback that I feed off of, and the whole thing becomes a virtuous cycle.

Now I'm thinking: what if I had a music class that was completely asynchronous? The teacher gives me something to work on, and I'm supposed to work it out at home, make a recording of myself, and send it in to be assessed. Well, who's to say that I can't just go find my friend who is a much better player than I am and have him do it? Or ask a sufficiently advanced AI that could generate music to do it? At this point, it becomes very murky. It is the professor's responsibility, I think, if you suspect that you are not getting authenticity from a student, to figure out what you are getting and reroute things so that you're getting real information from the student. I mean, academic dishonesty is just a terrible thing to even think about. And I have never seen or been in an institution, including now, that really handles this in a satisfactory way, as far as I'm concerned. It's all very punitive. It's just an awful situation.

Sharona: The problem I'm having is that the incentive structure is all wrong. Right? Right now we know that students who don't complete their college degree have something like a million dollars less in lifetime earnings than students who do complete it. That might be changing, but you have this really weird incentive structure where the knowledge doesn't matter, but the degree does. In theory, that's the message.

And so part of me says, I create the environment and I want my students to authentically participate. But who am I to get in the way if they don't authentically participate? Why is it me as an individual instructor, and not my institution or some other body, that should be checking to see if they learned it?

There are some really perverse incentives built in that make me feel like I'm being forced to be a cop, and quite frankly, I'm not paid enough for that. So I'd rather create an environment where I invite students in to learn, and spend my attention and focus on the ones who do engage that way, and just not worry about the ones who don't.

Robert Talbert: I mean, we could do exactly what you just said, and I've taken this approach before, to be honest, way before AI. I've said, look, any student who's sufficiently motivated to cheat will figure out how to do it. And they'll do a good job at it, too. They have been doing so for hundreds and hundreds of years. And if you do that, there is a chance that you'll get caught and be held responsible. But there's also a chance that you won't, and that you'll just go straight through.

Eventually these things have a way of finding you out, is what I tell my students. And it's not necessarily like, oh my gosh, I've got a job that requires me to know how to program machine learning, and I actually cheated all the way through my machine learning degree and I don't know anything. It's more than that. I think it's something that lives and starts to grow inside you if you engage in that kind of behavior, and people know this. People can sense inauthenticity in other people.

And one of my colleagues tells students about this. You can cheat with AI, and you'll probably get away with it, but nobody's going to want to work with you, because they'll know. They'll just simply know, somehow, that you're this kind of person, and every time you do this, it's a vote for becoming that kind of person that nobody wants to work with. And sure, that sounds like a bunch of boomers moralizing at students, and maybe it is, but I think it also happens to be true. Getting back to the idea that this is about virtue: there can be artificial intelligence, but there's no such thing as artificial virtue. That is either real or it's nonexistent. If anybody chooses to misrepresent themselves and their work, shortcutting the feedback loop in any learning process, that inauthenticity is now a part of you as a human being, and it's only going to get worse from there until you repent and sin no more.

I know that, thinking about my upcoming asynchronous class, there is no real possibility of timed exams, because of the way we have it set up. I can set up an exam on the LMS that has a one-hour time limit on it, but you can start it whenever you need to. It really needs to be a fully asynchronous course; otherwise, make it a synchronous course. Why are we messing around?

That's kind of the pitch I'm going to be giving to my students: we're all really aware of what's out there in terms of the tools that can be used, and I'm going to be watching. I'm going to be looking at this. And one thing I could say is that it may be the instructor's responsibility, but in many cases, maybe in most situations, it's not only a responsibility that's unfortunately on our shoulders, it's also a job obligation. In my contract, in the faculty handbook at my place, it says that if a professor suspects academic dishonesty, they must report it.

I mean, you don't get to not report it, okay? So if nothing else, if a student gets very upset at me for pursuing an academic dishonesty case, I can say, I've got to do this. It's part of my job, just like I can't choose not to show up for work on a given day. I can't choose to look the other direction if I feel like there's just too much going on with academic dishonesty. So I don't know.

I would also say that I have very little trust in institutions; I think everybody who reads my writing knows this. If you're sitting around waiting for institutions to suddenly reform themselves, don't, because it's never going to happen. So, unfortunately, for better or worse, it's kind of on us to figure out how to do all this. I'm not sitting around. I very much appreciate the institution where I work currently, but I am not sitting around waiting for them to figure out AI policies. I'm doing it for myself, with my department. We're all collectively trying to figure out how to make this work for ourselves, and so far we're making some pretty good headway.

And it doesn't mean just mitigating academic dishonesty. We don't want to make this all about cheating, cheating, cheating. It's like, wow, these are really great tools, and they give us an opportunity to maybe finally flush some of the stuff that's been clogging our courses this whole time. Cough, cough: trig substitution in Calculus 2. And maybe not replace it with anything. Maybe we just use the space to actually breathe in the classroom and work on real problems, on a smaller set of material, with better tools available. I think there's too much talk of, AI is going to make a whole bunch of stuff obsolete, and now we've got to teach all these new things to fill the space of the stuff that's obsolete. Maybe we should just keep it minimal. Maybe having so much stuff to do is what puts pressure on the students and makes them want to cheat in the first place.

Boz: That's an excellent point, not only giving ourselves a little bit of room to breathe, because otherwise, like you said, we're putting so much pressure on so little time. Like both of you alluded to earlier, the biggest commodity that both we and our students lack is time. So maybe we give ourselves a little bit of time to breathe, a little bit of room, but also get into those deeper, richer problems.

Instead of some of this hand manipulation of things that no one really needs to do anymore, but that's how we had to do it a hundred years ago before we had technology, let's actually dive into some ugly, real problems that take some thinking and aren't nice and pretty. I like to joke with some of my colleagues that in statistics, we deal with real life. The mathematicians deal with the perfection and beauty of math, whereas in stats, we deal with the ugliness of reality, because we don't get nice, pretty lines.

Boz: Yeah, exactly. But if we're unclogging some of this stuff we were talking about, let's take the time to really see how this is used in the real world and how someone might actually be asked to use it, because that's the other thing. And I think this came up in Patrick's interview again: math has become, especially at the lower levels, K-12 up through the first freshman and sophomore college classes, so sterile and so divorced from the reality of how we actually use it. This is why students don't see the importance of it: they don't see how we use it in real life, because we've made it so sterile and so pretty that we can do it on paper in a 20-, 30-, or 40-minute timed test, instead of working on something that isn't pretty and doesn't work out to a nice answer. I think you brought this up early on, Robert. We really need to look at how we do math instruction, and not just in higher ed, all the way down. We really need to look at how we do this, what we emphasize, and what we maybe should be emphasizing.

Robert Talbert: We've got to be open to differences. Maybe 15 years ago you could make the case that the way everything is currently structured is like the Chesterton's fence thing: you don't knock down the fence until you know exactly why it was there in the first place. Maybe the way the curriculum is structured is what it is for a reason. It's been here for 300, 400, a thousand years or whatever, and you don't want to go just messing with stuff.

Now, it's no longer a question of whether things are being messed with. They are. We have technology now that can easily do almost everything that you teach in a K-12 math curriculum. It can do this stuff, so it seems like we've got to be very picky about what's worth doing. I don't know. My son's a really bright kid; he's in Algebra 2 right now and wants to be a computer scientist. And he has this ongoing struggle with unfinished assignments. He'll have like four or five unfinished homework sets per week.

I'm like, Harry, what are you doing, man? Finish your homework. And he's like, well, this is so boring. I look at it and I'm like, damn, you're right. It is boring. This is the most boring stuff I've ever seen. When I was his age, I kind of thought it was cool, because it was like playing a video game, and this was before video games actually existed, for the most part, just to show how old I am. It's like, oh, I can do this stuff in my head, it's kind of cool. But it is deadly dull for a lot of kids.

And just get on an airplane the next time and tell the person sitting next to you what you do for a living, and see what their response is. Oh, I teach math. Oh, okay. Or it's like, oh, I really hated math in high school. So why is this? I think these AI tools, this technological revolution that's taking place, give us an opportunity. There are a lot of issues to navigate and a lot of dangers, equity being one of them, I think. But I'm cautiously optimistic about all this stuff, because I think the tools afford us an unparalleled ability to just slow everything down. For the last 20 years, the pace of what happens in an educational setting has just been getting faster and faster and fuller and fuller.

I saw it with my own kids, and it all kind of crashed during COVID, when we were all locked down and couldn't be in the same room and therefore had to slow down a little bit. But a lot of people didn't, and I think that's where a lot of the COVID-related intellectual scarring came from: having to go the same speed even though you've only got two wheels on your car.

This is a chance for us to minimize and slow some things down and refocus on things that all of us really like better than what we did in school. I don't know about you, but I didn't really click with math until I was working on the really hard stuff, like second or third year of college. Before that it was like, okay, I'll major in math or whatever, because it was easy and there's only one right answer. You ask people why they major in math, and half the time it's because, well, it's always been easy for me and there's only one right answer, that's why I'm majoring in math.

And then you hit abstract algebra or something like that, and you have to start thinking in very different terms about what you always thought was right. That's when it got interesting for me, and not before. And now we have a chance to strip away a lot, and I haven't heard many people talk about this: we have the ability to strip away inessentials from the entire structure of the K-16 system that we have. I don't think that chance is going to come around again anytime soon, and if we don't act on it and do right with it, we're going to regret it.

Sharona: So we're coming up on time, but I did want to wrap this back into grading real quick. I felt like AI might be very threatening to alternative grading, because of the way we've structured our courses: this whole business of gathering evidence of learning is more difficult than it was, or at least knowing that it's authentic is. But it sounds like maybe alternative grading is actually going to help us get the space we need, and so maybe it fits well in this AI threat environment.

Robert Talbert: I absolutely think so, although what I'm about to say is quite controversial among some. I think AI could help alternative grading scale up extremely well. A lot of folks teach 500-student classes at some huge universities, and I think they have a point that it's hard to do reattempts without penalty when you've got 500 students, right? Well, what if you had an AI that was trained on your clearly defined standards? That's the first pillar of the four pillars of grading. That could be like your little exoskeleton to screen things out: if something's clearly done well, maybe an AI could just grade that for you, and you get to handle the stuff that isn't done so well.

This is an opportunity, I think. Navigate the issues, of course, but it's an opportunity to really flex some muscles and, again, get all of our students to the point where we're doing work that is worthy of the feedback loop: the hard problems, the tough applications, the gritty, ugly applications, all the data cleaning and all that stuff, faster, because there's less extraneous stuff to think about.

Now, just as we're coming up on time, one other idea that I'm kicking around is that AI could be used to lean into this idea of students teaching themselves things a lot of the time. One of the other things I do a lot of is flipped learning, as you all probably know. And when you do flipped learning, students get first contact with new ideas outside the classroom, usually by watching a video. That's kind of how it goes.

And it leans into this idea that students are teaching themselves certain things before they come to class, the simple stuff. Well, what if you took one of these AI notebook tools, like NotebookLM, which Google has now, fed it a bunch of textbook sources, and told students: I'm going to give you this notebook, and your job is to teach yourself the material before you come to class. And then in class we're going to do work that is live, and maybe that's where the problem solving happens, not on homework sent home. You're going to teach yourself more before you come to class, because now you have this AI tool that makes that a very powerful thing.

What if we just made all of higher education about training students how to teach themselves? A lot of times when you use flipped learning, you get that pushback: I have to teach myself the material? It's like, well, yeah, this is lifelong learning. Every institution in the universe has lifelong learning in its mission statement; nobody is serious about it. Well, now we can really be serious about it. And alternative grading and flipped learning fit together in a lot of ways, which I'll discuss the next time I'm on your podcast.

Sharona: So, coming back next week? No?

Robert Talbert: I’d be too tired by that point.

Boz: Well, I think that is a great point to kind of end on.

I do want to say, though, it's not just higher ed. Every high school mission or vision statement that I've ever seen, and I've done a lot of work helping schools write those, every single one of them also mentions lifelong learners.

Robert Talbert: Yeah, I mean, if you didn't, there'd be something wrong with you, right? It's like, you don't care about lifelong learning?

Boz: Yeah.

Robert Talbert: Just because you say you do, doesn't mean you do care.

Boz: But thank you again for coming back on. We always love having you on. Next time we have you on, I definitely want to explore this relationship between AI and flipped learning a little more, and how that might be aided by an alternative grading setup, because I think there's something there that we should really look at and lean into. And I would love to talk more about flipped learning with you, because, like I said, the chance to get to work with you is how I got hooked into the Grading Conference.

Yes, because of the flipped learning book.

Robert Talbert: Oh, I was there to meet you.

Boz: Thank you again for coming on. Any last minute things, Sharona, before we sign off?

Sharona: Not for me.

Boz: Alright, so, thank you guys for joining us. If you haven’t already, go check out the Grading Conference website. Everything is up with the keynotes and with all of our deadlines for applications and things, and we’ll see you next time.

Sharona: Please share your thoughts and comments about this episode by commenting on this episode's page on our website, www.thegradingpod.com, or you can share with us publicly on Twitter, Facebook, or Instagram. If you would like to suggest a future topic for the show, or would like to be considered as a potential guest for the show, please use the Contact Us form on our website. The Grading Podcast is created and produced by Robert Bosley and Sharona Krinsky. The full transcript of this episode is available on our website.

Boz: The views expressed here are those of the host and our guest. These views are not necessarily endorsed by the Cal State system or by the Los Angeles Unified School District.
