This episode originally aired on KALX on October 22, 2024. Below is the transcript of this episode.
Paras Sajjan: [00:00:00] You’re listening to KALX Berkeley 90.7 FM, university and listener-supported radio. And this is Berkeley Brainwaves, coming at you from the Public Affairs Department at KALX, bringing you stories from the Cal campus. I’m your host, Paras Sajjan, and today with me I have Michael Larkin, a lecturer for the College Writing Programs here at Berkeley, who teaches a research and composition course focusing on artificial intelligence.
Welcome to the show.
Michael Larkin: Thank you. Thanks for having me, Paras.
Paras Sajjan: So as someone from a humanities and creative writing background, how did you get into teaching a course on AI?
Michael Larkin: Yeah, well, um, my program predominantly teaches, well, lots of writing classes, but the biggest ones are the first-year reading and composition courses.
And each of us instructors gets to choose what we’d most like to focus on. In my case, for a little over a decade now, I’ve had my [00:01:00] research class focus in particular on technology broadly. I started doing it probably around 2012, and I started it initially because I was interested in it as a citizen, a father, a writer, and so forth.
But, uh, yeah. I usually would turn over my themes every few years, but I kept sticking with this one because I found that my students, especially CS and data science and similar STEM students, wanted to take it because they thought, oh, if I have to take a writing class, I may as well take one that speaks to my interests. And I stuck with it longer because I found it was really useful to talk about the kind of ethical questions that were coming up, the kind of thinking that students at Cal get to do even while they’re still focused on making stuff.
And then in the last year or two, I guess, I shifted to AI specifically, just because obviously that was very much in the news and there was a lot to discuss. And it gets kind of meta as well, because when we’re talking about writing, students are using things like Grammarly and ChatGPT to help with the [00:02:00] writing, and in some cases to do the writing for them, not usually at Cal, but yeah, so that’s why I’ve sort of stuck with it.
Even though I’m not a technologist, I’m not an expert, I’m just kind of a layperson who’s interested in it as a writer and a teacher and so forth. So that’s how I came to be focused on that in my writing classes. Yeah.
Paras Sajjan: Yeah. Fascinating how you went from technology in general and then zeroed in on AI, especially as it’s had this peak in, sort of, interest and development as well.
Michael Larkin: Yeah, and it’s really turned over over the years. I mean, I’ve had different readings every couple of years, but it’s always been around technology. And it’s interesting just to see how different it was. Back in 2012, 2013, there was still a bit of that utopian, technology-is-going-to-save-us idea: it’s going to democratize the world, opening up information everywhere.
And then on the other end, people were like, oh my god, everybody’s staring at their phones, we’re going to stop connecting with each other, it’s terrible. And of course the truth is in the middle somewhere. But those debates keep recurring again and again, and it’s such a rich subject and so important.
Um, and [00:03:00] even if students in my class aren’t planning on, you know, going down the pipeline to Silicon Valley or whatever, I can tell that they appreciate it, and they do really high-quality work, because these are issues they’re concerned about, again, as citizens and residents of the world, and not just necessarily as computer programmers. Yeah.
Paras Sajjan: Totally. So how do you go about introducing AI to those students who might not be as STEM focused?
Michael Larkin: Right. Well, I often start with, um, kind of what their own experience of it has been, like what their perceptions are, what they know about it. Some of them, a few have worked in like AI labs on campus or they’ve done some programming themselves, but most of them haven’t.
And really in the last three semesters, I suppose, one of the first assignments I have them do, in terms of introducing themselves to me and thinking about some of the themes of the course, is talk about how they may have used things like ChatGPT or Grammarly, just to be really open about [00:04:00] what it’s good for, what its strengths and weaknesses are as a tool, and what they’ve noticed other people using it for, perhaps in less honorable ways.
Um, and so that’s kind of an entry point for us to start thinking about bigger things. I mean, I am interested in how it works as a writing tool, or as a problem in that way, but my fundamental interest, and a lot of where I end up no matter what my technology themes and readings have been, is kind of about what it means to be a human being.
Um, and AI of course speaks very much to that whole question of human versus machine. If it’s artificial intelligence, what is intelligence? What is human intelligence? So that, even for non-STEM people and for STEM-focused majors, kind of opens up a whole discussion about, here’s what the machines can do.
Here’s what programs might be able to do. Here’s maybe superintelligence in the future. But what does that mean for us as human beings? How are we going to [00:05:00] live? And, you know, what’s our purpose? Those kinds of questions are really interesting to me.
Paras Sajjan: For sure. Focusing on the human involvement within AI.
Michael Larkin: Yeah, yeah. And I think more concretely in writing classes, of course, there’s still tons I have to learn. I have colleagues in my department, I would say, who are much more knowledgeable about how these different tools work for writing and research and even reading, that kind of thing. But among the reasons I care about it is that I’m very interested in human expression and creativity, as in my creative writing classes.
I think my students have used Grammarly and things like that to clean up their writing, but I don’t have concerns that they’re using ChatGPT or whatever to write their short stories, because that is fundamentally an expression of who they are. Why would you have a machine do it? But to the extent we’re headed in a direction where people will start using these tools, again not necessarily in nefarious ways, to express themselves.
Um, that’s where I start getting a little worried [00:06:00] and concerned and kind of wanting to keep paying attention to it.
Paras Sajjan: Yeah. So you mentioned how you see some of your students using ChatGPT and AI. Are there any strong similarities or differences between how you use AI compared to your students that you talk about?
Michael Larkin: Yeah, well, I really kind of rely on their expertise. I’m just kind of exploring it. I don’t actually use it to write, you know. Um, but I came up in a different time. I mean, I think of, uh, the classic like, in my day kind of thing, but you know, I was an undergrad at Cal here in the late 80s and I think of some things like how the heck did we do research?
You know, it was pre-internet. Very few of us had computers. You couldn’t actually get access to the library stacks unless you had a special pass, this kind of thing. So I’m trying to remember, other than looking at a single text and writing about it, how did we do the writing, and what tools did we use?
Um, and my students, you know, as I say, I ask them to, as honestly as they [00:07:00] can, report how they use them. And they talk very openly about, like, here’s how I use it to help me generate ideas or to get an outline started. And here’s where I hit these walls where I realize it can’t help me that much. But it, it kind of helps, um, get them going a little bit.
Um, and so, yeah, I’m really trying to parse, as are my colleagues, what appropriate use looks like, where it can be useful. I think it’s too easy to say, don’t use it, people are going to cheat, you’re going to get rampant plagiarism. I mean, that’s out there. But I think that’s too dismissive, because it’s already here and we have to deal with it. And I think it’s actually posing what is ultimately a useful challenge to educators, and I’ll speak for myself as a writing teacher, in college and high school: if students are going to want to use ChatGPT or something like that in an inappropriate way, like literally plug in the question and get the answer you’re going to turn in, then there’s probably something wrong with the assignment or with the way that we’re asking students to do certain kinds of work.
Um, so it’s not just [00:08:00] on, you know, the technology or the students making choices; it’s about us as teachers making them too. So, yeah.
Paras Sajjan: Yeah, so just to follow up on that: as AI continues to develop in schools, a lot of the debate, as you said, is around students using it to cheat and plagiarize work, like we should cut it out immediately.
In that debate, how do you see students tackling it versus your colleagues, and where do you see it going?
Michael Larkin: Yeah, well, I think for more or less the last year and a half, essentially since ChatGPT came on the scene at the end of, what was it, 2022, I think, it’s been a more or less constant refrain in discussions, faculty meetings, you know: how are you addressing it in the classroom?
What kinds of tools are you using? I’ve had colleagues tell me that they’re aware it’s there, but they’re just kind of trying to plow ahead like they always do and hope their students are being honest, as opposed to talking about it [00:09:00] openly. And I don’t think you need to be reading about AI or having your students think about AI for this to be an issue.
Um, but it, we’re still, I think not just in my program, but I think across the university faculty are trying to figure out how do we address this with students, work with it. Cause as I said, you can’t outlaw them, ban them outright. The tools are going to be there and be used in some capacity. And most students, in my experience, are being honest and they’re genuinely interested in learning.
They don’t want to have the work done for them. And at the same time, they’re under a lot of deadline pressure. I think it’s sort of a different discussion, but there’s the larger pressure of grades, what kind of system we’re in for evaluating students, the kind of pressure that puts on them, and whether that’s really fostering learning or just creating a rating system for some graduate program or employer to say, oh, this person has a 3.8 as opposed to a 2.8.
And so therefore, you’re… [00:10:00] right? Because I don’t see that as being what I’m in the business of doing. I mean, I have to evaluate students with a grade, but I’m much more interested in their learning and figuring out ways to encourage that, as opposed to focusing on things that’ll incentivize them to say, oh my god, I’ve got to get this assignment turned in, I’d better take the shortcut. Which, again, most students are not doing.
Paras Sajjan: Mm hmm. Yeah. You’re listening to Michael Larkin, a lecturer for the College Writing Programs, on Berkeley Brainwaves on KALX Berkeley 90.7 FM. I’m your host, Paras Sajjan. To further the discussion about students and the education they’re receiving, or what they choose to focus on in your class: are you seeing any trends in what students tend to research or focus on when it comes to bringing AI into a research project?
Michael Larkin: Um, it varies pretty widely. The class [00:11:00] I’m talking about is College Writing R4B, and there are maybe 30 sections of it every semester.
And the model for each of those is the same, even though some people are doing stuff about the environment, other people are doing stuff about monsters in literature, and there’s a whole array of topics people choose. We all build toward what every College Writing R4B student does: a final research project. Usually by mid-semester, they propose a question, something they want to focus on related to the broad subject area of the class, and then dig in.
And so I prompt my students along the way: what are you interested in? In some cases they’re either undeclared or they have a major that’s not STEM, and they say, my major plus AI, add those together, and they just see what’s out there about it.
And that can spark something pretty much in every case, whether it’s specific to their major or not, they find something that connects to them personally. Um, so whether it’s, you know, understandably you’re 20, 21, 19 years [00:12:00] old thinking about what do I do after I’m done at Cal, um, so they might look into AI and its effects on the job market or the economy in certain sectors.
Uh, some students are really technically focused and want to like dig into, I had one student last semester who did a project about, um, AI systems that create misinformation, um, and, and detect misinformation, like competing with each other. And he was really into that. Um, you know, or a design student who just came to my class this semester and she did this great project.
She’s an urban design major, and she looked at what it means when human beings are moving through non-human-designed spaces, because she was looking at AI programs that help assist with designing buildings and things of that sort. So it really runs the gamut, and as I say to my students every semester, and it’s true, I end up with 17 or 34 abstracts and, you know, full projects, 3,000-to-5,000-word [00:13:00] papers that I’m reading with all the ancillary materials, and I learn all this stuff about issues as they relate to psychology or computer science or art history or what have you.
It’s just really fascinating, and in some ways daunting, how much AI in this case has seeped into all these areas. But, uh, yeah, I’m always surprised by things. Even, um, back when I was just focused on technology generally and not AI specifically, six years ago I had a student who wrote a really good paper about whether or not AI should be granted legal rights, as a person would be, right?
And I was like, what, what did you say? You know, if you’re talking about the human-machine divide, it’s like, why would an AI program be entitled to rights? It’s a program, right? And yet, and again, this was six years ago, that’s kind of where we are now, wrestling with questions like that.
So I just think it’s kind of endlessly fascinating. Um, and I think it’s useful in a way that [00:14:00] I’m not an expert, because it opens me up to embracing that lack of expertise so that I can learn something, and my students teach me things, because it’s true, they do. Yeah.
Paras Sajjan: Yeah, I think coming from a perspective that isn’t STEM gives you this approach of looking more at the things that you do understand, like the ethical portion or the human connection there. And going off that, do you see any other ethical consequences, or possible positives, of AI that are going unnoticed in the school setting because of this looming debate about cheating and plagiarism?
Michael Larkin: Yeah, um, I think, and again, I can kind of only speak for my little narrow corner, although across the country at the high school and college level there are all sorts of people doing the work I’m doing, writing instructors of different kinds. [00:15:00] Um, and we’re kind of on the lookout for how you tell when a student has, you know, leaned on AI too much: there’s a flatness to the writing, it doesn’t sound like them, or there are gaps in the evidence, or it’s hallucinating quotes, things like that.
Even as we’re attendant to that, and we’re paying attention to that at the same time, I think a lot of us are trying to figure out, uh, where, what are some interesting ways of using it to help with planning or writing or researching in ways that are totally legitimate and actually we can use it as this useful tool.
Um, and I think we’re headed in the direction and I think I maybe have a little bit of a blinkered worldview because we’re at Berkeley with these really bright, committed students who are genuinely interested in learning.
They’re not just, you know, gaming the system to get straight A’s and go out into the world, that kind of thing. People really are genuinely engaged. So I think there’s this opportunity at a place like Cal in particular to see where there are legitimate uses of these [00:16:00] technologies, and not go to either extreme: oh, it’s totally positive, embrace it fully, or never use it, how dare you, that kind of thing.
I mean, there are concerns I have, again, particularly in the realm of creative expression, or if we get into the biases inherent in AI, or in our world, that can be perpetuated in AI, which a lot of people have written about and researched. There’s stuff to be concerned about, but I’m hopeful that we can figure out a way to leverage these tools and not have it become some kind of cheating fest, or a version of the Matrix where the machines take over, you know, that kind of thing. So.
Paras Sajjan: Mhm, yeah. Do you think that the opinion you have is different from a lot of, I guess, different faculty members at Berkeley, or do you think more people are starting to come around to this…
Michael Larkin: Yeah.
Paras Sajjan: Balance of using AI?
Michael Larkin: Yeah. Well, um, I couldn’t say with certainty across the [00:17:00] faculty, though I suspect there’s a range. Just speaking within my own program, I think more people are on the alert for it and/or interested in what the possibilities are.
Um, and perhaps the people who are more nervous about it, or like, what the hell is going on, are being quieter. Or it could be that, as I am at times, they’re just being quiet, observant, and listening to try to learn what’s out there. Again, my sense is that at Cal generally, as well as within my program specifically, people are as clear-eyed as they can be about these things being out there, and they’re figuring out ways to use them usefully for learning, for teaching, for developing things.
Um, and I think, at the risk of sounding self-congratulatory about all of us at Berkeley, that’s kind of in keeping with the spirit of Berkeley, as opposed to a kind of punitive crackdown of use-it-this-way-but-not-this-way. I’m hopeful about that sort of [00:18:00] openness to experimenting and figuring it out.
And at the same time, we have to recognize if there’s massive algorithmic bias on display that’s causing all sorts of problems, or issues of access, where these kinds of tools aren’t free and you have to pay money. Just like, you know, some people have the privilege of knowing their way around a computer, or having access to a computer, while other students work really hard and are really smart and really deserve to be here, but they don’t have those advantages. Um, so I think as long as we’re on guard for that as AI develops too, then, again, I’m hopeful that we can use it in a productive way.
Michael Larkin: Mm hmm.
Paras Sajjan: Yeah.
So you’ve already sort of mentioned how you address AI in your teaching capacity, but do you think that it has altered other parts of how you go about your life?
Michael Larkin: Oh, I think it’s one of those things that’s kind of just there, in ways that sometimes surprise you. Sometimes you notice it, and sometimes it just kind of happens [00:19:00] and you realize, oh, that’s run by AI. Like one of the things my students and I talk about, because it relates to research, is that feature that’s been there for years, mostly on Google, since they have a huge monopoly on search: that classic autofill thing. You start typing something in, and Google search goes, what about this, and boom, boom, it starts filling stuff in. In some cases it’s something kind of basic, and you’re like, oh yeah, that is in fact what I was just going to type. But in other cases, when you’re typing things in, it’s like, oh, yeah, maybe that’s it, and you click on it. To what extent is that a tool and an assistance to you as somebody searching or researching, and to what extent is it dictating the pathway of your thinking in some ways?
Um, so there are things like that where it’s present. There’s also this example I was just thinking of yesterday. I’m teaching a class in Dwinelle this semester, and the last time I was in that building was the semester the pandemic hit, and there was a woman in that [00:20:00] class who was hard of hearing. She could read my lips and she could hear a bit, but she had a person transcribing the discussion, what I was saying, what her fellow students were saying, so that she could have a transcript if she needed it of what happened in the class, even though she was there.
And I didn’t see what the transcriber was writing, but then we went to Zoom, and, as lots of people have seen, there was the AI-powered transcription software that was turning the audio into text.
And I’d look at it and think, wow, it maybe misheard one thing and misspelled it, but otherwise the AI nailed it. And I saw it next to the human transcription of what was being said, and thought about how hard that is, even for a very skilled court reporter, to try to get everything. So there are ways where you do have this much easier transcript, in that case, to look at, and that’s because AI delivered it.
Now, there’s a cost to that. There are people who’ve been doing certain kinds of work that are either going away or severely affected. So it’s not [00:21:00] all this wonderful, positive thing, right? It’s going to be, I hate this term, it gets overused, but it is disruptive, right? In various ways.
Um, but I think I notice it in the ways that probably lots of us do. You know, the way searching has changed, they’ve got these AI summaries now before you dig down into the results, and all these things powered by AI that we don’t even think about until we have to, and then suddenly we go, oh, okay, that’s what that is. And, um, yeah, it’s a whole funky, weird, hard-to-pick-apart world sometimes. Yeah.
Paras Sajjan: So with this development of AI and how we’re seeing it used every day, as Berkeley students are trying to figure out what they want to go into in the future, they’re seeing AI seeping into all of these different aspects and different jobs. How do you see your students reacting to that? [00:22:00]
Michael Larkin: Well, I think there’s definitely some concern. I do teach some upper-division creative writing courses, which are kind of a different beast, and unless students come back to see me or we check in with each other, I don’t always have a sense, when I’m with them in their first and second years here, of where they end up.
But even then, there is some degree of concern, if not anxiety: you know, even for a computer programmer, gosh, these programming jobs are really well paying, and I really like computer programming, but is AI going to change the landscape in a way where there won’t be the work that I think there’s going to be?
Um, you know, we see this evidence, I think, what was it just today? I didn’t read the article, but I saw Cisco laid off like 800 people locally. And, um, I don’t know that any of that is related to AI specifically, but I think for students, there’s this kind of sometimes wondering about that. [00:23:00] Um, and again, it may be because of the classes I teach.
I mean, certain majors are very popular, like EECS and CS and data science and MCB and so forth. Um, but it may be that I’m seeing a preponderance of those too, because, again, when I ask, most of my students say, yeah, I signed up for your class because I saw what we were reading and said, okay, I want to do that.
Um, so I wonder about other students who are majoring in art history or English, or the lot of environmental science majors and such. I think there’s still this interest and hunger in these other non-technological fields, but we’d be hard-pressed to think of a field in which AI isn’t going to have some effect on the workforce or just on how we do things.
Just yesterday, a student from last semester gave a talk to my current students about her research project, which looked at how farmers are starting to use AI, looking at places like Germany, and how it helps with managing livestock or monitoring [00:24:00] crops, and how it’s affecting farm laborers, this kind of thing.
So it’s, again, it’s, it’s kind of ubiquitous now for sure.
Paras Sajjan: Mhm. You’re listening to Michael Larkin, a lecturer for the College Writing Programs, on Berkeley Brainwaves on KALX Berkeley 90.7 FM. I’m your host, Paras Sajjan. So do you think that AI will change the student experience at all?
Michael Larkin: Um, yeah, I think so. I mean, I don’t know that this is AI-based, but I just think of, for instance, the differences in how I comment on student work now versus what I did when I started here 22 years ago as a faculty member.
You know, everything was like handwritten. We didn’t have any, uh, course websites for turning things in. It was like really novel to be doing work online, even though people were searching for things and that kind of thing.
But now, unless students are with me in my office marking up something I printed out, [00:25:00] which is very rare, all the commenting happens electronically. Um, I don’t use the transcription, like when I’m audio recording and it’ll transcribe for me, but you can do that. And in that way it can facilitate things for people with learning disabilities or other ways of learning.
I’ve started to record, again, this isn’t AI, but I’ve started recording a lot of audio comments for my students, especially in the asynchronous summer creative writing classes that I sometimes teach. They really appreciate it because, speaking of human versus technology, it’s a human presence: hearing my voice, they know I’m paying really close attention to their story.
So I think the AI-powered tools that are there will just increase, and so they will affect the student experience. This isn’t so much AI, I don’t think, but many of my students don’t have anglicized-sounding names, and I don’t know how many times I start typing, oh, hey, Paras, or hey, so-and-so, and the autocorrect wants to change it: you know, hey, Pete; [00:26:00] hey, Sarth, no, Sarah, you mean Sarah; it’s like, no, I mean Sarth. These kinds of things happen over and over again, and this speaks to one of those other concerns I was mentioning earlier about what the input systems are, you know, what kind of language, what kind of culture, and other things are being privileged in these AI development spaces.
Um, so anyway, that was a long way around to what you actually asked, but yeah, the short answer is yes. I think we’re going to see, slowly but surely, and in some cases very quickly, a turn to certain kinds of AI tools that, just as the internet would have seemed to me 40 years ago, will seem totally standard to somebody 40 years from now.
Paras Sajjan: Yeah, totally. I mean, we already saw how fast it went from ChatGPT first coming on the scene to now being in every syllabus I read. Yeah.
Michael Larkin: Yeah, I mean, it was so fast. I think that first semester, yeah, fall 2022, one of the really great projects one of my students did was looking at [00:27:00] Stable Diffusion and generative art, and her question was essentially, can AI-generated art be considered art?
Um, and I think our ultimate conclusion with lots of complexity was, uh, yes, it can. Um, so yeah, it just happened so fast, um, and we’ve been kind of reckoning with it ever since. Yeah.
Paras Sajjan: Yeah. And I think just an overall sort of look to the future, which is always hard with AI because the development can be fast and then slow and pick up and slow down all the time.
But as AI continues to develop, how do you see it being implemented in, or excluded from, education, especially in the humanities, going forward?
Michael Larkin: Yeah, well, one of the things I certainly think about, most directly, is different jobs. One wonders how much certain kinds of teacher roles [00:28:00] will be augmented by, or replaced by, AI programs: oh, this AI program can teach you how to write.
This AI program can mark your paper; a Grammarly or whatever can tell you, here’s the score it gets in terms of clarity and complexity and so forth. Will those programs get to be so good that universities, like UC Berkeley, which has a very robust campus and budget but is under pressure like lots of universities, state-funded or otherwise, and is looking to manage costs, will find it, as in other industries, more economically efficient, we’ll put it that way, to have machines and programs doing at least some of the work that faculty and graduate student instructors are doing now?
Um, and if we get to that point, not just because of the job losses or what have you, [00:29:00] but in terms of the nature of how people are learning, that’s another of those tipping points: are we tipping too far from a supportive, augmentative tool into fundamentally changing what the experience of being a college student and learning is, in a way that will be detrimental?
Not just for the people whose jobs might be replaced, but more broadly for the students who are getting college degrees, and for society at large and what that means. Um, so there are some big questions ahead in that regard, and, not to sound a dystopian note, I think it’s about being clear-eyed about this kind of thing. Anyway, yeah, that’s where I am. Yeah.
Paras Sajjan: Totally, yeah. Well, thanks so much for coming in. We’ve been speaking to Michael Larkin, a lecturer for the College Writing Programs here at Berkeley, and this is Berkeley Brainwaves, a 30-minute show dedicated to telling the stories of the Cal campus. I’m your host, Paras Sajjan, and you’re listening to KALX Berkeley 90.7 FM. Thanks for [00:30:00] listening.