Chris Goedde
@chrisgoedde.bsky.social
Husband, Professor, Tennis Player, Cook
This is ostensibly about AI, but it nicely articulates many of the reasons I switched to standards-based grading a dozen years ago.

Not coincidentally, a student asked at the beginning of the quarter if my grading system was designed to account for AI. No, I said, that was just a happy accident.
More importantly: what not to do
What to do when students turn in assignments generated by AI?
<p>It feels like university teaching is getting harder and harder. Less institutional support, more scrutiny from anti-intellectual reactionaries, student populations that are increasingly underprepared. And now, AI.</p><p>We've had to deal with plagiarism since the origin of our profession. But with AI, our students have access to a machine that will create bespoke fake-erudite pablum on literally any topic for the mere asking. (You'll need to be a little more creative for it to provide instructions to make a bomb, though these robots are downright encouraging when it comes to self-harm, which is itself an actual <a href="https://www.msn.com/en-us/news/technology/ai-chatbots-are-becoming-a-public-health-threat-especially-for-kids-opinion/ar-AA1QAj05" rel="noreferrer">public health crisis</a>. Anyhow.)</p><p>It wasn't that long ago when I wrote a book <a href="https://press.uchicago.edu/ucp/books/book/chicago/C/bo27808232.html" rel="noreferrer">to advise fellow scientists about navigating common challenges in the classroom</a>, but generative AI was not a thing then. So I am thinking of this post as an addendum for this moment in 2026.</p><p>Teaching over the course of the semester is like one of those choose-your-own-adventure books. Every choice we make – and there are thousands of them that we make consciously and unconsciously – puts us on a trajectory into new territory. There are critical moments where our choices will send us down a totally different pathway. Like when you're on page 18 and choose "Offer the slice of gouda to the dragon," and then it asks you to flip to page 74. That's a whole new branch on the decision tree. </p><p>When the AI issue comes up with our students every semester, how we choose to handle that moment will mark the trajectory of our relationship with our students for the remainder of the course. 
You could say that's the case for other decision points, but this one is a biggie because there are a lot of big feelings involving AI, and some of the key issues that get in the way of learning in college science classrooms all intersect with AI.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://scienceforeveryone.science/content/images/2026/01/yourveryownrobot-1-1.jpg" class="kg-image" alt="Now we all have our very own robot! " loading="lazy" width="406" height="601" /><figcaption><span style="white-space:pre-wrap">book cover of Choose your own adventure 4: Your very own robot. How about that!</span></figcaption></figure><p>What are those key issues? The lack of kindness. Whether students feel prepared for the work. A lack of mutual respect. An adversarial relationship in which courses are obstacles and professors are the overseers. Far too many of us are quick to adopt that role as adversary standing between our students and the grades they want. What should stand in the middle is the academic challenge of the course, not the perceived caprices of the professor. It's hard to defuse that adversarial relationship when students walk into the classroom expecting things to be that way. But we still need to try.</p><p>What principles should guide our choices? Ultimately, our decision needs to be rooted in our goals for the course, and more broadly, our purpose behind our choice to be in this line of work. When you think about whether you want to be a cop or a coach or a preacher or a buddy or a boss in the classroom, what's your motivation and what do you want for the students? Each one of those roles would handle student use of AI differently.</p><p><strong>We all should be coaches for our students' learning</strong>. It's as simple as that. 
If you can envision yourself in this role, then all of a sudden some of these otherwise difficult choices get easy real quick.</p><p>When I'm wondering what to do in the classroom, I think back to what is, ostensibly, the purpose of my course: to learn stuff. Not just any stuff, but the expected learning outcomes of the course. That's why we are there, and even if that's not the motivation of the students, that's what they signed up for. It would be great if students do other stuff along the way – career preparation, inspiration to go into research or policy work or teaching, a new best friend who happened to be in the same class, perhaps a professor as a valued mentor. That stuff is all gravy. The primary point of being there is to teach the content. I suppose there are some folks at <a href="https://scienceforeveryone.science/when-to-use-the-terms-pui-slac-msi-hsi-rpu-etc/" rel="noreferrer">SLAC</a>s with a more grandiose ideal of what we're doing on a day-to-day basis. I feel that way too a lot of the time, and frankly it drives my work to support the holistic growth of others (and myself just as much). But when we're deciding what to do in the classroom, let's not forget that we are there in the service of learning.</p><p>We are there to teach, the students are there to learn.</p><p>In other words: we are not there to grade our students like sides of beef, we are there to work with them as fellow human beings who are there to learn with our guidance. As a learning coach. Our employer and the broader game of higher education requires that our students receive a grade for being in the course, but assigning a grade is not the purpose of being there. The purpose is learning. The grade that students earn is the consequence of the learning process.</p><p>If learning is the primary goal of the course, then we need to do things that promote learning, and stay away from things that get in the way of learning.</p><p>You know what gets in the way of learning? 
When professors act like bosses or cops or judges. This AI situation is bringing out the worst in us.</p><p>There are a lot of things about AI that can be true at the same time. It's a shortcut to getting work done. Sometimes it gets the job done poorly, and sometimes it lands somewhere between adequate and spectacular. AI can be a cognitive shortcut that prevents us from learning how to do important things, and it also can provide shortcuts that get us right into the heart of what we are trying to learn. AI is a plagiarism machine, it's a masterful bullshitter, it tries to please you instead of challenge you. It also performs a bunch of rote tasks very efficiently so that you can spend your time on the part that requires human creativity and insight. AI is great at evaluation, but horrible at creation.</p><p>Another thing is true: AI is here to stay. The financial bubble will pop and we'll be using it differently in the future, but it's futile to pretend that this isn't a tool that will be involved in college classrooms. The whole AI-transforming-higher-education bandwagon makes me want to gag, but still I recognize that we have to teach differently now. And I think that's a good thing.</p><p>AI can get in the way of learning, and it can promote learning, depending on how it's used. I understand that we're dealing with a lot of cases where we are asking our students to do work independently, without the help of robots, because it is the process of the intellectual struggle that is required for learning. If you don't have to struggle, are you growing? So when students are taking the cognitive shortcuts that are harming their learning, what are we to do?</p><p>First, always lead with kindness. Second, take steps to build and maintain mutual respect. Third, be good to yourself and don't do anything that makes your job harder than it already is.</p><p>A friend of mine who teaches middle-school math has said, "Students don't follow rules. They follow people." 
(Did he come up with this? I have no idea, but I hadn't heard it before.) While we're teaching adults in college, I think this is a universal truth to a certain extent. I think it's true about professors who are going about their business on campus, for sure.</p><p>Be the instructor who your students want to follow.</p><p>So what does this kind, respectful, and not-additional-work approach to AI look like?</p><p>First, don't go to any special length or effort to try to discern whether students used AI. It's not worth your time, and no matter what anybody might claim, you can't ever know for sure.</p><p>Second, give students the opportunity to disclose whether and specifically how they used AI without any negative repercussions. (Not even a look askance.)</p><p>Third, design your assignments so that students have the opportunity to learn what you want them to learn regardless of their use of AI.</p><p>Fourth, design your assessments and your grading rubric to reflect what you want students to do and learn without AI. Build the circumstances, as reasonably as possible, for students to do this in a way that promotes their learning. </p><p>If you need to grade differently because AI exists, then is it possible that your assignments and the way you were grading them weren't as closely articulated to the learning objectives as they could have been?</p><p>For example, if you're teaching biology, then it's never been a good idea to take off points for spelling. That's just punitive and makes students afraid and keeps them from focusing on the biology. We're not there to teach students to spell. Really – we are not. If they're in college and they can't spell words the way that they want to, that's on them at this point. There's no moral code or higher power that makes you the judge of proper spelling for fellow adults. 
You're there to assess learning in your discipline.</p><p>The last thing I want to suggest about teaching in this new world where generative AI exists is that this provides even more incentive to shift towards assigning grades with a contract-based, specifications-based, or ungrading framework. If students can demonstrate certain competencies, or complete certain tasks to a certain level of quality, or provide evidence of the expected growth of learning throughout the course – all in ways that are clearly not done with AI – isn't that all we need to assign a grade?</p><p>Let's not let this evolutionary arms race of avoiding cognitive work and detecting the lack of cognitive work get in the way of what we're doing in the classroom.</p><p>Too often we let enforcement of arbitrary rules get in the way of learning. The more rules we make, the more we are going to have to enforce them, and the less focus that we can put on learning.</p>
scienceforeveryone.science
January 27, 2026 at 1:27 PM
This is the role Gilbert Gottfried was born to play.
Bovino: "When politicians, community leaders, & some journalists engage in that heated rhetoric we keep talking about, when they make the choice to vilify law enforcement calling law enforcement 'Gestapo' or using the term 'kidnapping,' that is a choice & there are actions & consequences."
January 26, 2026 at 12:44 AM
Right. If someone collected 1000 cars, we would correctly say that they are obsessed with cars, or have an insatiable desire for cars. Same with someone who collects 1000 million dollars.
The trouble, of course, is that the sort of person who is ever going to think "OK, I have enough money now" is not the sort of person who becomes a billionaire.
January 20, 2026 at 6:08 PM
Just your typical Hail Mary from the 14 yard line
What Caleb saw before the throw
January 19, 2026 at 7:23 PM
This season of Bears football in a nutshell.
January 19, 2026 at 3:15 AM
No.
January 15, 2026 at 5:38 PM
Experienced this when staying in a vacation rental this fall.

daringfireball.net/linked/2026/...
‘The Big Regression’
Link to: https://world.hey.com/jason/the-big-regression-da7fc60d
daringfireball.net
January 7, 2026 at 12:49 AM
January 6, 2026 at 7:29 PM
What kind of fluid instability is this? 🧪
December 31, 2025 at 5:35 PM
Reposted by Chris Goedde
If I was in a Pluribus scenario I’d definitely try to save humanity but first I’d make the hive simulate an entire season where Justin Herbert gets competent offensive line play
December 27, 2025 at 9:59 PM
Reposted by Chris Goedde
Speaking as a former Californian with continuing deep ties to the state: Go ahead and be Florida's problem, you greedy fascist fuckwaddles
Billionaires are considering cutting or reducing their ties to California by the end of the year because of a proposed ballot measure that could tax the state’s wealthiest residents.
A Wealth Tax Floated in California Has Billionaires Thinking of Leaving
It’s uncertain whether the proposal will reach the statewide ballot in November, but some billionaires like Peter Thiel and Larry Page may be unwilling to take the risk.
nyti.ms
December 27, 2025 at 1:23 AM
Reposted by Chris Goedde
Evanston mayor Daniel Biss:

“I went to talk to Bovino & he, in a very condescending way explained that he was keeping our city safe & I explained that he was making our city dangerous. He talked about violence & I explained that the violence was being caused by ICE & CBP”
Federal agents, including CBP Commander Bovino, back in Evanston Wednesday - Evanston RoundTable
U.S. Border Patrol Cmdr. Gregory Bovino and federal agents were in at least two places in Evanston on Wednesday morning and reportedly took several
evanstonroundtable.com
December 17, 2025 at 8:20 PM
Reposted by Chris Goedde
And what happened next?
Kyle Monangai has 112 yards rushing, D'Andre Swift has 107. First time the Bears have two backs with 100+ in the same game since Nov. 10, 1985 (Matt Suhey, Walter Payton). (h/t Bears PR)
November 28, 2025 at 10:20 PM
Reposted by Chris Goedde
Grateful to The Verge for publishing my essay on why large-language models are not going to achieve general intelligence nor push the scientific frontier.

www.theverge.com/ai-artificia...
Is language the same as intelligence? The AI industry desperately needs it to be
The AI boom is based on a fundamental mistake.
www.theverge.com
November 25, 2025 at 12:49 PM
Reposted by Chris Goedde
I got this useful bon mot from a middle school teacher recently.

In response to, “I DONT UNDERSTAND,” he calmly said, “okay what steps have you taken to understand?”

And that’s when I realized that a lot of folks have no steps.
November 10, 2025 at 6:05 PM
Reposted by Chris Goedde
The moms of Montgomery County don't like Bethany Mandel because she's a jerk to trans kids, and that tells a lot about America's political realignment.
The “Likable”/“Unlikable” Political Realignment
American conservatism is a home for unlikable people who've turned their unlikability into a political project.
www.aaronrosspowell.com
November 7, 2025 at 5:51 PM
Reposted by Chris Goedde
things that aren't the future of civilization

- chatbots
- crypto
- robot butlers
- cars that sorta drive autonomously as long as you're paying attention for when they fail and almost kill a bunch of people

things that actually are the future of civilization

- free clean energy
November 7, 2025 at 1:40 PM
This was about 4 blocks from my house. So proud of my neighbors.
Video taken by Jay Shefsky at 12:20 p.m. after the crash happened, Oakton Street/Asbury Avenue in Evanston, IL
October 31, 2025 at 11:48 PM
I don’t think I’ve been to San Antonio, but I’ve been to all the others
October 26, 2025 at 9:38 PM
Reposted by Chris Goedde
This is going to be an interesting development in the next couple of years.

For reasons of marketing, practically everything that was called deep learning, machine learning, algorithmic, etc is now AI. Which, fine, marketing.

But there are meaningful differences between genAI and other stuff.
so I can explain this: it's not generative AI: it's usually deep learning models trained on meteorology tasks and it can be quite effective
I know it would only annoy me, but if you are using weather balloons for data, where does the AI come in?

Because it sounds like they’re just doing meteorology and complicating it with AI.
October 24, 2025 at 4:04 PM
Devon
What’s the word where you’re from that, when pronounced exactly as it looks, identifies a tourist immediately?
October 8, 2025 at 11:27 PM
Reposted by Chris Goedde
Evil isn't any less evil because it uses the debate arena to cause harm.

On professor watchlists, political violence, and taking liars seriously even though they are deeply unserious.

scienceforeveryone.science/debating-is-...
Debating is bad. Professor Watchlists are bad.
Disagreement and listening is good. Taking facts seriously is good.
scienceforeveryone.science
September 15, 2025 at 11:34 PM
WTF is happening to the internet? What am I looking at? How did this get generated, and why?
August 24, 2025 at 10:08 PM
Reposted by Chris Goedde
i'm doomed to write a version of this essay every single year. i wrote about Matthew Yglesias' intellectual sophistry, LPE, Uber's business model, the problem with venture capital, economic welfare, "economic analysis," and more. thetechbubble.substack.com/p/ride-shari...
Ride-sharing apps are bad, actually
Or why Matthew Yglesias has no clue what he's talking about
thetechbubble.substack.com
August 8, 2025 at 9:15 PM