David Buck
@dbuckedu.bsky.social
1.9K followers 3K following 780 posts
Professor of English at a community college. Teaching composition (online & hybrid). #TempleMade. Interested in #SDGs, #pedagogy, #ungrading, #AIResistance, & #OER. Ungrading.weebly.com
dbuckedu.bsky.social
As an FYW instructor (mostly online), I've found that students seem to research more responsibly & ethically when exploring topics they are passionate about. Instead of depending on the “answer machine” to spit out AI slop, they are more motivated to use credible, authoritative sources. The kids are alright!
dbuckedu.bsky.social
A talking point to add — Alternative grading approaches disincentivize students’ use of & dependence on AI slop way better than horrible, unreliable surveillance software, which only creates a culture of suspicion. When we reform traditional grading, we open ourselves to human trust & learning potential.
dbuckedu.bsky.social
Just added. 👍🏼
dbuckedu.bsky.social
Reminds me of a recent news story about a bank that fired its customer service staff & replaced them with an AI-powered chat bot. Of course, the bot performed so poorly that the bank’s management had to scramble to rehire the human staff. If all labor is efficiency & productivity, this is expected.
dbuckedu.bsky.social
Another argument for resisting the myth held by many in higher education that if we don’t teach “AI literacy,” our students will be left behind, when the ultimate goal of our labor should be encouraging them to develop their critical thinking and learning abilities—basically, being human.
newyorker.com
An M.I.T. study found that 95% of companies that had invested in A.I. tools were seeing zero return. It jibes with the emerging idea that generative A.I., “in its current incarnation, simply isn’t all it’s been cracked up to be,” johncassidysays.bsky.social writes.
The A.I.-Profits Drought and the Lessons of History
Like the steam engine, electricity, and computers, generative artificial intelligence could take longer than expected to transform the economy.
www.newyorker.com
dbuckedu.bsky.social
Yup—that about sums it up. What’s sadder to admit is that the original creators of Canvas possessed an honest approach to pedagogy and learning (IMHO), but that intent has definitely been corrupted by profit-driven ownership. Though imperfect, the originating vision is long gone.
dbuckedu.bsky.social
My community college was one of the 1st institutions to adopt Canvas back in 2012; we were very early adopters. It was pitched as a pedagogy-focused system designed to “get out of the way” of an instructor’s teaching approach. Never imagined that it is now set up to REPLACE that instructor!
shannonmattern.bsky.social
“…faculty members will be able to click an icon that connects them with various AI features…, like a grading tool, a discussion-post summarizer… Canvas’s parent company, Instructure, is also in partnership w/ OpenAI… so instructors can use generative-AI technology as part of their assignments.”
Instructors Will Now See AI Throughout a Widely Used Course Software
New features integrated into Canvas include a grading assistant, a discussion-post summarizer, and even a way to pair assignments with generative AI tools.
www.chronicle.com
Reposted by David Buck
jaustinedu.bsky.social
Since the SAT essay became optional + scored by AI, I can't begin to tell you the number of people who have asked me why they should still teach writing, as if the only reason for kids to write is to score well on a standardized test.

Also, this reveals how much College Board impacts practice.
ralphtheewiggum.bsky.social
One time, teaching middle school English, we had a consultant tell us that students don’t need writing instruction and for a year we did no writing whatsoever. It was ridiculous.
Reposted by David Buck
astrokatie.com
Chatbots — LLMs — do not know facts and are not designed to be able to accurately answer factual questions. They are designed to find and mimic patterns of words, probabilistically. When they’re “right” it’s because correct things are often written down, so those patterns are frequent. That’s all.
Reposted by David Buck
biblioracle.bsky.social
Today's complaint about the AI in education discourse: Too much of it is framed around a future that is going to happen to us, as opposed to seeing the future as something we may have some agency to shape. I reject the deterministic view of AI, particularly genAI. It's a tool, not our master.
Reposted by David Buck
liznorell.bsky.social
The ungrading book club (organized by the indefatigable @dbuckedu.bsky.social) is reading @biblioracle.bsky.social's fantastic book, More than Words, about exactly why we should be helping students understand the value of critical thinking and other challenging cognitive work.
Reposted by David Buck
emilymbender.bsky.social
I appreciate this piece, but I want to correct the record on one point. I don't talk about LLMs as making "collages" but rather as making papier-mâché, and the difference matters!

>>

www.theguardian.com/technology/2...
Screencap from linked article, with text: "Emily M Bender, professor of linguistics at the University of Washington and co-author of a new book, The AI Con, has many reasons why she doesn’t want to use large language models (LLMs) such as ChatGPT. “But maybe the first one is that I’m not interested in reading something that nobody wrote,” she says. “I read because I want to understand how somebody sees something, and there’s no ‘somebody’ inside the synthetic text-extruding machines.” It’s just a collage made from lots of different people’s words, she says.

Does she feel she is being “left behind”, as AI enthusiasts would say? “No, not at all. My reaction to that is, ‘Where’s everybody going?’” She laughs as if to say: nowhere good."
dbuckedu.bsky.social
Had a great session of the #MoreThanWords virtual book club yesterday!

A wonderful group of caring, compassionate educators hoping to guide their students through the ubiquitous sludge of GenAI.

We focused on writing as thinking, a uniquely human endeavor rather than a transactional experience.
Image of a brain merged with a computer chip. Under the image are the words: More Than Words Virtual Book Club
dbuckedu.bsky.social
It also helps counter the arguments related to efficiency where AI can supposedly free up teachers’ time to focus on “more essential” stuff. But the human activity being replaced by AI is exactly the essential stuff of learning! The friction of placing words on the page/screen IS the learning.
dbuckedu.bsky.social
Right! @emilymbender.bsky.social & @alexhanna.bsky.social have done an amazing job of critically deconstructing the hype surrounding AI in very accessible ways. Their book has encouraged me to continue my #AIResistance for the sake of my students’ humanity! It supports my ethos to “do no harm,” too!
dbuckedu.bsky.social
This is so ironic to me as a teacher of 1st-year writing, where I encourage Ss to develop their writing voices, to make meaning w/ language, to communicate w/ other human beings. When they cede this freedom to a GenAI tool, they’re left with synthetic language devoid of any human intention.
dbuckedu.bsky.social
An idea from The AI Con that has really helped me answer the Q: Why do some surrender voice, agency, & trust to a GenAI tool? — the human tendency when processing language to engage w/ “the mind behind the text.” When there’s no human behind the probability/plagiarism machine, we tend to invent one!
Though the most basic and fundamental use of language is in face-to-face communication, once we have acquired a linguistic system, we can use it to understand linguistic artifacts even in the absence of co-situatedness, at a distance of space and even time. But we still apply the same techniques of imagining the mind behind the text, constructing a model of common ground with the author, and seeking to guess what the author might have been using the words to get their audience to understand.

Language models, problematically, have no subjectivity with which to perform intersubjectivity. Despite the frequent claims of AI researchers, these models do not learn "just like children do." Simply modeling the distribution of words in text provides no access to meaning, nothing from which to deduce communicative intent. Language models thus represent nothing more than extensive information about what sets of words are similar and what words are likely to appear in what contexts. While this isn't meaning or understanding, it is enough to produce plausible synthetic text, on just about any topic imaginable, which turns out to be quite dangerous: we encounter text that looks just like something a person might have said and reflexively interpret it, through our usual process of imagining a mind behind the text. But there is no mind there, and we need to be conscientious to let go of that imaginary mind we have constructed.
dbuckedu.bsky.social
If you’re interested, we’re doing a free virtual book club for #MoreThanWords by @biblioracle.bsky.social. Discussing writing in the age of AI. #AIResistance ⬇️

bsky.app/profile/dbuc...
dbuckedu.bsky.social
We are 2️⃣ weeks away from the first session of the #MoreThanWords virtual book club on June 3rd (5-6 p.m. EST)!

Join 7️⃣3️⃣ folks exploring how to center authentic writing/thinking in the age of AI.

Author @biblioracle.bsky.social will be joining us!

Register for FREE: forms.gle/AMUKNQc5egaf...
More Than Words Virtual Book Club
Welcome to the More Than Words virtual book club! We'll be reading & discussing John Warner's new book—More Than Words: How to Think About Writing in the Age of AI. We'll meet on Zoom about twice a...
forms.gle
dbuckedu.bsky.social
Yes! I have a clear #AIResistance policy in my college comp courses. It can be done. But it takes a repurposed focus on the writing process rather than the graded product. Plus, Ss appreciate the WHY of my anti-AI stance, especially when they learn abt the environmental & labor exploitation.
dbuckedu.bsky.social
This “capitulation to inevitability” is certainly real, especially when FOMO and the press for innovative “AI literacy” are constantly in teachers’ faces.

Been enjoying The AI Con by Bender & Hanna. The hype around AI serves those in power, not Ss & teachers. The VCs need a return on investment!
jaustinedu.bsky.social
In my work with teachers, two beliefs about AI are becoming "common sense:"

1. AI is inevitable, resistance is futile.
2. Not including AI in teaching + learning will hold students back in terms of career + college.

Building permission structures for #AIResistance is urgent + critical.
Reposted by David Buck
jessifer.bsky.social
"The work of ungrading is to push back on the culture of grades and quantitative assessment that reinforces hierarchies between students and teachers, while reducing both to a set of crude (often inscrutable) data points." www.jessestommel.com/the-practice...
The Practice of Ungrading
Ungrading inspects the inequities of schooling, asks hard questions of the structures of our schools, and offers a critique of the labor conditions for teachers at all levels of education.
www.jessestommel.com
dbuckedu.bsky.social
"Grades thwart basic psychological needs of students and academic motivation, while narrative evaluations and actionable feedback promote trust between instructors and students and cooperation amongst students." ⎯Chamberlin, et al., “The impact of grades on student motivation”
dbuckedu.bsky.social
2011. I use this quote in a presentation called "How Compassionate Is Our Assessment . . . or Can We Really Motivate Our Students?"

I pair it with the following quote about grades as part of oppressive systems . . .