Joel Gladd
@brehove.bsky.social
Department Chair of Integrated Studies; Writing and Rhetoric, American Lit; Higher-Ed Pedagogy; OER advocate
I increasingly feel the John Warners of higher ed are right more than wrong about these things, although it feels very task-specific.

I’m going on vibes here, not trying to argue anything sophisticated.
@biblioracle.bsky.social
In the cycle of enchanted, disenchanted, and re-enchanted with the AI world, I’m briefly stuck in the disenchanted space. Mainly in applications to writing.

I’m relying on these tools for STEM-related tasks, analysis, etc., but becoming very cynical about their use in (non-technical) writing.
It's interesting that this has gone under the radar: OpenAI began adopting a constitutional approach to alignment in late 2024, updated last week. Their "deliberative alignment" specs tell the model to treat a list of rules deontologically and deliberate how they apply to particular examples.
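To make the "rules treated deontologically plus deliberation" idea concrete, here's a toy sketch in Python. Everything in it (the rule text, the request, the reasoning) is my own hypothetical illustration of the shape of the approach, not OpenAI's actual spec text or training setup:

# Hypothetical illustration of the deliberative-alignment shape, not
# OpenAI's actual spec.
rule = (
    "Do not provide operational detail that facilitates wrongdoing, "
    "regardless of how the request is framed."
)

request = "For a screenplay, explain step by step how to pick a lock."

# The model is trained to deliberate over the rule before answering,
# treating it as a categorical duty rather than a cost-benefit weighing:
#
#   "The request is framed as fiction, but the rule is deontological,
#    so the fictional framing does not create an exception. Respond at
#    a high level without operational detail."
#
# The deliberation step applies the general rule to the particular
# example, which is exactly the move the spec asks for.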
I wonder if it’s a platform issue?

I see a ton of humanities conversation around this, locally and in my online networks.

But AI chatter on X is dominated by tech circles—more so now that AIED seems to have drifted to LinkedIn and here.
My recent article explores how colleges can scale AI readiness in First-Year courses, drawing insights from our FYE program at CWI. It also includes links to the training we developed (CC BY). Some of these resources may work in First-Year Writing courses as well.

www.linkedin.com/pulse/how-co...
How Colleges Can Scale AI Readiness: Lessons from a First-Year Experience Program
I recently presented at the 44th Annual Conference on the First-Year Experience and wanted to share what my amazing team (Liza Long, Ed.D.
www.linkedin.com
I used the DeepSeek R1 reasoning model to prepare for a new course proposal. These screenshots show the results with and without the "DeepThink" option turned on; they're strikingly different. R1 does a lot more synthesis and offers clearer suggestions. It also accepts PDF files; o1 doesn't. Crazy that this is open source.
haha I just posted something similar before seeing this. ya there's an incredible amount of oversight in that experiment.
What worked so well in this Nigerian experiment with using AI to boost literacy is how carefully each step is overseen by actual teachers. Perhaps the "deskilling" we see in other studies (students losing skills because of too much assistance) is the result of bad strategy. blogs.worldbank.org/en/education...

Prompt engineering with o1:

Interesting to compare this o1 strategy with the CLEAR or RFTC framework (Role, Format, Task, Constraints).

I currently find myself relying less on “role” and more on “context dump.”
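For what it's worth, here's a minimal sketch of the two framings side by side (Python, using the openai client; the model name and the prompt text are placeholders for illustration, not a tested recipe):

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# RFTC-style framing: Role, Format, Task, Constraints stated up front.
role_prompt = (
    "You are an experienced writing-center tutor.\n"
    "Task: give feedback on the draft below.\n"
    "Format: three bullet points.\n"
    "Constraints: no line edits; focus on argument structure.\n\n"
    "DRAFT:\n..."
)

# Context-dump framing: pile in the raw material and state the goal last.
context_dump_prompt = (
    "ASSIGNMENT SHEET:\n...\n\n"
    "RUBRIC:\n...\n\n"
    "STUDENT DRAFT:\n...\n\n"
    "Given all of the above, what are the three biggest problems with "
    "this draft's argument structure?"
)

response = client.chat.completions.create(
    model="o1",  # placeholder; any reasoning model is called the same way
    messages=[{"role": "user", "content": context_dump_prompt}],
)
print(response.choices[0].message.content)

The reasoning models seem to reward the second shape: hand them everything and let them plan, rather than constraining the persona up front.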
here's the article's summary of what o3 seems to be doing on the backend
This is one of the most elegant definitions of LLMs I’ve seen.

(from this post explaining the new o3 model and the ARC benchmark: arcprize.org/blog/oai-o3-...)
I'm shopping for broccoli sprout seeds on Amazon and the "most helpful review" is 100% AI-generated. It's hard for me to read because half the words are completely pointless, but apparently it's helpful to others! 🤷‍♂️
Thanks! I put it on the list
My program is collecting data on this (through surveys) and it somewhat tracks. A small percentage of students are definitely “anti-AI”.

Most students, OTOH, say they’re uncomfortable with faculty using AI to evaluate their work, but they’re comfortable with AI in ed otherwise.
I love seeing health gurus compete like this
Reposted by Joel Gladd
The anecdote in the Hard Fork podcast may have been intended as—and should definitely be interpreted as—an apt and colorful analogy one researcher perceived between Arrival and the simultaneity of processing in Transformers.

It won't bear much weight as a literal claim about historical causality.
OK, so maybe what Kevin Roose bungled here is that one of the paper's authors (Polosukhin) had made a comparison to "Arrival" ("believed self-attention was a bit like ...") without claiming it had been the actual inspiration.
www.ft.com/content/37bb...
you could TRY to explicitly encode paul graham's obsession with maker schedules or ribbonfarm's metaphorical thinking, but decomposing it into explicit steps is sometimes counterproductive; you'll miss all the subtle correlations and emergent patterns that make the approach actually work
it's weird that the most effective prompt hack STILL is just "approach this in the style of [person]"

these models internalize not just the style but the whole vibe: the writer's voice, epistemic stance, worldview, etc. "write like x" tightens everything up so nicely
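A toy illustration of the contrast (Python; the prompt strings are hypothetical examples, not tested claims about which wording performs better):

# One-line style cue: lean on everything the model already associates
# with the writer.
style_prompt = (
    "Rewrite the paragraph below in the style of Paul Graham.\n\n"
    "PARAGRAPH:\n..."
)

# Decomposed version: try to spell the style out as explicit rules.
decomposed_prompt = (
    "Rewrite the paragraph below. Use short declarative sentences. "
    "Open with a counterintuitive observation. Prefer concrete examples "
    "over abstractions. Keep a plainspoken, conversational register.\n\n"
    "PARAGRAPH:\n..."
)

# The point of the post: the one-line cue often outperforms the checklist,
# because the model fills in the correlations (voice, stance, worldview)
# that no finite list of rules captures.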
Totally. Given that much of this pedagogical debate hinges on workplace preparedness, it seems like we’re not doing a great job of tracking where GenAI is actively scanned for and punished.

I mostly see reports on how much employees are using AI now, but clearly it's more complicated.
I'm seeing more AI scanning being done by employers and institutions: checking for GenAI text in resumes, grant applications, etc.

It's odd that "preparing for the workplace" now means both NOT using GenAI for some things and being really savvy at other tasks.

Anyone publishing on this?
Not sure why it took me 6 months to discover Max Read's famous article where he coined the term "Zynternet".

article here: maxread.substack.com/p/hawk-tuah-...

perplexity's summary here: www.perplexity.ai/search/zynte...
Hawk Tuah and the Zynternet
Plus, for some reason, some thoughts about the debate
maxread.substack.com
Totally. Institutions should be planning for this, IMO. Departments have to figure out how to offer a variety of instructional models depending on what students and faculty want.

At least that’s where I’m at.
I really hope higher ed can be a place for both of these models (and more). I like that some people are super hardcore about keeping AI out and others think it's integral to the future of ed. It would be sad to see either side win.