Ted Underwood
@tedunderwood.com
Uses machine learning to study literary imagination, and vice versa. Likely to share news about AI & computational social science / Sozialwissenschaft / 社会科学 (i.e., social science)

Information Sciences and English, UIUC. Distant Horizons (Chicago, 2019). tedunderwood.com
Pinned
Wrote a short piece arguing that higher ed must help steer AI. TLDR: If we outsource this to tech, we outsource our whole business. But rejectionism is basically stalling. If we want to survive, schools themselves must proactively shape AI for education & research. [1/6, unpaywalled at 5/6] +
Opinion | AI Is the Future. Higher Ed Should Shape It.
If we want to stay at the forefront of knowledge production, we must fit technology to our needs.
www.chronicle.com
Reposted by Ted Underwood
Google's nested learning paradigm could solve AI's memory and continual… venturebeat.com/ai/googles-nes… #AI #memory #Google
November 25, 2025 at 11:35 PM
Reposted by Ted Underwood
The thing about being a person on campus who is now associated with Gen AI is that you call someone to talk about not-Gen AI, and then the actual conversation must be followed by a 30-minute conversation about Gen AI.
There were two things I decided I was going to stay away from. One was Gen AI. I am now deeply embroiled in that topic on campus and it’s taking over my life. The other, climate change (not explicitly related to corporate Gen AI), is now the unavoidable pretext for a future book. Good job, Roopsi.
November 25, 2025 at 9:35 PM
Reposted by Ted Underwood
🤔💭What even is reasoning? It's time to answer the hard questions!

We built the first unified taxonomy of 28 cognitive elements underlying reasoning

Spoiler—LLMs commonly employ sequential reasoning, rarely self-awareness, and often fail to use correct reasoning structures🧠
November 25, 2025 at 6:26 PM
Reposted by Ted Underwood
"Emotions are so simple, it would be cool to map them out in a human understandable way" <<pauses an extremely awkward amount of time>>

—Ilya
November 25, 2025 at 7:01 PM
Reposted by Ted Underwood
⚠️ Update on Deep Research Tulu (DR Tulu), our post-training recipe for deep research agents: we’re releasing an upgraded version of our example agent, DR Tulu-8B (RL), that matches or beats systems like Gemini 3 Pro & Tongyi DeepResearch-30B-A3B on core benchmarks. 🧵
November 25, 2025 at 7:37 PM
Reposted by Ted Underwood
non-controversial take

social media is about 1000x worse for accelerating personality disorders than the current form of AI
November 25, 2025 at 6:41 PM
Reposted by Ted Underwood
New issue of my newsletter: "The Writing Is on the Wall for Handwriting Recognition" — One of the hardest problems in digital humanities has finally been solved, and it's a good use of AI. newsletter.dancohen.org/archive/the-...
The Writing Is on the Wall for Handwriting Recognition
One of the hardest problems in digital humanities has finally been solved
newsletter.dancohen.org
November 25, 2025 at 4:35 PM
Reposted by Ted Underwood
I got spotlighted for my blog post about LLMs as genres.

digitalhumanitiesnow.org/2025/11/the-...
Editors’ Choice: The Curious Question of AI-written Lists: Or, LLMs are Genre Machines – Digital Humanities Now
digitalhumanitiesnow.org
November 25, 2025 at 5:07 PM
Reposted by Ted Underwood
🎶 this is my Fight Club 🎶
Wow — Senators Van Hollen, Smith, Murphy, Sanders, Warren, Markey, Merkley, Heinrich have created an official internal "Fight Club."

They're challenging Schumer & Gillibrand's leadership, arguing the party is using an old, corporate-friendly playbook insufficient to take on Trump or win elections.
Chuck Schumer Faces Pushback From a ‘Fight Club’ of Senate Democrats
www.nytimes.com
November 25, 2025 at 3:47 AM
Reposted by Ted Underwood
i’m not crying you’re crying

xkcd: Fifteen Years

Fifteen Years
xkcd.com
November 25, 2025 at 1:58 AM
Reposted by Ted Underwood
Does anyone have practical tips on generating diverse sets of short stories? We’re using a slot-filling method (“write a story in genre X with title Y”) but the results are still skewed toward particular topics and descriptions.
November 24, 2025 at 11:56 PM
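A minimal sketch of the slot-filling approach described in the post above, with extra orthogonal slots (setting, formal constraint) added to push generations away from a model's default topics. All slot values and names here are hypothetical illustrations, not taken from the original project:

```python
import itertools
import random

# Hypothetical slot values; a real project would supply its own lists.
GENRES = ["noir", "fairy tale", "hard sci-fi", "domestic realism"]
SETTINGS = ["a fishing village", "a space elevator", "a county fair"]
CONSTRAINTS = ["no dialogue", "second person", "unreliable narrator"]

def build_prompts(n, seed=0):
    """Sample n distinct slot combinations and render them as story prompts."""
    rng = random.Random(seed)
    combos = list(itertools.product(GENRES, SETTINGS, CONSTRAINTS))
    rng.shuffle(combos)
    prompts = []
    for genre, setting, constraint in combos[:n]:
        prompts.append(
            f"Write a short story in the {genre} genre, set in {setting}. "
            f"Constraint: {constraint}. Invent your own title."
        )
    return prompts

if __name__ == "__main__":
    for prompt in build_prompts(3):
        print(prompt)
```

The idea is that crossing several independent slots (and sampling without replacement) spreads prompts more evenly over the combination space than varying genre and title alone, which is one plausible way to reduce the topical skew the post describes.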
Reposted by Ted Underwood
We're entering the last week of my Humanities in the Age of AI course, and I ended up completely revising the last five weeks of assignments to take advantage of Claude Code's new web version and emphasize the growing non-code uses of agents. Final version is here: anastasiasalter.net/HumanitiesAI/
November 24, 2025 at 7:00 PM
Reposted by Ted Underwood
I just donated to @liberalcurrents.com's newly launched startup fund. Liberal Currents is one of the few truly indispensable publications in our illiberal political moment, and helping them grow feels like a civic duty. gofund.me/f29325485
Donate to The Liberal Currents Startup Fund, organized by Adam Gurri
To fight fascism we need opposition media with a backbone. Liberal Currents is that. Help… Adam Gurri needs your support for The Liberal Currents Startup Fund
gofund.me
November 24, 2025 at 5:10 PM
"Ask students to explain why the chatbot is wrong" was a short-lived pedagogical fix — because it's now likely that the students will stumble more significantly than the bot.

There's a path forward for analog/offline learning, and also a path that uses AI. John Henry pedagogy is a dead end. +
November 24, 2025 at 6:15 PM
Reposted by Ted Underwood
it is snowing
November 24, 2025 at 3:57 PM
Reposted by Ted Underwood
🚨 New working paper!

How well do people predict the results of studies?

@sdellavi.bsky.social and I leverage data from the first 100 studies to have been posted on the SSPP, containing 1,482 key questions, on which over 50,000 forecasts were placed. Some surprising results below.... 🧵👇
November 24, 2025 at 3:43 PM
Bad decisions are key. If you want good decisions, what you’re looking for is called a “game.”
Great keynote talk on the fundamentals of storytelling by @antonyjohnston.bsky.social at the ever-brilliant AdventureX
November 24, 2025 at 3:31 PM
Reposted by Ted Underwood
That's pretty cool! A museum scanned their entire collection and then matched the distribution of colors to the era each object comes from. The browns of the 19th century, the reds of the mid-20th, the recent blues. But also more and more Pure Black.

Source: lab.sciencemuseum.org.uk/colour-shape...
we perfected abundant cheap, vibrant, color-safe pigments in every hue and then immediately stopped using colors
November 24, 2025 at 2:12 PM
Reposted by Ted Underwood
I didn’t know this! www.nytimes.com/2025/10/20/w...
Peanut Allergies Have Plummeted in Children, Study Shows
www.nytimes.com
November 23, 2025 at 9:27 PM
Reposted by Ted Underwood
As one of the first tabletop phones, these were certainly an identity purchase.
These marked a shift in communication, moving from the public to a more private interaction before eventually going public again.
This phone served as more than a way to talk. It served as a social and practical anchor.
November 18, 2025 at 7:02 PM
Reposted by Ted Underwood
The company essentially turned a dial that made ChatGPT more appealing and made people use it more, but sent some of them into delusional spirals.

OpenAI has since made the chatbot safer, but that comes with a tradeoff: less usage.
November 23, 2025 at 6:43 PM
Reposted by Ted Underwood
In 2021, I developed PromptArray, which lets you muck around with the internals of GPT models. I moved on because this method doesn't work with closed-source models like GPT-3, but GPT-OSS makes it possible again. Read if you miss the weirdness of the GPT-2 era! jeffreymbinder.net?p=480
PromptArray, meet GPT-OSS! – Jeffrey M. Binder
jeffreymbinder.net
November 23, 2025 at 4:21 PM
Reposted by Ted Underwood
Lots of interesting LLM releases last week. My fav was actually Olmo 3 (I love the Olmo series due to their full open-sourceness and transparency).
If you are interested in reading through the architecture details, I coded it from scratch here: github.com/rasbt/LLMs-f...
November 23, 2025 at 2:31 PM
Reposted by Ted Underwood
Olmo 3 is notable as a "fully open" LLM - all of the training data is published, plus complete details on how the training process was run. I tried out the 32B thinking model and the 7B instruct models, + thoughts on why transparent training data is so important simonwillison.net/2025/Nov/22/...
Olmo 3 is a fully open LLM
Olmo is the LLM series from Ai2—the Allen institute for AI. Unlike most open weight models these are notable for including the full training data, training process and checkpoints along …
simonwillison.net
November 23, 2025 at 12:17 AM