Max Reith
@maxreith.bsky.social
480 followers 1.5K following 57 posts
AI, Economic Theory, Political Economy Economics @EconOxford, prev. Mannheim
Posts Media Videos Starter Packs
maxreith.bsky.social
Probably not. Any suggestions?
Reposted by Max Reith
adegendre.bsky.social
The best post I’ve seen on Bluesky in a very long time! Brilliant idea and brilliant accounts out there!
conradhackett.bsky.social
What's your favorite Bluesky account that primarily posts about something other than current events/politics?
Reposted by Max Reith
joshgans.bsky.social
Back in graduate school, Paul Milgrom asked me to examine a published paper from 1984 by another person that he suspected had an incorrect proof. I found the error. I decided to see if LLMs could. Only Gemini 2.5 Pro did so. Claude Opus and GPT-5-pro found no significant errors.
maxreith.bsky.social
Income Effect: Analysts become more productive -> hire more.

Substitution Effect: Fewer analysts are needed per project -> hire fewer.

Both effects exist; it’s TBD which dominates.

If a job is fully automated (AI can do all tasks), employment should def. fall (think Waymo replacing Uber drivers).
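The two effects above can be netted out in a toy calculation (all numbers here are made up for illustration, not from any study):

```python
# Toy model of AI augmenting analysts (all numbers are illustrative).
# Substitution effect: AI cuts the analysts needed per project.
# Income effect: cheaper projects -> firms run more projects.

def analyst_employment(projects, analysts_per_project):
    """Total analysts employed = projects x analysts per project."""
    return projects * analysts_per_project

# Before AI: 100 projects, 5 analysts each -> 500 analysts.
before = analyst_employment(100, 5)

# After AI, substitution halves analysts per project (5 -> 2.5).
# Case A: demand is elastic, projects triple -> income effect dominates,
# employment rises above the pre-AI level.
case_a = analyst_employment(300, 2.5)

# Case B: demand is inelastic, projects grow only 50% -> substitution
# dominates, employment falls below the pre-AI level.
case_b = analyst_employment(150, 2.5)

print(before, case_a, case_b)
```

Which case obtains is an empirical question about demand elasticity, which is exactly why the net employment effect is "TBD".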
maxreith.bsky.social
I think it does help! AI today mainly augments labor: AI substitutes some tasks that analysts do, but not all. Analysts are more productive now. Does their employment rise? Depends on Income vs. Substitution effects:
maxreith.bsky.social
• Unlocking robots: AI-led breakthroughs might also unlock humanoid robots, bringing explosive growth via the substitution channel described in 1)
maxreith.bsky.social
Why yes:
• Returns to scale: Picture one AI containing the knowledge of thousands of scientists. Unlike human teams, the AI wouldn’t face coordination costs, could parallelize research effortlessly, and tap into knowledge from multiple fields instantly, thus accelerating discovery.
maxreith.bsky.social
• Limited parallelizability: Some breakthroughs depend on earlier ones: you can't invent a car without inventing the wheel first. Research may not scale with AI.

• Physical constraints: Science needs hardware and experiments, which AI might not be able to substitute for. Research might not be fully automatable.
maxreith.bsky.social
2) Research Automation
You've probably heard of this one: AI invents new technologies, improves itself and drives explosive growth. Could it work? Maybe...

Why not:
• Harder ideas: It could get harder and harder to discover new ideas. Even with AGI, the rate of discovery might go down.
maxreith.bsky.social
But imagine AI that turns capital into a substitute for labor (think robots doing most jobs). Capital could expand without human bottlenecks, creating room for accelerated growth. How much? That depends on whether capital productivity grows too. If so, growth could take off in the long run, though 3000% is a stretch.
maxreith.bsky.social
Do tech optimists have a point? Within standard economic growth models, AI could drive explosive growth through one of two mechanisms.

1) Labor Substitution
So far, it seems like capital and labor mostly complement each other, which limits the returns to additional capital given fixed labor.
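The complementarity point can be sketched with a CES production function, Y = (a·K^ρ + (1−a)·L^ρ)^(1/ρ), where ρ controls how substitutable capital and labor are (parameter values below are purely illustrative):

```python
# Sketch: returns to extra capital under a CES production function.
# Y = (a*K^rho + (1-a)*L^rho)^(1/rho); elasticity of substitution = 1/(1-rho).
# Parameter values are purely illustrative.

def ces_output(K, L, a=0.3, rho=-1.0):
    return (a * K**rho + (1 - a) * L**rho) ** (1 / rho)

L = 1.0  # labor held fixed

# Complements (rho < 0): output plateaus as K grows,
# so piling on capital alone yields limited returns.
complements = [ces_output(K, L, rho=-1.0) for K in (1, 10, 100)]

# Near-perfect substitutes (rho close to 1): output keeps rising with K,
# the "robots replace workers" case where capital expands without labor.
substitutes = [ces_output(K, L, rho=0.9) for K in (1, 10, 100)]

print(complements)
print(substitutes)
```

With ρ = −1, output is capped near (1−a)^(−1) ≈ 1.43 no matter how much capital is added; with ρ near 1, output grows almost linearly in K, which is the labor-substitution channel the thread describes.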
Reposted by Max Reith
emollick.bsky.social
A cautiously optimistic result on AI and disinformation.

A week before the 2024 UK elections, 13% of all voters used AI to ask about political topics. A randomized trial found this may be good: using AI led to similar gains in true knowledge as doing web research, regardless of model & prompt used.
Reposted by Max Reith
dorialexander.bsky.social
> be a language model
> all you see is tokens
> you don't care, it's all abstracted away
> you live for a world of pure ideas, chain of concepts, reasoning streams
> tokens don't exist.
Reposted by Max Reith
We need new rules for publishing AI-generated research. The teams developing automated AI scientists have customarily submitted their papers to standard refereed venues (journals and conferences) and to arXiv. Often, acceptance has been treated as the dependent variable. 1/
Reposted by Max Reith
emollick.bsky.social
We are starting to see some nuanced discussions of what it means to work with advanced AI in its current state

In this case, GPT-5 Pro was able to do novel math, but only when guided by a math professor (though the paper also noted the speed of advance since GPT-4)

The reflection is worth reading.
Reposted by Max Reith
gracekind.net
Never ask a man his age, a woman her salary, or GPT-5 whether a seahorse emoji exists
Reposted by Max Reith
pekka.bsky.social
I like the way Anthropic approaches these questions.

"We remain highly uncertain about the potential moral status of Claude and other LLMs, now or in the future. However, we take the issue seriously...Allowing models to end or exit potentially distressing interactions is one such intervention"
Claude Opus 4 and 4.1 can now end a rare subset of conversations
An update on our exploratory research on model welfare
www.anthropic.com
maxreith.bsky.social
LLMs are getting better at long-term reasoning. This is a big deal, and opens the door for LLMs to perform more tasks in the real world.
pekka.bsky.social
GPT-5 (Thinking medium) was tested on Vending-Bench. Second place after Grok 4. Third model to beat their human baseline. Said to be "huge improvement over o3".

They also tested GPT-5-mini, which "showed impressive long-term coherence" but "was less impressive in terms of net worth accumulated".
https://andonlabs.com/evals/vending-bench
Reposted by Max Reith
emollick.bsky.social
Suddenly retiring every other model without warning was a weird move by OpenAI

… and they did it without explaining how switching models worked or even details of various GPT-5 models

…and they did it after many built workflows & training & assignments around older models, maybe breaking them. Odd
Reposted by Max Reith
dynamicwebpaige.bsky.social
Reminder: DeepMind's Gemma 3n model is performing about as well as Gemini 1.5 Pro – better, in many cases! – and is only 4B parameters in size.

The best model 6 months ago is now small enough to be run on a laptop. Now play that forward 6 months from now, thinking about the best models of today. 🤯
Reposted by Max Reith
leightjessica.bsky.social
Fascinating new paper shows that papers reporting statistical significance get at least 60% more media attention #econtwitter #econsky: from
Brodeur Cook @nikolaimcook @taylor_wright, a short 🧵

maxreith.bsky.social
Pretty impressive! It caught things that o3 and Gemini didn't. Wish I had had this before my submission...
bengolub.bsky.social
I've been working on a new tool, Refine, to make scholars more productive. If you're interested in being among the very first to try the beta, please read on.

Refine leverages the best current AI models to draw your attention to potential errors and clarity issues in research paper drafts.

1/
Reposted by Max Reith
hlntnr.bsky.social
AI companies are starting to build more and more personalization into their products, but there's a huge personalization-sized hole in conversations about AI safety/trust/impacts.

Delighted to feature @mbogen.bsky.social on Rising Tide today, on what's being built and why we should care:
Reposted by Max Reith
hlntnr.bsky.social
New on Rising Tide, I break down 2 factors that will play a huge role in how much AI progress we see over the next couple years: verification & generalization.

How well these go will determine if AI just gets super good at math & coding vs. mastering many domains. Post excerpts: