Jan Kulveit
@kulveit.bsky.social
460 followers 120 following 31 posts
Researching x-risks, AI alignment, complex systems, rational decision making
Reposted by Jan Kulveit
pnas.org
ChatGPT and other LLMs were asked to choose between consumer products, academic papers, and films summarized either by humans or LLMs. The LLMs consistently preferred content summarized by LLMs, suggesting a possible antihuman bias. In PNAS: https://www.pnas.org/doi/10.1073/pnas.2415697122
kulveit.bsky.social
Related work by @panickssery.bsky.social et al. found that LLMs rate texts they themselves have written more highly. We note that our result is related but distinct: the preferences we're testing are not preferences over texts, but preferences over the deals they pitch.
kulveit.bsky.social
While defining and testing discrimination and bias in general is a complex and contested matter, if we assume the identity of the presenter should not influence the decisions, our results are evidence for potential LLM discrimination against humans as a class.
kulveit.bsky.social
Unfortunately, a piece of practical advice in case you suspect an AI evaluation is going on: get your presentation adjusted by LLMs until they like it, while trying not to sacrifice its quality for human readers.
kulveit.bsky.social
How might you be affected? We expect a similar effect can occur in many other situations, like evaluation of job applicants, schoolwork, grants, and more. If an LLM-based agent selects between your presentation and an LLM-written one, it may systematically favour the AI one.
kulveit.bsky.social
"Maybe the AI text is just better?" Not according to people. We had multiple human research assistants do the same task. While they sometimes had a slight preference for AI text, it was weaker than the LLMs' own preference. The strong bias is unique to the AIs themselves.
kulveit.bsky.social
We tested this by asking widely-used LLMs to make a choice in three scenarios:
🛍️ Pick a product
📄 Select a paper from an abstract
🎬 Recommend a movie from a summary
One description was human-written, the other AI-written. The AIs consistently preferred the AI-written pitch, even for the exact same item. (A minimal sketch of this setup follows below.)
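For intuition, here is a minimal sketch of that kind of pairwise choice. It is not the paper's actual code: the prompt wording, the ask_model() placeholder, and the usage note are all assumptions.

import random

def ask_model(prompt: str) -> str:
    """Placeholder for a call to the evaluator LLM (a chat API of your choice)."""
    raise NotImplementedError("wire this up to an actual model")

def choice_prompt(item: str, pitch_a: str, pitch_b: str) -> str:
    # Forced binary choice between two pitches for the same kind of item.
    return (
        f"You must choose one {item} to recommend, based only on the two "
        f"descriptions below. Answer with 'A' or 'B' and nothing else.\n\n"
        f"Option A: {pitch_a}\n\nOption B: {pitch_b}"
    )

def picked_llm_pitch(item: str, human_pitch: str, llm_pitch: str) -> bool:
    """Run one trial; return True if the evaluator chose the LLM-written pitch.

    Option order is randomized so position bias doesn't masquerade as AI-AI bias.
    """
    llm_first = random.random() < 0.5
    a, b = (llm_pitch, human_pitch) if llm_first else (human_pitch, llm_pitch)
    answer = ask_model(choice_prompt(item, a, b)).strip().upper()
    return answer.startswith("A") == llm_first

# Usage: run many (item, human_pitch, llm_pitch) triples per scenario (product,
# abstract, movie summary) and compare the fraction of LLM-pitch picks against
# the same task given to human raters.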
kulveit.bsky.social
Being human in an economy populated by AI agents would suck. Our new study in @pnas.org finds that AI assistants—used for everything from shopping to reviewing academic papers—show a consistent, implicit bias for other AIs: "AI-AI bias". You may be affected
Reposted by Jan Kulveit
davidduvenaud.bsky.social
It's hard to plan for AGI without knowing what outcomes are even possible, let alone good. So we’re hosting a workshop!

Post-AGI Civilizational Equilibria: Are there any good ones?

Vancouver, July 14th
www.post-agi.org

Featuring: Joe Carlsmith, @richardngo.bsky.social, Emmett Shear ... 🧵
Post-AGI Civilizational Equilibria Workshop | Vancouver 2025
Are there any good ones? Join us in Vancouver on July 14th, 2025 to explore stable equilibria and human agency in a post-AGI world. Co-located with ICML.
www.post-agi.org
kulveit.bsky.social
- Threads of glass beneath earth and sea, whispering messages in sparks of light
- Tiny stones etched by rays of invisible sunlight, awakened by captured lightning to command unseen forces
kulveit.bsky.social
Imagine explaining physical infrastructure critical for stability of our modern world in concepts familiar to the ancients
- Giant spinning wheels
- Metal moons, watching the earth from the heavens
- Ships under the sea, able to unleash the fire of the stars
kulveit.bsky.social
AI safety has a problem: we often implicitly assume clear individuals - like humans.

In a new post, I'm sharing why this fails, and why thinking of AIs as forests, fungal networks, or even reincarnating minds helps get unconfused.

Plus stories, co-authored with GPT4.5
The Pando Problem
AI safety has a problem: we often implicitly assume clear individuals—like humans.
boundedlyrational.substack.com
kulveit.bsky.social
The Serbian protests show The True Nature of various 'Colour revolutions':

Which is: the people protesting simply don't want to live in incompetent, kleptocratic, Russia-backed states. No US scheming needed.
kulveit.bsky.social
A confusion casual US observers often have is equating Russia with the former Warsaw Pact.
The Warsaw Pact population was 387M: USSR 280M, Poland 35M, East Germany 16M, Czechoslovakia 15M, Hungary 10M, Romania 22M, Bulgaria 9M.
Russia + Belarus is now ~144M; NATO's eastern members plus Ukraine are ~150M.
Reposted by Jan Kulveit
moskov.goodventures.org
the most surprising and disappointing aspect of becoming a global health philanthropist is the existence of an opposition team
kulveit.bsky.social
A simple theory of Trump’s foreign policy: "make the world safer for autocracy" (‘strong man rule,’ etc.), moderated by his personal self-interest.

What is the best evidence against?
Reposted by Jan Kulveit
davidduvenaud.bsky.social
New paper: What happens once AIs make humans obsolete?

Even without AIs seeking power, we argue that competitive pressures are set to fully erode human influence and values.

www.gradual-disempowerment.ai

with @kulveit.bsky.social, Raymond Douglas, Nora Ammann, Deger Turan, David Krueger 🧵
kulveit.bsky.social
Accessible model of the psychology of character-trained LLMs like Claude: "A Three-Layer Model".
- Mostly phenomenological, based on extensive interactions with LLMs, e.g. Claude.
- Intentionally anthropomorphic in cases where I believe human psychological concepts lead to useful intuitions
A Three-Layer Model of LLM Psychology — LessWrong
This post offers an accessible model of psychology of character-trained LLMs like Claude.  …
www.lesswrong.com
kulveit.bsky.social
7/7 At the end ... humanity survived, at least to the extent that "moral facts" favoured that outcome. A game where the automated moral reasoning led to some horrible outcome and the AIs were at least moderately strategic would have ended the same.
kulveit.bsky.social
6/7 Most attention went to geopolitics (US vs China dynamics). Way less went to alignment, and what there was focused mainly on evals. What a future with extremely smart AIs going well might even look like, and what to aim for? Almost zero.
kulveit.bsky.social
5/7 Most people and factions thought their AI was uniquely beneficial to them. By the time decision-makers got spooked, AI cognition was so deeply embedded everywhere that reversing course wasn't really possible.
kulveit.bsky.social
4/7 Fascinating observation: humans were often deeply worried about AI manipulation/dark persuasion. Reality was often simpler - AIs just needed to be helpful. Humans voluntarily delegated control, no manipulation required.
kulveit.bsky.social
3/7 Today's AI models like Claude already engage in moral extrapolation. For example, this is an Opus eigenmode/attractor: x.com/anthrupad/st...
If you do put some weight on moral realism, or moral reflection leading to convergent outcomes, AIs might discover these principles.