William Gunn
@metasynthesis.net
Reposted by William Gunn
Asking informally: does anyone know someone who might be interested in a postdoc focused on understanding changes in memory representations driven by attention using EEG? ⚡️Thanks!
December 1, 2025 at 5:03 AM
Reposted by William Gunn
I'm at NeurIPS & hiring for our pretraining safety team at OpenAI! Email me if you want to chat about making safer base models!
December 1, 2025 at 6:03 AM
Reposted by William Gunn
Last week, our video on PFAS 'forever chemicals' won an AAAS Kavli Award for Science Journalism!

The project was a massive team effort that took months, so we're grateful to the Kavli Foundation for recognizing our work with a Gold Award for Video In-Depth Reporting.

sjawards.aaas.org/news/2025-aa...
2025 AAAS Kavli Science Journalism Award Winners Named | AAAS Kavli Science Journalism Awards
Stories describing what can happen when science is manipulated or misapplied are among the winners of the 2025 AAAS Kavli Science Journalism Awards. Winning journalists also did stories on science at ...
sjawards.aaas.org
November 24, 2025 at 2:04 PM
Reposted by William Gunn
The world has lost a giant of virology, molecular biology and science advocacy

David Baltimore’s obituary by Stephen Goff www.nature.com/articles/d41...
David Baltimore obituary: virologist whose enzyme discovery transformed understanding of cancer and HIV/AIDS
The protein, reverse transcriptase, has become an essential tool for making DNA copies of RNA.
www.nature.com
December 1, 2025 at 1:25 PM
Reposted by William Gunn
New paper in Science:

In a platform-independent field experiment, we show that reranking content expressing antidemocratic attitudes and partisan animosity in social media feeds alters affective polarization.

🧵
December 1, 2025 at 7:59 AM
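For a sense of what "reranking content" means mechanically, here is a minimal sketch, not the paper's actual method: it assumes a hypothetical per-post classifier score (`animosity_score` in [0, 1]) and discounts the feed's usual engagement ranking by it.

```python
# Hypothetical sketch of feed reranking: downweight posts a classifier
# flags for antidemocratic attitudes or partisan animosity.
from dataclasses import dataclass

@dataclass
class Post:
    id: str
    engagement_score: float   # the feed's usual ranking signal
    animosity_score: float    # assumed classifier output in [0, 1]

def rerank(posts: list[Post], penalty: float = 0.5) -> list[Post]:
    """Order posts by engagement, discounted by the animosity signal."""
    return sorted(
        posts,
        key=lambda p: p.engagement_score * (1.0 - penalty * p.animosity_score),
        reverse=True,
    )

feed = [Post("a", 0.9, 0.8), Post("b", 0.7, 0.1)]
print([p.id for p in rerank(feed)])  # ['b', 'a']: the flagged post drops
```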
Honestly, I’m shocked I curse this little.
@mikefeigin.bsky.social has swears! They've used 422 profanities in their last 5,009 posts.

🥇 "fucking" (96 times)
🥈 "fuck" (59 times)
🥉 "shit" (56 times)
December 1, 2025 at 12:50 AM
@profanity.accountant What say you?
November 30, 2025 at 6:49 PM
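A tally like @profanity.accountant's reduces to a word count over a user's recent posts. A hypothetical sketch, assuming plain post strings and a hand-picked word list (the bot's actual list and matching rules are unknown):

```python
# Hypothetical sketch: count profanities across posts, report the top 3.
from collections import Counter
import re

PROFANITIES = {"fucking", "fuck", "shit"}  # assumed list, not the bot's

def profanity_tally(posts: list[str]) -> Counter:
    counts: Counter = Counter()
    for post in posts:
        for word in re.findall(r"[a-z']+", post.lower()):
            if word in PROFANITIES:
                counts[word] += 1
    return counts

posts = ["Fucking brilliant.", "Well, shit.", "Oh fuck, not again."]
for word, n in profanity_tally(posts).most_common(3):
    print(f"{word}: {n}")
```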
Reposted by William Gunn
1/3 "What are the conditions for an AI system in the future to cause catastrophic harm?" Turing Award winner @yoshuabengio.bsky.social asked, during his Richard M. Karp Distinguished Lecture at the Simons Institute earlier this year. www.youtube.com/watch?v=g0lj...
Superintelligent Agents Pose Catastrophic Risks — ... | Richard M. Karp Distinguished Lecture
YouTube video by Simons Institute for the Theory of Computing
www.youtube.com
November 29, 2025 at 4:55 AM
There are a few good criticisms of these overall very good points. Can anyone make them?
November 29, 2025 at 4:43 AM
If I could tell everyone anything about social media, I would say:
#1. Take it WAY less seriously, no matter how much attention something is getting.
#2. Any recommendation feed will run out of good matches for your specific interests and just start showing you stuff that's popular.
November 28, 2025 at 3:46 AM
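Point #2 is a concrete mechanism, not just a vibe: once the pool of items matching your stated interests runs dry, the recommender backfills from global popularity. A minimal sketch, with all names and fields assumed for illustration:

```python
# Hypothetical sketch of the exhaustion-then-popularity fallback.
def recommend(user_interests: set[str], catalog: list[dict], k: int = 10) -> list[dict]:
    """Return up to k items: interest matches first, popular filler after."""
    matches = sorted(
        (item for item in catalog if item["topic"] in user_interests),
        key=lambda i: i["popularity"],
        reverse=True,
    )
    if len(matches) >= k:
        return matches[:k]
    # Niche supply exhausted: pad the feed with whatever is globally popular.
    filler = sorted(
        (item for item in catalog if item["topic"] not in user_interests),
        key=lambda i: i["popularity"],
        reverse=True,
    )
    return matches + filler[: k - len(matches)]
```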
This stuff never makes sense until you realize that making sense is not what it's intended to do.
I thought people were exaggerating when they said so-called leftists on here were getting mad about lab grown meat, but nope.

When you become indistinguishable from the conspiratorial freaks I grew up around in the south, you don’t get to call yourself a leftist anymore.
November 28, 2025 at 3:33 AM
What You Don't Know About AI Use Of Water Will Shock You!
Debunking AI’s Environmental Panic - with Andy Masley pca.st/episode/54e370… #AI #environment (good episode)
November 28, 2025 at 3:26 AM
Reposted by William Gunn
Do you want to fund AI alignment research?

The AISI Alignment Team and I have reviewed >800 Alignment Project Applications from 42 countries, and we have ~100 that are very promising. Unfortunately, this means we have a £13-17M funding gap! Thread with details! 🧵
I am very excited that AISI is announcing over £15M in funding for AI alignment and control, in partnership with other governments, industry, VCs, and philanthropists!

Here is a 🧵 about why it is important to bring more independent ideas and expertise into this space.

alignmentproject.aisi.gov.uk
The Alignment Project by AISI — The AI Security Institute
The Alignment Project funds groundbreaking AI alignment research to address one of AI’s most urgent challenges: ensuring advanced systems act predictably, safely, and for society’s benefit.
alignmentproject.aisi.gov.uk
November 27, 2025 at 6:25 PM
When people mimic the "stochastic parrots" they're trying to criticize...
Reminds me of a webinar where someone dropped a "comment, not a question" saying we should distinguish neural search from generative AI because only the former can do evidence retrieval. Er, the webinar literally showed screening with LLMs! I know people want to villainize "gen AI", but can they think before speaking?
November 27, 2025 at 6:26 PM
Reposted by William Gunn
METRICS is accepting applications for the 2026–27 postdoctoral fellowship in meta-research at Stanford. Deadline: Feb 15, 2026. Start date will be around Oct 1, 2026 (+/- 2 month flexibility). See: metrics.stanford.edu/postdoctoral... #MetaResearch #postdoc
Postdoctoral Fellowship Announcement 2026-27
metrics.stanford.edu
November 26, 2025 at 10:05 PM
Sometimes people say things because they think they're true, sometimes they say them to indicate which side they're on. If you confuse which is which, you get what you see in this thread.
Is this platform still massively against AI or has it moved more towards acceptance?
November 26, 2025 at 5:06 PM
Reposted by William Gunn
🤔💭What even is reasoning? It's time to answer the hard questions!

We built the first unified taxonomy of 28 cognitive elements underlying reasoning

Spoiler—LLMs commonly employ sequential reasoning, rarely self-awareness, and often fail to use correct reasoning structures🧠
November 25, 2025 at 6:26 PM
Reposted by William Gunn
Agentic AI systems can plan, take actions, and interact with external tools or other agents semi-autonomously. New paper from CSA Singapore & FAR.AI highlights why conventional cybersecurity controls aren’t enough and maps agentic security frameworks & some key open problems. 👇
November 25, 2025 at 7:41 PM
Reposted by William Gunn
I’m pleased to share the Second Key Update to the International AI Safety Report, which outlines how AI developers, researchers, and policymakers are approaching technical risk management for general-purpose AI systems.
(1/6)
November 25, 2025 at 12:06 PM
Reposted by William Gunn
Interestingly, high self-reported confidence is associated with lower accuracy. This contrasts with most of the literature.

In our setting, we can track individual forecasters over time, which lets us observe that this result is driven by overconfident forecasters.
November 24, 2025 at 3:43 PM
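The quoted analysis is roughly this shape: group forecasts by forecaster, then compare each forecaster's mean confidence to their realized accuracy. A hypothetical sketch (field names assumed, not the authors'):

```python
# Hypothetical sketch: per-forecaster overconfidence from forecast records.
from collections import defaultdict

def per_forecaster_stats(forecasts):
    """forecasts: iterable of (forecaster_id, confidence in [0, 1], correct: bool).

    Returns {forecaster_id: (mean_confidence, accuracy, overconfidence)}.
    """
    by_person = defaultdict(list)
    for fid, confidence, correct in forecasts:
        by_person[fid].append((confidence, correct))
    stats = {}
    for fid, rows in by_person.items():
        mean_conf = sum(c for c, _ in rows) / len(rows)
        accuracy = sum(ok for _, ok in rows) / len(rows)
        stats[fid] = (mean_conf, accuracy, mean_conf - accuracy)
    return stats
```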
Reposted by William Gunn
🏆 Institutional: The Brazilian Reproducibility Initiative is a nationwide effort to evaluate research results in laboratory biology & the largest coordinated replication effort in the field worldwide, showcasing the potential of country-level research improvements. @redebrrepro.bsky.social (3/5)
November 24, 2025 at 10:00 AM
William's Law: Any content moderation plan that doesn't account for motivated reasoning by the people in charge of the plan will eventually expose them to risk. How long it takes depends on what senior leadership signals they want to hear.
November 24, 2025 at 1:41 AM
Good thread. You should judge people by more important actions (and also understand that many people won't have that history and will use their opinion of your appearance to form a first impression, whether they should or not).
But on issues such as respectability and morality, I think you should judge people by their deeper, more important actions. That doesn't mean how they dress, but how they treat others on a more meaningful level.

I will end with something I wrote five years ago about the messy nature of dress codes.
November 24, 2025 at 1:31 AM