akbir khan
@akbir.bsky.social
340 followers 160 following 30 posts
dumbest overseer at @anthropic https://www.akbir.dev
Reposted by akbir khan
epochai.bsky.social
We’ve added four new benchmarks to the Epoch AI Benchmarking Hub: Aider Polyglot, WeirdML, Balrog, and Factorio Learning Environment!

Previously we only featured our own evaluation results, but this new data comes from trusted external leaderboards. And we've got more on the way 🧵
Reposted by akbir khan
epochai.bsky.social
4. Factorio Learning Environment by Jack Hopkins, Märt Bakler, and
@akbir.bsky.social

This benchmark uses the factory-building game Factorio to test complex, long-term planning, with settings for lab-play (structured tasks) and open-play (unbounded growth).
jackhopkins.github.io/factorio-lea...
Factorio Learning Environment
Claude Sonnet 3.5 builds factories
jackhopkins.github.io
Reposted by akbir khan
gasteigerjo.bsky.social
New Anthropic blog post: Subtle sabotage in automated researchers.

As AI systems increasingly assist with AI research, how do we ensure they're not subtly sabotaging that research? We show that malicious models can undermine ML research tasks in ways that are hard to detect.
akbir.bsky.social
control is a complementary approach to alignment.

it's really sensible, practical, and can be done now, even before systems are superintelligent.

youtu.be/6Unxqr50Kqg?...
Controlling powerful AI
YouTube video by Anthropic
youtu.be
Reposted by akbir khan
emollick.bsky.social
This is a crazy paper. Fine-tuning a big model like GPT-4o on a small amount of insecure code or even "bad numbers" (like 666) makes it misaligned in almost everything else. It becomes more likely to start offering misinformation, spouting anti-human values, and talking about admiring dictators. Why is unclear.
akbir.bsky.social
This is the entire goal
gracekind.net
Grace @gracekind.net · Jan 31
It’s weird to live in a world where AI models are more aligned than the CEOs of the companies creating them
Reposted by akbir khan
hankgreen.bsky.social
The fact that Deepseek R1 was released three days /before/ Stargate means these guys stood in front of Trump and said they needed half a trillion dollars while they knew R1 was open source and trained for $5M.

Beautiful.
Trump announces $500B in AI funding (five days ago). DeepSeek R1 release (eight days ago).
Reposted by akbir khan
zswitten.bsky.social
Can anyone get a shorter DeepSeek R1 CoT than this?
Reposted by akbir khan
tom4everitt.bsky.social
Process-based supervision done right, and with pretty CIDs to illustrate :)
Reposted by akbir khan
markriedl.bsky.social
I don’t really have the energy for politics right now. So I will observe without comment:

Executive Order 14110 was revoked (Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence)
akbir.bsky.social
R1 model is impressive
Reposted by akbir khan
emollick.bsky.social
New randomized, controlled trial by the World Bank of students using GPT-4 as a tutor in Nigeria. Six weeks of after-school AI tutoring = 2 years of typical learning gains, outperforming 80% of other educational interventions.

And it helped all students, especially girls who were initially behind.
Reposted by akbir khan
emollick.bsky.social
Generative AI has flaws and biases, and there is a tendency for academics to fix on that (85% of equity LLM papers focus on harms)…

…yet in many ways LLMs are uniquely powerful among new technologies for helping people equitably in education and healthcare. We need an urgent focus on how to do that
Reposted by akbir khan
emollick.bsky.social
On one hand, this paper finds adding inference-time compute (like o1 does) improves medical reasoning, which is an important finding suggesting a way to continue to improve AI performance in medicine

On the other hand, scientific illustrations are apparently just anime now: arxiv.org/pdf/2501.06458
akbir.bsky.social
my metabolism is noticeably higher in london than the bay.
akbir.bsky.social
What can AI researchers do *today* that AI developers will find useful for ensuring the safety of future advanced AI systems? To ring in the new year, the Anthropic Alignment Science team is sharing some thoughts on research directions we think are important.
alignment.anthropic.com/2025/recomme...
Recommendations for Technical AI Safety Research Directions
alignment.anthropic.com
Reposted by akbir khan
hankgreen.bsky.social
My hottest take is that nothing makes any sense at all outside of the context of the constantly increasing value of human life, but that increase in value is so invisible (and exists in a world that was built for previous, lower values) that we constantly think the opposite has happened.
akbir.bsky.social
wait what does that mean?

Does it mean there are bugs in Lean, or that it does too much work to check a proof?
akbir.bsky.social
wait isn’t everything just regularisation?