Aki Ranin
akiranin.bsky.social
Thinking about AGI, futurism, history, philosophy, and longevity. 2x Founder. Deep Tech Investor.
Reading the OpenAI o1 system card...

One million developers generate a billion API calls per day to OpenAI.

If even 1% of those responses are deceptive, as the research suggests, that’s 10m per day.

If OpenAI can catch 92%, that leaves 800,000 per day.

Umm, what?!
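The arithmetic in this post can be checked with a quick back-of-envelope calculation. Note that the one billion calls per day, the 1% deception rate, and the 92% catch rate are the post's own assumptions, not verified OpenAI figures:

```python
# Back-of-envelope check of the numbers in the post above.
# All three inputs are the post's assumptions, not verified figures.
calls_per_day = 1_000_000_000  # ~1B API calls/day from ~1M developers
deception_rate = 0.01          # 1% of responses deceptive, per the cited research
catch_rate = 0.92              # fraction of deceptive responses caught

deceptive = calls_per_day * deception_rate   # deceptive responses per day
uncaught = deceptive * (1 - catch_rate)      # responses that slip through

print(f"{deceptive:,.0f} deceptive per day, {uncaught:,.0f} uncaught")
# → 10,000,000 deceptive per day, 800,000 uncaught
```

So the figures in the post are internally consistent: 1% of a billion is ten million, and the 8% that evade a 92% catch rate is 800,000 per day.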
December 10, 2024 at 5:10 AM
Is OpenAI o1 Pro Mode the "Oracle AI" we have been waiting for? The PhD-level LLM that can help us discover new science?

Or is it actually evidence that advanced reasoning comes with advanced scheming and increased AI safety risks?

I think both.
December 9, 2024 at 6:15 AM
Reposted by Aki Ranin
Recently answered @anilananth.bsky.social's questions for Nature. No matter when it arrives, AGI and the road to reach it will both help tackle thorny problems (e.g. climate change and diseases), and pose huge risks. Understanding and transparency are key.
www.nature.com/articles/d41...
How close is AI to human-level intelligence?
Large language models such as OpenAI’s o1 have electrified the debate over achieving artificial general intelligence, or AGI. But they are unlikely to reach this milestone on their own.
www.nature.com
December 6, 2024 at 2:11 PM
There is mounting evidence that AGI may be imminent, potentially arriving in 2025. Yet Sam Altman has reiterated his stance that AGI is just another milestone.

The real motivation here is that reaching the AGI milestone gets OpenAI out of its Microsoft deal.
December 5, 2024 at 7:14 AM
Listening to Joe Rogan talk about climate change makes me feel dumber every time.
December 4, 2024 at 6:01 AM
When thinking about AI, the most common mistake I see is failing to account for further progress.

This is so common it’s actually rare to see exceptions. I see this with executives, founders, and investors alike. Even AI professionals. 🧵

#agi #ubi #gpt5 #career #aijobs
December 3, 2024 at 2:45 AM
Here’s a short story from 1991 that describes takeoff from an AGI’s perspective, in this scenario a human’s. How would it think, what would it think, how would it act, etc…

web.archive.org/web/20140527...
Understand - a novelette by Ted Chiang
He came so close to drowning, but they reached him just in time. It's the first time the hospital has ever tried their new drug on someone with so much brain damage. Does it work? Does it work too well...
web.archive.org
December 2, 2024 at 11:47 PM
Unbelievable episode from @80000hours.bsky.social on OpenAI’s for-profit restructuring. Crazy facts 🧵
December 1, 2024 at 5:25 AM
If we built a Dyson sphere, how might we hide it? To avoid broadcasting our existence into the entire visible universe and solving someone else’s Fermi paradox?
November 30, 2024 at 3:33 PM
In 1942, leading physicists started disappearing into the Manhattan Project. When top AI people stop tweeting, you know it’s on…
November 30, 2024 at 9:25 AM
If you’re working in science or engineering and not applying AI systematically, you’re becoming obsolete.
🔭🧪🧬 Amazing DeepMind longread on how AI advances science by tackling scale/complexity bottlenecks: processing literature, enhancing data quality, simulations modeling complex systems, and exploring solution spaces that exceed human cognitive limits. www.aipolicyperspectives.com/p/a-new-gold...
A new golden age of discovery
Seizing the AI for Science opportunity
www.aipolicyperspectives.com
November 27, 2024 at 4:51 AM
People have been sleeping on DeepMind’s Alpha program for a decade, even though they open-sourced most of it. There's huge alpha in just applying AI, not even creating it.
Damn: researchers at Lawrence Livermore are using o1 to do complicated nuclear fusion research.

When people say AI will increase the pace of scientific R&D, this is what they're talking about.

www.theinformation.com/articles/why...
November 26, 2024 at 2:26 AM
Here's a list of reasons we should expect AGI by 2030:

1) We can scale up compute several OOMs.
2) Before we run out of chips, power, or data.
3) Prediction markets put the date before 2030.
4) 2,778 AI researchers agree.
5) Ray Kurzweil, Shane Legg, Eric Schmidt, and Elon Musk agree.
November 25, 2024 at 4:33 AM
Since everyone is suddenly applauding Elon’s Diablo ranking, should I add my Counter-Strike clan from 2001 to my LinkedIn profile?
November 22, 2024 at 1:28 PM
I'm currently spending all my (limited) writing energy on Artificial General Intelligence (AGI). I think it's the biggest thing that can happen in the next decade, for better or worse.
November 22, 2024 at 3:45 AM
Bitcoin at 100k is a milestone, but let’s face it, it’s not going to change your life from here. You had to get in a lot earlier.
November 21, 2024 at 11:37 AM
Shoutout @robertwiblin.bsky.social for your incredibly helpful Starter Pack on AI. Much appreciated!
November 20, 2024 at 11:54 PM
This is a roadmap used by OpenAI to judge the intelligence of its AI systems.

According to OpenAI, they are now at Level 3: Agents. So are we halfway to AGI?

Full analysis here:
Road to AGI: Timelines
Part 1: Is AGI imminent?
open.substack.com
November 20, 2024 at 3:07 AM