jim
@jimfund.com
I'm working on getting my head around the model of takeoff used here. I'm starting by reading this post:
www.forethought.org/research/wil...
February 2, 2026 at 10:50 AM
I'm agnostic on whether models can get more performance from latent-space reasoning. If they can, I expect we'll see a transition to latent-space reasoning rather than to thinking in tokens of a nonhuman language, but it will still take place at the point where models reach a near-human understanding of the world.
January 30, 2026 at 8:56 AM
I think many of the reasons that people give for discounting OpenAI in the race to AGI fall into this category. OpenAI's models have defined the curve of AI model capabilities across time so far. I expect this to continue at least in the short term.
January 30, 2026 at 4:30 AM
This kind of truth is important to keep in mind when trying to divine from a lab's statements or actions how it is expecting its future models to perform.
January 30, 2026 at 4:30 AM
"The other labs that are dependent on fundraising have downplayed such talk exactly because it is counterproductive for raising funds..."
From Zvi's 'AI #153: Living Documents'
www.lesswrong.com/posts/bSQagZ...
January 30, 2026 at 4:30 AM
But once AI models get to a near-human level of understanding of the world, the benefits of using human languages will be outweighed by the cost of using a language optimised for a different kind of intelligence.
January 29, 2026 at 3:35 AM
What I am speculating is that model CoT will move from human language to languages developed by models during training.

At the moment, it makes sense for models to use human language. It encodes a lot of useful information about the world.
January 29, 2026 at 3:35 AM
Obviously, these are bullish predictions. But, honestly, I wouldn't be surprised if I've massively undershot on a few of these.
January 27, 2026 at 5:42 AM
Reading this kind of document is, as I believe I saw someone else observe, simply a very sci-fi experience. And, in this particular case, it feels like an appropriate beginning to what I am quite confident is going to be the year in which humans find that they've created an intellectual peer.
January 23, 2026 at 9:45 AM
I'll investigate the empirical side of this later. For now, I'll think things through from first principles. Math is more verifiable, and is a domain in which it's easier to come up with ever-longer tasks. This means that as model time horizons increase, math's advantage as a training domain grows.
January 22, 2026 at 4:56 AM
However, the exact extent of this is unclear. Maybe a change occurred as we entered the reasoning paradigm for LLMs, so maybe inferring a shorter doubling time is a mistake, and we should instead expect LLM abilities in mathematics to continue to sit a little above programming for now.
January 22, 2026 at 4:43 AM
Today's LLMs are good at math relative to other domains. Up until recently, LLMs were better at programming than mathematics. But doubling times are shorter in mathematics than they are in programming, so now LLMs are better at mathematics, and the gap is growing.
January 22, 2026 at 4:23 AM
First, let's get ourselves well situated in reality and fill up our context window with useful information. What trajectory is AI capability in mathematics on? What level is it at now? What does this mean for the near future of mathematics?
January 22, 2026 at 3:37 AM
We need to consider more deeply:
- what 2% time horizons of 90h mean for mathematics
- autonomous ML research
- improvements in harnesses, which accelerate AI progress and general economic output
- how the market will react to job losses + increased productivity
But I'm tired for now.
January 19, 2026 at 5:48 AM
have a 2% time horizon of roughly 90h.
January 19, 2026 at 5:19 AM
Whereas for amplifying programmer productivity the 50% time-horizon is kind of the crucial statistic, for breakthroughs / cutting-edge research the 2% horizon might be just as relevant a figure. We can assume 2% time horizons are roughly 3x the length of 50% time horizons, so at this stage we'd...
January 19, 2026 at 5:18 AM
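The 2%-vs-50% arithmetic in the thread above can be sketched under an assumed logistic success curve in log task length (the kind of model METR-style horizon analyses use). The slope value below is chosen purely for illustration, so that the 2% horizon comes out at exactly 3x the 50% horizon; it is not taken from the posts:

```python
import math

# Assumption (not from the posts): success probability falls off as a
# logistic curve in log task length, p(t) = 1 / (1 + (t / h50)**beta),
# so that p(h50) = 0.5. The slope beta is a free parameter.

def horizon(h50: float, beta: float, p: float) -> float:
    """Task length at which success probability equals p."""
    return h50 * ((1 - p) / p) ** (1 / beta)

# With a 50% horizon of 30h, the slope that makes the 2% horizon
# exactly 3x the 50% horizon is beta = ln(49)/ln(3) ~= 3.54:
beta = math.log(49) / math.log(3)
print(round(horizon(30, beta, 0.02)))  # → 90
```

A steeper or shallower slope would shrink or stretch that 3x ratio, which is why the "roughly 3x" assumption carries real weight in the 90h figure.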
LLM problem solvers have the advantage of being familiar with ~all the relevant literature. It will be more like having an army of Terry Taos who will work with you for a few days then leave forever, than it will be like having an army of, say, grad students who will work with you for a few days.
January 18, 2026 at 11:58 PM