@ttunguz.bsky.social
ttunguz.bsky.social
Data center investment is scaling towards $400 billion this year. Meanwhile, incumbents are striking strategic deals in the tens of billions, raising questions about circular financing & demand sustainability.
ttunguz.bsky.social
I wasn’t able to find any other comparable time series from neoclouds or hyperscalers to draw broader conclusions. These data points from Google are among the few we can track.
ttunguz.bsky.social
- Google combines internal & external AI token processing. The ratio might have changed.
- Google may be driving significant efficiencies with algorithmic improvements, better caching, or other advances that reduce the total amount of tokens.
ttunguz.bsky.social
- Google may be limited by data center availability. There may not be enough GPUs to continue growing at these rates. In earnings calls this year, the company has said it will be capacity-constrained through Q4 2025.
ttunguz.bsky.social
This raises more questions than it answers. What could be driving the slowdown in growth? Some hypotheses:

- Google may be rate-limiting AI for free users because of unit economics.
ttunguz.bsky.social
Between May & July, Google added 250T tokens per month. In the more recent period, that number fell to 107T tokens per month.
ttunguz.bsky.social
In May, Google announced at I/O that they were processing 480 trillion monthly tokens across their surfaces. Two months later, in July, they announced that number had doubled to 980 trillion. Now it's up to 1,300 trillion.

The absolute numbers are staggering. But could growth be decelerating?
ttunguz.bsky.social
Philipp Schmid dropped an astounding figure yesterday about Google’s AI scale: 1,300 trillion tokens per month (1.3 quadrillion - first time I’ve ever used that unit!).

Now that we have three data points on Google’s token processing, we can chart the progress.
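To chart them, a minimal matplotlib sketch; the month labels are approximate announcement dates, not exact:

```python
import matplotlib.pyplot as plt

# Google's disclosed monthly token-processing figures, in trillions.
labels = ["May (I/O)", "July", "Latest"]
tokens = [480, 980, 1_300]

plt.plot(labels, tokens, marker="o")
plt.ylabel("Tokens processed per month (trillions)")
plt.title("Google monthly token processing")
plt.show()
```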
ttunguz.bsky.social
The record at OpenAI is 7 hours of autonomous execution, 150M tokens, and 15K lines of code refactored with this design pattern. Pretty remarkable even for a senior engineer.

Congratulations, Robot. Keep climbing that ladder.

tomtunguz.com/congratulati...
Congratulations, Robot. You've Been Promoted!
OpenAI's Codex went from intern to senior engineer in 12 months. At 92% adoption & 72% more pull requests, the architect-implementer workflow proves AI has graduated beyond junior-level work.
tomtunguz.com
ttunguz.bsky.social
These tests can be visual (evaluate screenshots), functional (does the code run), or logical (does the code meet the requirements). Then a third robot reviews for quality & style.
ttunguz.bsky.social
In the plan, designing the tests / hurdles that a robot must pass to complete the task is critical. The robot runs the tests, fixes the code, runs the tests again, and repeats until passing.
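A minimal sketch of that loop, assuming hypothetical call_model() and run_tests() helpers standing in for whatever model client and test harness you use; none of these names come from OpenAI's tooling:

```python
# Hypothetical closed feedback loop: run the tests, feed failures back to the
# model, and repeat until the suite passes or we give up.

def call_model(prompt: str) -> str:
    """Placeholder for whatever LLM client you use (hypothetical)."""
    raise NotImplementedError

def run_tests(code: str) -> tuple[bool, str]:
    """Placeholder test harness: returns (passed, failure_report) (hypothetical)."""
    raise NotImplementedError

def feedback_loop(task: str, max_iterations: int = 5) -> str:
    code = call_model(f"Implement this task:\n{task}")
    for _ in range(max_iterations):
        passed, report = run_tests(code)
        if passed:
            return code
        # Feed the failure report back so the model can fix its own work.
        code = call_model(f"The tests failed:\n{report}\n\nFix this code:\n{code}")
    raise RuntimeError("Tests still failing after max iterations")
```

The quality & style review by a third robot, described in the post above, would slot in after the tests pass.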
ttunguz.bsky.social
The counterintuitive part? The second robot shouldn’t see the first robot’s context. Fresh discerning digital eyes catch more errors.

CLOSED FEEDBACK LOOPS
ttunguz.bsky.social
Ask a robot to write the plan document. You’ll refine your thinking as you review it. The robot manages progress through each step.
ttunguz.bsky.social
I wrote about architect-implementer architectures on Monday. The pattern splits work between two separate robots: the first designs the solution, the second executes it.
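A rough illustration of that split, not OpenAI's actual implementation, again assuming a hypothetical call_model() helper; the important detail is that the second call starts with fresh context:

```python
# Hypothetical architect-implementer split: one model call designs, a second
# model call (with no shared context) executes the design.

def call_model(prompt: str) -> str:
    """Placeholder for whatever LLM client you use (hypothetical)."""
    raise NotImplementedError

def architect_implementer(task: str) -> str:
    # Robot 1: design the solution as a plan document.
    plan = call_model(f"Write a step-by-step implementation plan for:\n{task}")
    # Robot 2: execute the plan with fresh context (it never sees robot 1's chat).
    return call_model(f"Implement the following plan exactly:\n{plan}")
```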
ttunguz.bsky.social
The team shared more. The best design patterns for collaborating with Codex are architect-implementer systems & closed feedback loops.

ARCHITECT-IMPLEMENTER
ttunguz.bsky.social
Congratulations, Robot. You’ve been promoted - again! From intern to senior engineer in about a year. Quite the trajectory.

Other data points:

- 92% of technical staff use Codex daily
- those staff generate 72% more pull requests (code submissions) than those who don’t use AI
ttunguz.bsky.social
Watching the OpenAI Dev Day videos, I listened as Thibault, engineering lead for Codex, announced “Codex is now a senior engineer.”

AI entered the organization as an intern - uncertain & inexperienced. Over the summer, engineering leaders said to treat it like a junior engineer.
ttunguz.bsky.social
San Francisco | October 15, 5:30–7:30pm | Space is limited

Apply to join here: gatsby.events/theory-ventu.... Submit your questions through the registration form & I’ll weave them into our conversation.
Office Hours: Sales Leadership in the AI Age
gatsby.events
ttunguz.bsky.social
- How sales leaders can adapt in a fast-changing, AI-driven landscape
- The evolution from traditional software sales to AI & data platform selling
- Creating sales cultures that attract & retain top talent
- Navigating the shift from product-led to enterprise sales motions
ttunguz.bsky.social
During this intimate fireside chat, Chris & I will explore:

- Building & motivating high-performing sales teams in the AI era
- Lessons from scaling organizations from $100M → $1B+ at Databricks, UiPath & Google Cloud
ttunguz.bsky.social
On October 15th in San Francisco, Theory Ventures is hosting an exclusive Office Hours session with Chris Klayko, SVP of Sales at Databricks.
ttunguz.bsky.social
His remarkable journey includes scaling Google Cloud from tens of millions to a multi-billion dollar business in just four years, driving UiPath’s Americas expansion during its hypergrowth phase, & building SAP’s emerging solutions division.
ttunguz.bsky.social
Chris Klayko brings over two decades of sales leadership experience transforming technology companies from promising startups to multi-billion dollar enterprises. As SVP of Sales at Databricks, Chris leads the charge in democratizing data & AI for organizations worldwide.