Ed Zitron
@edzitron.com
170K followers 2.7K following 25K posts
British, But In Las Vegas and NYC ezitron.76 Sig Newsletter - wheresyoured.at https://linktr.ee/betteroffline - podcast w/ iheartradio Chosen by god, perfected by science CEO at EZPR.com - Award-Winning Tech PR
Pinned
edzitron.com
Premium: The AI Bubble's promises are impossible. NVIDIA's customers are running out of money, GPUs die in 3-5 years, most 1GW data centers will never get built, and OpenAI's Abilene data center won't have the power it needs before 2028 - if it ever does.
www.wheresyoured.at/the-ai-bubbl...
The AI Bubble's Impossible Promises
Readers: I’ve done a very generous “free” portion of this newsletter, but I do recommend paying for premium to get the in-depth analysis underpinning the intro. That being said, I want as many people ...
www.wheresyoured.at
edzitron.com
It is no longer reasonable to trust that any of this will happen. NVIDIA is having to buy its own GPUs and rent them to its customers, OpenAI cannot build 26GW of data centers, and every data center project is a financial black hole.

www.wheresyoured.at/the-ai-bubbl...
edzitron.com
Yet even if these data centers get built - and one analyst told me only 30% of them are viable - they are dead weight debt vehicles of what will be near-obsolete GPUs that *everybody* has been buying - all built for AI compute demand that doesn't exist.
www.wheresyoured.at/the-ai-bubbl...
And Even If They Build Them, These Projects Are Dead Weight Debt Vehicles
Private equity firms make money by investing in things that they eventually sell, with the average holding period being around 5.8 years.

How, exactly, does that work when you’re building data centers full of GPUs that will, after five years, be five generations behind? 

How, exactly, does a private equity firm cash out on an asset that everybody else appears to be building, and who exactly do they sell it to?

And what happens if the GPUs inside die after three years? Who pays to replace them, and what do they replace them with? 

In the best case scenario, we’re watching a situation where private equity investors pile tens or hundreds of billions of dollars into assets that start decaying the second that they’re built, ones that are commoditized by the very nature of the supposed “popularity” of generative AI and the single vendor — NVIDIA — that everybody is buying GPUs from. 

There is little to differentiate one data center from another outside of its location, and at some point you have to wonder if that will matter when only one company — OpenAI — appears to actually need these massive amounts of compute.
edzitron.com
It is impossible for OpenAI to build the 26GW of compute - which will require 33.8GW of power - that they've promised. It requires massive pre-ordering of custom electrical gear and hundreds of billions of dollars of infrastructure.

Sam Altman is lying.
www.wheresyoured.at/the-ai-bubbl...
It Is Fundamentally Impossible For OpenAI To Build The Data Center Capacity It’s Promised, As It Would Require 33.8GW of Power, Massive Pre-Ordering Of Custom Transformers, and Hundreds of Billions Of Dollars In Power Infrastructure Alone — And Most Of The Capacity Remains Unplanned
In any case, as I’ve explained, the power infrastructure necessary to build out these data centers is immense. For example, the power necessary for a 900MW data center planned in Virginia will be run across three different 300 Megawatt phases, won’t begin construction until February 2027, won’t finish construction until June 2028, and won’t even complete its first 300 Megawatt phase until 2031, despite Google saying it would take 18 to 24 months to build the data center.

For OpenAI to build 26GW of compute capacity would require them to secure 33.8GW of power. For comparison, the US Energy Information Administration predicts that the US will add approximately 63 gigawatts of new power capacity in 2025.
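
A back-of-the-envelope check on those two figures, using only the numbers quoted above (the ~1.3x overhead ratio is implied by the 26GW/33.8GW pairing, not stated as a constant):

```python
# Figures from the text: 26GW of compute requires 33.8GW of grid power,
# against the EIA's ~63GW of projected new US capacity in 2025.
it_load_gw = 26.0
grid_power_gw = 33.8
eia_new_capacity_gw = 63.0

# Power overhead per gigawatt of compute (cooling, conversion losses, etc.)
overhead_ratio = grid_power_gw / it_load_gw
print(f"Overhead ratio: {overhead_ratio:.2f}x")  # 1.30x

# OpenAI's requirement as a share of a full year of new US capacity
share = grid_power_gw / eia_new_capacity_gw
print(f"Share of 2025 US additions: {share:.0%}")  # 54%
```

In other words, one company's plans would consume more than half of everything the US is projected to add to its grid in a year.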

Honestly, it’s hard to even calculate the amount that this power infrastructure might cost, as things vary wildly based on location (hotter climes require more cooling, and are more susceptible to transmission loss), the kinds of chips used in these facilities, and so on. 

There are hard, physical limits to the amount of power that you can build, with each massive power project requiring bespoke transformers and infrastructure, each in and of itself requiring anyone planning these massive deployments to pre-order them years in advance. 

Money can only accelerate so much, and custom projects built with electrical-grade steel cannot be accelerated if the steel itself is in short supply, a problem compounded by tariffs. Even if the steel were available, power companies require massive amounts of surveys and testing to make sure the power is reliably and safely delivered to the end customer. You need to win the approval of local governments — and, crucially, local communities, who might ta…
It Is No Longer Ethical To Trust Anyone Promising To Build Gigawatts Of Compute
I cannot express this clearly enough: there is not enough power to power Stargate Abilene, and there may not be enough before the year 2028, which will be multiple fiscal years into Oracle’s $300 billion contract with OpenAI. If this is the case, OpenAI may have a case to walk away from Abilene — or pay Oracle a much smaller cut until (or if) it can get the power necessary to run the facility.

Everything I’ve learned preparing this newsletter has made me question any and all claims by anybody saying it’s going to build a gigawatt data center. I can find no evidence that anybody has constructed one, no evidence that anybody has built the power sufficient to power a gigawatt of compute, and no plans that suggest anybody will successfully complete a gigawatt data center project before the year 2028.

Construction is also hard, and prone to both delays and budgetary shortfalls. The amount of money, infrastructure, time and domain-specific labor (there’s a shortage, by the way!) required to pull off even a gigawatt of data centers is so blatantly unrealistic that I believe the media needs to actively stop reporting on these without asking very practical questions about their feasibility.

While these things might get built at some point, they’re also reliant on massive amounts of debt, all of which is contingent on people still believing that there’s massive demand for generative AI, at a time when everybody is saying that we’re in a bubble.

At some point, private credit will stop issuing billions of dollars in debt for anyone who says the word “gigawatt,” likely at the first sign that these projects are going over budget and require more money to keep them alive.

In simpler terms, data centers are a money pit with few chances of a return.
edzitron.com
Oracle needs 4.5GW of compute capacity to handle OpenAI's $300 billion compute contract. They have only got the land for 2.6GW, and there isn't a chance they have that before 2028.

They are guaranteed to breach their contract with OpenAI.
www.wheresyoured.at/the-ai-bubbl...
“Gigawatt Data Centers” Are A Pipedream, Taking 2.5 Years Per Gigawatt In Construction and Even Longer For Power, And Oracle’s 4.5GW Of Compute Won’t Be Ready Before 2030, Guaranteeing A Breach Of Its Agreement With OpenAI
As I discussed a few weeks ago, Oracle needs to build 4.5GW of data centers to honour its $300 billion compute contract with OpenAI, and hasn’t even decided where it’s putting 1.9GW of that compute.

What we do know is that 1.4GW of that compute is meant to be built in Shackelford, Texas, with “the first building scheduled for delivery in the second half of 2026.”

Remember, it’s about 36 months of lead time — if not more — to get the transformers necessary to build the power necessary for these large data centers, and it appears that Vantage Data Centers is “in the early planning stages” as of late July 2025, and Vantage only just raised the $25 billion to develop the data center, which has yet to break ground. As discussed, the Shackelford site will require at least 1.82GW of power — requiring power infrastructure at a scale that I’ve yet to find a comparison for.

It takes 2.5 years per gigawatt to build a gigawatt data center, meaning that there isn’t a chance in hell Oracle has the capacity necessary before the end of its contract with OpenAI, guaranteeing that it will breach it within the next few years, assuming that OpenAI actually has the money to pay for the compute, which it will not.
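
The timeline arithmetic behind that claim can be sketched directly (a rough model; real builds overlap, but each site still carries its full construction time plus the even longer power lead times):

```python
years_per_gw = 2.5        # the construction figure cited above
oracle_needed_gw = 4.5    # Oracle's obligation under the OpenAI contract

# Worst case: building the capacity strictly one gigawatt after another
serial_years = years_per_gw * oracle_needed_gw
print(f"Serial build: {serial_years} years")  # 11.25 years

# Best case: fully parallel builds still take the full per-site time,
# before the power build-out and ~36-month transformer lead times
print(f"Parallel build floor: {years_per_gw} years")
```

Either way, nothing in that math lands before the back half of this decade.
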
edzitron.com
1GW data centers are entirely different to the smaller ones built in the past. They have different power, cooling and construction problems, and *nobody* has built one yet. And for every 1GW of power, you only get 700MW of IT load/compute capacity.
www.wheresyoured.at/the-ai-bubbl...
1GW+ Capacity Data Centers — And Their Associated Power — Are Entirely Different To Building (and Powering) Smaller Data Centers
Kleyman also added that a 1GW data center is a vastly different beast than smaller, 200MW projects.

There are ALWAYS little energy/power gremlins you have to worry about. Now... take that statement and apply it to gigawatt-scale deployments. So... At 1 GW+ scale you're looking at a few issues that could creep up. This includes grid stability, harmonic distortion, voltage regulation, and fault current limits, which become far more acute. Here's the thing... with this level of scale, even minor inefficiencies or mis-sizing can turn into big risks. The system needs holistic protection design (switching, relays, breakers) to handle large fault currents, and transient behaviors (inrush, switching surges) can drive huge stress on components. Here's the REALLY important thing to remember... Cooling, water supply, redundancy, and electrical layout (bus duct, conductor sizing) also scale non-linearly. That means... what’s manageable at 100 MW becomes a design headache at 1 GW. There was an interesting McKinsey study which notes that scaling existing data center methods to GW size demands as many as ~290 separate generator units in large architectures, which dramatically multiplies maintenance, coordination, and failure modes.
edzitron.com
Lancium, a developer at OpenAI's Abilene data center, only just started building the 1GW substation necessary to get close to the 1.7GW they need. It will take years to build, requiring custom transformers and specialized labor during a global shortage of both.
www.wheresyoured.at/the-ai-bubbl...
Lancium Only Just Started Building Its 1GW Substation - And These Things Can Take Years To Get Parts, Years More To Build, And Cost Hundreds Of Millions, If Not $1 Billion+
So, my editor Matt Hughes and I just spent the last two days of our lives learning all about power, speaking with multiple different analysts and reading every available document about Project Artemis, Project Longhorn, Crusoe, Lancium, Mortenson, and basically anyone related.

I do not see a path to building this substation any faster than January 1 2027, and if I’m honest, I don’t think they’re going to finish it before 2028, if it even happens at all.

Lancium has been talking about the 200MW substation it’s building since 2022, yet my source in Abilene said that the thing only got done a few months ago. 

Building power is also difficult. You don’t just get a permit and start building these things. They require massive infrastructure to bring the power to the location, as well as going through the Large Load Integration (LLI) process formed by Texas’ ERCOT, which requires entire system studies and engineering plans before you can even start ordering the parts.

To make matters worse, procuring the Large Power Transformers is tough, because each one is made custom for the job.

A 1GW substation is an entirely different beast — a remarkable feat of engineering requiring five massive transformers during a global shortage, with the US Department of Energy estimating lead times with domestic producers of over 36 months in July 2024. Even if Crusoe ordered them right then and there (it didn’t), the wait time would be obscene.

And I’ve been unable to find an example of somebody building one outside of those built for cities of millions of people, such as the combined Astoria Energy I and II substations that power New York City and the surrounding areas.

In simpler terms, what Crusoe, Lancium and Mortenson are suggesting is completely unrealistic.

It's a very aggressive timeline. I'm factoring in lead times, supply chain issues, labor shortages, tariffs, and more... but (and it's a BIG but) technically possible if long-lead items (e.g., 345 kV transformers) are procured early and construction runs in parallel workstreams. 

Mortenson notes the 1 GW, 345 kV substation with five main power transformers is already in design, and the site has an initial 200 MW, 138 kV substation in place—both are positive schedule indicators. Aero blocks like LM2500XPRESS are designed for rapid installation and can help phase capacity ahead of full substation completion, but schedule risk remains around interconnection energization and transformer lead times. I work with established power and data center companies, like Lancium, which is building the Stargate campus, and they have some serious issues with scheduling and timelines. 
Is it good when your lead developer has “serious issues with scheduling and timelines”?

Anyway, Kleyman added that “early procurement” is critical to any project of this scale, referring to “ordering long-lead, critical equipment... like, power transformers, switchgear, breakers, and control systems...WAY before the civil or electrical work actually begins,” adding that while Mortenson is “real good at what they do and usually parallel-path design and procurement,” if they haven’t, there’s “NO WAY they’ll go into full operation by mid-2026.” 

Getting specific, Kleyman said that “a project that wants to energize by mid-2026 would ideally have placed those equipment orders in late 2024 or very early 2025 to stay on schedule.” While it’s technically possible Lancium did this — after all, it raised $500 million in November 2024 and another $600 million earlier this week — Kleyman added there has been no public confirmation of the same.

He also added the money can be “tight” in this situation:

Just like remodeling your house, large projects (especially like the one we're discussing) typically face 20–40…
edzitron.com
Oracle's "1.2GW" Abilene data center that they're building for OpenAI is meant to be turned over end of 2026 - yet they've only got, at max, 32.35% (550MW) of the power they need, relying on 10 gas turbines an analyst told Odd Lots were "not very good."
www.wheresyoured.at/the-ai-bubbl...
Abilene’s Power Situation Is Complicated, Insufficient (only 32.35% of the 1.7GW it needs), and Involves Massive, Inefficient Gas Turbines
So, there are many different vendors working on the “clean campus” in Abilene, but we’ll simplify by focusing on those building the data center and the surrounding power.

Lancium is the owner of the land, and it’s hired construction and real estate firm Mortenson to build out the power infrastructure, starting with a 200MW substation and growing to a 1GW substation with five main power transformers, according to Mortenson.

My source at Abilene tells me that the 200MW substation only just got built within the last two months, and the bigger substation has only just started construction. 

This is likely why Crusoe hired contractor Primoris in November 2024 to build “Project Longhorn,” a fracked gas power plant featuring ten gas turbines, for delivery by the end of 2025. A permit filed by Crusoe on January 10 2025 states that it wanted to change how it’d power the facility using gas, choosing to run the turbines for 8760 hours per year — meaning that Crusoe intends to run turbines that are “not very good” according to analyst James Van Geelen 24 hours a day, all year. 

It’s not totally clear whether that modification has been approved. While permit 177263 was issued, it shows as “pending” in the Texas Commission on Environmental Quality’s search page, likely meaning the modification to run these turbines all year is still in process.
edzitron.com
Meanwhile, OpenAI has made Fidji Simo their CEO of Applications, and reports say that she now is responsible for making ChatGPT profitable. She's being set up as an Elizabeth Holmes figure, and we have to make sure Sam Altman actually takes the blame.
www.wheresyoured.at/the-ai-bubbl...
I get that you think “wow, OpenAI has the monopoly over 800 million weekly active users, that’s exactly like what Google has,” except…Google is a massive operation precision-tuned to make sure ads are seen all the time, in a constant ever-increasing push against their users to see how far it can push them, with massive ad sales teams, decades of data, thousands of miles of underground cable, and unrivalled scale, all of it built on top of something that can be relied upon, unlike Large Language Models.

And guess what, even if it were possible, Sam Altman has now made so many promises of such egregious sums of money that it is effectively impossible for Fidji Simo to succeed. It isn’t Fidji Simo’s fault that Sam Altman promised Oracle so much money! It isn’t Fidji Simo’s fault that Sam Altman has said that she has to make the company $200 billion in 2030! It certainly isn’t Fidji Simo’s fault that Sam Altman has to build 26 Gigawatts of data centers and has plans to promise to build many, many more!

Fidji Simo is the fall girl, and it’s very important that history remembers her accurately. It was her decision to take this job — she is likely making incredibly large amounts of money in both cash and stock — but when OpenAI implodes from the sheer force of Sam Altman’s bullshit, we need to make sure she isn’t blamed for putting this company in this mess, even if it manages to die under her watch (assuming she isn’t fired or quits before the end).

Given the strength of feeling amongst the die-hard, I fear that when things inevitably go terminally wrong, Simo will receive the brunt of the blame — because blaming a single person is a lot easier than acknowledging that the business fundamentals behind OpenAI were deranged, and generative AI wasn’t the mass-market business and consumer tool that said die-hard believe it to be. 

To be clear: any attempt to frame her as an Elizabeth Holmes will be a cowardly and nakedly sexist attempt to shift the blame from Sam Altman, a c…
edzitron.com
There's growing evidence that everybody loses money renting out AI GPUs. Oracle lost $100m in the space of three months renting out NVIDIA's new "Blackwell" GPUs, destroying their gross margins. They owe Crusoe $1bn/year for 15 years for their Abilene data center.
www.wheresyoured.at/the-ai-bubbl...
AI Data Centers Are A Complete Disaster
Oracle Lost $100 Million In The Three Months Ending August Renting Out Blackwell GPUs
The Information reported earlier in the week that Oracle had gross profit margins of 14% on GPU compute sales, making a “gross profit” of $125 million on $900 million of revenue in the three months ending August 2025. 

While this could be actual profit, these margins only include the immediate costs of running the GPUs, and while they include (per The Information) “depreciation expenses for some of the equipment,” “other unspecified depreciation costs would eat up another 7 percentage points of margin.” With such thin margins, it’s very likely other expenses will eat into any remaining profitability, and I’m not sure why The Information chose to go with the 14% number rather than 7% (or lower).
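
The margin arithmetic in The Information's reporting works out as follows (a sketch of the reported figures, nothing more):

```python
revenue_m = 900.0        # GPU compute revenue, three months ending August 2025
gross_profit_m = 125.0   # reported "gross profit" on those sales

gross_margin = gross_profit_m / revenue_m
print(f"Reported gross margin: {gross_margin:.0%}")  # 14%

# Per The Information, unspecified depreciation would eat another ~7 points
adjusted_margin = gross_margin - 0.07
print(f"After full depreciation: {adjusted_margin:.0%}")  # 7%
```

A 7% gross margin leaves essentially nothing once operating costs below the gross-profit line are counted.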

Yet all of this obfuscates the really bad parts. Oracle’s gross profit margins appear to be dwindling with every increase of GPU revenue. In the three months ending August, Oracle made $895.7 million — a period in which it lost $100 million renting out NVIDIA’s Blackwell chips. The Information claims that this is “partly because there is a period between when Oracle gets its data centers ready for customers and when customers start using and paying for them” — a statement that doesn’t really make sense when you see Oracle’s revenue growing.

This might be because Oracle is signing unprofitable deals:

As sales from the business nearly tripled in the past year, the gross profit margin from those sales ranged between less than 10% and slightly over 20%, averaging around 16%, the documents show.

In some cases, Oracle is losing considerable sums on rentals of small quantities of both newer and older versions of Nvidia’s chips, the data show. 
Oracle appears to be losing more money with every customer it signs for GPU compute, and somehow lost $100 million on Blackwell chips in the space of three months. I severely doubt that’s from not turning them on, considering its revenue increased by nearly $200 million between May and August 2025. In fact, I bet it’s because they’re extremely expensive to run, on top of the fact that Oracle has likely had to low-ball Microsoft Azure and Amazon Web Services to win business.

This is really bad on just about every foreseeable level. The future of Oracle’s cloud business has become inextricably tied to growing revenue by selling AI compute, with The Information reporting that Oracle’s GPU cloud business could equal its $50 billion+ non-cloud business by 2028. Said revenue is predominantly tied to a very small group of customers — Meta, ByteDance, xAI, NVIDIA and, of course, OpenAI — with the latter making up the majority of its future GPU revenue based on the $300 billion contract alone. If Oracle has made a bad deal with OpenAI, the only thing it’s guaranteed is that future margins will be chewed up by the incredible costs of renting out Blackwell GPUs.

Yet Oracle has a far, far bigger problem on its hands in Abilene, Texas, where it’s trying to build a “data center” made up of 8 buildings, each with 50,000 NVIDIA GB200s and an overall capacity of 1.2GW, with Oracle on the hook for a $1-billion-a-year lease for 15 years regardless of whether a tenant pays or not.

And as you know from the intro, Abilene might be fucked.
edzitron.com
Every single data center project is a decaying investment full of what will be old hardware that's constantly being made obsolete by NVIDIA - yet $50bn or more of private capital has been sunk into building them every single quarter. It's a disaster in waiting.
www.wheresyoured.at/the-ai-bubbl...
Let me put it in simple terms: imagine you, for some reason, rented an M1 Mac when it was released in 2020, and your rental was done in 2025, when we’re onto the M4 series. Would you expect somebody to rent it at the same price? Or would they say “hey, wait a minute, for that price I could rent one of the newer generation ones.” And you’d be bloody right! 

Now, I realize that $70,000 data center GPUs are a little different to laptops, but that only makes their decline in value more profound, especially considering the billions of dollars of infrastructure built around them. 

And that’s the problem. Private equity firms are sinking $50 billion or more a quarter into theoretical data center projects full of what will be years-old GPU technology, despite the fact that there’s no real demand for generative AI compute, and that’s before you get to the grimmest fact of all: that even if you can build these data centers, it will take years and billions of dollars to deliver the power, if it’s even possible to do so.

Harvard economist Jason Furman estimates that data centers and software accounted for 92% of GDP growth in the first half of this year, in line with my conversation with economist Paul Kedrosky from a few months ago. 

All of this money is being sunk into infrastructure for an “AI revolution” that doesn’t exist, as every single AI company is unprofitable, with pathetic revenues ($61 billion or so if you include CoreWeave and Lambda, both of which are being handed money by NVIDIA), impossible-to-control costs that have only ever increased, and no ability to replace labor at scale (and especially not software engineers).  

OpenAI needs more than a trillion dollars to pay its massive cloud compute bills and build 27 gigawatts of data centers, and to get there, it needs to start making incredible amounts of money, a job that’s been mostly handed to Fidji Simo, OpenAI’s new CEO of Applications, who is solely responsible for turning a company that loses billions …
edzitron.com
AI GPUs appear to die in 3-5 years, and even if they don't, NVIDIA releases new ones every year, meaning that data center projects that take 3-4 years to build will, by the time they turn on, be full of years-old tech that will be worthless once the first lease ends.
wheresyoured.at/the-ai-bubbl...
Actually, wait — how long do GPUs last, exactly? Four years for training? Three years? The A100 GPU started shipping in May 2020, and the H100 (and the Hopper GPU generation) entered full production in September 2022, meaning that we’re hurtling at speed toward the time in which we’re going to start seeing a remarkable amount of chips start wearing down, which should be a concern for companies like Microsoft, which bought 150,000 Hopper GPUs in 2023 and 485,000 of them in 2024.
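
Taking the shipping dates above with a three-to-five-year lifespan (three years matching typical warranty terms, five the optimistic case — the lifespan range is the assumption here) gives a rough wear-out window per generation:

```python
# Ship years from the text; the 3-5 year lifespan is the assumption
# under discussion (warranties generally end at three years).
fleets = {"A100": 2020, "H100": 2022}

for gpu, ship_year in fleets.items():
    window = (ship_year + 3, ship_year + 5)
    print(f"{gpu}: wear-out window {window[0]}-{window[1]}")
# A100: wear-out window 2023-2025
# H100: wear-out window 2025-2027
```

On those assumptions, the Hopper fleets bought by the hundreds of thousands in 2023-2024 start aging out right as the debt against them comes due.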

Alright, let me just be blunt: the entire economy of debt around GPUs is insane.

Assuming these things don’t die within five years (their warranties generally end in three), their value absolutely will, as NVIDIA has committed to releasing a new AI chip every single year, likely with significant increases to power and power efficiency. At the end of the five year period, the Special Purpose Vehicle will be the proud owner of five-year-old chips that nobody is going to want to rent at the price that Elon Musk has been paying for the last five years. Don’t believe me? Take a look at the rental prices for H100 GPUs that went from $8-an-hour in 2023 to $2-an-hour in 2024, or the Silicon Data Indexes (aggregated realtime indexes of hourly prices) that show H100 rentals at around $2.14-an-hour and A100 rentals at a dollar-an-hour, with Vast.AI offering them at as little as $0.67 an hour.
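
To see what that price collapse does to payback math, here's a hedged sketch using the hourly rates above and the $70,000 list-price figure used elsewhere in this piece (sustained, round-the-clock utilization is assumed, which flatters the result):

```python
hours_per_year = 24 * 365  # 8,760 hours at full utilization

# H100 hourly rental rates cited above
rate_2023 = 8.00
rate_2024 = 2.00

revenue_2023 = rate_2023 * hours_per_year  # $70,080/yr
revenue_2024 = rate_2024 * hours_per_year  # $17,520/yr

gpu_cost = 70_000  # the "$70,000 data center GPU" figure used in this piece
print(f"Payback at $8/hr: {gpu_cost / revenue_2023:.1f} years")  # 1.0
print(f"Payback at $2/hr: {gpu_cost / revenue_2024:.1f} years")  # 4.0
```

A card that once paid for itself in roughly a year now takes about four — longer than its warranty, and about as long as its useful life.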

This is, by the way, a problem that faces literally every data center being built in the world, and I feel insane talking about it. It feels like nobody is talking about how impossible and ridiculous all of this is. It’s one thing that OpenAI has promised one trillion dollars to people — it’s another that large swaths of that will be spent on hardware that will, by the end of these agreements, be half-obsolete and generating less revenue than ever.

Think about it. Let’s assume we live in a fantasy land where OpenAI is somehow able to pay Oracle $300 billion over 5 years — which, although the costs will almost c…
edzitron.com
Gigawatt data centers are a pipedream, requiring 1.3GW per gigawatt of IT load, and tens of billions of dollars. NVIDIA's customers - like Elon Musk - are running out of cash to buy the GPUs to fill them, so NVIDIA is building Enron-esque SPVs to "rent" GPUs out.
www.wheresyoured.at/the-ai-bubbl...
Gigawatt data centers are a ridiculous pipe dream, one that runs face-first into the walls of reality.  

The world’s governments and media have been far too cavalier with the term “gigawatt,” casually breezing by the fact that Altman’s plans require 17 or more nuclear reactors’ worth of power, as if building power is quick and easy and cheap and just happens.

I believe that many of you think that this is an issue of permitting — of simply throwing enough money at the problem — when we are in the midst of a shortage in the electrical grade steel and transformers required to expand America’s (and the world’s) power grid.

I realize it’s easy to get blinded by the constant drumbeat of “gargoyle-like tycoon cabal builds 1GW data center” and feel that they will simply overwhelm the problem with money, but no, I’m afraid that isn’t the case at all, and all of this is so silly, so ridiculous, so cartoonishly bad that it threatens even the seemingly-infinite wealth of Elon Musk, with xAI burning over a billion dollars a month and planning to spend tens of billions of dollars building the Colossus 2 data center, dragging two billion dollars from SpaceX in his desperate quest to burn as much money as possible for no reason.

This is the age of hubris — a time in which we are going to watch stupid, powerful and rich men fuck up their legacies by finding a technology so vulgar in its costs and mythical outcomes that it drives the avaricious insane and makes fools of them. 

Or perhaps this is what happens when somebody believes they’ve found the ultimate con — the ability to become both the customer and the business, which is exactly what NVIDIA is doing to fund the chips behind Colossus 2.

According to Bloomberg, NVIDIA is creating a company — a “special purpose vehicle” — that it will invest $2 billion in, along with several other backers. Once that’s done, the special purpose vehicle will then use that equity to raise debt from banks, buy GPUs from NVIDIA, and then rent…
edzitron.com
Reporters just got back from OpenAI's Stargate Abilene data center project, yet don't seem to have checked if there's enough power. From my research and sources' info, they have 200MW of the 1.7GW they need, and construction *only just started* on a 1GW substation
www.wheresyoured.at/the-ai-bubbl...
We’re in a bubble. Everybody says we’re in a bubble. You can’t say we’re not in a bubble anymore without sounding insane, because everybody is now talking about how OpenAI has promised everybody $1 trillion — something you could have read about two weeks ago on my premium newsletter.

Yet we live in a chaotic, insane world, where we can watch the news and hear hand-wringing over the fact that we’re in a bubble, read article after CEO after article after CEO after analyst after investor saying we’re in a bubble, yet the market continues to rip ever-upward on increasingly more-insane ideas, in part thanks to analysts that continue to ignore the very signs that they’re relied upon to read.

AMD and OpenAI signed a very strange deal where AMD will give OpenAI the chance to buy 160 million shares at a cent apiece, in tranches of indeterminate size, for every gigawatt of data centers OpenAI builds using AMD’s chips, adding that OpenAI has agreed to buy “six gigawatts of GPUs.”

This is a peculiar way to measure GPUs, which are traditionally measured in the price of each GPU, but nevertheless, these chips are going to be a mixture of AMD’s Instinct MI450 GPUs — which we don’t know the specs of! — and its current-generation MI350 GPUs, making the actual scale of these purchases a little difficult to grasp, though the Wall Street Journal says it would “result in tens of billions of dollars in new revenue” for AMD.

This AMD deal is weird, but one that’s rigged in favour of Lisa Su and AMD. OpenAI doesn’t get a dollar at any point - it has to work out how to buy those GPUs and figure out how to build six further gigawatts of data centers on top of the 10GW of data centers it promised to build for NVIDIA and the seven-to-ten gigawatts that are allegedly being built for Stargate, bringing it to a total of somewhere between 23 and 26 gigawatts of data center capacity.

Hell, while we’re on the subject, has anyone thought about how difficult and expensive it is to build a data cent…

Nevertheless, everybody is happily publishing stories about how Stargate Abilene, Texas — OpenAI’s massive data center with Oracle — is “open,” by which they mean two buildings, and I’m not even confident both of them are providing compute to OpenAI yet. There are six more of them that need to get built for this thing to start rocking at 1.2GW — even though it’s only 1.1GW according to my sources in Abilene.

But, hey, sorry — one minute — while we’re on that subject, did anybody visiting Abilene in the last week or so ever ask whether they’ll have enough power there? 

Don’t worry, you don’t need to look. I’m sure you were just about to, but I did the hard work for you and read up on it, and it turns out that Stargate Abilene only has 200MW of power — a 200MW substation that, according to my sources, was only built within the last couple of months — plus 350MW of gas turbine generators that connect to a natural gas power plant that might get built by the end of the year.
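For a sense of what that power budget actually buys, here’s a minimal sketch of gross power versus usable IT load. The 200MW and 350MW figures are from above; the PUE (power usage effectiveness) range of 1.2 to 1.5 is an assumed industry-typical span, not Abilene’s actual figure:

```python
# Rough conversion from gross site power to usable IT load.
# PUE (power usage effectiveness) = total facility power / IT power.
# The 1.2-1.5 range is an assumed industry-typical span, not a sourced figure.
substation_mw = 200
gas_turbines_mw = 350
gross_mw = substation_mw + gas_turbines_mw  # 550 MW total on site

it_load_best = gross_mw / 1.2    # ~458 MW with an efficient facility
it_load_worst = gross_mw / 1.5   # ~367 MW with a less efficient one
print(f"Usable IT load: roughly {it_load_worst:.0f}-{it_load_best:.0f} MW")
```

That range is nowhere near a 1.2GW IT load, even with every turbine running flat out.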

Said turbines are extremely expensive, feature volatile pricing (for context, natural gas price volatility fell in Q2 2025…to 69% annualized, meaning swings of as much as 69% up or down across a year) and even more volatile environmental consequences, and are, while permitted for this use (this link will download the PDF of the permit), impractical and expensive to run long-term. 
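To unpack what “69% annualized” implies day to day, here’s a sketch using the standard convention of scaling by the square root of 252 trading days; the convention is my assumption about how the figure was annualized, not something stated in the source:

```python
import math

# Annualized volatility is conventionally daily volatility * sqrt(252 trading days),
# so a 69% annualized figure implies daily price swings of roughly 4.3%.
# The sqrt(252) convention is assumed, not confirmed by the source.
annualized_vol = 0.69
daily_vol = annualized_vol / math.sqrt(252)
print(f"Implied daily volatility: {daily_vol:.1%}")
```

Not the sort of input cost you want underpinning a data center’s power bill.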

Analyst James van Geelen, founder of Citrini Research, recently said on Bloomberg’s Odd Lots podcast that these are “not the really good natural gas turbines,” because the really good ones would take seven years to deliver due to a natural gas turbine shortage.

But they’re going to have to do. According to sources in Abilene, developer Lancium has only recently broken ground on the 1GW substation and five transformers OpenAI’s going to need to build out there, and based on my conversations with numerous analysts and researchers, it does not appear that Stargate Abilene will have sufficient power before the year 2027. 

Then there’s the question of whether 1GW of power actually gets you …
edzitron.com
oh it's SO silly but i love it so much
edzitron.com
prepared this video for the day this all ends
edzitron.com
This all happened to me
edzitron.com
FRIEND: Long week?
MY WEEK:
edzitron.com
guy with fake job tells people to do something impossible
techmeme.com
Memo: Vishal Shah, Meta's VP of Metaverse, tells his team to use AI to "go 5X faster" and expects 80% of them to integrate AI into their day-to-day by Q4 (Jason Koebler/Wired)

Main Link | Techmeme Permalink
edzitron.com
they're running out of money. also they need $20bn for this unless their expectation is that the round is cut in half because OpenAI won't convert to a for-profit
techmeme.com
Sources: SoftBank is nearing a deal for a $5B margin loan secured by Arm shares in order to fund additional investment in OpenAI later this year (Bloomberg)

Main Link | Techmeme Permalink
Reposted by Ed Zitron
edzitron.com
Tomorrow: The AI Bubble is built on impossible promises. GPUs die in 5 years, nobody has built a 1GW data center, nor do they have the power to do so. Stargate Abilene won't have enough power before 2028.

Here's a link for $10 off premium.

edzitronswheresyouredatghostio.outpost.pub/public/promo...
Everybody is very casual with how they talk about Sam Altman’s theoretical promises of trillions of dollars of data center infrastructure, and I'm not sure anybody realizes how difficult even the very basics of this plan will be.
Stargate Abilene does not have sufficient power to run at even half of its supposed IT load of 1.2GW, and at its present capacity - assuming the gas turbines function at full power - it can only hope to run 370MW to 460MW of IT load.
I’ve seen article after article about the gas turbines and their use of fracked gas - a disgusting and wasteful act typical of OpenAI - but nobody appears to have asked “how much power does a 1.2GW data center require?” and then chased it with “how much power does Stargate Abilene have?”
The answer is not enough, and the significance of said “not enough” is remarkable.
Today, I’m going to tell you, at length, how impossible the future of generative AI is. 
Gigawatt data centers are a ridiculous pipe dream, one that runs face-first into the walls of reality.  
The world’s governments and media have been far too cavalier with the term “gigawatt,” casually breezing by the fact that Altman’s plans require 17 or more nuclear reactors’ worth of power, as if building power is quick and easy and cheap and just happens.
I believe that many of you think this is an issue of permitting - of simply throwing enough money at the problem - when we are in the midst of a shortage of the electrical-grade steel and transformers required to expand America's (and the world’s) power grid.
I realize it’s easy to get blinded by the constant drumbeat of “gargoyle-like tycoon cabal builds 1 gigawatt data center” and feel that they will simply overwhelm the problem with money, but no, I’m afraid that isn’t the case at all, and all of this is so silly, so ridiculous, so cartoonishly bad that it threatens even the seemingly-infinite wealth of Elon Musk, with xAI burning over a billion dollars a month and planning to spend tens of billions of dollars building the Colossus 2 data center, dragging two billion dollars from SpaceX in his desperate quest to burn as much money as possible for no reason. 
This is the age of hubris - a time in which we are going to watc…
Reposted by Ed Zitron
edzitron.com
Every single AI data center is a toxic investment, with most of its value tied up in GPUs that will be multiple generations behind by the time these things turn on - and then die in 3-5 years.

All to build for AI demand that doesn't exist.
edzitronswheresyouredatghostio.outpost.pub/public/promo...
Actually, wait - how long do GPUs last, exactly? Four years for training? Three years? The A100 GPU started shipping in May 2020, and the H100 (and the Hopper GPU generation) entered full production in September 2022, meaning we’re hurtling at speed toward the point where a remarkable number of chips start wearing down - which should be a concern for companies like Microsoft, which bought 150,000 Hopper GPUs in 2023 and 485,000 of them in 2024.
Alright, let me just be blunt: the entire economy of debt around GPUs is insane.
Assuming these things don’t die within five years (their warranties generally end in three), their value absolutely will, as NVIDIA has committed to releasing a new AI chip every single year, likely with significant increases to power and power efficiency. At the end of the five year period, the Special Purpose Vehicle will be the proud owner of five-year-old chips that nobody is going to want to rent at the price that Elon Musk has been paying for the last five years. Don’t believe me? Take a look at the rental prices for H100 GPUs that went from $8-an-hour in 2023 to $2-an-hour in 2024, or the Silicon Data Indexes (aggregated realtime indexes of hourly prices) that show H100 rentals at around $2.14-an-hour and A100 rentals at a dollar-an-hour, with Vast.AI offering them at as little as $0.67 an hour.
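To put that rental price collapse in annual-revenue terms, here’s a minimal sketch. The hourly rates are the H100 figures cited above; the 80% utilization is purely my assumption for illustration, not a sourced number:

```python
# Annual rental revenue per H100 GPU at the hourly rates cited above.
# The 80% utilization figure is an assumption for illustration, not sourced.
HOURS_PER_YEAR = 24 * 365  # 8,760 hours
utilization = 0.80

rates = {"2023": 8.00, "2024": 2.00}  # $/hour H100 rental rates from the text
revenue = {year: rate * HOURS_PER_YEAR * utilization for year, rate in rates.items()}
for year, rev in revenue.items():
    print(f"{year}: ${rates[year]:.2f}/hr -> ${rev:,.0f}/year per GPU")
```

Under those assumptions, the same GPU’s annual revenue drops by three-quarters in a single year - and the chip still has its debt to service.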
This is, by the way, a problem that faces literally every data center being built in the world, and I feel insane talking about it. It feels like nobody is talking about how impossible and ridiculous all of this is - it’s one thing that OpenAI has promised one trillion dollars to people - it’s another that large swaths of that will be spent on hardware that will, by the end of these agreements, be half-obsolete and generating less revenue than ever.
Think about it - let’s assume we live in a fantasy land where OpenAI is somehow able to pay Oracle $300 billion over 5 years. Said money is paying for access to Blackwell…
edzitron.com
I have no idea but nothing good
edzitron.com
The answer is I have no idea but they just started construction, which doesn’t mean they have it yet
edzitron.com
They did it so they always had some new shit to hock