Dawn of the Paul Riddell
@kyloboomhauer.bsky.social
980 followers 1.1K following 3.3K posts
Former carnivorous plant gallery owner, recovering writer, custodian and chef to Parker the lynx-point Siamese. New project: St. Remedius Medical College. http://www.stremedius.com “…one of Dallas’s best eccentrics.” - Dallas Observer
Pinned
kyloboomhauer.bsky.social
New Perennial introduction, now Substack-free:
1989-2002: Angry young essayist for a range of now-forgotten magazines and weekly newspapers.
2008-2023: Owner/operator of Dallas’s pretty much only carnivorous plant gallery.
2024-Present: Trying this writing thing again. www.stremedius.com
Reposted by Dawn of the Paul Riddell
davehone.bsky.social
In advance of the publication of my new book with @markwitton.bsky.social, "Spinosaur Tales" (Nov 6th), for #FossilFriday, here's a nice close up of a tooth of Baryonyx sitting in the jaw of the holotype. More spinosaur goodness coming in the next weeks as I desperately try to promote the book.
kyloboomhauer.bsky.social
Here we go: the full “Welcome to your career in the arts” video.
inchargeofthegirls.bsky.social
sherman, for the love of god!
Reposted by Dawn of the Paul Riddell
edzitron.com
Oracle needs 4.5GW of compute capacity to handle OpenAI's $300 billion compute contract. They've only got the land for 2.6GW, and there isn't a chance they'll have it all before 2028.

They are guaranteed to breach their contract with OpenAI.
www.wheresyoured.at/the-ai-bubbl...
“Gigawatt Data Centers” Are A Pipedream, Taking 2.5 Years Per Gigawatt In Construction and Even Longer For Power, And Oracle’s 4.5GW Of Compute Won’t Be Ready Before 2030, Guaranteeing A Breach Of Its Agreement With OpenAI
As I discussed a few weeks ago, Oracle needs to build 4.5GW of data centers to honour its $300 billion compute contract with OpenAI, and hasn’t even decided where it’s putting 1.9GW of that compute.

What we do know is that 1.4GW of that compute is meant to be built in Shackelford, Texas, with “the first building scheduled for delivery in the second half of 2026.”

Remember, it’s about 36 months of lead time — if not more — to get the transformers needed to deliver the power for these large data centers, and it appears that Vantage Data Centers was “in the early planning stages” as of late July 2025. Vantage only just raised the $25 billion to develop the data center, which has yet to break ground. As discussed, the Shackelford site will require at least 1.82GW of power — requiring power infrastructure at a scale that I’ve yet to find a comparison for.

It takes 2.5 years per gigawatt to build a gigawatt data center, meaning that there isn’t a chance in hell Oracle has the capacity necessary before the end of its contract with OpenAI, guaranteeing that it will breach it within the next few years — assuming that OpenAI actually has the money to pay for the compute, which it will not.
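The arithmetic behind that claim is easy to check. A minimal Python sketch using only the post's own figures (2.5 years of construction per gigawatt, the ~36-month transformer lead time mentioned elsewhere in the piece); the parallel-site counts are hypothetical:

```python
# Back-of-envelope timeline check, using the article's own figures.
NEEDED_GW = 4.5
YEARS_PER_GW = 2.5            # construction rate cited in the post
TRANSFORMER_LEAD_YEARS = 3.0  # ~36-month lead time cited in the post

serial_years = NEEDED_GW * YEARS_PER_GW
print(f"Fully serial build: {serial_years:.2f} years")  # 11.25

# Even if sites are built in parallel, each still waits on transformer
# procurement before its own multi-year construction run.
for sites in (2, 3):
    years = TRANSFORMER_LEAD_YEARS + (NEEDED_GW / sites) * YEARS_PER_GW
    print(f"{sites} parallel sites: ~{years:.2f} years each")
```

Even the generous parallel case lands past the contract window the post describes.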
Reposted by Dawn of the Paul Riddell
edzitron.com
1GW data centers are entirely different to the smaller ones built in the past. They have different power, cooling and construction problems, and *nobody* has built one yet. And for every 1GW of power, you only get 700MW of IT load/compute capacity.
www.wheresyoured.at/the-ai-bubbl...
1GW+ Capacity Data Centers — And Their Associated Power — Are Entirely Different To Building (and Powering) Smaller Data Centers
Kleyman also added that a 1GW data center is a vastly different beast than smaller, 200MW projects.

There are ALWAYS little energy/power gremlins you have to worry about. Now... take that statement and apply it to gigawatt-scale deployments. So... At 1 GW+ scale you're looking at a few issues that could creep up. This includes grid stability, harmonic distortion, voltage regulation, and fault current limits, which become far more acute.

Here's the thing... with this level of scale, even minor inefficiencies or mis-sizing can turn into big risks. The system needs holistic protection design (switching, relays, breakers) to handle large fault currents, and transient behaviors (inrush, switching surges) can drive huge stress on components.

Here's the REALLY important thing to remember... Cooling, water supply, redundancy, and electrical layout (bus duct, conductor sizing) also scale non-linearly. That means... what’s manageable at 100 MW becomes a design headache at 1 GW.

There was an interesting McKinsey study which notes that scaling existing data center methods to GW size demands as many as ~290 separate generator units in large architectures, which dramatically multiplies maintenance, coordination, and failure modes.
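The power-to-compute ratio quoted at the top of this post (700MW of IT load per 1GW of facility power) implies an effective PUE of roughly 1.43. A quick sketch plugging the post's own capacities into that ratio:

```python
# Sketch of the 700MW-per-1GW ratio quoted in the post.
IT_FRACTION = 0.7  # 700MW of IT load per 1GW of power

for power_gw in (1.0, 1.7, 4.5):
    print(f"{power_gw}GW of power -> ~{power_gw * IT_FRACTION:.2f}GW of IT load")

# PUE = total facility power / IT power
print(f"Implied PUE: {1 / IT_FRACTION:.2f}")  # ~1.43
```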
Reposted by Dawn of the Paul Riddell
edzitron.com
Lancium, a developer at OpenAI's Abilene data center, only just started building the 1GW substation necessary to get close to the 1.7GW they need. It will take years to build, requiring custom transformers and specialized labor during a global shortage of both.
www.wheresyoured.at/the-ai-bubbl...
Lancium Only Just Started Building Its 1GW Substation - And These Things Can Take Years To Get Parts, Years More To Build, And Cost Hundreds Of Millions, If Not $1 Billion+
So, my editor Matt Hughes and I just spent the last two days of our lives learning all about power, speaking with multiple different analysts and reading every available document about Project Artemis, Project Longhorn, Crusoe, Lancium, Mortenson, and basically anyone related.

I do not see a path to building this substation any faster than January 1, 2027, and if I’m honest, I don’t think they’re going to finish it before 2028, if it even happens at all.

Lancium has been talking about the 200MW substation it’s building since 2022, yet my source in Abilene said that the thing only got done a few months ago. 

Building power is also difficult. You don’t just get a permit and start building these things. They require massive infrastructure to bring the power to the location, as well as going through the Large Load Interconnection (LLI) process run by Texas’ grid operator ERCOT, which requires entire system studies and engineering plans before you can even start ordering the parts.

To make matters worse, procuring the Large Power Transformers is tough, because each one is made custom for the job.

A 1GW substation is an entirely different beast — a remarkable feat of engineering requiring five massive transformers during a global shortage, with the US Department of Energy estimating lead times with domestic producers of over 36 months in July 2024. Even if Crusoe ordered them right then and there (it didn’t), the wait time would be obscene.

And I’ve been unable to find an example of somebody building one outside of those built for cities of millions of people, such as the combined Astoria Energy I and II substations that power New York City and the surrounding areas.

In simpler terms, what Crusoe, Lancium and Mortenson are suggesting is completely unrealistic.

It's a very aggressive timeline. I'm factoring in lead times, supply chain issues, labor shortages, tariffs, and more... but (and it's a BIG but) technically possible if long-lead items (e.g., 345 kV transformers) are procured early and construction runs in parallel workstreams. 

Mortenson notes the 1 GW, 345 kV substation with five main power transformers is already in design, and the site has an initial 200 MW, 138 kV substation in place—both are positive schedule indicators. Aero blocks like LM2500XPRESS are designed for rapid installation and can help phase capacity ahead of full substation completion, but schedule risk remains around interconnection energization and transformer lead times. I work with established power and data center companies, like Lancium, which is building the Stargate campus, and they have some serious issues with scheduling and timelines. 
Is it good when your lead developer has “serious issues with scheduling and timelines”?

Anyway, Kleyman added that “early procurement” is critical to any project of this scale, referring to “ordering long-lead, critical equipment... like, power transformers, switchgear, breakers, and control systems...WAY before the civil or electrical work actually begins,” adding that while Mortenson is “real good at what they do and usually parallel-path design and procurement,” if they haven’t, there’s “NO WAY they’ll go into full operation by mid-2026.” 

Getting specific, Kleyman said that “a project that wants to energize by mid-2026 would ideally have placed those equipment orders in late 2024 or very early 2025 to stay on schedule.” While it’s technically possible Lancium did this — after all, it raised $500 million in November 2024 and another $600 million earlier this week — Kleyman added there has been no public confirmation of the same.

He also added the money can be “tight” in this situation:

Just like remodeling your house, large projects (especially like the one we're discussing) typically face 20–40…
Reposted by Dawn of the Paul Riddell
edzitron.com
Oracle's "1.2GW" Abilene data center that they're building for OpenAI is meant to be turned over by the end of 2026 - yet they've only got, at most, 32.35% (550MW) of the power they need, relying on 10 gas turbines an analyst told Odd Lots were "not very good."
www.wheresyoured.at/the-ai-bubbl...
Abilene’s Power Situation Is Complicated, Insufficient (Only 32.35% Of The 1.7GW It Needs), And Involves Massive, Inefficient Gas Turbines
So, there are many different vendors working on the “clean campus” in Abilene, but we’ll simplify by focusing on those building the data center and the surrounding power.

Lancium is the owner of the land, and it’s hired construction and real estate firm Mortenson to build out the power infrastructure, starting with a 200MW substation and growing to a 1GW substation with five main power transformers, according to Mortenson.

My source at Abilene tells me that the 200MW substation only just got built within the last two months, and the bigger substation has only just started construction. 

This is likely why Crusoe hired contractor Primoris in November 2024 to build “Project Longhorn,” a fracked gas power plant featuring ten gas turbines, for delivery by the end of 2025. A permit filed by Crusoe on January 10, 2025 states that it wanted to change how it’d power the facility using gas, choosing to run the turbines for 8,760 hours per year — meaning that Crusoe intends to run turbines that are “not very good,” according to analyst James Van Geelen, 24 hours a day, all year.

It’s not totally clear if the permit was issued. While permit 177263 was issued, it shows as “pending” in the Texas Commission on Environmental Quality’s search page, likely meaning the modification to run these turbines all year is still in process.
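Tallying Abilene's near-term power against its target, using only the figures in this post (the 200MW substation plus the ten-turbine gas plant's ~350MW, against the 1.7GW the campus needs), reproduces the percentage in the headline:

```python
# Abilene's available power vs. target, per the post's figures.
SUBSTATION_MW = 200     # the substation finished in the last two months
GAS_TURBINES_MW = 350   # the ten-turbine "Project Longhorn" gas plant
NEEDED_MW = 1700        # the campus's stated requirement

available = SUBSTATION_MW + GAS_TURBINES_MW
print(f"Available: {available}MW of {NEEDED_MW}MW ({available / NEEDED_MW:.2%})")
# -> Available: 550MW of 1700MW (32.35%)
```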
Reposted by Dawn of the Paul Riddell
edzitron.com
Meanwhile, OpenAI has made Fidji Simo their CEO of Applications, and reports say that she now is responsible for making ChatGPT profitable. She's being set up as an Elizabeth Holmes figure, and we have to make sure Sam Altman actually takes the blame.
www.wheresyoured.at/the-ai-bubbl...
I get that you think “wow, OpenAI has the monopoly over 800 million weekly active users, that’s exactly like what Google has,” except…Google is a massive operation precision-tuned to make sure ads are seen all the time, in a constant ever-increasing push against their users to see how far it can push them, with massive ad sales teams, decades of data, thousands of miles of underground cable, and unrivalled scale, all of it built on top of something that can be relied upon, unlike Large Language Models.

And guess what, even if it were possible, Sam Altman has now made so many promises of such egregious sums of money that it is effectively impossible for Fidji Simo to succeed. It isn’t Fidji Simo’s fault that Sam Altman promised Oracle so much money! It isn’t Fidji Simo’s fault that Sam Altman has said that she has to make the company $200 billion in 2030! It certainly isn’t Fidji Simo’s fault that Sam Altman has to build 26 gigawatts of data centers and has plans to promise to build many, many more!

Fidji Simo is the fall girl, and it’s very important that history remembers her accurately. It was her decision to take this job — she is likely making incredibly large amounts of money in both cash and stock — but when OpenAI implodes from the sheer force of Sam Altman’s bullshit, we need to make sure she isn’t blamed for putting this company in this mess, even if it manages to die under her watch (assuming she isn’t fired or quits before the end).

Given the strength of feeling amongst the die-hards, I fear that when things inevitably go terminally wrong, Simo will receive the brunt of the blame — because blaming a single person is a lot easier than acknowledging that the business fundamentals behind OpenAI were deranged, and generative AI wasn’t the mass-market business and consumer tool that said die-hards believe it to be.

To be clear: any attempt to frame her as an Elizabeth Holmes will be a cowardly and nakedly sexist attempt to shift the blame from Sam Altman, a c…
Reposted by Dawn of the Paul Riddell
edzitron.com
There's growing evidence that everybody loses money renting out AI GPUs. Oracle lost $100m in the space of three months renting out NVIDIA's new "Blackwell" GPUs, destroying their gross margins. They owe Crusoe $1bn/year for 15 years for their Abilene data center.
www.wheresyoured.at/the-ai-bubbl...
AI Data Centers Are A Complete Disaster
Oracle Lost $100 Million In The Three Months Ending August Renting Out Blackwell GPUs
The Information reported earlier in the week that Oracle had gross profit margins of 14% on GPU compute sales, making a “gross profit” of $125 million on $900 million of revenue in the three months ending August 2025. 

While this could be actual profit, these margins only include the immediate costs of running the GPUs, and while they include (per The Information) “depreciation expenses for some of the equipment,” “other unspecified depreciation costs would eat up another 7 percentage points of margin.” With such thin margins, it’s very likely other expenses will eat into any remaining profitability, and I’m not sure why The Information chose to go with the 14% number rather than 7% (or lower).
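The margin arithmetic above is easy to re-run from The Information's quoted figures ($125 million of gross profit on $900 million of GPU revenue, then the ~7 percentage points of unallocated depreciation):

```python
# Re-running the quoted margin figures.
revenue_m = 900        # GPU compute revenue, three months ending Aug 2025 ($M)
gross_profit_m = 125   # reported gross profit ($M)

gross_margin = gross_profit_m / revenue_m
print(f"Reported gross margin: {gross_margin:.1%}")  # ~13.9%, i.e. the 14% figure

unallocated_depreciation = 0.07  # ~7 percentage points, per the report
print(f"After that depreciation: {gross_margin - unallocated_depreciation:.1%}")
```

That second figure is the ~7% (or lower) number the post argues is the honest one.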

Yet all of this obfuscates the really bad parts. Oracle’s gross profit margins appear to be dwindling with every increase in GPU revenue. In the three months ending August, Oracle made $895.7 million — a period in which it lost $100 million renting out NVIDIA’s Blackwell chips. The Information claims that this is “partly because there is a period between when Oracle gets its data centers ready for customers and when customers start using and paying for them,” a claim that doesn’t really make sense when you see Oracle’s revenue growing.


This might be because Oracle is signing unprofitable deals:

As sales from the business nearly tripled in the past year, the gross profit margin from those sales ranged between less than 10% and slightly over 20%, averaging around 16%, the documents show.

In some cases, Oracle is losing considerable sums on rentals of small quantities of both newer and older versions of Nvidia’s chips, the data show. 
Oracle appears to be losing more money with every customer it signs for GPU compute, and somehow lost $100 million on Blackwell chips in the space of three months. I severely doubt that’s from not turning them on, considering its revenue increased by nearly $200 million between May and August 2025. In fact, I bet it’s because they’re extremely expensive to run, on top of the fact that Oracle has likely had to low-ball Microsoft Azure and Amazon Web Services to win business.

This is really bad on just about every foreseeable level. The future of Oracle’s cloud business has become inextricably tied to growing revenue by selling AI compute, with The Information reporting that Oracle’s GPU cloud business could equal its $50 billion+ non-cloud business by 2028. Said revenue is predominantly tied to a very small group of customers — Meta, ByteDance, xAI, NVIDIA and, of course, OpenAI — with the latter making up the majority of its future GPU revenue based on the $300 billion contract alone. If Oracle has made a bad deal with OpenAI, the only thing it’s guaranteed is that future margins will be chewed up by the incredible costs of renting out Blackwell GPUs.

Yet Oracle has a far, far bigger problem on its hands in Abilene, Texas, where it’s trying to build a “data center” made up of 8 buildings, each with 50,000 NVIDIA GB200s and an overall capacity of 1.2GW, with Oracle on the hook for a $1-billion-a-year lease for 15 years regardless of whether a tenant pays or not.

And as you know from the intro, Abilene might be fucked.
Reposted by Dawn of the Paul Riddell
edzitron.com
Every single data center project is a decaying investment full of what will be old hardware that's constantly being made obsolete by NVIDIA - yet $50bn or more of private capital has been sunk into building them every single quarter. It's a disaster in waiting.
www.wheresyoured.at/the-ai-bubbl...
Let me put it in simple terms: imagine you, for some reason, rented an M1 Mac when it was released in 2020, and your rental was done in 2025, when we’re onto the M4 series. Would you expect somebody to rent it at the same price? Or would they say “hey, wait a minute, for that price I could rent one of the newer generation ones.” And you’d be bloody right! 

Now, I realize that $70,000 data center GPUs are a little different to laptops, but that only makes their decline in value more profound, especially considering the billions of dollars of infrastructure built around them. 

And that’s the problem. Private equity firms are sinking $50 billion or more a quarter into theoretical data center projects full of what will be years-old GPU technology, despite the fact that there’s no real demand for generative AI compute, and that’s before you get to the grimmest fact of all: that even if you can build these data centers, it will take years and billions of dollars to deliver the power, if it’s even possible to do so.

Harvard economist Jason Furman estimates that data centers and software accounted for 92% of GDP growth in the first half of this year, in line with my conversation with economist Paul Kedrosky from a few months ago. 

All of this money is being sunk into infrastructure for an “AI revolution” that doesn’t exist, as every single AI company is unprofitable, with pathetic revenues ($61 billion or so if you include CoreWeave and Lambda, both of which are being handed money by NVIDIA), impossible-to-control costs that have only ever increased, and no ability to replace labor at scale (and especially not software engineers).  

OpenAI needs more than a trillion dollars to pay its massive cloud compute bills and build 27 gigawatts of data centers, and to get there, it needs to start making incredible amounts of money, a job that’s been mostly handed to Fidji Simo, OpenAI’s new CEO of Applications, who is solely responsible for turning a company that loses billions …
Reposted by Dawn of the Paul Riddell
edzitron.com
AI GPUs appear to die in 3-5 years, and even if they don't, NVIDIA releases new ones every year, meaning that data center projects that take 3-4 years to build will, by the time they turn on, be full of years-old tech that will be worthless once the first lease ends.
wheresyoured.at/the-ai-bubbl...
Actually, wait — how long do GPUs last, exactly? Four years for training? Three years? The A100 GPU started shipping in May 2020, and the H100 (and the Hopper GPU generation) entered full production in September 2022, meaning that we’re hurtling at speed toward the time in which we’re going to start seeing a remarkable amount of chips start wearing down, which should be a concern for companies like Microsoft, which bought 150,000 Hopper GPUs in 2023 and 485,000 of them in 2024.

Alright, let me just be blunt: the entire economy of debt around GPUs is insane.

Assuming these things don’t die within five years (their warranties generally end in three), their value absolutely will, as NVIDIA has committed to releasing a new AI chip every single year, likely with significant increases to power and power efficiency. At the end of the five-year period, the Special Purpose Vehicle will be the proud owner of five-year-old chips that nobody is going to want to rent at the price that Elon Musk has been paying for the last five years. Don’t believe me? Take a look at the rental prices for H100 GPUs that went from $8-an-hour in 2023 to $2-an-hour in 2024, or the Silicon Data Indexes (aggregated realtime indexes of hourly prices) that show H100 rentals at around $2.14-an-hour and A100 rentals at a dollar-an-hour, with Vast.AI offering them at as little as $0.67 an hour.
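The rental-price slide described above, expressed as a one-year decline using the post's own price points; the list of current index prices is likewise taken straight from the post:

```python
# H100 rental price decline, per the post's figures.
h100_2023 = 8.00  # $/hour in 2023
h100_2024 = 2.00  # $/hour in 2024

decline = 1 - h100_2024 / h100_2023
print(f"One-year H100 price decline: {decline:.0%}")  # 75%

# Index prices quoted in the post, for comparison.
for name, price in [("H100 index", 2.14), ("A100 index", 1.00),
                    ("Vast.AI A100 low", 0.67)]:
    print(f"{name}: ${price:.2f}/hour")
```

A 75% drop in a single year is the kind of decay curve that a five-year SPV has to underwrite.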

This is, by the way, a problem that faces literally every data center being built in the world, and I feel insane talking about it. It feels like nobody is talking about how impossible and ridiculous all of this is. It’s one thing that OpenAI has promised one trillion dollars to people — it’s another that large swaths of that will be spent on hardware that will, by the end of these agreements, be half-obsolete and generating less revenue than ever.

Think about it. Let’s assume we live in a fantasy land where OpenAI is somehow able to pay Oracle $300 billion over 5 years — which, although the costs will almost c…
Reposted by Dawn of the Paul Riddell
edzitron.com
Gigawatt data centers are a pipedream, requiring 1.3GW per gigawatt of IT load, and tens of billions of dollars. NVIDIA's customers - like Elon Musk - are running out of cash to buy the GPUs to fill them, so NVIDIA is building Enron-esque SPVs to "rent" GPUs out.
www.wheresyoured.at/the-ai-bubbl...
Gigawatt data centers are a ridiculous pipe dream, one that runs face-first into the walls of reality.  

The world’s governments and media have been far too cavalier with the term “gigawatt,” casually breezing by the fact that Altman’s plans require 17 or more nuclear reactors’ worth of power, as if building power is quick and easy and cheap and just happens.

I believe that many of you think that this is an issue of permitting — of simply throwing enough money at the problem — when we are in the midst of a shortage of the electrical-grade steel and transformers required to expand America’s (and the world’s) power grid.

I realize it’s easy to get blinded by the constant drumbeat of “gargoyle-like tycoon cabal builds 1GW data center” and feel that they will simply overwhelm the problem with money, but no, I’m afraid that isn’t the case at all, and all of this is so silly, so ridiculous, so cartoonishly bad that it threatens even the seemingly-infinite wealth of Elon Musk, with xAI burning over a billion dollars a month and planning to spend tens of billions of dollars building the Colossus 2 data center, dragging two billion dollars from SpaceX in his desperate quest to burn as much money as possible for no reason.

This is the age of hubris — a time in which we are going to watch stupid, powerful and rich men fuck up their legacies by finding a technology so vulgar in its costs and mythical outcomes that it drives the avaricious insane and makes fools of them. 

Or perhaps this is what happens when somebody believes they’ve found the ultimate con — the ability to become both the customer and the business, which is exactly what NVIDIA is doing to fund the chips behind Colossus 2.

According to Bloomberg, NVIDIA is creating a company — a “special purpose vehicle” — that it will invest $2 billion in, along with several other backers. Once that’s done, the special purpose vehicle will then use that equity to raise debt from banks, buy GPUs from NVIDIA, and then rent…
Reposted by Dawn of the Paul Riddell
edzitron.com
Reporters just got back from OpenAI's Stargate Abilene data center project, yet don't seem to have checked if there's enough power. From my research and sources' info, they have 200MW of the 1.7GW they need, and construction *only just started* on a 1GW substation
www.wheresyoured.at/the-ai-bubbl...
We’re in a bubble. Everybody says we’re in a bubble. You can’t say we’re not in a bubble anymore without sounding insane, because everybody is now talking about how OpenAI has promised everybody $1 trillion — something you could have read about two weeks ago on my premium newsletter.

Yet we live in a chaotic, insane world, where we can watch the news and hear hand-wringing over the fact that we’re in a bubble, read article after CEO after article after CEO after analyst after investor saying we’re in a bubble, yet the market continues to rip ever-upward on increasingly more-insane ideas, in part thanks to analysts that continue to ignore the very signs that they’re relied upon to read.

AMD and OpenAI signed a very strange deal where AMD will give OpenAI the chance to buy 160 million shares at a cent apiece, in tranches of indeterminate size, for every gigawatt of data centers OpenAI builds using AMD’s chips, adding that OpenAI has agreed to buy “six gigawatts of GPUs.”

This is a peculiar way to measure GPUs, which are traditionally counted and priced per chip, but nevertheless, these chips are going to be a mixture of AMD’s MI450 Instinct GPUs — which we don’t know the specs of! — and its current-generation MI350 GPUs, making the actual scale of these purchases a little difficult to grasp, though the Wall Street Journal says it would “result in tens of billions of dollars in new revenue” for AMD.

This AMD deal is weird, but one that’s rigged in favour of Lisa Su and AMD. OpenAI doesn’t get a dollar at any point - it has to work out how to buy those GPUs and figure out how to build six further gigawatts of data centers on top of the 10GW of data centers it promised to build for NVIDIA and the seven-to-ten gigawatts that are allegedly being built for Stargate, bringing it to a total of somewhere between 23 and 26 gigawatts of data center capacity.

Hell, while we’re on the subject, has anyone thought about how difficult and expensive it is to build a data cent… Nevertheless, everybody is happily publishing stories about how Stargate Abilene Texas — OpenAI’s massive data center with Oracle — is “open,” by which they mean two buildings, and I’m not even confident both of them are providing compute to OpenAI yet. There are six more of them that need to get built for this thing to start rocking at 1.2GW — even though it’s only 1.1GW according to my sources in Abilene.

But, hey, sorry — one minute — while we’re on that subject, did anybody visiting Abilene in the last week or so ever ask whether they’ll have enough power there? 

Don’t worry, you don’t need to look. I’m sure you were just about to, but I did the hard work for you and read up on it, and it turns out that Stargate Abilene only has 200MW of power — a 200MW substation that, according to my sources, has only been built within the last couple of months, with 350MW of gas turbine generators that connect to a natural gas power plant that might get built by the end of the year.

Said turbines are extremely expensive, featuring volatile pricing (for context, natural gas price volatility fell in Q2 2025…to 69% annualized) and even more volatile environmental consequences, and are, while permitted for it (this will download the PDF of the permit), impractical and expensive to use long-term.

Analyst James van Geelen, founder of Citrini Research, recently said on Bloomberg’s Odd Lots podcast that these are “not the really good natural gas turbines,” because the really good ones would take seven years to deliver due to a natural gas turbine shortage.

But they’re going to have to do. According to sources in Abilene, developer Lancium has only recently broken ground on the 1GW substation and five transformers OpenAI’s going to need to build out there, and based on my conversations with numerous analysts and researchers, it does not appear that Stargate Abilene will have sufficient power before the year 2027. 

Then there’s the question of whether 1GW of power actually gets you …
Reposted by Dawn of the Paul Riddell
edzitron.com
Premium: The AI Bubble's promises are impossible. NVIDIA's customers are running out of money, GPUs die in 3-5 years, most 1GW data centers will never get built, and OpenAI's Abilene data center won't have the power it needs before 2028 - if it ever does.
www.wheresyoured.at/the-ai-bubbl...
The AI Bubble's Impossible Promises
Readers: I’ve done a very generous “free” portion of this newsletter, but I do recommend paying for premium to get the in-depth analysis underpinning the intro. That being said, I want as many people ...
www.wheresyoured.at
Reposted by Dawn of the Paul Riddell
sarahmackattack.bsky.social
Wanna do something to make a small part of the shit salad we're being tossed in a little better?

Help me pack up native seeds into nice little envelopes that we'll distribute in Philly this winter/spring!

November 19th, at @indyhall.org!
RSVP here: luma.com/hvqyrx9v

Poster by @theavocadojam.com!
a skull has a butterfly on their forehead and is holding a little chunk of dirt with a seedling popping out of it, with the philly skyline in the distance. The top right has the title Native Plant prep Party! with the tagline "Help pack native seed packets for free distribution!" hosted by Skype a Scientist, Nov 19th 6:30pm Indy Hall Clubhouse
Reposted by Dawn of the Paul Riddell
matthewseiji.com
It’s 2050 and a teen girl is torrenting a .tar.gz file of all the consciousnesses of all the tech bros who uploaded themselves into the cloud in a bid for immortality and modding them into The Sims 4
Reposted by Dawn of the Paul Riddell
mountsthelens1980.bsky.social
#MSH45 | Richard Lasher
You've been at a job long enough to know a decent colleague to chat with.

The more you get to know them, the more they share. What's fact or fiction? You don't know, but you listen.

Then there's one story that's so over the top—a bona fide lie. But then, a photo appears.
A red Ford Pinto hatchback angles across a narrow gravel forest service road, a blue enduro motorcycle on a rear hitch carrier. Tall firs frame the view. Beyond them, a towering ash cloud from a pyroclastic density flow billows skyward. Photo by Richard Kent Lasher, May 18, 1980.
kyloboomhauer.bsky.social
When there’s no more room in digital Hell, we set up expansions.
sarahtaber.bsky.social
hmm this Art. it speaks to me
screenshot from tumblr. username @matthewsiji.com says "It's 2050 and a teen girl is torrenting a .tar.gz file of all the consciousness of all the tech bros who uploaded themselves into the cloud in a bid for immortality and modding them into The Sims 4"

username crazy-pages reblogs with the added note: "I can't decide if it's better if they're not really uploaded into the cloud, they just all tricked themselves into thinking a GPT model trained on their brain is them, so she's playing with their fake echoes in a macabre imitation of the immortality they deluded themselves into believing they had. Or if they're real and she's just about to torment them forever"
kyloboomhauer.bsky.social
“People are always asking me if I know Judas Iscariot.”
kyloboomhauer.bsky.social
He rises from His grave, looks down at His shadow, and realizes He has to wait six more weeks until spring.
Reposted by Dawn of the Paul Riddell
sofarrsogud.bsky.social
Be the reason they start searching bags for googly eyes at the entrance to your local zoo.
Reposted by Dawn of the Paul Riddell
evandorkin.bsky.social
Sarah has fixed Jerry's eye color on the Nerd Inferno cover. Those who complained must now pick up a copy or you will be cursed. First step, your plants die. Second, your pets run away. Third...well, it involves teeth. Avoid potential calamity with a simple purchase. More info: tinyurl.com/yfw59k2r
The cover for the Nerd Inferno Omnibus featuring Milk and Cheese, the Eltingville Club and characters from Dork.