Charlie Williams
@pontifc8.bsky.social
390 followers 650 following 82 posts
Professor at Bocconi University in Milan. Amiable dabbler posting about business research, business education, strategy execution, human impact of business, and maybe a few other things.
pontifc8.bsky.social
How causality ate the world. I don’t see how causality is the key issue in psychology research, most of which is based on RCTs. Replicability crisis, yes a big issue. Generalizability out of the lab, yes. But causality? That hardly seems to be the crux.
dingdingpeng.the100.ci
Happy to announce that I'll give a talk on how we can make rigorous causal inference more mainstream 📈

You can sign up for the Zoom link here: tinyurl.com/CIIG-JuliaRo...
Causal inference interest group, supported by the Centre for Longitudinal Studies

Seminar series
20th October 2025, 3pm BST (UTC+1)

"Making rigorous causal inference more mainstream"
Julia Rohrer, Leipzig University

Sign up to attend at tinyurl.com/CIIG-JuliaRohrer
Reposted by Charlie Williams
thorbenson.bsky.social
[writing the constitution]

Benjamin Franklin: Write down "no dumb fucks as president"

Alexander Hamilton: Stop saying that

Benjamin Franklin: I'm tellin' you. You're gonna regret not writing that down
Reposted by Charlie Williams
thomasfuchs.at
“Technology will just get better”

idk but I liked the Internet better 25 years ago
Reposted by Charlie Williams
leospalteholz.bsky.social
The renewable energy transition goes very slowly, then all at once
pontifc8.bsky.social
How did the hype get so far ahead of reality?
thomasfuchs.at
The relationship of AI to sentience is like that of homeopathy to real medicine.
pontifc8.bsky.social
It’s time for scientists to publicize the research that is being cancelled. Enough silence in hopes of evading the axe! Disease communities (and others) are organized & can be activated!
carlbergstrom.com
How is this not impeachable?

No other president would ever have been allowed such a transparent and harmful vendetta.

/imagine being such an asshole you make me side with Harvard
scott-delaney.bsky.social
All NIH and NSF grants for my entire team--and for all of Harvard, I guess?--have been terminated.

As provocative as that sounds, the practical effect is probably not much. Everything was already frozen. 🤷‍♂️

On the upside, it makes tracking terminated grants easier. No more guesswork at Harvard!
Reposted by Charlie Williams
edzitron.com
Large Language Models and their associated businesses are a $50 billion industry masquerading as a trillion-dollar panacea for a tech industry that’s lost the plot. The tech and business media must change their ways, or risk repeating these cycles forever.
www.wheresyoured.at/reality-check/
If I'm Wrong, How Am I Wrong Exactly?

I dunno man, all of this sure seems like the hyperscalers are reducing their capital expenditures at a time when tariffs and economic uncertainty are making investors more critical of revenues. It sure seems like nobody outside of OpenAI is making any real revenue on generative AI, and they're certainly not making a profit.

It also, at this point, is pretty obvious that generative AI isn't going to do much more than it does today. If Amazon is only making $5 billion in revenue from literally the only shiny new thing it has, sold on the world's premier cloud platform, at a time when businesses are hungry and desperate to integrate AI, then there's little chance this suddenly turns into a remarkable revenue-driver.

Amazon reported $187.79 billion in revenue in its last quarterly earnings, and if $5 billion is all it’s making from AI at the very height of the bubble, it heavily suggests that there may not actually be that much money to make, either because it's too expensive to run these services or because these services don't have the kind of total addressable market that the rest of Amazon's services do.

Microsoft reported that it was making a paltry $13 billion a year — so the equivalent of $3.25 billion a quarter — selling generative AI services and model access. The Information reported that Salesforce's "Agentforce" bullshit isn't even going to boost sales growth in 2025, in part because it’s pitching it as "digital labor that can essentially replace humans for tasks" and it turns out that it doesn't do that very well at all, costs $2 a conversation, and requires paying Salesforce to use its "data cloud" product.

What, if anything, suggests that I'm wrong here? That things have worked out in the past with things like the Internet and smartphones, and so it surely must happen for generative AI and, by extension, OpenAI? That companies like Uber lost money and eventually worked out (see my response here)? That OpenAI is growing fast, and that somehow …

Large Language Models and their associated businesses are a $50 billion industry masquerading as a trillion-dollar panacea for a tech industry that’s lost the plot. Silicon Valley is dominated by management consultants that no longer know what innovation looks like, tricked by Sam Altman, a savvy con artist who took advantage of tech’s desperation for growth.

Generative AI is the perfected nihilistic form of tech bubbles — a way for people to spend a lot of money and power on cloud compute because they don’t have anything better to do. Large Language Models are boring, unprofitable cloud software stretched to their limits — both ethically and technologically — as a product of tech’s collapsing growth era, OpenAI’s non-profit mission fattened up to make foie gras for SaaS companies to upsell their clients and cloud compute companies to sell GPUs at an hourly rate.

The Rot Economy has consumed the tech industry. Every American tech firm has become corrupted by the growth-at-all-costs mindset, and thus they no longer know how to make sustainable businesses that solve real problems, largely because the people who run them haven’t experienced real problems in decades.

As a result, none of them were ready for when Sam Altman tricked them into believing he was their savior. 

Generative AI isn’t about helping you or me do things — it’s about making new SKUs, new monthly subscription costs for consumers and enterprises, new ways to convince people to pay more for the things they already use, made slightly different in a way that often ends up being worse.

Only an industry out of options would choose this bubble, and the punishment for doing so will be grim. I don’t know if you think I’m wrong or not. I don’t know if you think I’m crazy for the way I communicate about this industry. Even if you think I am, think long and hard about why it is you disagree with me, and the consequences of me being wrong. 

There is nothing else after generative AI. There are no other hype…
Reposted by Charlie Williams
edzitron.com
It is now actively illogical to act as if generative AI is a real movement. Analyst Josh Beck said last week that Amazon will only make *$5bn* from AI in 2025 - after spending $105bn in capex. And they're pulling back on data centers too. This isn't a revolution!
www.wheresyoured.at/reality-check/
Oh, By The Way, The Bubble Might Be Bursting

Hey, remember in August of last year when I talked about the pale horses of the AIpocalypse? One of the major warning signs that the bubble was bursting was big tech firms reducing their capital expenditures, a call I've made before, with a little more clarity, on April 4, 2024:

    While I hope I'm wrong, the calamity I fear is one where the massive over-investment in data centers is met with a lack of meaningful growth or profit, leading to the markets turning on the major cloud players that staked their future on unproven generative AI. If businesses don't adopt AI at scale — not experimentally, but at the core of their operations — the revenue is simply not there to sustain the hype, and once the market turns, it will turn hard, demanding efficiency and cutbacks that will lead to tens of thousands of job cuts.

We're about to find out if I'm right.

Last week, Yahoo Finance reported that analyst Josh Beck said that Amazon's generative AI revenue for Amazon Web Services would be $5 billion, a remarkably small sum that is A) not profit and B) a drop in the bucket compared to Amazon's projected $105 billion in capital expenditures in 2025, its $78.2 billion in 2024, or its $48.4 billion in 2023.
Is That Really It?

Are you kidding me? Amazon will only make $5 billion from AI in 2025? What?

5 billion dollars? Five billion god damn dollars? Are you fucking kidding me? You'd make more money auctioning dogs! This is a disgrace! And if you're wondering, yes! All of this is for AI:

    CEO Andy Jassy said in February that the vast majority of this year’s $100 billion in capital investments from the tech giant will go toward building out artificial intelligence capacity for its cloud segment, Amazon Web Services (AWS).

Well shit, I bet investors are gonna love this! Better save some money, Andy! Oh, shit! A report from Wells Fargo analysts (called "Data Centers: AWS Goes on Pause") says that Amazon has "paused a portion of its leasing discussions on the colocation side...[and while] it's not clear the magnitude of the pause...the positioning is similar to what [analysts have] heard recently from Microsoft, [that] they are digesting aggressive recent lease-up deals...pulling back from a pipeline of LOIs or SOQs."

    Some asshole is going to say "LOIs and SOQs aren't a big deal," but they are. I wrote about it here.

"Digesting" in this case refers to when hyperscalers sit with their current capacity for a minute, and Wells Fargo adds that these periods typically last 6-12 months, though can be much shorter. It's not obvious how much capacity Amazon is walking away from, but they are walking away from capacity. It's happening.

But what if it wasn't just Amazon? Friend of the newsletter (read: people I email occasionally asking for a PDF) analyst TD Cowen put out a report last week that, while titled in a way that suggested there wasn't a pullback, actually said there was.

Let's take a look at one damning quote:

    ...relative to the hyperscale demand backdrop at PTC, hyperscale demand has moderated a bit (driven by the Microsoft pullback and to a lesser extent Amazon, discussed below), particularly in Europe, 2) there has been a broader moderation in the urgency and speed with which the hyperscalers are looking to take down capacity, and 3) the number of large deals (i.e. +400MW deals) in the market appears to have moderated.

In plain English, this means "demand has come down, there's less urgency in building this stuff, and the market is slowing down." Cowen also added that it "...observed a moderation in the exuberance around the outlook for hyperscale demand which characterized the market this time last year."

Brother, isn't this meant to be the next big thing? We need more exuberance! Not less!
pontifc8.bsky.social
Mark Halperin hasn't seen a "clever and invigorating joke" since the 2004 presidential election cycle.
originalist.bsky.social
Rufo: "these chats as a good investment of my time to radicalize tech elites who I thought were the most likely and high-impact new coalition partners for the right."

"Andreessen in 2022 asked
@RichardHanania to set up a group with smart conservatives, where Hanania saw Andreessen radicalize"
But I do hope someone in those groups took some screenshots and a fuller story can be told. I was able to reconstruct fragments from participants who spoke to me because they considered the group chats an important open secret. And it's hard to deny their power. The political journalist Mark Halperin, who now runs 2WAY and has a show on Megyn Kelly's network, said it was remarkable that "the left seems largely unaware that some of the smartest and most sophisticated Trump supporters in the nation from coast to coast are part of an overlapping set of text chains that allow their members to share links, intel, tactics, strategy, and ad hoc assignments. Also: clever and invigorating jokes. And they do this (not kidding) like 20 hours a day, including on weekends." He called their influence "substantial."
Semafor (Ben Smith), April 27, 2025: 'How does he have the time?'

Occasionally over the past few years, I've had a friend or source tell me in wonder that Andreessen was blowing up their phone. His hunger for information was "astonishing," one participant in the group chat said. "My impression is Marc spends half his life on 100 of these at the same time," another correspondent marveled. "This man should be a lot busier than I am and I can barely keep up with his group chat. How does he have the time?"

Andreessen has told friends he finds the medium efficient — a way to keep in touch with three times the people in a third of the time. The fact that he and other billionaires spend so much time writing to group chats prompted participants to joke that the very pinnacle of Maslow's Hierarchy of Needs is posting. "It's the same thing happening on both sides, and I've been amazed at how much this is coordinating our reality," said the writer Thomas Chatterton Williams, who was for a time a member of a group chat with Andreessen. "If you weren't in the business at all, you'd think everyone was arriving at conclusions independently - and [they're] not. It's a small group of people who talk to each other and overlap between politics and journalism and a few industries."
pontifc8.bsky.social
We are definitely at peak market moment for “Andreessen seems brilliant for spending 20 hours a day on Signal chats.” Gonna be a fast slide from here - joining Elon in the heap at the bottom.
davekarpf.bsky.social
I guess I just think anyone who says stuff like "The groupchats are the memetic upstream of mainstream opinion" should be stuffed in a locker and never let out.

These groupchats are only powerful because of the sheer concentration of wealth. Your crowd isn't brilliant or clever. You're just rich.
"The group chats are 'the memetic upstream of mainstream opinion," says some VC asshole who now works at Trump's White House.
pontifc8.bsky.social
This is a good rundown of some research about how AI thinks. Big bags of heuristics do not “generalize” to new problems very well. AGI may be much farther away than industry leaders claim. www.wsj.com/tech/ai/how-...
We Now Know How AI ‘Thinks’—and It’s Barely Thinking at All
The vast ‘brains’ of artificial intelligence models can memorize endless lists of rules. That’s useful, but not how humans solve problems.
www.wsj.com
Reposted by Charlie Williams
leightjessica.bsky.social
Cool graph. If you had asked me, I would not have correctly predicted that the slope was steepest for Latin America
pontifc8.bsky.social
Powerful. But not “intelligent.” A transformative technology. But not going to replace all human & organizational intelligence in a matter of years. We’re like 18th-century audiences mesmerized by batteries attached to twitching frog legs, thinking the font of life had been discovered.
emilymbender.bsky.social
LLMs are nothing more than models of the distribution of the word forms in their training data, with weights modified by post-training to produce somewhat different distributions. Unless your use case requires a model of a distribution of word forms in text, indeed, they suck and aren't useful.
hankgreen.bsky.social
There are a lot of critiques of LLMs that I agree with but "they suck and aren't useful" doesn't really hold water.

I understand people not using them because of social, economic, and environmental concerns. And I also understand people using them because they can be very useful.

Thoughts?
pontifc8.bsky.social
I had not seen the revelation last month that Daniel Kahneman ended his life with medical assistance in 2024.

I wonder if it was at the same location in Switzerland where my mother ended her life this January. Reading the article I was back there, living her choice.

www.wsj.com/arts-culture...
Essay | The Last Decision by the World’s Leading Thinker on Decisions
Shortly before Daniel Kahneman died last March, he emailed friends a message: He was choosing to end his own life in Switzerland. Some are still struggling with his choice.
www.wsj.com
Reposted by Charlie Williams
aijaleiponen.bsky.social
Not a lot of people know that manufacturing generates just 10% of U.S. GDP and employs 10% of workers.

Ag + mining is about 2% of GDP.

The rest comes from various services. The U.S. is a service economy like most other developed nations.
pontifc8.bsky.social
That post is getting passed around a lot (as pics at the other place), so it's worth knowing it's probably wrong.
vfxgordon.bsky.social
Deleting my posts about domains, because it does look like this dataset is the underlying source.

It’s obvious that they didn’t bother studying the data in any kind of detail, however.
1000kindsofrain.bsky.social
www.census.gov/foreign-trad...

It might be based on the US Census Bureau data. "Svalbard, Jan Mayen Island" is in that dataset. As is "Heard and McDonald Islands" - with the penguins' seasonally adjusted imports and exports shown.
pontifc8.bsky.social
“Sinks in” - Could there be a funnier headline about “efficient” markets? Like the market is Homer Simpson slapping his forehead: “D'oh!”
Reposted by Charlie Williams
abigaillarson.com
Today in Milan, a student activist group hung a trash-filled effigy of Musk upside down on a gate outside of piazzale Loreto, where Mussolini's body was displayed in 1945.
They left the message: "C'è sempre posto a piazzale Loreto, Elon" (There's always room in piazzale Loreto, Elon)
Photo of a bodysuit filled with garbage, with a taped portrait of Elon Musk, hanging upside down on the gate near piazzale Loreto in Milan, Italy. Photo from Ansa Italia, 2025
pontifc8.bsky.social
Exactly!! 😆 Great show, so much insightful history.
pontifc8.bsky.social
Alright, here's a more #strategy specific question. What do academic strategists think of Hamilton Helmer's 7 Powers: The Foundations of Business Strategy? I've just come across it and it seems like a pretty good synthesis of defensible sources of advantage, which he calls power. Your thoughts?
pontifc8.bsky.social
Actually, the interesting question is how the case of Bluesky fits or doesn't fit the analogies and models people are offering here. @kissane.bsky.social seems to be right that no amount of moderation tools or "federated" systems will keep a community safe once it becomes popular. That tracks.