Chris Chapman
@cchapman.bsky.social
810 followers 410 following 450 posts
UX researcher, psychologist. Author "Quantitative User Experience Research" (w/Rodden), "R | Python for Marketing Research and Analytics" (w/Feit & Schwarz). Previously 24 yrs @ Google, Amazon, Microsoft. Personal account. Blog at https://quantuxblog.com
cchapman.bsky.social
They are making such bets with other people's money (and time, IP, attention, electricity, water, etc) ... so it is never "wrong" for them.

"Heads, I win; tails, you lose! (and I'll get promoted because of my so called hard-won experience)"
cchapman.bsky.social
Although I would include a somewhat different set of considerations (e.g. the roles of compassion and intentionality), this is the most clarifying and tech-fantasy-debunking paper I've read in this space.

Well worth reading for anyone interested in ethics towards AI/robots/etc.
abeba.bsky.social
Robot personhood/rights is conceptually bogus and legally puts more power/rights in the hands of those that develop and deploy robots/AI systems

firstmonday.org/ojs/index.ph...
Reposted by Chris Chapman
richarddmorey.bsky.social
Also - contrast b/w the response when I advocate teaching R instead of SPSS -- "No hurry, let's not rush into it" (still waiting) -- & others re: use of LLMs -- "It's inevitable, we're behind; need to implement it ASAP!" -- is telling. Learning to code is freeing. Overhyped LLMs create dependency.
Excerpt from Guest & van Rooij, 2025:

As Danielle Navarro (2015) says about shortcuts through using inappropriate technology, which chatbots are, we end up digging ourselves into “a very deep hole.” She goes on to explain:

"The business model here is to suck you in during
your student days, and then leave you dependent on
their tools when you go out into the real world. [...]
And you can avoid it: if you make use of packages
like R that are open source and free, you never get
trapped having to pay exorbitant licensing fees." (pp.
37–38)
Reposted by Chris Chapman
doctorwaffle.substack.com
In honor of National Poetry Day, the greatest parody rewrite of all time:
Screen cap of parodic version of William Blake's "The Tyger" that begins:
Tyger! Tyger! Burning bright
(Not sure if I spelled that right) 
What immortal hand or eye
Could fashion such a stripy guy? 
What the hammer that hath hewn it 
Into such a chonky unit?
Did who made the lamb make thee, 
Or an external franchisee?
Reposted by Chris Chapman
wblau.bsky.social
Spot the North-American anomaly: the only region where social media use is still growing.
Great work by the FT’s @jburnmurdoch.ft.com
www.ft.com/content/a072... “Have we passed peak social media?”
Reposted by Chris Chapman
mrlockyer.bsky.social
Let me make your Sunday. Got a library card? Great.

Download the Libby app. Free.

Up to 10 audiobooks. FREE (cancel Audible).

Up to 10 e-books. FREE.
(Cancel Kindle Unlimited).

UNLIMITED high street magazines (I chose Empire, RW, Wired, Simple Things to start).

NEWSPAPERS!

This app is AMAZING.
cchapman.bsky.social
Small preview: "far from being an unstoppable force, [AI] is irrevocably shaped ... by the ownership class that steers its development and deployment.... The technology of AI is ultimately not that complex. It is insidious, however, in its capacity to steer results to its owners’ wants and ends."
cchapman.bsky.social
Sounds great! And I suggest the book "Why We Fear AI" by @hagenblix.bsky.social and I. Glimmer, if not already on the list.

In a nutshell, it discusses how the social & economic patterns of late capitalism (anti-labor, anti-knowledge, but pro-fear) show up in technology, i.e. AI.
cchapman.bsky.social
A much-needed reflection on the crisis of rational thought today: www.theguardian.com/news/2025/oc...

As a side note on reason itself, the article's attention to Arendt on imagination aligns with the unique aspects of Peirce's abductive reasoning (as complementary to inductive and deductive forms).
A critique of pure stupidity: understanding Trump 2.0
If the first term of Donald Trump provoked anxiety over the fate of objective knowledge, the second has led to claims we live in a world-historical age of stupid, accelerated by big tech. But might th...
www.theguardian.com
Reposted by Chris Chapman
bharrap.bsky.social
Are you a student or early-career statistician or data scientist in the Sydney area? Come hear from a diverse panel on their experiences and career trajectories! Food and drink to follow.

📅 9th Oct, 6-8pm Sydney time
📍 USyd

Registration required:
statsoc.org.au/event-6332903

#statssky #databs
Statistical Society of Australia - SSA NSW: Early Career and Student Statisticians Career Event 2025
statsoc.org.au
Reposted by Chris Chapman
trekkiebill.bsky.social
Do you ever think about the fact that Wikipedia is the last good major website on the internet? You aren't bombarded with ads. It doesn't try to push video on you, and it doesn't redirect you to a scam site.
Reposted by Chris Chapman
kenwhite.bsky.social
Every few months now I re-read this "Who Goes Nazi?" piece from 1941 and am blown away by how it captures the people we are dealing with 80 years later.

harpers.org/archive/1941...
Who Goes Nazi?, by Dorothy Thompson
harpers.org
cchapman.bsky.social
Impressive work, esp. combining the two papers!

Last week I spoke at Google's (internal) Survey Con about why "Synthetic Survey Data is Not Data".

One audience question: would the estimates get better by adding more data?

My response was 🤷 "maybe" 🤷 ... but this is a much better answer!
verasight.bsky.social
In Verasight’s second synthetic data paper, @gelliottmorris.com, Ben Leff and @peterenns.bsky.social find that the performance of synthetic samples does not consistently improve (and can perform worse) with additional administrative data or real survey responses. Link in the thread.
Reposted by Chris Chapman
courtneymilan.com
MW correctly reading the room
merriam-webster.com
We are thrilled to announce that our NEW Large Language Model will be released on 11.18.25.
cchapman.bsky.social
A book in the works, or ... ?
Reposted by Chris Chapman
nuphoto.com
Well this is (not) reassuring: a TFR (temporary flight restriction) just went up over the city from now through October 12th for "Special Security Reasons."

Not that this makes it any better, but this TFR is specifically for drones (UAS): operations are barred unless you have prior approval or authorization.
This image displays a NOTAM (Notice to Airmen) detailing temporary flight restrictions around Chicago, Illinois, near O'Hare International Airport. The restriction zone spans a 15 nautical mile radius centered around the O'Hare VOR/DME, from the surface to 400 feet above ground level. The NOTAM, issued for special security reasons, is in effect from October 1, 2025, at 1445 UTC until October 12, 2025, at 2359 UTC. It highlights that no Unmanned Aircraft Systems (UAS) operations are allowed in the area unless authorized under specific conditions for defense, law enforcement, or other critical missions.
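For the curious, the NOTAM reduces to a simple containment test: inside a 15 NM circle around the O'Hare VOR/DME and at or below 400 ft AGL. Here is a minimal sketch of that check, assuming approximate coordinates for the VOR/DME (they are not given in the post); anyone actually flying should rely on the official NOTAM, not this:

```python
# Minimal sketch: check whether a drone position falls inside the TFR
# described above (15 NM radius around the O'Hare VOR/DME, surface to
# 400 ft AGL). The center coordinates are approximate and assumed for
# illustration only.
import math

TFR_CENTER = (41.987, -87.905)   # assumed approx. O'Hare VOR/DME lat/lon
TFR_RADIUS_NM = 15.0             # from the NOTAM
TFR_CEILING_FT_AGL = 400.0       # from the NOTAM

def haversine_nm(lat1, lon1, lat2, lon2):
    """Great-circle distance in nautical miles."""
    r_nm = 3440.065  # Earth radius in nautical miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r_nm * math.asin(math.sqrt(a))

def inside_tfr(lat, lon, alt_ft_agl):
    """True if the position falls within the restricted cylinder."""
    dist = haversine_nm(lat, lon, *TFR_CENTER)
    return dist <= TFR_RADIUS_NM and alt_ft_agl <= TFR_CEILING_FT_AGL

# Example: a drone at 200 ft AGL over downtown Chicago (~14 NM from the VOR)
print(inside_tfr(41.88, -87.63, 200))  # True -> operation restricted
```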
Reposted by Chris Chapman
anthrofuentes.bsky.social
Oh no. Jane Goodall has passed. Jane changed so many lives across so many species. The best remembrance is to keep fighting for the rights of all species and to center care in all relations. Plus, do go and read her early work from Gombe, it's revolutionary.
cchapman.bsky.social
nb: my article is about synthetic *survey* data in particular. The OP🧵is broader and some of those questions are better formed than surveys (even if LLMs are still dubious for them).
Reposted by Chris Chapman
jamiecummins.bsky.social
This problem will only get worse when people try to use LLMs to model vulnerable or hard-to-reach populations. People are already suggesting that LLMs could do this to help increase the representativeness of research; I believe that, at present, it will likely have the exact opposite effect.
cchapman.bsky.social
Excellent 🧵 about LLM synthetic data (silicon samples etc) and why they don't solve any particular problem in human research.

FWIW, in addition to results and considerations like these, I've argued elsewhere that the entire question is ill-formed: quantuxblog.com/synthetic-su...
Reposted by Chris Chapman
randyau.com
This week on Counting Stuff. Layoffs! I'm in search of a job now! It sucks but here we are!

Now's the ideal time to give me money in exchange for time or a project. Or if you just have leads in quantitative/data/researchy positions across NYC or remotely

www.counting-stuff.com/looking-for-...
Looking for work, surprise edition
I always told my wife this newsletter was my backup plan...
www.counting-stuff.com
Reposted by Chris Chapman
anthonymoser.com
perfect. this is it.

ai is a political project, arguing about its usefulness as an office tool misses the point
Reposted by Chris Chapman
jamellebouie.net
the president of the united states wants to use the american military to kill american citizens on american soil. that's the whole story!
Reposted by Chris Chapman
edzitron.com
Generative AI is a failure. Across every major AI company and hyperscaler selling models or software or compute, there's only $61 billion of revenue in 2025 - on hundreds of billions of dollars of capex and investment. Every AI company is losing money.

www.wheresyoured.at/the-case-aga...
Every AI Company Is Unprofitable, Struggling To Grow, And Generative AI's Revenues Are Pathetic (around $61 billion in 2025 across all companies) compared to their costs (hundreds of billions)
Mea Culpa! I have said a few times “$40 billion” is the total amount of AI revenue in 2025, and I need to correct the record. $35 billion is what hyperscalers will make this year (roughly), and when you include OpenAI, Anthropic and other startups, the amount is around $55 billion. If you include neoclouds, this number increases by about $6.1 billion. In any case, this doesn’t dramatically change my thesis. 
As I covered on my premium newsletter a few weeks ago, everybody is losing money on generative AI, in part because the cost of running AI models is increasing, and in part because the software itself doesn’t do enough to warrant the costs associated with running it, which are already subsidized and unprofitable for the model providers.

Outside of OpenAI (and to a lesser extent Anthropic), nobody seems to be making much revenue, with the most “successful” company being Anysphere, makers of AI coding tool Cursor, which hit $500 million “annualized” (so $41.6 million in one month) a few months ago, just before Anthropic and OpenAI jacked up the prices for “priority processing” on enterprise queries, raising its operating costs as a result.

In any case, that’s some piss-poor revenue for an industry that’s meant to be the future of software. Smartwatches are projected to make $32 billion this year, and as mentioned, the Magnificent Seven expects to make $35 billion or so in revenue from AI this year.

Even Anthropic and OpenAI seem a little lethargic, both burning billions of dollars while making, by my estimates, no more than $2 billion and $6.26 billion in 2025 so far, despite projections of $5 billion and $13 billion respectively. 

Outside of these two, AI startups are floundering, struggling to stay alive and raising money in several-hundred-million-dollar bursts as their negative-gross-margin businesses falter.

As I dug into a few months ago, I could find only 12 AI-powered companies making more than $8.3 million a month, with two of them slightly improving their revenues, specifically AI search company Perplexity (which has now hit $150 million ARR, or $12.5 million in a month) and AI coding startup Replit (which also hit $150 million ARR in September). 

Both of these companies burn ridiculous amounts of money. Perplexity burned 164% of its revenue on Amazon Web Services, OpenAI and Anthropic last year, and while Replit hasn’t leaked its costs, The Information reports its gross margins in July were 23%, a figure that doesn’t include the costs of its free users, which you simply have to carry with LLMs, as free users are capable of costing you a hell of a lot of money.

Problematically, your paid users can also cost you more than they bring in. In fact, every user loses you money in generative AI, because it’s impossible to do cost control in a consistent manner.
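To make the “annualized” bookkeeping above concrete, here is a minimal sketch of the conversions quoted in the excerpt. All dollar figures come from the post itself; the helper function and rounding are purely illustrative:

```python
# Minimal sketch of the ARR ("annualized") arithmetic quoted above.
# All dollar figures come from the excerpt; nothing here is new data.

def arr_to_monthly(arr_usd: float) -> float:
    """Convert annualized recurring revenue to the implied monthly figure."""
    return arr_usd / 12

# Anysphere (Cursor): $500M annualized -> ~$41.7M/month (the post truncates to $41.6M)
print(f"Cursor:     ${arr_to_monthly(500e6) / 1e6:.1f}M per month")

# Perplexity and Replit: $150M ARR -> $12.5M/month
print(f"Perplexity: ${arr_to_monthly(150e6) / 1e6:.1f}M per month")

# The "$8.3 million a month" cutoff is just $100M ARR spread over 12 months
print(f"Cutoff:     ${arr_to_monthly(100e6) / 1e6:.1f}M per month")

# Revenue total: ~$35B hyperscalers + ~$20B OpenAI/Anthropic/other startups
# + ~$6.1B neoclouds lands near the ~$61B headline figure
print(f"Total:      ${35 + 20 + 6.1:.1f}B")
```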