Gary Marcus
@garymarcus.bsky.social
28K followers 1.2K following 1.4K posts
AI and cognitive science, Founder and CEO (Geometric Intelligence, acquired by Uber). 8 books including Guitar Zero, Rebooting AI and Taming Silicon Valley. Newsletter (50k subscribers): garymarcus.substack.com
Pinned
garymarcus.bsky.social
AI Bait & Switch:

bait: we’re gonna make an AI that can solve any problem experts could solve. it’s gonna transform the whole world.

switch: what we have actually made is fun and amazing, but rarely reliable and often makes mistakes – but ordinary people make mistakes, too. So … AGI solved!
Reposted by Gary Marcus
jukkan.bsky.social
The reason why LLMs fail to play a game of chess, even though they can cite the rules from their training data, is an awesome example of what's stopping autonomous AI agents from being reliable.

Great discussion between @garrykasparov.bsky.social ♟️ and @garymarcus.bsky.social 🧠 on limits of GenAI.
The Atlantic podcast: AI and the Rise of Techno-Fascism in the United States

Kasparov: Yes. It’s very interesting, because it seems to me that, you know, what you are telling us is that machines know the rules because rules are written, but it still doesn’t know what can be done or cannot be done unless it’s explicitly written. Correct?

Marcus: Well, I mean, it’s worse than that. I mean, the rules are explicitly written, but there’s another sense of knowing the rules—which, we actually understand what a queen is, what a knight is, what a rook is. What a piece is. And it never understands anything. It’s one of the most profound illusions of our time that most people witness these things and attribute an understanding to them that they don’t really have.
Reposted by Gary Marcus
bobehayes.bsky.social
"The chances of #AGI’s arrival by 2027 now seem remote. The government has let #AI companies lead a charmed life, with almost zero #regulation. It now ought to enact legislation that addresses costs and harms unfairly offloaded onto the public." ~ @garymarcus.bsky.social
Opinion | The Fever Dream of Imminent Superintelligence Is Finally Breaking
Building bigger A.I. isn’t leading to better A.I.
www.nytimes.com
Reposted by Gary Marcus
jessefelder.bsky.social
“Did people promising to build potential future greenhouses for tulip-growers in 1636 ever have it so good?” garymarcus.substack.com/p/peak-bubbl... by @garymarcus.bsky.social
Reposted by Gary Marcus
mitpress.bsky.social
"Power has really gone to the tech companies, who have enormous influence over the government. And unless people get out of their apathy, that’s ... certainly where the U.S. is likely to stay."

@garymarcus.bsky.social discussed what we need to do to prevent AI tools from undermining democracy:
AI and the Rise of Techno-Fascism in the United States
Will powerful new tools be used to promote democracy or undermine it?
www.theatlantic.com
Reposted by Gary Marcus
707kat.bsky.social
Garry Kasparov (legendary chess player) & @garymarcus.bsky.social are having a sobering conversation about AI, LLMs and Deep Blue in this @theatlantic.com piece.

Many people believe generative AI (or any AI) shows signs of "intelligence," but Marcus attributes it to brute-force pattern recognition.
AI and the Rise of Techno-Fascism in the United States
Will powerful new tools be used to promote democracy or undermine it?
www.theatlantic.com
Reposted by Gary Marcus
olivia.science
Finally! 🤩 Our position piece: Against the Uncritical Adoption of 'AI' Technologies in Academia:
doi.org/10.5281/zeno...

We unpick the tech industry’s marketing, hype, & harm; and we argue for safeguarding higher education, critical thinking, expertise, academic freedom, & scientific integrity.
1/n
Abstract: Under the banner of progress, products have been uncritically adopted or even imposed on users — in past centuries with tobacco and combustion engines, and in the 21st with social media. For these collective blunders, we now regret our involvement or apathy as scientists, and society struggles to put the genie back in the bottle. Currently, we are similarly entangled with artificial intelligence (AI) technology. For example, software updates are rolled out seamlessly and non-consensually, Microsoft Office is bundled with chatbots, and we, our students, and our employers have had no say, as it is not considered a valid position to reject AI technologies in our teaching and research. This is why in June 2025, we co-authored an Open Letter calling on our employers to reverse and rethink their stance on uncritically adopting AI technologies. In this position piece, we expound on why universities must take their role seriously to a) counter the technology industry’s marketing, hype, and harm; and to b) safeguard higher education, critical thinking, expertise, academic freedom, and scientific integrity. We include pointers to relevant work to further inform our colleagues.

Figure 1. A cartoon set-theoretic view on various terms (see Table 1) used when discussing the superset AI (black outline, hatched background): LLMs are in orange; ANNs are in magenta; generative models are in blue; and finally, chatbots are in green. Where these intersect, the colours reflect that, e.g., generative adversarial network (GAN) and Boltzmann machine (BM) models are in the purple subset because they are both generative and ANNs. In the case of proprietary closed-source models, e.g. OpenAI’s ChatGPT and Apple’s Siri, we cannot verify their implementation and so academics can only make educated guesses (cf. Dingemanse 2025). Undefined terms used above: BERT (Devlin et al. 2019); AlexNet (Krizhevsky et al. 2017); A.L.I.C.E. (Wallace 2009); ELIZA (Weizenbaum 1966); Jabberwacky (Twist 2003); linear discriminant analysis (LDA); quadratic discriminant analysis (QDA).

Table 1. Below some of the typical terminological disarray is untangled. Importantly, none of these terms are orthogonal nor do they exclusively pick out the types of products we may wish to critique or proscribe.

Protecting the Ecosystem of Human Knowledge: Five Principles
Reposted by Gary Marcus
rooseveltinstitute.org
When tech is built to replace and not empower people, the future of work & democracy is at risk.

@garymarcus.bsky.social sounds the alarm about unchecked AI and discusses how we can reclaim tech for the public good. #GoodLife https://bit.ly/4nBRcPh

AI and the Rise of Techno-Fascism in the United States
Will powerful new tools be used to promote democracy or undermine it?
www.theatlantic.com
Reposted by Gary Marcus
jessefelder.bsky.social
‘The current strategy of merely making A.I. bigger is deeply flawed — scientifically, economically and politically.’ www.nytimes.com/2025/09/03/o... by @garymarcus.bsky.social
Opinion | The Fever Dream of Imminent ‘Superintelligence’ Is Finally Breaking
www.nytimes.com
Reposted by Gary Marcus
justinhendrix.bsky.social
"The government has let A.I. companies lead a charmed life with almost zero regulation. It now ought to enact legislation that addresses costs and harms unfairly offloaded onto the public," argues @garymarcus.bsky.social.
Opinion | How to Rethink A.I.
www.nytimes.com
garymarcus.bsky.social
In the morning @nytimes: Where I think AI has gone wrong — and what we should do about it.
garymarcus.bsky.social
Anybody remember AI a month ago?

• Everybody in the Bay Area seemed to think that AGI was imminent
• Expectations for GPT-5 were through the roof
• Zuckerberg was spending bajillions of dollars on Alexandr Wang, staff and GPUs, and seemed to have a coherent plan
Reposted by Gary Marcus
bobehayes.bsky.social
What scaling does and doesn’t buy you: peeling back the hype surrounding Google’s trendy Nano Banana

"But it’s all still just an extended form of mimicry, not something deeper, as the persistent failures with parts and wholes keep showing us." ~ @garymarcus.bsky.social

garymarcus.substack....

#AI
What scaling does and doesn’t buy you: peeling back the hype surrounding Google’s trendy Nano Banana
Some things never change
garymarcus.substack.com
garymarcus.bsky.social
not an April Fool’s joke, not the Onion
justinhendrix.bsky.social
"In an exclusive statement to the New York Post, first lady Melania Trump has revealed her next official project: leading the Presidential Artificial Intelligence Challenge to inspire children and teachers to embrace AI technology and help accelerate innovation in the field."
Exclusive | First lady Melania Trump will head effort to teach next generation about AI
First lady Melania Trump will lead the Presidential Artificial Intelligence Challenge to inspire children and teachers to embrace AI technology and help accelerate innovation in the field.
nypost.com
garymarcus.bsky.social
possibly among the most dangerous PACs in history.
washingtonpost.com
A super PAC backed by Silicon Valley’s most powerful investors and executives was created to support “pro-AI” candidates in the 2026 midterms.

The group will also oppose any candidates perceived as slowing down AI development.
Super PAC aims to drown out AI critics in midterms, with $100M and counting
Leading the Future, backed by tech moguls, launched this month to boost midterm candidates who favor few regulations on the AI industry.
wapo.st
Reposted by Gary Marcus
jessefelder.bsky.social
“This entire market has been based on people not understanding that these machines don’t actually work like you, imagining that scaling was going to solve all of this, because they don’t really understand the problem. I mean, it’s almost tragic.” - @garymarcus.bsky.social fortune.com/2025/08/24/i...
'It's almost tragic': Bubble or not, the AI backlash is validating one critic's warnings
Gary Marcus told Fortune that AI valuations remind him of Wile E. Coyote. "We are off the cliff."
fortune.com
Reposted by Gary Marcus
theonion.com
Sam Altman Places Gun To Head After New GPT Claims Dogs Are Crustaceans For 60th Time theonion.com/sam-alt...
Sam Altman Places Gun To Head After New GPT Claims Dogs Are Crustaceans For 60th Time
garymarcus.bsky.social
that would fuck with their narrative
paulmatzko.bsky.social
Normalize journalists interviewing anybody other than AI industry boosters and AI safety doomers.

Make @garymarcus.bsky.social a mandatory journo pit stop before publication.
garymarcus.bsky.social
Breaking: Sam Altman admits they “totally screwed up” the GPT-5 launch - and shamelessly asks for a LOT more good money to chase after bad.
Reposted by Gary Marcus
carlquintanilla.bsky.social
“.. Fears of a bubble are mounting, as tech stocks follow a pattern that is ‘surprisingly similar’ to the dot-com bubble of the late 1990s ..”

@bloomberg.com $QQQ
www.bloomberg.com/news/article...
Reposted by Gary Marcus
anthonymoser.com
If something is incorrect on Wikipedia, it can be sourced, traced, disputed, fixed.

If it's wrong in the LLM, it's just...wrong. It's not a fact explicitly stored somewhere, it's just a string of words generated by a probability map.
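
To make that last point concrete, here is a toy Python sketch (purely illustrative; no real model works from a hand-written table like this, and the context, words, and probabilities are invented for the example). The output is simply sampled, word by word, from a probability map, so when a sample comes out wrong there is no stored fact anywhere to trace, dispute, or fix.

import random

# Toy "probability map" (entirely made up; not any real model's internals):
# for a given context, each candidate next word has a probability.
NEXT_WORD_PROBS = {
    ("the", "capital", "of", "australia", "is"): {
        "canberra": 0.55,   # the factually correct continuation
        "sydney": 0.40,     # a wrong but statistically plausible continuation
        "melbourne": 0.05,
    },
}

def sample_next_word(context):
    """Draw the next word at random according to the probability map."""
    probs = NEXT_WORD_PROBS[tuple(context)]
    r, cumulative = random.random(), 0.0
    for word, p in probs.items():
        cumulative += p
        if r < cumulative:
            return word
    return word  # guard against floating-point rounding

if __name__ == "__main__":
    context = ["the", "capital", "of", "australia", "is"]
    for _ in range(5):
        # Nothing here consults a fact store; a wrong word is produced by
        # exactly the same mechanism as a right one.
        print(" ".join(context + [sample_next_word(context)]))

A wiki, by contrast, stores the claim itself alongside a citation, which is what makes it traceable and fixable in the way the post describes.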