Sami Beaumont
@samibeaumont.bsky.social
27 followers 72 following 27 posts
Psychiatrist @ GHU Paris psychiatry and neurosciences Research in computational cognitive science @computationalbrain.bsky.social
Reposted by Sami Beaumont
emp1.bsky.social
I deeply agree with the sentiment of this paper. Although, as a norm, I dislike mixing up "activism" and research, there are occasions when it is justified. Universities and society are blindly adopting, and even imposing, corporate tools that severely jeopardise our capacity for reasoning
>>
olivia.science
Finally! 🤩 Our position piece: Against the Uncritical Adoption of 'AI' Technologies in Academia:
doi.org/10.5281/zeno...

We unpick the tech industry’s marketing, hype, & harm; and we argue for safeguarding higher education, critical thinking, expertise, academic freedom, & scientific integrity.
1/n
Abstract: Under the banner of progress, products have been uncritically adopted or even imposed on users — in past centuries with tobacco and combustion engines, and in the 21st with social media. For these collective blunders, we now regret our involvement or apathy as scientists, and society struggles to put the genie back in the bottle. Currently, we are similarly entangled with artificial intelligence (AI) technology. For example, software updates are rolled out seamlessly and non-consensually, Microsoft Office is bundled with chatbots, and we, our students, and our employers have had no say, as it is not considered a valid position to reject AI technologies in our teaching and research. This is why in June 2025, we co-authored an Open Letter calling on our employers to reverse and rethink their stance on uncritically adopting AI technologies. In this position piece, we expound on why universities must take their role seriously to a) counter the technology industry’s marketing, hype, and harm; and to b) safeguard higher education, critical thinking, expertise, academic freedom, and scientific integrity. We include pointers to relevant work to further inform our colleagues.

Figure 1. A cartoon set-theoretic view on various terms (see Table 1) used when discussing the superset AI
(black outline, hatched background): LLMs are in orange; ANNs are in magenta; generative models are
in blue; and finally, chatbots are in green. Where these intersect, the colours reflect that, e.g. generative adversarial network (GAN) and Boltzmann machine (BM) models are in the purple subset because they are
both generative and ANNs. In the case of proprietary closed source models, e.g. OpenAI’s ChatGPT and
Apple’s Siri, we cannot verify their implementation and so academics can only make educated guesses (cf.
Dingemanse 2025). Undefined terms used above: BERT (Devlin et al. 2019); AlexNet (Krizhevsky et al.
2017); A.L.I.C.E. (Wallace 2009); ELIZA (Weizenbaum 1966); Jabberwacky (Twist 2003); linear discriminant analysis (LDA); quadratic discriminant analysis (QDA).

Table 1. Below, some of the typical terminological disarray is untangled. Importantly, none of these terms are orthogonal, nor do they exclusively pick out the types of products we may wish to critique or proscribe.

Protecting the Ecosystem of Human Knowledge: Five Principles
Reposted by Sami Beaumont
newseye.bsky.social
🚨BREAKING: The last journalists working for AFP in Gaza have said they can no longer work for the news agency.

They are out of energy and they are starving to death.

I have never seen a statement from a news organisation like it.

🧵
Without immediate intervention, the last reporters in Gaza will die
21 July 2025
AFP has been working with one freelance text journalist, three photographers, and six freelance video journalists in the Gaza Strip since its staff journalists left during 2024.
Along with a few others, they are today the only ones reporting on what is happening in the Gaza Strip. The international press has been barred from entering the territory for nearly two years.
We refuse to watch them die.
One of them, Bashar, has worked with AFP since 2010, first as a fixer, then as a freelance photographer, and since 2024 as its principal photographer.
On Saturday 19 July, he managed to post a message on Facebook: "I no longer have the strength to work for the media.
My body is thin and I can no longer work."
Bashar, 30, works and lives in the same conditions as all Gazans, moving from one refugee camp to another as Israeli bombardments dictate. For more than a year he has lived in utter destitution, taking enormous risks to his life in order to work. Hygiene is a major problem for him, with periods of severe intestinal illness.
Since February, Bashar has been living in the ruins of his house in Gaza City with his mother, his four brothers and sisters, and the family of one of his brothers. Their house is bare of all furnishings and comfort, apart from a few cushions. On Sunday morning, he reported that his eldest brother had "fallen, because of hunger".
Even though these journalists receive a monthly salary from AFP, there is nothing to buy, or only at utterly exorbitant prices. The banking system has disappeared, and those who exchange money between online bank accounts and cash take a commission of nearly 40%.
AFP no longer has a vehicle, let alone fuel, to allow its journalists to travel for their reporting. In any case, travelling by car amounts to …
Reposted by Sami Beaumont
bayesianboy.bsky.social
The only reason to refer to technology as “AI” is to confuse people. Change my mind.
Reposted by Sami Beaumont
polgreen.bsky.social
A powerful essay by @omerbartov.bsky.social that concludes, with care and precision, that Israel is committing genocide against the Palestinian people. Others have reached this conclusion too. Bartov's piece explores some deep and important questions. www.nytimes.com/2025/07/15/o...
Opinion | I’m a Genocide Scholar. I Know It When I See It.
www.nytimes.com
Reposted by Sami Beaumont
laklab.bsky.social
Our work, out in Cell, shows that the brain’s dopamine signals teach each individual a unique learning trajectory. A collaborative experiment-theory effort, led by Sam Liebana in the lab. This was the first experiment my lab started, just shy of 6 years ago, and I'm very excited to see it out: www.cell.com/cell/fulltex...
samibeaumont.bsky.social
The rough idea is that the explanatory power of computationalism is not in the analogy between brains and machines but in the formal tools brought by computational sciences
samibeaumont.bsky.social
Hi! Allow me to intrude on the conversation; here is a probably relevant paper bsky.app/profile/iris...
irisvanrooij.bsky.social
⚡️Very excited to share our new preprint "Reclaiming AI as a theoretical tool for cognitive science", by @olivia.science @fedeadolfi.bsky.social #ronalddehaan #antoninakolokolova & #patriciarich and myself psyarxiv.com/4cbuv Highlights/summary in thread 🧵👇 1/n
Reposted by Sami Beaumont
marloscmachado.bsky.social
📢 I'm very excited to release AgarCL, a new evaluation platform for research in continual reinforcement learning‼️

Repo: github.com/machado-rese...
Website: agarcl.github.io
Preprint: arxiv.org/abs/2505.18347

Details below 👇
Reposted by Sami Beaumont
irisvanrooij.bsky.social
I am seeing news that AI companies face “unexpected” obstacles in scaling up their AI systems.

Not unexpected at all, of course. Completely predictable from the Ingenia theorem.
irisvanrooij.bsky.social
The intractability proof (a.k.a. Ingenia theorem) implies that any attempts to scale up AI-by-Learning to situations of real-world, human-level complexity will consume an astronomical amount of resources (see Box 1 for an explanation). 13/n
Box 1 in the paper, intuitively explaining the implications of the intractability result.
Reposted by Sami Beaumont
khamascience.bsky.social
It is happening live before our eyes. We will not be able to say that we did not know. (Petition to sign)

blogs.mediapart.fr/les-invites-...
Reposted by Sami Beaumont
samibeaumont.bsky.social
Hi! This looks really interesting, but the link doesn't redirect to the paper
Reposted by Sami Beaumont
abalosaurus.bsky.social
Excited to share our latest paper on lizard contests and agonistic signals! Read on for a daydream on what being a lizard could be like, disguised as a discussion of the relative impact of static colour patches and behavioral displays in animal contests 🧵
academic.oup.com/beheco/artic...
Behavioral threat and appeasement signals take precedence over static colors in lizard contests
Behavioral signals outweigh static color patches in determining the winner of territorial disputes. To understand what limits aggression in wall lizards, w
academic.oup.com
Reposted by Sami Beaumont
irisvanrooij.bsky.social
🚨Our paper `Reclaiming AI as a theoretical tool for cognitive science' is now forthcoming in the journal Computational Brain & Behaviour. (Preprint: osf.io/preprints/ps...)

Below a thread summary 🧵1/n

#metatheory #AGI #AIhype #cogsci #theoreticalpsych #criticalAIliteracy
The idea that human cognition is, or can be understood as, a form of computation is a useful conceptual tool for cognitive science. It was a foundational assumption during the birth of cognitive science as a multidisciplinary field, with Artificial Intelligence (AI) as one of its contributing fields. One conception of AI in this context is as a provider of computational tools (frameworks, concepts, formalisms, models, proofs, simulations, etc.) that support theory building in cognitive science. The contemporary field of AI, however, has taken the theoretical possibility of explaining human cognition as a form of computation to imply the practical feasibility of realising human(-like or -level) cognition in factual computational systems; and, the field frames this realisation as a short-term inevitability. Yet, as we formally prove herein, creating systems with human(-like or -level) cognition is intrinsically computationally intractable.
Reposted by Sami Beaumont
olivia.science
Tired but happy to say this is out w @andreaeyleen.bsky.social: Are Neurocognitive Representations 'Small Cakes'? philsci-archive.pitt.edu/24834/

We analyse cog neuro theories showing how vicious regress, e.g. the homunculus fallacy, is (sadly) alive and well — and importantly how to avoid it. 1/
In order to understand cognition, we often recruit analogies as building blocks of theories to aid us in this quest. One such attempt, originating in folklore and alchemy, is the homunculus: a miniature human who resides in the skull and performs cognition. Perhaps surprisingly, this appears indistinguishable from the implicit proposal of many neurocognitive theories, including that of the 'cognitive map,' which proposes a representational substrate for episodic memories and navigational capacities. In such 'small cakes' cases, neurocognitive representations are assumed to be meaningful and about the world, though it is wholly unclear who is reading them, how they are interpreted, and how they come to mean what they do. We analyze the 'small cakes' problem in neurocognitive theories (including, but not limited to, the cognitive map) and find that such an approach a) causes infinite regress in the explanatory chain, requiring a human-in-the-loop to resolve, and b) results in a computationally inert account of representation, providing neither a function nor a mechanism. We caution against a 'small cakes' theoretical practice across computational cognitive modelling, neuroscience, and artificial intelligence, wherein the scientist inserts their (or other humans') cognition into models because otherwise the models neither perform as advertised, nor mean what they are purported to, without said 'cake insertion.' We argue that the solution is to tease apart explanandum and explanans for a given scientific investigation, with an eye towards avoiding van Rooij's (formal) or Ryle's (informal) infinite regresses.

Figure 1 in https://philsci-archive.pitt.edu/24834/
Box 1 in https://philsci-archive.pitt.edu/24834/
Box 2 in https://philsci-archive.pitt.edu/24834/
samibeaumont.bsky.social
21/21 If you're interested in the details, check out the full preprint doi.org/10.1101/2025.... I'd love to hear your thoughts and feedback!