Stefan Herzog
@stefanherzog.bsky.social
3.3K followers 1K following 190 posts

Senior Researcher @arc-mpib.bsky.social MaxPlanck @mpib-berlin.bsky.social, group leader #BOOSTING decisions: cognitive science, AI/collective intelligence, behavioral public policy, comput. social science, misinfo; stefanherzog.org scienceofboosting.org

Pinned
stefanherzog.bsky.social
🌟🧠💪📝
#BOOSTING: Empowering citizens with behavioral science

New, freely available paper in Annual Review of Psychology.
PDF: tinyurl.com/boosting2025

For more: scienceofboosting.org

@arc-mpib.bsky.social @mpib-berlin.bsky.social

@annualreviews.bsky.social
#policy #behavioralscience

1/ 🧵👇
The image is the cover page of an article from the "Annual Review of Psychology" titled "Boosting: Empowering Citizens with Behavioral Science" by Stefan M. Herzog and Ralph Hertwig. It features a brief abstract, keywords, and publication details. The abstract outlines the concept of "boosting" as a behavioral public policy that emphasizes empowering individuals to make informed decisions, in contrast to "nudging," which subtly steers behavior. The abstract reads:

Behavioral public policy came to the fore with the introduction of nudging, which aims to steer behavior while maintaining freedom of choice. Responding to critiques of nudging (e.g., that it does not promote agency and relies on benevolent choice architects), other behavioral policy approaches focus on empowering citizens. Here we review boosting, a behavioral policy approach that aims to foster people's agency, self-control, and ability to make informed decisions. It is grounded in evidence from behavioral science showing that human decision making is not as notoriously flawed as the nudging approach assumes. We argue that addressing the challenges of our time—such as climate change, pandemics, and the threats to liberal democracies and human autonomy posed by digital technologies and choice architectures—calls for fostering capable and engaged citizens as a first line of response to complement slower, systemic approaches.

Summary points:

1. Behavioral public policy garnered widespread attention with the introduction of nudging, which aims to steer behavior while maintaining freedom of choice.
2. Criticisms of nudging include that it does not promote agency and competences and that it relies—overly optimistically—on the presence of benevolent choice architects.
3. The proliferation of environments threatening people's autonomy, the slow pace of systemic approaches to tackling societal issues, and the intrinsic benefits of empowerment make empowering citizens an indispensable objective of behavioral public policy.
4. Boosting is a behavioral public policy approach to empowerment grounded in evidence from behavioral science that shows that humans’ boundedly rational decision making is not as flawed as the nudging approach assumes.
5. Boosts are interventions that improve people's competencies to make informed choices that conform to their goals, preferences, and desires.
6. In self-nudging boosts, people learn to use architectural changes in their proximate choice environment to regulate their own behavior—that is, they are empowered to adapt their own choice environments.
7. There are boosts to foster core competences in many domains, including finance, online environments, and health, as well as broader, overarching areas, such as motivation, risk, and judgment and decision making. Boosts should be part of a policy mix that also includes system-level approaches.
8. When implementing boosts, policy makers need to avoid the trap of individualizing responsibility and to be mindful that, due to differences in cognition and motivation, inequalities in the desirable effects across boosted individuals may emerge.

stefanherzog.bsky.social
🚨⬇️
___
Soon ALL chat messages could be searched by the authorities, even without any suspicion! The ministers decide on this *today*. Say no to chat control! Sign now ✍️ weact.campact.de/petitions/ch...
Stop chat control!
The EU Commission wants to force messenger services such as WhatsApp and Signal to scan all private messages and photos in real time. Ostensibly for child protection. In reality, chat control means...
weact.campact.de

Reposted by Stefan M. Herzog

sixtus.net
A party that turns women who decide about their own bodies into criminals, and that resists initiating proceedings to examine a ban on a fascist party, sits down at the table with a misogynist, religious-fascist terror regime. Fitting.
Migration policy: Dobrindt defends talks with the Taliban about deportations
Federal Interior Minister Alexander Dobrindt wants to "regularly" deport more criminal offenders to Afghanistan. At the same time, he defends talks with the Taliban in the Bundestag.
www.zeit.de

Reposted by Stefan M. Herzog

adfichter.bsky.social
We will all become data fodder for LinkedIn's AI (LinkedIn belongs to Microsoft) if we don't push back. LinkedIn is exploiting Switzerland's status as an opt-out country (I don't know about the EU) and has set the toggle to ON for all of us.

Yes, it is outrageously brazen.

Opt out here ⬇️:
www.linkedin.com/mypreference...
Data for improving generative AI
May LinkedIn use the personal data and content you have created on LinkedIn to train LinkedIn's generative AI models, which are used to create content?

Use your data to train AI models that create content

on/off
jamiecummins.bsky.social
Can large language models stand in for human participants?
Many social scientists seem to think so, and are already using "silicon samples" in research.

One problem: depending on the analytic decisions made, you can basically get these samples to show any effect you want.

THREAD 🧵
The threat of analytic flexibility in using large language models to simulate human data: A call to attention
Social scientists are now using large language models to create "silicon samples" - synthetic datasets intended to stand in for human respondents, aimed at revolutionising human subjects research. How...
arxiv.org

Reposted by Stefan M. Herzog

patrickbreitenbach.de
"Free Speech" wird als taktisches Werkzeug genutzt, um gesellschaftlich eher geächtete Positionen (Rassismus, Sexismus, Klassismus) wieder in die breite Akzeptanz zurück zu holen. Es geht dabei nicht um "Free Speech", denn wir sehen ja, dass Gegenrede von den gleichen Leuten hart sanktioniert wird.

Reposted by Andreas Ortmann

stefanherzog.bsky.social
🧠🤖 Ever wondered about the risks of the increasing customization capabilities of AI/AI-powered chatbots & what we can do about it? Check out our recent-ish paper:

The governance & behavioral challenges of generative artificial intelligence’s hypercustomization capabilities. doi.org/10.1177/2379...
vitotrianni.bsky.social
🌍 Join us for #HACID Webinar #4: Gender & Diversity in Hybrid Collective Intelligence 🧠🤖
🗓 Sept 29, 2025 | ⏰ 18:00 CEST

Register here: events.teams.microsoft.com/event/71775f...
Photo by cottonbro studio: https://www.pexels.com/photo/photo-of-people-using-their-smartphones-8088445/
olivia.science
Finally! 🤩 Our position piece: Against the Uncritical Adoption of 'AI' Technologies in Academia:
doi.org/10.5281/zeno...

We unpick the tech industry’s marketing, hype, & harm; and we argue for safeguarding higher education, critical
thinking, expertise, academic freedom, & scientific integrity.
1/n
Abstract: Under the banner of progress, products have been uncritically adopted or even imposed on users — in past centuries with tobacco and combustion engines, and in the 21st with social media. For these collective blunders, we now regret our involvement or apathy as scientists, and society struggles to put the genie back in the bottle. Currently, we are similarly entangled with artificial intelligence (AI) technology. For example, software updates are rolled out seamlessly and non-consensually, Microsoft Office is bundled with chatbots, and we, our students, and our employers have had no say, as it is not considered a valid position to reject AI technologies in our teaching and research. This is why in June 2025, we co-authored an Open Letter calling on our employers to reverse and rethink their stance on uncritically adopting AI technologies. In this position piece, we expound on why universities must take their role seriously to a) counter the technology industry's marketing, hype, and harm; and to b) safeguard higher education, critical thinking, expertise, academic freedom, and scientific integrity. We include pointers to relevant work to further inform our colleagues.

Figure 1. A cartoon set-theoretic view of various terms (see Table 1) used when discussing the superset AI (black outline, hatched background): LLMs are in orange; ANNs are in magenta; generative models are in blue; and finally, chatbots are in green. Where these intersect, the colours reflect that, e.g., generative adversarial network (GAN) and Boltzmann machine (BM) models are in the purple subset because they are both generative and ANNs. In the case of proprietary closed-source models, e.g. OpenAI's ChatGPT and Apple's Siri, we cannot verify their implementation, and so academics can only make educated guesses (cf. Dingemanse 2025). Undefined terms used above: BERT (Devlin et al. 2019); AlexNet (Krizhevsky et al. 2017); A.L.I.C.E. (Wallace 2009); ELIZA (Weizenbaum 1966); Jabberwacky (Twist 2003); linear discriminant analysis (LDA); quadratic discriminant analysis (QDA).

Table 1. Below, some of the typical terminological disarray is untangled. Importantly, none of these terms are orthogonal, nor do they exclusively pick out the types of products we may wish to critique or proscribe.

Protecting the Ecosystem of Human Knowledge: Five Principles

Reposted by Stefan M. Herzog

anasofiamorais.bsky.social
We’ve all heard it: people make 200 mindless food decisions a day. But is it true? In our latest episode, Almudena Claassen reveals why this number is misleading and offers strategies that help families and individuals make healthier choices. tinyurl.com/3k2tjw6x
eloplop.bsky.social
New research out!🚨

In our new paper, we discuss how generative AI (GenAI) tools like ChatGPT can mediate confirmation bias in health information seeking.
As people turn to these tools for health-related queries, new risks emerge.
🧵👇
nyaspubs.onlinelibrary.wiley.com/doi/10.1111/...
NYAS Publications
Generative artificial intelligence (GenAI) applications, such as ChatGPT, are transforming how individuals access health information, offering conversational and highly personalized interactions. Whi...
nyaspubs.onlinelibrary.wiley.com
tobiasgalla.bsky.social
Review "Opinion dynamics: Statistical physics and beyond"
arxiv.org/abs/2507.11521

Lab experiments, data, models, analytical/computational tools. 93 pages, >1k references.

With Fabian Baumann, David Garcia, Gerardo Iñiguez, Márton Karsai, Jan Lorenz, Katarzyna Sznajd-Weron. Led by Michele Starnini
Opinion dynamics: Statistical physics and beyond
Opinion dynamics, the study of how individual beliefs and collective public opinion evolve, is a fertile domain for applying statistical physics to complex social phenomena. Like physical systems, soc...
arxiv.org

Reposted by Stefan M. Herzog

Reposted by Stefan M. Herzog

jamiecummins.bsky.social
New preprint commentary from me, @malte.the100.ci, and @ianhussey.mmmdata.io.

Cognitive dissonance in large language models is neither cognitive nor dissonant.

THREAD BELOW 🧵

osf.io/preprints/ps...
OSF
osf.io

Reposted by Stefan M. Herzog

arc-mpib.bsky.social
We wrapped up the 22nd Summer Institute on Bounded Rationality at the Center for Adaptive Rationality at the Max Planck Institute for Human Development (@mpib-berlin.bsky.social).

ECRs from around the world joined us to share their insights on rationality, cognition, and decision-making.

1/🧵👇

Reposted by Stefan M. Herzog

jamiecummins.bsky.social
Social scientists should not use chat interfaces when using LLMs in their research: they are impressively inefficient, and obscure/impose important methodological decisions that require thought.

THREAD🧵

Reposted by Stefan M. Herzog

levinbrinkmann.bsky.social
You sound like ChatGPT -- that is what The Verge said about our recent preprint on how ChatGPT is reshaping human spoken communication.

Chatbots are becoming a cultural medium and agent of cultural homogenisation. Is this inevitable, or can model diversity help? www.theverge.com/openai/68674...
You sound like ChatGPT
AI isn’t just impacting how we write — it’s changing how we speak and interact with others. And there’s only more to come.
www.theverge.com

Reposted by Stefan M. Herzog

volksverpetzer.de
(1/9) Here are the facts:

1. The Federal Office for the Protection of the Constitution reports to the Federal Ministry of the Interior. Von Storch's statement is nevertheless misleading here, because the tasks of the Office for the Protection of the Constitution are regulated by law, and the interior minister cannot issue directives arbitrarily.

Reposted by Stefan M. Herzog

christianstoecker.de
Disinformation is the greatest threat of all. It is the root cause of our inability to tackle climate change (the first Big Lie was climate change denial), the deterioration of democracies, and the rise of corrupt populists like Trump. That's why they fight science and real journalism.
Pew graph of Republican vs. Democratic voters' news sources; for Republicans it is mostly Fox News.

Reposted by Stefan M. Herzog

christianstoecker.de
In this context, I always like to quote the former head of the US NSA, Michael Hayden: "We kill people based on metadata." abcnews.go.com/blogs/headli...

Reposted by Stefan M. Herzog

netzpolitik.bsky.social
Whatsapp führt Werbung ein. Dafür relevant sind die Datenspuren, wer mit wem wann und wo kommuniziert. Das reicht schon aus, um sehr viel über eine Person zu wissen, auch wenn die Unterhaltung selbst verschlüsselt ist.

It´s the Metadata!

Viel Spaß damit. Und wechselt einfach zu Signal.

Reposted by Stefan M. Herzog

volksverpetzer.de
Wenn Linksextremismus so ein großes Problem ist, sollte man sie in die Talkshows einladen, mit ihnen diskutieren und ihre Forderungen selbst umsetzen, um sie politisch zu stellen. Oh? Zu fahrlässig? Warum ist das dann unsere Strategie bei der AfD?

Reposted by Stefan M. Herzog

arnesemsrott.bsky.social
Funny how that works: if you change the scaling of the axis, the bar for left-wing extremism is suddenly longer.
Right-wing extremist personnel potential, graph from Dobrindt: around 50,000. The left-wing extremism chart uses a smaller axis scale, which makes its bars look noticeably longer.