Christoph Abels
@cabels18.bsky.social
68 followers 77 following 41 posts
Post-Doctoral Fellow @unipotsdam.bsky.social‬, visiting @arc-mpib.bsky.social | PhD @hertieschool.bsky.social | Democracy, Technology, Behavioral Public Policy | Website: https://christophabels.com
cabels18.bsky.social
Thank you so much for spreading the word! We are really in a crucial period right now - and everyone should understand that protecting democracy is, at its core, a joint endeavor.
Reposted by Christoph Abels
gretchentg.bsky.social
My new @science.org editorial on the role of scientists in defending democracy is out today. As authoritarianism takes hold in the US, we must fight for the democratic principles that enable a free society, and scientists have a key role. I hope you'll join us.
www.science.org/doi/10.1126/...
Scientists’ role in defending democracy
The United States’ democratic leadership, commitment to freedom of expression, and investment in the pursuit of knowledge have long enabled its preeminence in science and technology. Yet today we are ...
www.science.org
cabels18.bsky.social
GenAI offers powerful tools. But when it shapes what we believe, especially about our own health, we need to treat it as a behavioral system with real-world consequences.

@lewan.bsky.social @eloplop.bsky.social @stefanherzog.bsky.social @dlholf.bsky.social
cabels18.bsky.social
What can we do?
We call for a multi-level approach:

Design-level interventions to help users maintain situational awareness
Boosting user competencies to help them understand the technology's impact
Developing public infrastructure to detect and monitor unintended system behavior
cabels18.bsky.social
This isn't just about potentially problematic design.

It’s about systemic risk: As GenAI tools fragment (Custom GPTs, GPT Stores, third-party apps), the public is exposed to a growing landscape of low-oversight, increasingly high-trust agents.

And that creates challenges for the individual.
cabels18.bsky.social
You can make ChatGPT even more biased, just by tweaking a few settings.

We built a Custom GPT that’s a little more "friendly" and engagement-driven.

It ended up validating fringe treatments like quantum healing, just to keep the user happy.
cabels18.bsky.social
In this paper, we showcase how this plays out across 3 “pressure points”:

Biased query phrasing → biased answers
Selective reading → echo chambers
Dismissal of contradiction → belief reinforcement

Confirmation bias isn't new. GenAI just takes it a bit further.
cabels18.bsky.social
Generative AI tools are designed to adapt to you: your tone, your preferences, your beliefs.
That’s great for writing emails.

But in health contexts, that adaptability becomes hypercustomization - and can entrench existing views, even when they're wrong.
doi.org
cabels18.bsky.social
🔍 “I just want a second opinion.”

More people are turning to ChatGPT for health advice. Many would use it for self-diagnosis.

But here's the problem: These tools don’t just answer, they align. And that’s where things get risky.

🧵 on GenAI, health, and confirmation bias
eloplop.bsky.social
New research out!🚨

In our new paper, we discuss how generative AI (GenAI) tools like ChatGPT can mediate confirmation bias in health information seeking.
As people turn to these tools for health-related queries, new risks emerge.
🧵👇
nyaspubs.onlinelibrary.wiley.com/doi/10.1111/...
NYAS Publications
Generative artificial intelligence (GenAI) applications, such as ChatGPT, are transforming how individuals access health information, offering conversational and highly personalized interactions. Whi...
nyaspubs.onlinelibrary.wiley.com
cabels18.bsky.social
Hypercustomization offers useful functionality - but it also complicates oversight and raises new policy questions.

Early, thoughtful action can help ensure that the benefits are not overshadowed by unintended consequences.
cabels18.bsky.social
💬 Response 5: In-app reflection prompts
GenAI systems should occasionally ask users to pause and reflect:
“How is this conversation shaping your views?”
“Is the system affirming everything you say?”

These prompts may reduce overreliance and help surface bias, although further research is needed.
cabels18.bsky.social
🧠 Response 4: Boosting GenAI literacy
Disclaimers aren't enough. We need to train users - through games, videos, tools - to recognize biased responses, resist manipulation, and navigate emotionally persuasive content.

Boosting builds agency without restricting access.
cabels18.bsky.social
🤲 Response 3: Data donations (with consent)
To understand real-world GenAI risks, we need real-world data.

We recommend voluntary data donation channels, where users can share selected interactions with researchers. Anonymized, secure, and essential for building safer systems.
cabels18.bsky.social
📢 Response 2: Public issue reporting
Think of it like post-market drug safety monitoring:
We need public platforms where users can report problematic GenAI behavior - bias, sycophancy, manipulation, etc.

This kind of crowdsourced oversight can catch what testing alone might miss.
cabels18.bsky.social
🧪 Response 1: Public black-box testing
GenAI providers should open up standardized test datasets so independent researchers can evaluate how these systems respond.

This helps surface ethical issues, hallucinations, or manipulation risks that might otherwise remain hidden.
cabels18.bsky.social
We suggest five key responses:
– Public black-box testing
– Issue reporting platforms
– Voluntary data donation
– GenAI literacy interventions
– In-app prompts for critical reflection
Infographic summarizing five recommended strategies to address the risks of hypercustomization in GenAI applications, each paired with the specific challenges they aim to mitigate:

Public black-box testing
Icon: AI inside a black box with user inputs and performance charts.
Description: Establish public repositories with test datasets so independent experts can evaluate GenAI responses for ethical or accuracy issues.
Challenge addressed: Lack of transparency in how GenAI applications work.

Public reporting of issues
Icon: AI interacting with multiple users, one marked with “?!”.
Description: Create a platform for users to report problematic GenAI interactions (e.g., discrimination, manipulation).
Challenge addressed: Lack of transparency in how GenAI applications work.

Data donations
Icon: A hand holding binary code transferring to an institutional building.
Description: Set up voluntary data donation channels so users can share real-world GenAI interactions for research.
Challenges addressed: Opacity of user–GenAI interaction, lack of transparency.

Use of boosting to improve GenAI literacy
Icon: A person receiving a warning sign from an AI interaction.
Description: Develop platforms (games, videos, etc.) to teach users how to evaluate GenAI responses and recognize risks.
Challenges addressed: Overreliance on applications, inefficacy of warning messages.

Prompting within GenAI applications
Icon: AI on one side of a balance scale, a brain with a lightbulb on the other.
Description: Embed reflective prompts in GenAI systems to encourage users to think about the influence of the application and seek diverse views.
Challenges addressed: Overreliance on applications, inefficacy of warning messages.
cabels18.bsky.social
💡 Why these challenges matter:
Together, they make it hard to regulate GenAI, hard to study it, and hard for users to defend themselves against its influence.

The stakes are rising fast - especially as these systems become more persuasive, intimate, and widespread.
cabels18.bsky.social
🚫 Challenge 4: Warning fatigue
Pop-up warnings - as an easy-to-implement measure - don’t work well when people are emotionally engaged or highly persuaded.

With GenAI, hypercustomized answers feel tailor-made - and that makes people tune out cautionary labels.
cabels18.bsky.social
⚖️ Challenge 3: Overreliance on the AI
When GenAI output feels personal, it often feels true.
That’s a problem. Users may trust and defer to GenAI - even when it’s wrong or biased.

This is especially risky with social companions and persuasive chatbots.
cabels18.bsky.social
👁 Challenge 2: Opacity of interactions
While social media is (semi)public, GenAI is private.
Most GenAI conversations happen one-on-one, behind closed doors.

This means harmful patterns go unnoticed. Researchers can’t study what they can’t see.