Ezequiel Lopez-Lopez
@eloplop.bsky.social
120 followers 210 following 16 posts
Pre-doctoral researcher @ Max Planck Institute for Human Development (Adaptive Rationality Center) — Berlin | knowledge representations, NLP, computational social sciences & policy
Posts Media Videos Starter Packs
eloplop.bsky.social
#AI #DigitalHealth #Misinformation #ConfirmationBias #ChatGPT
eloplop.bsky.social
As GenAI becomes part of daily life, its influence on behavior and decision-making, especially in health, is critical.
🧠 Combating confirmation bias in this space requires:
• Equipping users with critical AI literacy
• Establishing strong oversight, regulation, and auditing
eloplop.bsky.social
We also call for systemic solutions:
🔍 Public black-box testing
🧑‍⚕️ Reporting tools for physicians to flag GenAI-influenced patient behavior
📋 Regulatory oversight for general-purpose GenAI used in medical contexts
eloplop.bsky.social
So what can we do? We propose:
✅ Boosting digital/AI literacy
✅ Training people to test multiple query framings
✅ Promoting “consider-the-opposite” thinking
✅ Designing GenAI apps to flag evidence-based but disconfirming info
eloplop.bsky.social
🧠 GenAI doesn’t just answer; it engages in dialogue. This allows people to iteratively steer the conversation toward preferred narratives, reinforcing cognitive biases without necessarily realizing it
eloplop.bsky.social
we showcase how small changes in system configuration (like creating a more empathetic chatbot) made GenAI more likely to mirror user bias, endorsing pseudoscientific treatments and downplaying evidence-based medicine
eloplop.bsky.social
In traditional search (e.g., Google), these biases already existed. But GenAI takes it further through hypercustomization (journals.sagepub.com/doi/10.1177/...), adapting to your wording, preferences, even emotional tone. This can lead to tailored responses that validate your views, accurate or not
eloplop.bsky.social
We identify 3 key “pressure points” where GenAI can entangle with confirmation bias in health contexts:
1️⃣ How users phrase queries
2️⃣ Preference for belief-consistent answers
3️⃣ Resistance to belief-inconsistent info
eloplop.bsky.social
Millions are turning to AI tools for medical queries.
While GenAI offers easy access to personalized health information, that personalization can backfire, subtly reinforcing people’s pre-existing beliefs, even when they’re false or harmful
eloplop.bsky.social
New research out!🚨

In our new paper, we discuss how generative AI (GenAI) tools like ChatGPT can mediate confirmation bias in health information seeking.
As people turn to these tools for health-related queries, new risks emerge.
🧵👇
nyaspubs.onlinelibrary.wiley.com/doi/10.1111/...
NYAS Publications
Generative artificial intelligence (GenAI) applications, such as ChatGPT, are transforming how individuals access health information, offering conversational and highly personalized interactions. Whi...
nyaspubs.onlinelibrary.wiley.com
Reposted by Ezequiel Lopez-Lopez
nytimes.com
From @nytopinion.nytimes.com

“There is no excuse for the world to stand by and watch two million human beings suffer on the brink of full-blown famine,” the chef José Andrés writes about Gaza.
Opinion | José Andrés: People of Good Conscience Must Stop the Starvation in Gaza
www.nytimes.com
eloplop.bsky.social
New paper out! 📣
We explore GenAI’s hypercustomization capabilities, the behavioral and governance challenges they may introduce in the foreseeable future, and potential strategies to address them.
Reposted by Ezequiel Lopez-Lopez
levinbrinkmann.bsky.social
You sound like ChatGPT -- that is what The Verge said about our recent preprint on how ChatGPT is reshaping human spoken communication.

Chatbots are becoming a cultural medium and agent of cultural homogenisation. Is this inevitable, or can model diversity help? www.theverge.com/openai/68674...
You sound like ChatGPT
AI isn’t just impacting how we write — it’s changing how we speak and interact with others. And there’s only more to come.
www.theverge.com
Reposted by Ezequiel Lopez-Lopez
lewan.bsky.social
Science is under a wide-ranging attack in the U.S. From arbitrary and catastrophic funding cuts to censorship based on keywords, scholarship is no longer free from government interference. How can scientists respond? 1/n
Bust of Andrey Sakharov
Reposted by Ezequiel Lopez-Lopez
lewan.bsky.social
My latest column just appeared in Science, entitled "Free speech, fact-checking, and the right to accurate information”. (www.science.org/doi/10.1126/...) I use one of President Trump’s first executive orders to unpack the terrain between misinformation and claims to free speech 1/n
Free speech, fact checking, and the right to accurate information
True to his campaign promises, on 20 January 2025, US President Donald Trump signed a broad range of Executive Orders, the scope of which ranged from renaming the Gulf of Mexico to “Gulf of America” t...
www.science.org
Reposted by Ezequiel Lopez-Lopez
arc-mpib.bsky.social
🚨Press release🚨

How can LLMs help and hurt collective intelligence?

Interdisciplinary team of 28 scientists led by our associate researcher @jasonburton.bsky.social proposes recommendations for action (Nature Human Behaviour)

www.mpib-berlin.mpg.de/press-releas...
Reposted by Ezequiel Lopez-Lopez
arc-mpib.bsky.social
AI is even changing how we speak...?

A recent study in collaboration with ARC's Ezequiel Lopez-Lopez @eloplop.bsky.social shows a significant rise in words associated with ChatGPT using 280,000+ YouTube videos.

👇Check out the preprint and explore the feedback loops between AI and human culture.
iyadrahwan.bsky.social
🚨 preprint 🚨

Can ChatGPT influence the way we speak to each other? I.e. can they shape human culture?

We transcribed and analyzed 300k YouTube videos to see if humans increased the use of words like 'delve' and 'adept,' which are known to be overused by ChatGPT.

Paper: arxiv.org/abs/2409.01754
Empirical evidence of Large Language Model's influence on...
Artificial Intelligence (AI) agents now interact with billions of humans in natural language, thanks to advances in Large Language Models (LLMs) like ChatGPT. This raises the question of whether...
arxiv.org
Reposted by Ezequiel Lopez-Lopez
iyadrahwan.bsky.social
🚨 preprint 🚨

Can ChatGPT influence the way we speak to each other? I.e. can they shape human culture?

We transcribed and analyzed 300k YouTube videos to see if humans increased the use of words like 'delve' and 'adept,' which are known to be overused by ChatGPT.

Paper: arxiv.org/abs/2409.01754
Empirical evidence of Large Language Model's influence on...
Artificial Intelligence (AI) agents now interact with billions of humans in natural language, thanks to advances in Large Language Models (LLMs) like ChatGPT. This raises the question of whether...
arxiv.org
eloplop.bsky.social
Our first day of SciBeh 's Virtual Workshop 2024 on "Epistemic Boundaries" is almost over!
Missed day 1? No worries! You can still join us for Day 2's insightful talks and discussions. Details and sign-up here:
www.scibeh.org/events/works...