Andreas Jungherr

H-index: 30 · Communication & Media Studies 43% · Political Science 22%
felixsimon.bsky.social
🚨✨ Publication alert: How do people in 6 countries (🇬🇧 🇺🇸 🇫🇷 🇦🇷 🇩🇰 🇯🇵 ) use AI 🤖 and think about it in the context of information, news, and institutions?

Our new @reutersinstitute.bsky.social survey research (n ≈ 12,000) with @richardfletcher.bsky.social & @rasmuskleis.bsky.social explores this.
ajungherr.bsky.social
📖 The article contributes to a better understanding of public opinion and digital governance — and shows why international comparison matters for both research and regulation.
ajungherr.bsky.social
🌏 Our findings highlight that cultural and societal contexts shape how people think about digital campaign regulation. The same perceptions and cognitions can have very different consequences across countries.
ajungherr.bsky.social
General attitudes toward AI also play out differently:

🇺🇸 In the U.S., perceived AI risks increase support for regulation, while perceived AI benefits reduce it.
🇹🇼 In Taiwan, both critical and optimistic citizens tend to support stricter rules.
ajungherr.bsky.social
In Taiwan, by contrast, we observe a second-person effect: People favor regulation when they think that both they and others can be influenced by campaigning.
ajungherr.bsky.social
In the U.S., we find a third-person effect: People tend to support regulation when they believe others are more influenced by campaign messages than they themselves are.
ajungherr.bsky.social
🇺🇸 & 🇹🇼 Majorities in both the U.S. and Taiwan favor clear rules for using AI in election campaigns. But factors correlated with supporting regulation differ markedly between the two countries.
ajungherr.bsky.social
⚠️ This means: Even if AI could actually improve the processes of democratic deliberation, its use risks exacerbating existing inequalities in the willingness to participate.

(6/7)
ajungherr.bsky.social
🔸 Positive attitudes toward AI increase acceptance; perceived risks, on the other hand, significantly reduce it.

(5/7)
ajungherr.bsky.social
🔸 A new "deliberation divide" emerges: those who are skeptical of AI are less likely to participate.

(4/7)
ajungherr.bsky.social
🔸 When people are informed that AI is used in deliberation, they expect the discussion to be of lower quality than one moderated by a human.

(3/7)
ajungherr.bsky.social
🧐 Our key findings:

🔸 AI-supported deliberation significantly reduces the willingness to participate.

(2/7)
ajungherr.bsky.social
You can take the speaker out of pol sci, but you can’t take pol sci out of the speaker :)
ajungherr.bsky.social
In short: let’s start with what we do control and, by doing so, expand our chances of managing interdependencies.
ajungherr.bsky.social
Enforce internal reform of our own institutions & practices that slow development and fuel discontent: politics, journalism, industry-protective tendencies, and EU regulatory habits.
ajungherr.bsky.social
Build capacity and capability for future tech & industries. Not replicate what’s already settled. That gives the EU power it currently lacks to negotiate real commitments from others and better manage interdependencies.
ajungherr.bsky.social
I agree it’s high time to engage. But for me, this is about addressing aspects we can control. I see two arms to this:
ajungherr.bsky.social
From a European perspective, that’s a lose–lose.
ajungherr.bsky.social
Blaming technology lets institutions dodge responsibility and internal reform, while deepening Europe’s dependencies on foreign infrastructures.
ajungherr.bsky.social
Narratives of “disinformation” and “manipulated, unruly publics” too often let established elites and institutions avoid facing their own contribution to discontent and the need for reform.
ajungherr.bsky.social
Especially if we base policy on shaky analyses claiming that digital media themselves cause discontent with the state of play in Western democracies.
ajungherr.bsky.social
The impulse to demand greater control is understandable. But unless we are honest about why we’re in this mess to begin with, we risk only increasing dependencies.
ajungherr.bsky.social
I think we’re in an unfortunate bind. Because of past industry-protective regulation in the EU, we lack the structures, knowledge, and power to govern today’s crucial information infrastructures, let alone those of the future.
ajungherr.bsky.social
Who is “we”? Wresting control of communication structures from capitalist entities and handing it to bureaucratic or academic elites feels like a technocratic answer to a popular problem. No?

Reposted by: Andreas Jungherr

bidt.bsky.social
Trust can mean many things. It is precisely this ambiguity that makes interdisciplinary research on trust in AI difficult. Our working group “Vertrauen und Akzeptanz” (Trust and Acceptance) has developed an overview of the concept of trust and proposes a working model that connects disciplines and can be adapted. More in the blog 👇
The Concept of Trust in Interdisciplinary Research on Human–AI Interaction | bidt DE
How do we understand trust in AI? This post addresses the vagueness of the term and proposes a shared concept of trust.
www.bidt.digital
