Lara Kirfel
@larakirfel.bsky.social
300 followers 340 following 5 posts
CogSci, Philosophy & AI, Postdoc at Max Planck Institute Berlin.
larakirfel.bsky.social
"A framework for blaming willlful ignorance" 🫣
-- New review paper with @rzultan.bsky.social and @tobigerstenberg.bsky.social now out in "Current Opinion in Psychology".
rzultan.bsky.social
People receive more blame for unanticipated consequences of their actions if they could have been informed of said consequences but chose not to. We review possible explanations in our new paper.
doi.org/10.1016/j.co...
A framework for blaming willful ignorance, published in Current Opinion in Psychology
Reposted by Lara Kirfel
koenfucius.bsky.social
What makes people judge that someone was forced to take a particular action?

Research by @larakirfel.bsky.social et al. suggests one influence on this judgment is people’s representation of what an agent knows is possible:

buff.ly/Z0oaRNk

HT @xphilosopher.bsky.social
larakirfel.bsky.social
🏔️ Brad is lost in the wilderness—but doesn’t know there’s a town nearby. Was he forced to stay put?

In our #CogSci2025 paper, we show that judgments of what’s possible—and whether someone had to act—depend on what agents know.

📰 osf.io/preprints/ps...

w/ Matt Mandelkern & @jsphillips.bsky.social
Title: Representations of what’s possible reflect others’ epistemic states

Authors: Lara Kirfel, Matthew Mandelkern, and Jonathan Scott Phillips

Abstract: People’s judgments about what an agent can do are shaped by various constraints, including probability, morality, and normality. However, little is known about how these representations of possible actions—what we call modal space representations—are influenced by an agent’s knowledge of their environment. Across two studies, we investigated whether epistemic constraints systematically shift modal space representations and whether these shifts affect high-level force judgments. Study 1 replicated prior findings that the first actions that come to mind are perceived as the most probable, moral, and normal, and demonstrated that these constraints apply regardless of an agent’s epistemic state. Study 2 showed that limiting an agent’s knowledge changes which actions people perceive to be available to the agent, which in turn affects whether people judge an agent as being “forced” to take a particular action. These findings highlight the role of Theory of Mind in modal cognition, revealing how epistemic constraints shape perceptions of possibilities.
Reposted by Lara Kirfel
neeleengelmann.bsky.social
New paper for #CogSci2025: People cheat more when they delegate to AI. How can we stop this? We tested:

🧠 Explaining what the AI does (transparency)
🗣️ Calling cheating what it is (framing)

Only one worked.

w/ @larakirfel.bsky.social, Anne-Marie Nussberger, Raluca Rilla & @iyadrahwan.bsky.social
psyarxivbot.bsky.social
Framing, not transparency, reduces cheating in algorithmic delegation: https://osf.io/pqmnx
larakirfel.bsky.social
❗Now out in "AI and Ethics"❗

What are the consequences of AI that can reason counterfactually? Our new paper explores the ethical dimensions of AI-driven counterfactual world simulation. 🌎 🤖

With @tobigerstenberg.bsky.social, Rob MacCoun and Thomas Icard.

Link: shorturl.at/bHYEO
When AI meets counterfactuals: the ethical implications of counterfactual world simulation models
rdcu.be
Reposted by Lara Kirfel
koenfucius.bsky.social
Does wilful ignorance—intentionally avoiding finding out whether you’re implicated in criminal activity—make you culpable? Research by @larakirfel.bsky.social & Hannikainen suggests yes, as long as you suspected you might be: https://buff.ly/3XDopzh
HT @xphilosopher.bsky.social
larakirfel.bsky.social
🗣️ People often select only a few events when explaining what happened. What drives people’s explanation selection?

🗞️ In our new paper, we propose a model of explanation selection and show that people use explanations to communicate effective interventions. #Cogsci2024

🔗 Link to paper: osf.io/preprints/ps...