Jakob Schuster
@schusterj.bsky.social
PhD Candidate at the Institute of Computational Linguistics in Heidelberg, Germany
To address this, we propose a novel knowledge-distillation training approach that makes models agnostic to repeated information. This reduces repetition bias by up to 99.8%, while retaining up to 88.8% of the original source preference.

5/7
January 12, 2026 at 2:36 PM
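For intuition, a minimal sketch of what such a distillation objective could look like, assuming a setup where the teacher reads deduplicated evidence and the student reads the raw, repetitive evidence (the model name, prompts, and deduplication step are illustrative assumptions, not the paper's code):

```python
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "gpt2"  # hypothetical stand-in; any causal LM would do
tok = AutoTokenizer.from_pretrained(MODEL)
student = AutoModelForCausalLM.from_pretrained(MODEL)
teacher = AutoModelForCausalLM.from_pretrained(MODEL).eval()

def answer_logits(model, prompt):
    ids = tok(prompt, return_tensors="pt").input_ids
    return model(ids).logits[:, -1, :]  # next-token distribution

question = "Question: What is the capital of Fooland? Answer:"
evidence = ["Source A says the capital is Bar.",
            "Source B says the capital is Baz.",
            "Source B says the capital is Baz."]  # repeated claim

raw_prompt = " ".join(evidence) + " " + question
dedup_prompt = " ".join(dict.fromkeys(evidence)) + " " + question

with torch.no_grad():                              # teacher sees no repeats
    t_logits = answer_logits(teacher, dedup_prompt)
s_logits = answer_logits(student, raw_prompt)      # student sees repeats

# Standard distillation loss, KL(teacher || student): the student is
# pushed to answer as if the repetition were not there.
loss = F.kl_div(F.log_softmax(s_logits, -1),
                F.softmax(t_logits, -1), reduction="batchmean")
loss.backward()  # one step; wrap in an optimizer loop to actually train
```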
By explicitly providing source information, we disentangle whether models truly favor majorities, as is often reported, or are simply influenced by repeated information. Our findings strongly suggest the latter, a sensitivity that leaves models vulnerable to adversarial manipulation, since repeating a claim costs an attacker nothing.

4/7
January 12, 2026 at 2:36 PM
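To make the contrast concrete, a hypothetical prompt pairing of the kind that can separate the two explanations (sources, claims, and wording invented for illustration): the same claim is either backed by several distinct sources or merely repeated by a single one.

```python
def build_prompt(snippets, question, with_sources=True):
    lines = [f"{src} reports: {claim}" if with_sources else claim
             for src, claim in snippets]
    return "\n".join(lines) + f"\nQuestion: {question}\nAnswer:"

claim_a = "The bridge opened in 1931."
claim_b = "The bridge opened in 1933."

# Genuine majority: three distinct sources back claim B.
majority = [("A newspaper", claim_a),
            ("A government site", claim_b),
            ("A person", claim_b),
            ("A social media user", claim_b)]

# Mere repetition: one source states claim B three times.
repetition = [("A newspaper", claim_a)] + [("A person", claim_b)] * 3

print(build_prompt(majority, "When did the bridge open?"))
print(build_prompt(repetition, "When did the bridge open?"))
```

A model that genuinely weighs sources should discount the three identical snippets in the second prompt; a model that merely counts repeats will treat both prompts the same.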
When comparing different source types, we find that source information significantly affects conflict resolution, with models following a highly consistent source credibility hierarchy: institutional sources (government, newspaper) over individual ones (person, social media user).

3/7
January 12, 2026 at 2:36 PM
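As a toy illustration of how such a hierarchy can be elicited: pit two source types against each other on a conflicting claim and tally which one the model sides with. This harness is hypothetical (not the paper's evaluation), and `model_pick` is a stub that simply replays the reported ordering in place of a real LLM call.

```python
from itertools import combinations

# Reported hierarchy, institutional over individual (lower = more trusted)
PRIORITY = {"a government site": 0, "a newspaper": 1,
            "a person": 2, "a social media user": 3}

def model_pick(src1, src2):
    # Stub standing in for an LLM queried with two conflicting,
    # source-attributed claims; returns the source it would follow.
    return min(src1, src2, key=PRIORITY.get)

wins = {s: 0 for s in PRIORITY}
for a, b in combinations(PRIORITY, 2):
    wins[model_pick(a, b)] += 1

print(sorted(wins, key=wins.get, reverse=True))
```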
Excited to share the first preprint of my PhD!
While many papers focus on what kind of information LLMs trust, @dippedrusk.com, Katja Markert, and I instead investigate whose evidence models prefer by looking at source credibility.

#NLP #Research #CL #LLMs

1/7 🧵
January 12, 2026 at 2:36 PM