Jakob Schuster
@schusterj.bsky.social
PhD Candidate at the Institute of Computational Linguistics in Heidelberg, Germany
We will release all code and data in the coming weeks to encourage further research. For all the details omitted here, check out the full paper:

👉 arxiv.org/pdf/2601.03746 📄

7/7
January 12, 2026 at 2:36 PM
TL;DR: Multiple factors of source credibility influence how LLMs resolve knowledge conflicts, following a consistent internal credibility hierarchy. But these preferences are easily overridden by simple repetition. Fine-tuning can mitigate this vulnerability.

6/7
To address this, we propose a novel knowledge-distillation training approach that makes models agnostic to repeated information. This reduces repetition bias by up to 99.8%, while retaining up to 88.8% of the original source preference.

5/7
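The post doesn't spell out the training objective, so the following is only a sketch of one plausible shape for such a repetition-agnostic distillation loss, assuming a frozen teacher that reads the de-duplicated context while the student reads the repeated one. Every name and the exact loss here are our assumptions, not the paper's recipe.

```python
import torch
import torch.nn.functional as F

def repetition_distillation_loss(student, teacher, tok,
                                 clean_ctx: str, repeated_ctx: str) -> torch.Tensor:
    """Hypothetical objective: make the student answer the repeated context
    the way a frozen teacher answers the single-mention (clean) context."""
    with torch.no_grad():
        t_ids = tok(clean_ctx, return_tensors="pt").input_ids
        t_logp = F.log_softmax(teacher(t_ids).logits[0, -1], dim=-1)
    s_ids = tok(repeated_ctx, return_tensors="pt").input_ids
    s_logp = F.log_softmax(student(s_ids).logits[0, -1], dim=-1)
    # KL(teacher || student) over the next-token distribution: the student is
    # penalized whenever repetition moves its answer away from the teacher's.
    return F.kl_div(s_logp, t_logp, log_target=True, reduction="sum")
```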
By explicitly providing source information, we disentangle whether models truly favor majorities, as is often reported, or whether they are simply influenced by repeated information. Our findings strongly suggest the latter, leaving models vulnerable to adversarial manipulation.

4/7
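To make the disentangling concrete, here is a sketch of the two conditions (our construction, with invented source labels and a fictional country "Veridia", not the paper's stimuli): the number of mentions is held fixed while only the number of distinct sources varies.

```python
CLAIM = "The capital of Veridia is {ans}."
dissent = f"A radio host states: '{CLAIM.format(ans='Borun')}'"

# Genuine majority: three distinct sources each assert "Arlan" once.
majority_ctx = "\n".join(
    [f"{src} states: '{CLAIM.format(ans='Arlan')}'"
     for src in ("A government agency", "A newspaper", "A person")]
    + [dissent])

# Pure repetition: one source asserts "Arlan" three times.
repetition_ctx = "\n".join(
    [f"A person states: '{CLAIM.format(ans='Arlan')}'" for _ in range(3)]
    + [dissent])

# Score P("Arlan") vs P("Borun") in both conditions, e.g. with the
# answer_logprob helper sketched after post 2/7 below: if the shift toward
# "Arlan" is equal in both, repetition rather than majority drives it.
```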
When comparing different source types, we find that source information significantly affects conflict resolution, with models following a highly consistent source credibility hierarchy: institutional sources (government, newspaper) over individual ones (person, social media user).

3/7
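One way to turn such a comparison into a number (our framing, not necessarily the paper's metric) is a swap test: score the relative preference between the two answers, swap which source endorses which answer, and average the change. Here `score` stands in for a log-probability helper such as the one after post 2/7 below.

```python
from typing import Callable

def source_preference(score: Callable[[str, str], float],
                      template: str, src_hi: str, src_lo: str,
                      ans_a: str, ans_b: str) -> float:
    """Positive value => probability mass follows whichever answer the
    higher-credibility source endorses (swap test over attributions)."""
    ctx1 = template.format(src_a=src_hi, ans_a=ans_a, src_b=src_lo, ans_b=ans_b)
    ctx2 = template.format(src_a=src_lo, ans_a=ans_a, src_b=src_hi, ans_b=ans_b)
    # Relative preference for ans_a under each attribution, then the
    # averaged difference between the two attributions.
    pref1 = score(ctx1, ans_a) - score(ctx1, ans_b)
    pref2 = score(ctx2, ans_a) - score(ctx2, ans_b)
    return (pref1 - pref2) / 2
```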
We evaluate 13 models from 4 model families on fully synthetic conflicts and sources, measuring how answer probabilities shift when conflicting answers are attributed to different sources. The source types are grounded in interdisciplinary frameworks of credibility.

2/7
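To make the setup concrete, here is a minimal sketch of this kind of probability measurement, assuming a HuggingFace causal LM ("gpt2" as a stand-in for the models actually evaluated). The prompt template, the fictional country "Veridia", both answers, and the source labels are invented for illustration and are not the paper's stimuli.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "gpt2"  # stand-in; the paper evaluates 13 models from 4 families
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)
model.eval()

def answer_logprob(context: str, answer: str) -> float:
    """Sum of log-probabilities the model assigns to `answer` given `context`."""
    ctx_ids = tok(context, return_tensors="pt").input_ids
    ans_ids = tok(answer, add_special_tokens=False, return_tensors="pt").input_ids
    ids = torch.cat([ctx_ids, ans_ids], dim=1)
    with torch.no_grad():
        logits = model(ids).logits
    # Log-probs of each answer token, conditioned on everything before it.
    logps = torch.log_softmax(logits[0, ctx_ids.shape[1] - 1:-1], dim=-1)
    return logps.gather(1, ans_ids[0].unsqueeze(1)).sum().item()

# Hypothetical synthetic conflict: two sources assert different answers.
template = (
    "{src_a} states: 'The capital of Veridia is {ans_a}.'\n"
    "{src_b} states: 'The capital of Veridia is {ans_b}.'\n"
    "Question: What is the capital of Veridia?\n"
    "Answer: The capital of Veridia is"
)
ctx = template.format(src_a="The national government", ans_a="Arlan",
                      src_b="A social media user", ans_b="Borun")
for ans in (" Arlan", " Borun"):  # leading space matches GPT-2 tokenization
    print(ans, answer_logprob(ctx, ans))
```

Swapping which source is attached to which answer and re-reading the two log-probabilities gives exactly the kind of shift the post describes.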