Sarah Griebel
@sgriebel.bsky.social
IS PhD student at UIUC
This new study continues pretraining Qwen2.5 on historical documents, then applies supervised fine-tuning and reinforcement learning to produce a more historically accurate CoT-tuned model. Cool methods! arxiv.org/pdf/2504.09488
May 26, 2025 at 9:26 PM
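For anyone curious what the continued-pretraining step looks like in practice, here is a minimal sketch using the Hugging Face Trainer. The checkpoint size, the historical_corpus.txt file, and every hyperparameter are assumptions for illustration, not details from the paper.

```python
# Minimal sketch of continued pretraining on historical text with a causal-LM
# objective. Checkpoint size, corpus file, and hyperparameters are assumptions.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "Qwen/Qwen2.5-7B"  # base checkpoint; the size is an assumption
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# "historical_corpus.txt" is a hypothetical plain-text file of period documents.
dataset = load_dataset("text", data_files={"train": "historical_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=1024)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="qwen2.5-historical",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=16,
        num_train_epochs=1,
        learning_rate=2e-5,
        bf16=True,
    ),
    train_dataset=tokenized,
    # Standard causal-LM objective: the collator copies input ids to labels,
    # and the model shifts them internally for next-token prediction.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The SFT and RL stages described in the paper would follow this step, typically on instruction-formatted and preference data respectively.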
Reposted by Sarah Griebel
New preprint from @lauraknelson.bsky.social, @mattwilkens.bsky.social, and myself tests different ways of simulating the past with LLMs. We don't fully answer the title question here—just show that simple strategies based on prompting and fine-tuning are insufficient. +
Can Language Models Represent the Past without Anachronism?
May 2, 2025 at 12:47 PM
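As a concrete illustration of the kind of prompting strategy tested, here is a minimal sketch of few-shot prompting with period prose. The model choice, the example passages, and the prompt wording are all hypothetical, not the preprint's actual setup.

```python
# Sketch of prompting a contemporary model with examples of period prose.
# The passages and prompt are invented placeholders, not the study's materials.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical in-context examples meant to evoke 1850s prose.
period_examples = [
    "It was upon a morning of singular brightness that I quitted the town...",
    "The intelligence reached us by the evening post, and occasioned no small stir...",
]

prompt = (
    "The following passages were written in the 1850s:\n\n"
    + "\n\n".join(period_examples)
    + "\n\nContinue in the same voice, as a writer of the 1850s would, "
      "describing a journey by rail:\n"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

The preprint's finding is that this style of conditioning alone does not make the output period-faithful; anachronisms still leak in.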
There are many ways to identify texts that seem ahead of their time. Our CHR 2024 paper asks which measures of textual precocity align best with social evidence about influence and change.
November 26, 2024 at 9:25 PM
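To make "textual precocity" concrete: one simple measure compares a text's similarity to the texts that follow it against its similarity to the texts that precede it. The sketch below illustrates that general idea only, not the specific measures evaluated in the paper; the embeddings are random placeholders.

```python
# One simple precocity measure, sketched for illustration: how much more a
# text resembles what comes after it than what came before it. This is an
# assumption about the general idea, not the paper's actual measures.
import numpy as np

def precocity(doc_vec, earlier_vecs, later_vecs):
    """Mean cosine similarity to later texts minus mean similarity to earlier ones."""
    def mean_cos(vecs):
        vecs = np.asarray(vecs)
        sims = vecs @ doc_vec / (
            np.linalg.norm(vecs, axis=1) * np.linalg.norm(doc_vec)
        )
        return sims.mean()
    return mean_cos(later_vecs) - mean_cos(earlier_vecs)

# Usage with hypothetical document embeddings:
rng = np.random.default_rng(0)
doc = rng.normal(size=128)        # embedding of the text under study
past = rng.normal(size=(50, 128))  # embeddings of earlier texts
future = rng.normal(size=(50, 128))  # embeddings of later texts
print(precocity(doc, past, future))  # > 0 suggests the text anticipates later writing
```

The CHR 2024 paper's contribution is testing which variants of measures like this track social evidence of influence, not the raw arithmetic itself.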