danielsc4.it
Meet EAGer: We show that monitoring token-level uncertainty lets LLMs allocate compute dynamically - spending MORE on hard problems, LESS on easy ones.
🧵👇
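The idea above can be sketched in a few lines. This is a hypothetical illustration, not the EAGer implementation: it assumes uncertainty is measured as the Shannon entropy of the next-token distribution, and that "more compute" means drawing extra samples; the threshold and sample counts are made-up parameters.

```python
import math

def token_entropy(probs):
    """Shannon entropy (nats) of a next-token probability distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def allocate_compute(probs, threshold=1.0, base_samples=1, extra_samples=4):
    """Toy allocation rule: spend extra samples when token-level
    uncertainty is high (hard step), the minimum when it is low (easy step).
    Threshold and sample counts are illustrative, not from the paper."""
    if token_entropy(probs) > threshold:
        return base_samples + extra_samples  # uncertain: branch out more
    return base_samples  # confident: single continuation suffices

# Confident distribution -> minimal compute
print(allocate_compute([0.97, 0.01, 0.01, 0.01]))  # 1
# Near-uniform distribution -> extra compute
print(allocate_compute([0.25, 0.25, 0.25, 0.25]))  # 5
```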
Happy to chat about cool interpretability stuff there!
Thanks to goodfire.ai for sponsoring! nemiconf.github.io/summer25/
If you can't make it in person, the livestream will be here:
www.youtube.com/live/4BJBis...
We steer LLM generations to mimic human translator styles on literary novels in 7 languages. 📚
SAE steering can beat few-shot prompting, leading to better personalization while maintaining quality.
🧵1/
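SAE steering of the kind described above is commonly done by shifting the model's hidden states along a feature's decoder direction. A minimal sketch, assuming a single vector-valued hidden state and a known feature direction; the steering coefficient and the toy "translator style" direction are illustrative, not the paper's values:

```python
def steer(hidden, feature_direction, alpha=4.0):
    """Shift a hidden-state vector along an SAE feature's decoder
    direction, scaled by a steering coefficient alpha.
    In practice this is applied to the residual stream at generation time."""
    return [h + alpha * d for h, d in zip(hidden, feature_direction)]

hidden = [0.2, -0.1, 0.5]
style_dir = [0.0, 1.0, 0.0]   # toy stand-in for a "translator style" feature
print(steer(hidden, style_dir))
```

The appeal over few-shot prompting is that the shift is applied directly in activation space, so no context-window budget is spent on style exemplars.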
go.bsky.app/UDf92a2