hardmaru
@hardmaru.bsky.social
Co-Founder & CEO, Sakana AI 🎏 → @sakanaai.bsky.social

https://sakana.ai/careers
Reposted by hardmaru
RePo moves us toward models that intelligently curate their own working memory rather than passively accepting input order.

Read the full breakdown on our website:
pub.sakana.ai/repo/

Paper: arxiv.org/abs/2512.14391
RePo: Language Models with Context Re-Positioning
In-context learning is fundamental to modern Large Language Models (LLMs); however, prevailing architectures impose a rigid and fixed contextual structure by assigning linear or constant positional in...
January 19, 2026 at 12:40 AM
Reminded me of my older NeurIPS 2021 paper, where we removed the positional encoding entirely; by doing so, an agent can process an arbitrarily long list of noisy sensory inputs in an arbitrary order.

I even made a fun browser demo to play with the agent back then: attentionneuron.github.io
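The core property behind this is that self-attention without positional encodings treats its inputs as a set: shuffling the inputs just shuffles the outputs the same way. A toy NumPy sketch (not the actual AttentionNeuron code; weights and sizes are made up for illustration) shows this permutation equivariance directly:

```python
import numpy as np

rng = np.random.default_rng(0)

def attention(X, Wq, Wk, Wv):
    # Plain scaled dot-product attention with NO positional encoding.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])
    # Row-wise softmax (numerically stabilized).
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    return w @ V

n, d = 5, 8  # 5 "sensory inputs" of dimension 8 (toy sizes)
X = rng.normal(size=(n, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

out = attention(X, Wq, Wk, Wv)

perm = rng.permutation(n)
out_shuffled = attention(X[perm], Wq, Wk, Wv)

# Feeding the inputs in a different order permutes the outputs identically:
assert np.allclose(out[perm], out_shuffled)
```

With positional encodings added to X before the projections, this assertion would fail, since each row would then carry order information. The paper's agent exploits exactly this order-agnostic behavior to handle shuffled sensory channels.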
January 12, 2026 at 5:51 AM
Especially in times like these, hackers and tinkerers tend to fare better than traditional, well-read professional types at harnessing evolving technology under high uncertainty and ambiguity.
December 27, 2025 at 1:09 AM
“The US follows the idea that there will be one winner who takes it all. Even coming short of AGI, if you have the best model, almost all people will use your model and not the competition’s model. The idea is: develop the Biggest, Baddest model and people will come.”
timdettmers.com/2025/12/10/w...
Why AGI Will Not Happen — Tim Dettmers
If you are reading this, you probably have strong opinions about AGI, superintelligence, and the future of AI. Maybe you believe we are on the cusp of a transformative breakthrough. Maybe you are skep...
December 14, 2025 at 11:20 PM