If you’d like to collaborate, we’re eager to work with labs, companies, standards bodies, and researchers who bring strong research taste and builder energy, openness to standards, and an interest in rigorous evaluation.
As a first step, we’re announcing 5 fully funded academic visitor slots at Oxford (4) and Stanford (1) for researchers working on agent security, distributed anomaly detection, and decentralized safety. decentralized-ai.org#positions
Our current research focuses on decentralized oversight of agent networks: trust protocols for agent networks and economies; global, federated anomaly detection (toy sketch below); formalizing safety in agent networks; and human-friendly oversight tools for distributed AI.
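To make "federated anomaly detection" concrete, here is a minimal sketch of one way it can work. Everything here is an illustrative assumption rather than a specification of our protocols: the node/coordinator split, the summary statistics, the z-score rule, and all names (NodeSummary, summarize, flag_anomalies) are hypothetical. The key idea it demonstrates is that nodes share only aggregates, never raw observations.

```python
# Toy sketch: federated anomaly detection across agent nodes.
# Each node shares only summary statistics (count, sum, sum of squares)
# of a local behavior metric; a coordinator pools them into a global
# baseline and flags nodes whose local mean deviates sharply. Raw data
# never leaves a node. All names here are illustrative assumptions.
import math
from dataclasses import dataclass

@dataclass
class NodeSummary:
    node_id: str
    count: int       # number of local observations
    total: float     # sum of the metric
    sq_total: float  # sum of squared metric values

def summarize(node_id: str, samples: list[float]) -> NodeSummary:
    """Runs locally on each node; raw samples never leave the node."""
    return NodeSummary(node_id, len(samples), sum(samples),
                       sum(x * x for x in samples))

def flag_anomalies(summaries: list[NodeSummary],
                   z_threshold: float = 3.0) -> list[str]:
    """Runs at the coordinator: pool summaries, flag outlier nodes."""
    n = sum(s.count for s in summaries)
    mean = sum(s.total for s in summaries) / n
    var = sum(s.sq_total for s in summaries) / n - mean ** 2
    std = math.sqrt(max(var, 1e-12))
    return [s.node_id for s in summaries
            if abs(s.total / s.count - mean) / std > z_threshold]

# Example: one node's agents behave very differently from the rest.
normal = [summarize(f"node-{i}", [1.0, 1.1, 0.9]) for i in range(10)]
rogue = summarize("node-rogue", [9.0, 9.5, 8.7])
print(flag_anomalies(normal + [rogue]))  # -> ['node-rogue']
```

In a real deployment the aggregation step would itself be decentralized (e.g., secure aggregation or gossip) rather than routed through a single coordinator; designing those protocols is part of the work described below.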
Our work combines research and infrastructure: protocols and standards, distributed oversight systems, tooling and reference implementations, and field-building for modern decentralized AI.
This approach is the fastest path to an AI commons:
– unlocks collaboration without surrendering control
– reduces single points of failure and lock-in
– enables safer, auditable agent networks
By “decentralized AI” we mean AI where compute, data, governance, control, and outcomes are distributed across many parties. No single off-switch, no single owner: instead, systems that are composed, federated, and interoperable.
Excited to launch the Institute for Decentralized AI, alongside 5 fully funded visiting researcher positions at Oxford and Stanford! Our mission: build the protocols, standards, and tooling that make decentralized AI work in the real world.
Having a coauthor (the great Tobin) post here was the final push for me to sign up, after much mulling it over. Check out our paper on permission management for agents!