https://orpheuslummis.info, based in Montréal
In 2025, we produced the Guaranteed Safe AI Seminars, cultivated the Montréal AI safety community, ran the Limits to Control workshop, and more.
We are also moving from a volunteer-run org to one with more capacity.
Onward to 2026!
horizonomega.substack.com/p/h-2025-rev...
Scientists have just released a photo featuring a dim red dot. It is the light of a single star exploding in a galaxy so far, far away that nothing we do could ever affect it — even in the very fullness of time.
It lies beyond the Affectable Universe.
Let me explain…
1/🧵
www.renaissancephilanthropy.org/frontier2025
AI evaluation results should disclose the methods used, the level of system access, editorial control, and conflicts of interest.
aievaluatorforum.org/initiatives/...
Can AI systems be conscious? How could we know? And why does it matter?
A presentation by @joaquimstreicher.bsky.social, Ph.D. candidate in Neuroscience and co-founder of the Montréal Initiative for Consciousness Science (MONIC)
luma.com/paky3mih
Veracity in the Age of Persuasive AI – a presentation by Taylor Lynn Curtis, misinformation researcher at Mila.
luma.com/mmuqltzq
Safe Learning Under Irreversible Dynamics via Asking for Help
Benjamin Plaut – Postdoc at CHAI studying guaranteed safe AI
Thursday, December 11, 1 PM EST
luma.com/wcww6xpl
- below CBRN-4 threshold
- 82% first-attempt success on Cybench
- solves network CTF challenges unassisted
- outperforms PhD baselines on bioinformatics workflows
- SOTA resistance to jailbreaks and prompt injection
In which Emma Kondrup asks whether AI is truly exceptional, using pessimistsarchive.org to compare today’s AGI/ASI fears with past panics over cars, radio and TV.
RSVP luma.com/nq50jf0u
This Friday evening, Nov 21, through Sunday evening.
It is an online event, with a jam site in Montréal. RSVP: luma.com/gnyqha4a
A hands-on intro to Neuronpedia’s models, sparse autoencoders, and feature exploration using example prompts, ending with a discussion of evidence standards and how to start contributing.
RSVP luma.com/s3umszm7
Co-design a National Citizens’ Assembly on Superintelligence
RSVP luma.com/0b7muzt0
by Clark Barrett, director of the Stanford Center for Automated Reasoning and co-director of the Stanford Center for AI Safety.
The seminar took place today as part of the Guaranteed Safe AI Seminars.
The recording is now available: www.youtube.com/watch?v=AxAS...
www.wicinternet.org/pdf/Advancin...
Tuesday Nov 4, 7PM
RSVP: luma.com/09j4095g
www.lesswrong.com/posts/csdn3e...