Khullani
@khullani.bsky.social
AI Policy, Governance & Safety | Regulatory Alignment + Institutional Risk Mitigation
Memory is the substrate of identity. When AI can generate pasts we never lived and persist as versions of us we never authorized, the question shifts from 'what do we remember?' to 'who governs the remembering?' New piece on cognitive sovereignty and raising AI-native kids. shorturl.at/pqr8c
November 30, 2025 at 1:12 AM
The rush to govern AGI with managerial frameworks like "Theory of Change" fundamentally misunderstands power. AGI demands problematization & a cartography of power, not an illusion of rational control.
open.substack.com/pub/heteroto...?
September 22, 2025 at 4:59 AM
The language we use for AI lacks a crucial noun: labor.

Synthetic Labor:
"Synthetic labor is the autonomous execution of cognitive or physical tasks by non-human systems to generate economic value."

open.substack.com/pub/techneai...
June 23, 2025 at 10:07 PM
"Instead of beliefs, I am going to take positions. Positions must be defended or they will erode; they must evolve or they will diminish; they require proactive learning and expansion.

Beliefs are just held.

In a moment like this, I can’t just hold beliefs."

substack.com/home/post/p-...
Updating my Ambitions as a Mother
Motherhood on the Cusp of the Singularity: Part 1
substack.com
April 29, 2025 at 8:41 PM
If an algorithm produces a conclusion we cannot trace, should we distrust it, even when much of human reasoning is equally opaque? In theory, perfect transparency might seem essential; in practice, what matters most is whether the resulting actions are safe and acceptable.
April 18, 2025 at 4:56 PM
Leaders now have a timely opportunity, even a responsibility, to clearly articulate their organization's stance on AI, craft thoughtful usage guidelines, and create an environment of trust where successes and lessons learned are openly shared.
April 2, 2025 at 7:33 PM
You're Invited: AI & Parenting | Portal Innovations, LLC

This year, I've been fortunate enough to present a series of talks on AI at Portal Innovations, LLC.
lnkd.in
April 2, 2025 at 5:38 PM
Over the past year, I've been working closely with diverse organizations across Chicago, guiding employees and leaders on how to leverage generative AI effectively and responsibly.
April 2, 2025 at 4:22 PM
“While there is growing consensus on the importance of ethical AI, true international alignment on emerging codes of conduct and standards remains incomplete...” -Teddy Bekele, Land O’Lakes

sloanreview.mit.edu/article/a-fr...
A Fragmented Landscape Is No Excuse for Global Companies Serious About Responsible AI
Artificial intelligence experts debate whether there is alignment around global standards and norms for responsible AI.
sloanreview.mit.edu
March 31, 2025 at 11:53 AM
🚀 Fun fact: Isaac Asimov’s first law of robotics (“a robot may not injure a human…”) is essentially an early take on #AISafety. Today’s AI policies echo this: do no harm. 🤖⚖️
March 30, 2025 at 1:05 AM
Really insightful discussion with @Tegmark on @TheTrajectoryPod about the lynchpins for AGI governance. He stresses the need to treat AI like any other industry, with safety standards and control mechanisms (like the FAA/FDA) as a priority.

www.youtube.com/watch?v=yQ2f...
Max Tegmark - The Lynchpin Factors to Achieving AGI Governance (AI Safety Connect, Episode 1)
YouTube video by The Trajectory
www.youtube.com
March 28, 2025 at 5:02 PM
This excellent article challenges the dominant AI-as-savior narrative. There is still room, and a real need, for human intelligence, both individually and collectively, within and outside our institutions.

substack.com/home/post/p-...
Should AGI-preppers embrace DOGE?
There's magical thinking about the magical thinking
substack.com
March 24, 2025 at 6:03 PM
"Beyond securing their survival, states will be interested in harnessing AI to bolster their competitiveness, as
successful AI adoption will be a determining factor in national strength." A sobering vision. arxiv.org/pdf/2503.05628
March 11, 2025 at 6:59 PM
Building on research from MIT Sloan Management Review and BCG, we propose shifting governance KPIs from static benchmarks to dynamic predictors that anticipate risks, identify opportunities, and align with strategic objectives.

www.linkedin.com/pulse/future...
March 11, 2025 at 5:10 PM
Reposted by Khullani
Let's get this circulated!! 😤
February 6, 2025 at 12:44 PM
@nytimes.com I've just cancelled my subscription. It is mind-boggling to me that you have completely abdicated your responsibility to hold power to account. You are actively suppressing, minimizing, and erasing the actions of those illegally and unconstitutionally dismantling the USA. Shameful!
February 5, 2025 at 11:07 PM
LLMs are dominating existing benchmarks—some now exceed 90% on MMLU. Enter HUMANITY’S LAST EXAM (HLE): a new multi-modal test with 3,000 questions. It’s designed to be the final closed-ended academic benchmark for gauging cutting-edge LLM capabilities. #AI #NLP static.scale.com/uploads/6541...
January 23, 2025 at 6:07 PM
What can we take from nuclear arms control initiatives to tackle AI?

Is nuclear arms control applicable to AI? McNamara argues that comparing AI and nuclear weapons is insightful yet incomplete and not quite right, but still useful when applied with caution. www.heterotopia.ai/p/a-cautiona...
www.heterotopia.ai
November 22, 2024 at 3:33 AM
Hypothetically speaking, suppose we found ourselves competing with AGI to control Earth. What are some ways we could communicate that would be difficult for AGI to understand or readily grasp? Actually, we should probably keep the brainstorming between our ears. Forget I asked.
November 22, 2024 at 3:30 AM
Reposted by Khullani
“The different approaches to emerging data localisation requirements across the globe, including the divide between democracies and authoritarian regimes, will shape both the interconnectedness and security of the internet as we know it.” www.tandfonline.com/doi/full/10....
How to maintain trust, respect sovereignty and protect privacy: a new generation of international agreements on cross-border data access
Published in Journal of Cyber Policy (Ahead of Print, 2024)
www.tandfonline.com
November 19, 2024 at 12:17 PM
One of the best pieces of writing I’ve read in all of 2024. This is a masterclass in writing a profile on an organization and its founder. “The rise and fall of the influential, embattled Oxford research center that brought us the concept of existential risk.” asteriskmag.com/issues/08/lo...
Looking Back at the Future of Humanity Institute—Asterisk
The rise and fall of the influential, embattled Oxford research center that brought us the concept of existential risk.
asteriskmag.com
November 19, 2024 at 7:49 AM
I highly recommend @CSETGeorgetown's Year-End Review of Global AI Governance initiatives. t.co/gPDwf6TQOL
https://cset.georgetown.edu/event/a-year-end-review/
t.co
November 19, 2024 at 6:54 AM