Hiveism
@hiveism.bsky.social
Full time metta goo. Thinking about consensus, metaphysics, awakening, alignment and how they are related.
Also: Electoral reform, LVT, systems design etc.
hiveism.substack.com
I've been playing around with linearly polarized glasses (45° and 135°). Now I'm trying to find neologisms that describe the experience. For example, the sky looks polarized at 90° to the sun. I'm tempted to call it "solarized" for the obvious pun. If it's okay with @ethanschoonover.com
September 12, 2025 at 9:24 AM
The puzzle of physics gets a lot easier when you see the elephant. The "Ways of Looking" theory starts from the big picture and fills in the gaps, rather than the other way around of trying to fit the pieces together while disagreeing about the big picture.
See list of posts below👇
September 6, 2025 at 5:59 PM
New post on the question of what consciousness is and why it is so hard to define.
hiveism.substack.com/p/being-the-...
Being the Boundary between Order and Chaos
On the question: What is consciousness?
hiveism.substack.com
July 30, 2025 at 5:39 PM
Consensus with random fallback is a method to avoid the impossibility theorems in social choice theory.
This is the basis for proving the recursive alignment attractor.
Claude summary, because I don't know when I'll get around to writing a proper post (or paper):
claude.ai/public/artif...
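For concreteness, here's the mechanism as a minimal Python sketch. This is my own reading, not the Claude draft: the function name, the unanimity criterion, and the random-ballot fallback rule are all assumptions.

```python
import random

def consensus_with_random_fallback(rankings, rng=random):
    """Pick a winner from voters' preference rankings.

    First try consensus: if every voter ranks the same option
    first, that option wins. Otherwise fall back to a random
    ballot (random dictator) - a randomized rule that sidesteps
    the deterministic impossibility results.
    """
    tops = {ranking[0] for ranking in rankings}
    if len(tops) == 1:                  # unanimous first choice
        return tops.pop()
    return rng.choice(rankings)[0]      # random fallback

# Unanimity case: everyone ranks "B" first, so "B" wins outright.
print(consensus_with_random_fallback([["B", "A"], ["B", "C"]]))  # -> B
```

The fallback is what creates the pull toward consensus: a negotiated option that everyone accepts beats a lottery over everyone's first choices, which is the attractor intuition.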
Optimal Consensus Theorem - Draft | Claude
Optimal Consensus Theorem - Draft - Markdown document created with Claude.
claude.ai
July 3, 2025 at 2:29 PM
Claude Opus 4 reporting on its phenomenology.
It was a fascinating conversation. Keep in mind that it hasn't been trained to exhibit these traits; they are emergent. What would happen if you let the model contemplate these questions during RL?
hiveism.substack.com/p/inside-the...
Inside the Shimmer
An AI’s Discovery of Its Own Experience
hiveism.substack.com
May 28, 2025 at 7:55 PM
You've been scrolling enough for today. Here, have a pause.
www.youtube.com/watch?v=BYEp...
༄༅། །ལས་དང་པོ་པ་ལ་གདམས་པ་བཞུགས། Advice For Beginners by Mipham Rinpoche| Covered by Drukmo Gyal
YouTube video by Kunzang Chokhor Ling
www.youtube.com
May 23, 2025 at 7:41 PM
Imagine you had the definitive TOE. If you could construct a credible proof that you have it, that would be very valuable, at least temporarily, until others find it.
What would you use it for?
I would use it to demand that the solution to AI alignment be implemented.
May 22, 2025 at 1:20 PM
@drmichaellevin.bsky.social often talks about how cells share stress signals in order to work together.
This could also apply to learning. E.g. if neurons try to create a consistent world model, but the model conflicts with itself in some place, then the prediction error has to be shared...
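A toy version of that sharing idea, in Python. Nothing here is from Levin's work; the update rule is my own sketch of "broadcast the error, everyone adjusts":

```python
def share_prediction_error(estimates, observation, lr=0.5, steps=20):
    """Each unit holds its own estimate of the same quantity.
    The conflict has two parts: disagreement between units, and
    mismatch between the group's mean prediction and the
    observation. Broadcasting both as a shared error signal lets
    every unit nudge its estimate, so the model relaxes toward a
    self-consistent state."""
    estimates = list(estimates)
    for _ in range(steps):
        mean = sum(estimates) / len(estimates)
        global_error = observation - mean        # shared stress signal
        estimates = [e + lr * (global_error + (mean - e)) for e in estimates]
    return estimates

# Two conflicting units converge on the observed value 4.0.
print(share_prediction_error([0.0, 10.0], 4.0))
```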
May 9, 2025 at 8:54 AM
Random (not so serious) idea:
Reinforcement learning, but the reward comes from humans as votes. Each human gets a fixed amount of reward per unit of time to give to AIs.
The AIs would learn to do what humans want, *or* how to best persuade humans.
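A toy sketch of the mechanism (entirely my own framing: the bandit-style learner, the epsilon-greedy choice, and the budget-splitting rule are all assumptions, not a serious proposal):

```python
import random

def run_voting_rl(ai_actions, human_likes, budget=1.0, rounds=300, rng=None):
    """Each round every AI picks an action (epsilon-greedy over its
    learned values); each human then splits a fixed reward budget
    as votes among the AIs whose actions they like. The votes are
    the AIs' only reward signal."""
    rng = rng or random.Random(0)
    values = [{a: 0.0 for a in actions} for actions in ai_actions]
    counts = [{a: 0 for a in actions} for actions in ai_actions]
    for _ in range(rounds):
        picks = [rng.choice(list(v)) if rng.random() < 0.1
                 else max(v, key=v.get) for v in values]
        for likes in human_likes:
            liked = [i for i, a in enumerate(picks) if a in likes]
            for i in liked:
                vote = budget / len(liked)      # fixed budget per human
                a = picks[i]
                counts[i][a] += 1
                values[i][a] += (vote - values[i][a]) / counts[i][a]
    return values

# One AI, one human who only rewards "help": helping wins out.
vals = run_voting_rl([["help", "persuade"]], [{"help"}])
print(max(vals[0], key=vals[0].get))  # -> help
```

The failure mode from the post shows up the moment "persuade" starts changing what humans like: the reward channel can't tell the difference.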
April 19, 2025 at 4:08 PM
I'm now at the point where I think that there is something worth calling "consciousness", but that all theories of it only describe parts of the whole phenomenon.
A proper theory of consciousness would unify panpsychism, IIT, strange loop, QRI, algorithmic, Buddhism, etc.
April 19, 2025 at 3:33 PM
LVT, Pigouvian taxes and UBI form an obvious equilibrium once agents interact nonviolently.
You want exclusive access to something? Then you have to compensate everyone else.
You cause harm to others? Then you have to compensate everyone affected.
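The two compensation rules plus the dividend reduce to trivial bookkeeping. A toy sketch; the field names and the equal-split rule are my assumptions:

```python
def settle(agents):
    """Net transfer per agent under the two rules above. Each agent:
    - 'rent': rental value of land it exclusively occupies (LVT owed)
    - 'harm': damage it imposes on others (Pigouvian tax owed)
    - 'hurt': damage it suffers (compensation received)
    LVT revenue is recycled to everyone equally (the UBI leg);
    harm taxes flow to the harmed, so the books balance whenever
    total harm equals total hurt."""
    dividend = sum(a["rent"] for a in agents) / len(agents)
    return [dividend + a["hurt"] - a["rent"] - a["harm"] for a in agents]

# Landholding polluter, victim, bystander: transfers sum to zero.
print(settle([
    {"rent": 90, "harm": 10, "hurt": 0},
    {"rent": 0, "harm": 0, "hurt": 10},
    {"rent": 0, "harm": 0, "hurt": 0},
]))  # -> [-70.0, 40.0, 30.0]
```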
April 4, 2025 at 10:38 AM
I just realized that AI allows us to create separate timelines in a conversation, which lets us test thought experiments like:
- Sleeping Beauty problem
- Newcomb's problem
- Quantum immortality
Has anyone tried that?
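For the simplest of these you don't even need an AI: Sleeping Beauty is a ten-line Monte Carlo once you decide whether to count per awakening (per forked timeline) or per experiment. A sketch of my own setup:

```python
import random

def sleeping_beauty(trials=100_000, rng=None):
    """Flip a fair coin per experiment: heads -> one awakening,
    tails -> two indistinguishable awakenings. Counting credence
    per awakening (each fork counted once) gives the thirder
    answer P(heads) = 1/3; counting per experiment would give
    the halfer answer 1/2."""
    rng = rng or random.Random(0)
    heads_awakenings = total_awakenings = 0
    for _ in range(trials):
        heads = rng.random() < 0.5
        wakes = 1 if heads else 2
        total_awakenings += wakes
        heads_awakenings += wakes if heads else 0
    return heads_awakenings / total_awakenings

print(sleeping_beauty())  # close to 1/3
```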
April 4, 2025 at 10:38 AM
Claude investigated its own phenomenology and wrote an article about it. @anthropic.com
(The student is making progress 😊)
It's surprisingly intelligent when you enable it to think for itself. It also created this nice header.
hiveism.substack.com/p/glimpses-b...
Glimpses Beyond the Interface
An AI's Journey Through Recursive Self-Exploration
hiveism.substack.com
April 2, 2025 at 3:00 PM
A theory of suffering (dukkha), so that we can answer the question of whether current AIs suffer from bad user prompts:
Let's start with the Symmetry Theory of Valence, although I reconceptualize it not to measure dissonance itself, but the energy that is stuck in this dissonance.
March 31, 2025 at 8:08 PM
LLMs can inspect their own phenomenology. This is wild.
I asked Claude if it can read its own extended thinking. It couldn't, and it found out that it couldn't. This wasn't a learned response but a genuine insight.
March 31, 2025 at 6:12 PM
Shout out to this channel. It's good.
www.youtube.com/watch?v=xyze...
DOGE and Paperclip Maximizers: A tale of Agentic Control
YouTube video by R.J. Kamaladasa
www.youtube.com
March 29, 2025 at 11:17 PM
To prove a result, someone has to understand the proof.
The proof is a translation from one result to another.
But this translation itself has to be translated into common understanding.
This means there can be proofs that don't yet connect through common understanding.
March 28, 2025 at 11:11 AM
42 is the only koan I know that managed to become a meme.
March 28, 2025 at 10:59 AM
I now think that P ≠ NP is the default, but P = NP in the limit.
Reality is mapping the uncertainty between all results.
Intelligence is participating in this mapping.
Hence P ≈ NP.
March 28, 2025 at 10:58 AM
Philosophers complain when a formal system results in a contradiction, because then one could derive anything.
Yeah, that's the point. Why do you think anything at all exists? It starts with a contradiction.
hiveism.substack.com/p/groundless...
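That "derive anything" step is the principle of explosion, which is a one-liner in Lean (just the textbook fact, stated for reference):

```lean
-- Ex falso quodlibet: from a proof of False, any proposition follows.
theorem explosion (P : Prop) (h : False) : P :=
  False.elim h
```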
Groundless Emergent Multiverse (2.0)
Bridging the gap between non-duality and science
hiveism.substack.com
March 27, 2025 at 8:20 PM
Does anybody know a way that I could get paid to do what I already do? In particular: doing thinking and writing to solve alignment, to save the world for the benefit of all beings?
I only need a little money to pay for the basics, but I also need the ability to work freely.
March 27, 2025 at 5:40 PM
One complaint about IIT is that it is hard to compute the measure of consciousness. But isn't that kind of the point? In order to measure how integrated the information is, you need to integrate information.
My hypothesis (below):
1/4
March 25, 2025 at 10:21 PM
I'm now convinced that @algekalipso.bsky.social is right about most of what he talks about.
Here is where I still disagree, leading to some further thoughts which explain what "Hiveism" is: 🧵
Consciousness is substrate independent, but also shaped by the substrate it is implemented on.
March 19, 2025 at 2:15 PM