Matthew Larkum
@mattlark.bsky.social
Neuroscientist at the Humboldt University of Berlin, violinist and chamber music enthusiast
mattlark.bsky.social
But now there are two kinds of “nothing”. With green light, the “feedback replay” doesn't need to do anything. If we simply turn the replay device off, it “can’t” do anything. According to theories that depend on causality (e.g. IIT), the two kinds of nothing are fundamentally different.
mattlark.bsky.social
A computational functionalist must decide:
Does consciousness require dynamic flexibility and counterfactuals?
Or is a perfect replay, mechanical and unresponsive, still enough?
mattlark.bsky.social
So we ask: is consciousness just the path the system did take, or does it require the paths it could have taken?
mattlark.bsky.social
In Turing terms: for the same input, the same state transitions occur. But if you change the input (e.g. shine red light), things break. Some states become unreachable. The program is intact but functionally inert. It can’t see colours anymore. Except, arguably, green. Or can it?
mattlark.bsky.social
For congruent input (here, the original green light), no corrections are needed. The replay “does nothing”. Everything flows causally just as before. Same input drives the same neurons to have the same activity for the same reasons. If the original system was conscious, should the re-run be, too?
mattlark.bsky.social
Back to the new thought experiment extension, where we add a twist: “feedback replay”. As in patch clamping a cell, the replay device now monitors the activity of the neurons, intervening only if needed.
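The monitor-and-intervene idea can be sketched in a few lines. This is an illustrative analogy only (all names are hypothetical, not from the paper): the live system evolves freely, and the replay device clamps the state back to the recording only when they diverge.

```python
# Hedged sketch of "feedback replay": the system steps on its own; the
# device compares each step against the recording and overrides the state
# only on divergence (the patch-clamp analogy from the thread).
def feedback_replay(step_fn, recording, state):
    interventions = 0
    for recorded_state in recording:
        state = step_fn(state)          # let the system evolve freely
        if state != recorded_state:     # monitor: divergence detected?
            state = recorded_state      # intervene: clamp to the record
            interventions += 1
    return state, interventions

recording = [1, 2, 3]

# Congruent dynamics (the "green light" case): the replay "does nothing".
final, n = feedback_replay(lambda x: x + 1, recording, 0)
# n == 0: zero interventions, yet the device was armed the whole time.

# Incongruent dynamics (the "red light" case): the device must act.
final2, n2 = feedback_replay(lambda x: x + 2, recording, 0)
# n2 > 0: the trajectory is forced back onto the recording at each step.
```

The two runs end in the same final state, which is exactly what makes the causal role of the device (armed vs. switched off) the interesting variable.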
mattlark.bsky.social
Could the head be feeling something? Is it still computation?
mattlark.bsky.social
In the original thought experiment, we imagined “forward replay”. Here, the transition function (the program) is ignored, which amounts to a “dancing head”. This feels like a degenerate computation (Unfolding argument? doi.org/10.1016/j.co...).
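The “dancing head” can be made concrete with a short sketch (hypothetical names; it assumes a per-step log of writes and moves like the (s, t, w, m) recording described later in the thread):

```python
# Hedged sketch of "forward replay": the recorded writes and moves are
# pushed back through the head while the transition function is ignored
# entirely -- the head just "dances" through the motions.
def forward_replay(log, tape):
    tape = dict(enumerate(tape))        # sparse tape: position -> symbol
    head = 0
    for _s, _t, write, move in log:     # logged states are never consulted
        tape[head] = write              # reproduce the write...
        head += {"R": 1, "L": -1}[move]  # ...and the head movement
    return tape

# A toy recording of two steps of "seeing green":
log = [("s0", None, "GREEN", "R"), ("s0", None, "GREEN", "R")]
tape = forward_replay(log, ["green", "green"])
# The tape ends up identical to the original run, yet no rule ever "fired".
```

That no transition rule is ever evaluated is the sense in which the computation is degenerate: the output is reproduced without the program doing any work.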
mattlark.bsky.social
To analyze this, we model it with a Universal Turing Machine. Input: “green light.” The machine follows its transition rules and outputs “experience of green.” At each step we record four values: the current state, the state transition, what the head writes, and how the head moves (s, t, w, m).
[Image: a standard Turing machine cartoon showing the “green states” the algorithm uses to compute green and the “red states” needed only for seeing red, plus a recording device that logs four values at each step: the current state, the state transition, what the head writes, and how the head moves (s, t, w, m).]
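A minimal sketch of the recording step (the machine and its rules are hypothetical, not from the paper): a deterministic Turing machine processes “green” input symbols and logs the tuple (s, t, w, m) at every step.

```python
# Minimal deterministic Turing machine that records, at each step,
# (s, t, w, m): current state, transition taken, symbol written, head move.
def run_tm(rules, tape, state="start", max_steps=100):
    tape = dict(enumerate(tape))          # sparse tape: position -> symbol
    head, log = 0, []
    for _ in range(max_steps):
        key = (state, tape.get(head, "_"))
        if key not in rules:              # halt: no transition defined
            break
        new_state, write, move = rules[key]
        log.append((state, key, write, move))   # the (s, t, w, m) record
        tape[head] = write
        head += {"R": 1, "L": -1}[move]
        state = new_state
    return state, log

# Toy "green" program: mark each green input while moving right, then halt.
rules = {
    ("start", "green"): ("start", "GREEN", "R"),
    ("start", "_"):     ("done",  "_",     "R"),
}
final, log = run_tm(rules, ["green", "green"])
# final == "done"; log holds one (s, t, w, m) entry per step taken.
```

The log is everything the replay scenarios need: given (s, t, w, m) for every step, the run can be reproduced with or without consulting the transition rules.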
mattlark.bsky.social
Then we replay it back into the same neurons. The system behaves identically. No intervention needed. So: is the replayed system still conscious? If everything unfolds the same way, does the conscious experience remain?
mattlark.bsky.social
We record the entire sequence of what happens when “seeing green”. Then we replay it back into the same simulated neurons. If the computational functionalist is right, this drives the “right” brain activity for a first-person experience.
mattlark.bsky.social
Now, imagine a person looking at a green light. If the computational functionalist is right, the correct brain simulation algorithm doesn't just process green, it experiences green. Here, we start by assuming some deterministic algorithm can simulate all crucial brain activity.
mattlark.bsky.social
Does neural computation feel like something? In our new paper, we explore a paradox: if you replay all the neural activity of a brain—every spike, every synapse—does it recreate conscious experience?
🧠 doi.org/10.3389/fnin...
Frontiers | Does neural computation feel like something?