Owen Marschall
@omarschall.bsky.social
Postdoc in the Litwin-Kumar lab at the Center for Theoretical Neuroscience at Columbia University.

I'm interested in multi-tasking and dimensionality.
37/X Overall, really enjoyed working on this with @david-g-clark.bsky.social and Ashok, and I’d love to chat about any part of it—the details behind the theory, the experimental implications, etc. Thanks for reading!
December 15, 2025 at 7:41 PM
36/X Pooled across behavioral syllables, we should recover high dimensionality, with dimension increasing with the number of distinct syllable types observed. But dimension should grow more slowly per unit of recording time than when pooling across periods of no behavior.
December 15, 2025 at 7:41 PM
35/X Borrowing the language of “behavioral syllables” (Markowitz et al. Nature 2023), we can formulate a few predictions. Measured during periods of no behavior, neural activity should be fairly high-D. Measured over many repeats of a single behavioral syllable, neural activity should be low-D.
December 15, 2025 at 7:41 PM
34/X In the former, high dimensionality emerges simply because the network is big. In the latter, high dimensionality emerges because the network is doing a lot of different things. Both are viable!
December 15, 2025 at 7:41 PM
33/X This model clearly distinguishes two hypotheses for the origin of high-dimensional neural activity. One is spontaneous fluctuations in the absence of any coherent behavior. Another is switching among many different, individually low-dimensional behavioral states.
December 15, 2025 at 7:41 PM
32/X We conservatively used fairly quick task-switching intervals so as not to artificially magnify this effect, and we still see slower dimension growth in this setup than in the spontaneous state. Spontaneous activity explores the dimensions available to it as quickly as possible.
December 15, 2025 at 7:41 PM
31/X Even in cases when the “switching among different tasks” setup has higher overall dimension than the spontaneous state, the rate at which this measurement grows wrt recording time is slower (previous post). This is especially true the longer the network lingers in each task-specific subspace.
December 15, 2025 at 7:41 PM
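A toy surrogate for that rate comparison (nothing here is network-specific: blocks of activity lingering in fresh random low-D subspaces versus independent fluctuations, with the helper name, dwell time tau, and all sizes invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(5)

def participation_ratio(X):
    X = X - X.mean(axis=0)
    eig = np.linalg.eigvalsh(X.T @ X / X.shape[0])
    return eig.sum() ** 2 / (eig ** 2).sum()

N, R, tau = 400, 2, 200          # neurons, task dim, dwell time per task

def switching(T):
    """Linger tau steps in each of a sequence of fresh random R-dim subspaces."""
    out = []
    for _ in range(-(-T // tau)):                         # ceil(T / tau) blocks
        basis, _ = np.linalg.qr(rng.standard_normal((N, R)))
        out.append(rng.standard_normal((tau, R)) @ basis.T)
    return np.concatenate(out)[:T]

for T in [200, 800, 3200]:
    spont = rng.standard_normal((T, N))   # fast, independent fluctuations
    print(T, participation_ratio(switching(T)), participation_ratio(spont))
# Dimension grows with recording time in both cases, but far faster for the
# spontaneous surrogate than for slow switching, which adds only ~R new
# dimensions per dwell time tau.
```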
30/X This can even exceed the dimension of the spontaneous state, of course depending on a few things (how big N is, how many different task-selected states are chosen, etc.).
December 15, 2025 at 7:41 PM
29/X Although any one task component generates low-D activity when selected, recall that these task manifolds are randomly oriented with respect to one another. If we measure over sequential activation of many different, individually low-D task-selected states, we recover high-D activity overall.
December 15, 2025 at 7:41 PM
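A minimal sketch of that bookkeeping, with random orthonormal bases standing in for the randomly oriented task manifolds (all sizes are made up; the participation-ratio helper is defined in the sketches nearby):

```python
import numpy as np

rng = np.random.default_rng(1)

def participation_ratio(X):
    X = X - X.mean(axis=0)
    eig = np.linalg.eigvalsh(X.T @ X / X.shape[0])
    return eig.sum() ** 2 / (eig ** 2).sum()

N, R, K, T_per = 600, 3, 30, 400   # neurons, task dim, # of tasks, samples per task

segments = []
for _ in range(K):
    basis, _ = np.linalg.qr(rng.standard_normal((N, R)))  # random R-dim task manifold
    latents = rng.standard_normal((T_per, R))             # low-D activity within it
    segments.append(latents @ basis.T)

X = np.concatenate(segments)
print(participation_ratio(segments[0]))   # one task-selected state: ~R
print(participation_ratio(X))             # pooled across all tasks: ~K * R
```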
28/X This suggests that, even if we include trial-to-trial fluctuations that are approximately independent between neurons, we won’t get high dimensionality just from the existence (and measurement) of a large number of neurons, if we measure while restricted to just a single task context.
December 15, 2025 at 7:41 PM
27/X The task-selected states are much lower-dimensional, since so much of their variance is captured by just the handful of selected-task dimensions. In the chaotic task-selected states, fluctuations lead to a marginally higher (less low?) dimension that can exceed the task dimension R, but not in a way that scales with N.
December 15, 2025 at 7:41 PM
26/X Nonetheless, in our spontaneous state each neuron adds a new (fraction of a) dimension since neurons fluctuate approximately independently of one another, with a proportionality constant that is a nonlinear function of the network parameters and can be quite small (see Clark et al PRX 2025).
December 15, 2025 at 7:41 PM
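A toy illustration of that scaling, with iid Gaussian fluctuations standing in for the spontaneous state (purely a surrogate: the proportionality constant here is ~1 rather than the parameter-dependent value from the theory):

```python
import numpy as np

rng = np.random.default_rng(0)

def participation_ratio(X):
    X = X - X.mean(axis=0)
    eig = np.linalg.eigvalsh(X.T @ X / X.shape[0])
    return eig.sum() ** 2 / (eig ** 2).sum()

# Surrogate spontaneous state: each neuron fluctuates independently,
# so each neuron contributes (a fraction of) one new dimension.
for N in [100, 200, 400, 800]:
    T = 10 * N                    # recording long enough not to be sample-limited
    X = rng.standard_normal((T, N))
    print(N, round(participation_ratio(X)))   # grows ~linearly with N
```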
25/X A different but similar-in-vibe observation has been made in experimental work, that *measuring* more neurons leads to higher dimension (figure from Manley et al Neuron 2024), which admittedly isn’t quite the same as increasing the number of neurons that exist.
December 15, 2025 at 7:41 PM
24/X As promised, let’s examine the dimension (participation ratio) of these states' activity patterns. The spontaneous state is high-dimensional, in the sense that its dimension scales with the size of the network N. For larger and larger networks, this dimension can grow without bound.
December 15, 2025 at 7:41 PM
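For concreteness, a minimal sketch of the participation-ratio computation, using the standard definition PR = (Σᵢλᵢ)² / Σᵢλᵢ² over the eigenvalues of the activity covariance (the helper name and sanity-check values are mine, not from the paper):

```python
import numpy as np

def participation_ratio(X):
    """Participation ratio of an activity matrix X with shape (T, N):
    PR = (sum of covariance eigenvalues)^2 / (sum of squared eigenvalues)."""
    X = X - X.mean(axis=0)                 # center each neuron
    C = X.T @ X / X.shape[0]               # N x N covariance
    eig = np.linalg.eigvalsh(C)            # variances along principal axes
    return eig.sum() ** 2 / (eig ** 2).sum()

# Sanity checks: rank-1 data has PR ~ 1; white noise across N neurons has PR ~ N.
rng = np.random.default_rng(0)
u = rng.standard_normal(200)
print(participation_ratio(np.outer(rng.standard_normal(5000), u)))  # ~1
print(participation_ratio(rng.standard_normal((5000, 200))))        # close to 200
```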
23/X But 1) Maybe the brain does this? 2) The modulation required is *subtle*: a vanishingly small fraction of the overall weight matrix, and itself low-dimensional. And yet it is sufficient to induce large-scale activity changes, because it operates via a phase transition.
December 15, 2025 at 7:41 PM
22/X About the selection mechanism: yes we are modulating connectivity itself. Yes this is arguably "cheating" the multi-task challenge, traditionally thought of as a fixed-connectivity network prompted by inputs to do different things (Yang et al Nat Neuro 2019, Driscoll et al Nat Neuro 2024).
December 15, 2025 at 7:41 PM
21/X In both task-selected states, there is strong activity in the subspace of the selected task. The chaotic task-selected state features both coherent task dynamics (noiseless to leading order) as well as fluctuations in single-neuron rates comparable in magnitude to their task-related tuning.
December 15, 2025 at 7:41 PM
20/X This provides a mechanism for selecting dynamics. We identify 3 regimes *per task,* over modulation of the overall strength of that task’s connectivity component: the spontaneous state, then the chaotic and nonchaotic task-selected states.
December 15, 2025 at 7:41 PM
19/X This can be achieved by modulating the strength of the associated connectivity component. Because any one connectivity component is low-rank, this can be biologically implemented via gain modulation of an external loop, e.g. through thalamus (as in Logiaco et al Cell Reports 2021).
December 15, 2025 at 7:41 PM
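A minimal sketch of that kind of selection mechanism, assuming (consistent with the thread's description, but with my own variable names and numbers) that each task's component is a rank-R outer product (1/N)·M_p W_pᵀ scaled by a per-task gain g_p:

```python
import numpy as np

rng = np.random.default_rng(2)
N, P, R = 1000, 10, 2                # neurons, tasks, rank of each task component

M = rng.standard_normal((P, N, R))   # output directions per task
W = rng.standard_normal((P, N, R))   # input directions per task

def connectivity(g):
    """Recurrent matrix: a sum of P low-rank task components, each scaled by g[p]."""
    return sum(g[p] * M[p] @ W[p].T for p in range(P)) / N

g = np.ones(P)                        # all gains in the spontaneous range
J_spont = connectivity(g)

g_sel = g.copy()
g_sel[3] = 2.5                        # boost only task 3's component
J_task = connectivity(g_sel)

# The change needed to select a task is itself low-dimensional: rank R,
# a vanishingly small fraction (R/N) of the matrix's full rank.
print(np.linalg.matrix_rank(J_task - J_spont))   # -> 2
```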
18/X We think of this chaotic state as the “spontaneous” state of the network, where no task is activated. A task activates when the noisy, linearized dynamics lose stability, so that the associated latent variables grow exponentially (before nonlinearly self-stabilizing) to dominate the network.
December 15, 2025 at 7:41 PM
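Schematically (and only schematically; the paper's exact equations differ), this is a noise-driven linear instability in each task's latent variable, with the effective noise being the cross-talk described two posts down:

```latex
% Schematic form of the selection instability (not the paper's exact equations):
% \kappa_p is task p's latent variable, \lambda_p(g_p) its linearized growth
% rate, and \eta_p(t) the effective cross-talk noise from the other subspaces.
\dot{\kappa}_p = \lambda_p(g_p)\,\kappa_p + \eta_p(t),
\qquad
\begin{cases}
\lambda_p(g_p) < 0: & \kappa_p \text{ stays at the noise floor (spontaneous state)}\\
\lambda_p(g_p) > 0: & \kappa_p \sim e^{\lambda_p t}\ \text{until nonlinear self-stabilization (task selected)}
\end{cases}
```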
17/X Interestingly, these chaotic dynamics arise just from summing many task-related components—there is no unstructured background connectivity in this model. Moreover, the chaos itself isn’t unstructured but has signatures of the associated task dynamics in each subspace simultaneously.
December 15, 2025 at 7:41 PM
16/X The fluctuations in each subspace can be described by our theory as a subspace-specific linear dynamical system driven by noise. But there is no explicitly added noise—it emerges from a large number of task-related subspaces slightly overlapping and producing effectively random cross-talk.
December 15, 2025 at 7:41 PM
15/X Detour: if we add in more and more task-related components to the network connectivity, i.e. make the number of them comparable to the number of neurons N, we can enter a regime where not even one task dominates, and instead there are chaotic fluctuations in every task subspace simultaneously.
December 15, 2025 at 7:41 PM
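To get the flavor of this regime, here is a toy simulation. The thread doesn't spell out the single-neuron dynamics, so this assumes a standard rate network dx/dt = -x + J·tanh(x), with plain random rank-1 pairs standing in for the task components (the paper's components presumably encode actual task dynamics); all parameters are made up:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 800
P = N // 2                        # number of task components comparable to N
A = 2.0                           # component amplitude (assumed; chosen so
                                  # A * sqrt(P/N) > 1, past the chaotic transition)

# Connectivity is a sum of P random rank-1 components and nothing else --
# no unstructured background matrix.
m = rng.standard_normal((P, N))
n = rng.standard_normal((P, N))
J = A * (m.T @ n) / N

dt, steps = 0.1, 4000
x = 0.1 * rng.standard_normal(N)
kappa = np.empty((steps, P))
for t in range(steps):
    r = np.tanh(x)
    kappa[t] = n @ r / N          # latent coordinate in each task subspace
    x = x + dt * (-x + J @ r)     # Euler step of dx/dt = -x + J tanh(x)

# Every task subspace fluctuates at once; no single task dominates.
print(kappa[2000:].std(axis=0)[:5])
```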
14/X From the perspective of flexible behavior, this is a problem. If for a given connectivity, one task always wins, how does a network use dynamics A for task A and dynamics B for task B? We’ll get there after a brief detour, but sneak peek: it’s through modulating connectivity.
December 15, 2025 at 7:41 PM
13/X This interference occurs for two reasons: 1) the two task manifolds share a common pool of neurons and 2) the network is nonlinear. The task-related dynamics share an average neuronal gain factor. When one task is active, the average neuronal gains decrease, weakening the other task’s dynamics.
December 15, 2025 at 7:41 PM
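A toy numerical illustration of that shared-gain effect, assuming (my assumption; the thread doesn't specify the nonlinearity) tanh rates, so each neuron's gain is φ′(x) = 1 − tanh²(x):

```python
import numpy as np

rng = np.random.default_rng(4)
N = 100_000
x_base = 0.5 * rng.standard_normal(N)      # baseline preactivations

for task_amp in [0.0, 1.0, 2.0]:
    # Activating a task drives large activity through the shared pool of neurons.
    x = x_base + task_amp * rng.standard_normal(N)
    gain = np.mean(1.0 - np.tanh(x) ** 2)  # average slope of phi = tanh
    print(f"task amplitude {task_amp:.1f}: average gain {gain:.3f}")

# The average gain falls as the active task's amplitude grows. Since the other
# task's effective dynamics are multiplied by this shared gain, driving one
# task weakens the other -- the nonlinear interference described above.
```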