Owen Marschall
@omarschall.bsky.social
Postdoc in the Litwin-Kumar lab at the Center for Theoretical Neuroscience at Columbia University.

I'm interested in multi-tasking and dimensionality.
34/X In the former, high dimensionality emerges simply because the network is big. In the latter, high dimensionality emerges because the network is doing a lot of different things. Both are viable!
30/X This can even exceed the dimension of the spontaneous state, of course depending on a few things (how big N is, how many different task-selected states are chosen, etc.).
27/X The task-selected states are much lower-dimensional, since most of their variance is captured by just the handful of selected-task dimensions. In the chaotic task-selected state, fluctuations push the dimension marginally higher (less low?), so that it can exceed the task dimension R, but not in a way that scales with N.
26/X Nonetheless, in our spontaneous state each neuron adds a new (fraction of a) dimension, since neurons fluctuate approximately independently of one another; the proportionality constant is a nonlinear function of the network parameters and can be quite small (see Clark et al PRX 2025).
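To unpack "each neuron adds a (fraction of a) dimension": using the participation-ratio definition of dimension (see 24/X below), with $\lambda_i$ the eigenvalues of the activity covariance,

$$D_{\mathrm{PR}} = \frac{\big(\sum_{i=1}^{N}\lambda_i\big)^2}{\sum_{i=1}^{N}\lambda_i^2}.$$

If all N neurons fluctuated independently with equal variance, every $\lambda_i$ would be equal and $D_{\mathrm{PR}} = N$; variance heterogeneity and residual correlations shrink this to roughly $D_{\mathrm{PR}} \approx cN$ with $c < 1$, which (as I read it) is the proportionality constant referred to above.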
25/X A different but similar-in-vibe observation has been made in experimental work: *measuring* more neurons leads to higher dimension (figure from Manley et al Neuron 2024), which admittedly isn't quite the same as increasing the number of neurons that exist.
24/X As promised, let’s examine the dimension (participation ratio) of these states' activity patterns. The spontaneous state is high-dimensional, in the sense that its dimension scales with the size of the network N. For larger and larger networks, this dimension can grow without bound.
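If you want to poke at this numerically, here is a minimal numpy sketch of the participation ratio (my own illustration, not code from the paper). For independent neurons it scales linearly with N, and making the single-neuron variances heterogeneous keeps the linear scaling but with a constant c < 1, as in 26/X above:

```python
import numpy as np

def participation_ratio(X):
    """Participation ratio of activity X, shape (T, N):
    (sum of covariance eigenvalues)^2 / (sum of their squares)."""
    C = np.cov(X, rowvar=False)          # N x N covariance across time
    lam = np.linalg.eigvalsh(C)
    return lam.sum() ** 2 / (lam ** 2).sum()

rng = np.random.default_rng(0)
T = 20_000
for N in (100, 200, 400):
    X = rng.standard_normal((T, N))      # independent neurons, equal variances
    sig2 = rng.exponential(1.0, size=N)  # heterogeneous variances
    X_het = X * np.sqrt(sig2)            # still independent neurons
    # Equal variances: D_PR ~ N. Exponential variances: D_PR ~ N/2.
    print(N, f"{participation_ratio(X):.1f}", f"{participation_ratio(X_het):.1f}")
```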
21/X In both task-selected states, there is strong activity in the subspace of the selected task. The chaotic task-selected state features both coherent task dynamics (noiseless to leading order) and fluctuations in single-neuron rates comparable in magnitude to their task-related tuning.
20/X This provides a mechanism for selecting dynamics. We identify 3 regimes *per task* as we modulate the overall strength of that task's connectivity component: the spontaneous state, then the chaotic and nonchaotic task-selected states.
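Schematically, writing $g_k$ for the modulated strength of task $k$'s component (the ordering is my reading of the post above, and the thresholds $g_1 < g_2$ are hypothetical labels, not values from the paper):

$$g_k < g_1:\ \text{spontaneous}, \qquad g_1 < g_k < g_2:\ \text{chaotic task-selected}, \qquad g_k > g_2:\ \text{nonchaotic task-selected}.$$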
19/X This can be achieved by modulating the strength of the associated connectivity component. Because any one connectivity component is low rank, this can be implemented biologically via gain modulation of an external loop, e.g. through the thalamus (as in Logiaco et al Cell Reports 2021).
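A quick sketch of why low rank makes this loop implementation natural (my illustration; the shapes and the gain $g$ are hypothetical, not the paper's parameterization): a rank-R component $UV^\top$ is exactly a loop that reads out R factors, scales them, and projects back, so one scalar gain on the loop rescales the entire component.

```python
import numpy as np

rng = np.random.default_rng(1)
N, R = 300, 4
V = rng.standard_normal((N, R))                # readout into the loop (e.g. thalamus)
U = rng.standard_normal((N, R)) / N            # projection back into the network

g = 0.7                                        # scalar loop gain
J_component = g * (U @ V.T)                    # gain-modulated low-rank component
loop = lambda x: U @ (g * (V.T @ x))           # same map written as the loop

x = rng.standard_normal(N)
print(np.allclose(J_component @ x, loop(x)))   # True: one gain scales the component
```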
17/X Interestingly, these chaotic dynamics arise just from summing many task-related components—there is no unstructured background connectivity in this model. Moreover, the chaos itself isn’t unstructured but has signatures of the associated task dynamics in each subspace simultaneously.
16/X The fluctuations in each subspace can be described by our theory as a subspace-specific linear dynamical system driven by noise. But there is no explicitly added noise—it emerges from a large number of task-related subspaces slightly overlapping and producing effectively random cross-talk.
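In symbols, as I read the claim (the notation $\kappa_k$, $A_k$, $\eta_k$ is mine, not the paper's):

$$\dot{\kappa}_k(t) = A_k\,\kappa_k(t) + \eta_k(t),$$

where $\kappa_k \in \mathbb{R}^R$ is activity projected onto task $k$'s subspace, $A_k$ encodes that task's (linearized) dynamics, and $\eta_k$ is not injected noise but the summed cross-talk from all the other, slightly overlapping subspaces, with statistics determined self-consistently.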
15/X Detour: if we add more and more task-related components to the network connectivity, i.e. make their number comparable to the number of neurons N, we can enter a regime where no single task dominates, and instead there are chaotic fluctuations in every task subspace simultaneously.
10/X In this example, the two connectivities we superposed would have produced stable limit cycle dynamics and bistable dynamics, respectively, if each were the sole network connectivity. When combined (previous post), the limit cycle "wins" because it's marginally stronger in this case.
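Here is a minimal numpy sketch of this competition, under assumptions of my own choosing (standard rate dynamics $\dot{x} = -x + J\tanh(x)$ and Gaussian low-rank vectors; not the paper's construction, see the model sketch after 8/X below): a rank-2 component tuned to produce a limit cycle superposed with a rank-1 Hopfield-style bistable component. With these parameters the cycle is effectively stronger and should win; treat it as a toy to play with, since whether the weaker dynamics are fully suppressed here depends on the parameters.

```python
import numpy as np

rng = np.random.default_rng(2)
N, dt, steps = 800, 0.05, 3000

# Rank-2 "limit cycle" component: latent expanding spiral,
# with amplitude ultimately stabilized by tanh saturation.
M = rng.standard_normal((N, 2))
A = np.array([[1.5, -2.0],
              [2.0,  1.5]])                 # eigenvalues 1.5 +/- 2i
J_cycle = M @ A @ M.T / N

# Rank-1 "bistable" component, strength just above the g = 1 threshold.
w = rng.standard_normal(N)
J_bi = 1.3 * np.outer(w, w) / N

J = J_cycle + J_bi                          # superposed connectivity
x = 0.1 * rng.standard_normal(N)            # small random initial condition

traj = []
for _ in range(steps):
    x += dt * (-x + J @ np.tanh(x))
    traj.append((M[:, 0] @ x / N, M[:, 1] @ x / N, w @ x / N))
traj = np.array(traj)

# Late-time latent amplitudes: with these strengths the cycle dominates
# and the bistable latent is pushed toward zero; strengthening J_bi
# (e.g. 1.3 -> 2.5) should flip the winner.
print("cycle latents:   ", np.abs(traj[-500:, :2]).max())
print("bistable latent: ", np.abs(traj[-500:, 2]).max())
```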
9/X The answer is surprisingly simple: the “strongest” (in a sense we make precise) latent dynamical system wins and controls the dynamics of the whole network, while the weaker dynamics are suppressed to the origin. This happens for any initial condition.
8/X In particular, we ask what happens when we linearly superpose different connectivity matrices, each of which is constrained to be low rank (rank R) and would, on its own, generate a nonlinear dynamical system on a low-dimensional manifold.
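For concreteness, the setup as I understand it (notation mine; I'm assuming the standard rate-network form):

$$\tau\,\dot{x} = -x + \Big(\sum_{k=1}^{P} J_k\Big)\,\phi(x), \qquad \operatorname{rank}(J_k) \le R,$$

where each $J_k$ on its own would carry some nonlinear latent dynamics in its own (at most) R-dimensional subspace, and the question is what the full sum does.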
6/X Let’s take this picture at face value: within a given task, neural activity occupies a low-D manifold, but the network switches to different manifolds to perform different tasks. How is this possible? How do the connectivity structures supporting these dynamics avoid interfering with each other?
5/X These subspaces are heterogeneously oriented and contain fundamentally different dynamics, yet involve overlapping sets of neurons. (Figure from Amematsro et al.)