Maxine 💃🏼
@maxine.science
🔬 Looking at the brain’s “dark matter”
🤯 Studying how minds change
👩🏼‍💻 Building science tools

🦋 ♾️ 👓
🌐 maxine.science
no lie I am deep Uexküll-pilled.
January 19, 2026 at 11:11 PM
Exactly—the interesting thing isn’t that we can represent the invariants of the niche (we can, by supposition), but that we can represent the regularities of the dynamics of niche construction.
January 19, 2026 at 11:10 PM
I've got a Claude project brewing on the backburner to make, in lieu of a monograph, a website and API where people can query, and have an LLM chat with, my Zettelkasten of kooky-but-interesting ideas from my PhD lol
January 19, 2026 at 10:04 PM
cf. e.g. the talk I gave to @johanneskleiner.bsky.social et al. at the AMCS conf in Bamberg, where I argued that nearly all trad systems neuroscience is epiphenomenal of the task structure chosen by the experimenter, and of the use of only animals correctly invariant to it.

youtu.be/TumupyEAwDc?...
Maxine Collard - How would we know what an astrocyte knows?
YouTube video by Models of Consciousness Conferences
January 19, 2026 at 9:54 PM
These imply, with enough abstract nonsense jargon, the transformations for relational knowledge (called “conversion rules” or “computations”), which thus specify what we would classically call “cognition”—all in a substrate-independent way!

ncatlab.org/nlab/show/co...
conversion rule in nLab
January 19, 2026 at 9:46 PM
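For one concrete instance of how a “conversion rule” shows up categorically (the standard textbook example, not one drawn from this thread): β-reduction, the prototypical computation rule, is exactly the statement that evaluation undoes currying, which in a cartesian closed category of internal states reads

(\lambda x.\, t)\, a \;\rightsquigarrow\; t[a/x], \qquad \mathrm{eval} \circ (\Lambda(f) \times \mathrm{id}_A) = f \quad \text{for } f : Z \times A \to B,

so the substrate-independence is literal: any category with the right structure interprets the same rule.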
3. The architecture of the ways in which one relational knowledge repository may transform into another (that is, the theory of actions / modules and profunctors / lax morphisms): ncatlab.org/nlab/show/mo...

+
module in nLab
January 19, 2026 at 9:46 PM
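A sketch of the gadget behind “transforming one repository into another”, hedged on the usual variance conventions: a profunctor (a.k.a. module, bimodule, distributor) from \mathcal{C} to \mathcal{D} is a functor

P : \mathcal{C}^{\mathrm{op}} \times \mathcal{D} \to \mathbf{Set},

assigning to each pair (C, D) the set of ways C may be carried into D, with \mathcal{C} acting on one side and \mathcal{D} on the other; composition is the coend (Q \otimes P)(C, E) = \int^{D} P(C, D) \times Q(D, E), the categorical analogue of tensoring modules.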
2. The architecture of the ways one relational knowledge repository may represent another relational repository (that is, representation theory, and its CT generalization, the study of functor categories) ncatlab.org/nlab/show/re...

+
representation in nLab
January 19, 2026 at 9:46 PM
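As a standard-notation sketch of “one repository representing another” (again just the textbook definitions, nothing specific to this thread): a representation of \mathcal{C} in \mathcal{D} is a functor, and the functor category collects all such representations together with the comparisons between them,

F : \mathcal{C} \to \mathcal{D}, \qquad F(g \circ f) = F(g) \circ F(f), \qquad F(\mathrm{id}_X) = \mathrm{id}_{F(X)},
[\mathcal{C}, \mathcal{D}] := \{\text{functors } \mathcal{C} \to \mathcal{D},\ \text{natural transformations between them}\}.

Classical representation theory is the special case \mathcal{C} = \mathbf{B}G (a group viewed as a one-object category) and \mathcal{D} = \mathbf{Vect}.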
Cognition then to me boils down to

1. The architecture of knowledge within systems, which, from a relational monist metaphysics like mine, is formalized as a category en.wikipedia.org/wiki/Categor...

+
Category theory - Wikipedia
January 19, 2026 at 9:46 PM
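A minimal sketch of the formal gadget being pointed to, using nothing beyond the textbook definition: reading a relational knowledge repository as a category \mathcal{C} means the knowledge lives entirely in the morphisms and how they compose,

\mathcal{C} = (\mathrm{Ob}(\mathcal{C}),\ \mathcal{C}(X, Y),\ \circ,\ \mathrm{id}), \qquad \circ : \mathcal{C}(Y, Z) \times \mathcal{C}(X, Y) \to \mathcal{C}(X, Z),
h \circ (g \circ f) = (h \circ g) \circ f, \qquad \mathrm{id}_Y \circ f = f = f \circ \mathrm{id}_X,

with objects standing for the system’s states or items of knowledge and morphisms for the relations between them.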
In a more recognizable form, we have Wikipedia’s:

“Cognitions are mental activities that deal with knowledge. They encompass psychological processes that acquire, store, retrieve, transform, or apply information.” +
January 19, 2026 at 9:46 PM
An important way we must temper expectations of genuinely new ideas from LLMs: they are heavily regularized by using latent structure from our language, making the problem *way* easier than general structure learning (‘cause human history already did most of the job!), but reducing what we can learn from them
January 19, 2026 at 6:46 PM
Language provides a durable structural image of important features of this latent structure. This enables the general structure-learning problem to factor through the particular latent decomposition of language, making reconstitution and coordination of world-models significantly easier. +
January 19, 2026 at 6:46 PM
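One way to draw the factoring claim (a gloss, not a formalism from the thread, with \mathcal{L} standing for the latent decomposition already carried by language): the hard problem on the left becomes the composite on the right,

\text{world} \xrightarrow{\;\text{structure learning}\;} \text{model} \quad\text{factors as}\quad \text{world} \xrightarrow{\;\text{language}\;} \mathcal{L} \xrightarrow{\;\text{LLM}\;} \text{model},

so the LLM only has to learn the second leg.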
“Cognition” is a specific arrangement of latent structure. For the category-theorists, its various features correspond to whether your topos of internal states is sufficiently well-structured to support various logics. +
January 19, 2026 at 6:46 PM
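For reference, the standard packaging of “sufficiently well-structured to support various logics”: a topos \mathcal{E} (finite limits, exponentials, and a subobject classifier \Omega) interprets full higher-order intuitionistic logic, while weaker structure supports weaker fragments (a regular category interprets regular logic, a coherent category coherent logic). The key fact is that subobjects are classified by maps into the truth-value object,

\mathrm{Sub}(X) \;\cong\; \mathcal{E}(X, \Omega), \quad \text{naturally in } X.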
worked decently well for the Nicaea council.
January 19, 2026 at 12:46 AM
I know, it was a joke from my end :P

I think that there is something to transport-layer stuff + (a small extension of) atproto. The notion of “just let my trusted group splinter off full private infra” I think has applications.

But you lose fine-grained authz which is a huge goal for atproto.
January 18, 2026 at 8:31 PM
And so to the question about concentration of measure and averaging—actually, yes!
Can you say more about exactly what mechanism of tail-truncation you have in mind? Is this the softmax bottleneck, or optimization dynamics, or something about attention? And do you mean that it affects the concentration-of-measure in high dimensions and makes "averaging" a good metaphor, or +
January 18, 2026 at 7:14 PM
It’s actually a fairly deep and wide-reaching problem lying in the information geometry of broad classes of neural networks—many are, at their foundation, quite limited in latent distribution fitting. I’m missing the transformer paper, but results of this shape are sobering to me: arxiv.org/abs/2501.07763
On the Statistical Capacity of Deep Generative Models
Deep generative models are routinely used in generating samples from complex, high-dimensional distributions. Despite their apparent successes, their statistical properties are not well understood. A ...
January 18, 2026 at 7:11 PM
@atprotocol.dev private data ends up actually just being vanilla atproto over tailnet
January 18, 2026 at 6:21 PM
(The truncation of heavy tails built into how transformers fit distributions is, mechanistically, the reason for “model collapse” btw: eventually, all non-central modes die off under iterative resampling if you can’t preserve the tails.)
January 18, 2026 at 6:16 PM
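A toy numerical illustration of that mechanism (a sketch only: the model class here is a single Gaussian fit by maximum likelihood, a stand-in for “can’t preserve the tails”, not a transformer, and all numbers are made up for the example). The non-central mode is gone after the first fit-and-resample round and never comes back under iterative resampling:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000

# "Real" data: a dominant central mode plus a small non-central mode near x = 8.
central = rng.normal(0.0, 1.0, size=int(0.95 * n))
minor = rng.normal(8.0, 0.5, size=n - central.size)
data = np.concatenate([central, minor])

for gen in range(5):
    # Stand-in for a tail-truncating model class: one Gaussian, fit by ML.
    mu, sigma = data.mean(), data.std()
    # Probability mass sitting near the non-central mode.
    minor_mass = np.mean((data > 7.0) & (data < 9.0))
    print(f"gen {gen}: fitted sigma = {sigma:.2f}, mass near minor mode = {minor_mass:.4f}")
    # Iterative resampling: the next "generation" trains only on model samples.
    data = rng.normal(mu, sigma, size=n)
```

With a small per-generation sample size the fitted sigma also tends to drift downward across generations, which is the fuller collapse picture; here the point is just the loss of the non-central mode.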
I agree greater # of small fine-tuned models can help—but pessimistic about this, w/ drive toward oligopoly a consistent theme of unfettered capital & current political economy.

I do not agree that retrieval helps—also have good theory that vector DB power is *very* limited even in best case.
January 18, 2026 at 6:15 PM
The problem is that there are actually good theoretical results that this model class truncates heavy tails, so you lose a lot of the behavior you’re pointing to as the caveat, and the homogenization argument carries much more weight. +
January 18, 2026 at 6:15 PM
the eigencameron
January 17, 2026 at 8:11 PM