Toviah Moldwin
@tmoldwin.bsky.social
290 followers 350 following 340 posts
Computational neuroscience: Plasticity, learning, connectomics.
Reposted by Toviah Moldwin
tmoldwin.bsky.social
(At high spatiotemporal resolution.)
tmoldwin.bsky.social
In the grand scheme of things the main thing that matters is advances in microscopy and imaging methods. Almost all results in neuroscience are tentative because we can't see everything that's happening at the same time.
tmoldwin.bsky.social
I also have a stack of these, I call it 'apocalypse food'.
tmoldwin.bsky.social
You are correct about this.
tmoldwin.bsky.social
But so is every possible mapping, so the choice of a specific mapping is not contained within the data. Even the fact that the training data comes in (X, y) pairs is not sufficient to provide a mapping that generalizes in a specific way. The brain chooses a specific algorithm that generalizes well.
tmoldwin.bsky.social
(Consider that one can create an arbitrary mapping between a set of images and a set of two labels, so the choice of a specific mapping is a reduction of entropy and thus constitutes information.)
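[Editor's note: a minimal sketch of the counting argument above, with a made-up number of images. With N images and two labels there are 2^N possible labelings, so committing to one specific labeling corresponds to N bits.]

```python
import math

# Toy illustration of the counting argument: with N images and two labels
# there are 2**N possible labelings, so picking one specific labeling
# reduces entropy by log2(2**N) = N bits. N here is a hypothetical value.
N = 10
num_mappings = 2 ** N
bits_to_specify_mapping = math.log2(num_mappings)
print(f"{num_mappings} possible labelings -> {bits_to_specify_mapping:.0f} bits to pick one")
```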
tmoldwin.bsky.social
The set of weights that correctly classifies images as cats or dogs contains information that is not contained either in the set of training images or in the set of labels.
tmoldwin.bsky.social
Learning can generate information about the *mapping* between the object and the category. It doesn't generate information about the object (by itself) or the category (by itself), but the mapping is not subject to the data processing inequality that applies to the data or the category individually.
tmoldwin.bsky.social
GPT is already pretty good at this. Maybe not perfect, but possibly as good as the median academic.
tmoldwin.bsky.social
What do you mean by 'generate information'? What is an example of someone making this sort of claim?
tmoldwin.bsky.social
Paying is best. Reviews should mostly be done by advanced grad students/postdocs who could use the cash.
tmoldwin.bsky.social
Why wouldn't you want your papers to be LLM-readable?
tmoldwin.bsky.social
If such a value to society exists, it should not be difficult for the PhD student to figure out how to articulate it themselves. A lack of independence of thought when it comes to this sort of thing would be much more concerning.
tmoldwin.bsky.social
Oh you were on that? Small world.
tmoldwin.bsky.social
But I do think that, in our efforts to engage with the previous work on this, we made this paper overly long and technical. We present the bottom-line formulation of the plasticity rule in the Calcitron paper.
tmoldwin.bsky.social
I know that e.g. Yuri Rodrigues has a paper that incorporates second messengers, but at that point it's not really parsimonious anymore.
tmoldwin.bsky.social
The leading theory for plasticity is calcium control, which I've done some work on. I do think that I've contributed on that front with the Calcitron and the FPLR framework, which came out in the past few months. Anything beyond calcium control gets into simulation territory.
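[Editor's note: a rough sketch of a generic two-threshold calcium-control rule in the spirit of Shouval-style models, not the exact FPLR/Calcitron formulation; all thresholds and rates are made up for illustration.]

```python
# Generic two-threshold calcium-control plasticity rule (illustrative only):
# calcium below theta_d leaves the weight unchanged, calcium between theta_d
# and theta_p depresses it, and calcium above theta_p potentiates it.
theta_d, theta_p = 0.5, 1.0   # hypothetical depression / potentiation thresholds
eta = 0.1                     # hypothetical learning rate

def dw(calcium, w):
    if calcium < theta_d:
        return 0.0                  # below depression threshold: no change
    elif calcium < theta_p:
        return -eta * w             # depression region: weight decays toward 0
    else:
        return eta * (1.0 - w)      # potentiation region: weight grows toward 1

w = 0.4
for ca in [0.2, 0.7, 1.3]:
    w += dw(ca, w)
    print(f"Ca = {ca:.1f} -> w = {w:.3f}")
```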
tmoldwin.bsky.social
The reason it's less active now is that people kind of feel that single-neuron theory has been solved. The LIF/cable theory models are still pretty much accepted. Any additional work would almost necessarily add complexity, and that complexity is mostly not needed for 'theory' questions.
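[Editor's note: as a reference point for the "solved" single-neuron models mentioned above, a minimal leaky integrate-and-fire simulation in its standard textbook form; all parameter values are arbitrary illustrative choices.]

```python
# Minimal leaky integrate-and-fire (LIF) neuron with constant input current.
dt, T = 0.1, 100.0                                            # ms
tau_m, v_rest, v_thresh, v_reset = 10.0, -70.0, -54.0, -70.0  # ms, mV
R, I = 10.0, 2.0                                              # MOhm, nA

v = v_rest
spike_times = []
for step in range(int(T / dt)):
    # Membrane equation: tau_m * dV/dt = -(V - v_rest) + R * I
    v += dt / tau_m * (-(v - v_rest) + R * I)
    if v >= v_thresh:
        spike_times.append(step * dt)
        v = v_reset
print(f"{len(spike_times)} spikes in {T:.0f} ms")
```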
tmoldwin.bsky.social
Hebbian learning? Associative attractor networks (e.g. Hopfield)? Calcium control hypothesis? Predictive coding? Efficient coding? There are textbooks about neuro theory.
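[Editor's note: a tiny illustration of two of the topics named above, Hebbian storage in a Hopfield-style attractor network and recall from a corrupted cue; the network size and noise level are made up.]

```python
import numpy as np

# Store one binary pattern with the Hebbian outer-product rule, then recover
# it from a corrupted cue via repeated sign updates.
rng = np.random.default_rng(0)
N = 50
pattern = rng.choice([-1, 1], size=N)

W = np.outer(pattern, pattern).astype(float)  # Hebbian weights
np.fill_diagonal(W, 0.0)                      # no self-connections

cue = pattern.copy()
flipped = rng.choice(N, size=10, replace=False)
cue[flipped] *= -1                            # corrupt 10 of 50 entries

state = cue.astype(float)
for _ in range(5):                            # synchronous updates
    state = np.sign(W @ state)
    state[state == 0] = 1
print("overlap with stored pattern:", int(state @ pattern), "/", N)
```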
tmoldwin.bsky.social
I kind of like the size of the single-neuron theory community; it's the right size. The network theory community is IMHO way too big; there are like thousands of papers about Hopfield networks, and that's probably too much.
tmoldwin.bsky.social
Not really true, there are a bunch of people doing work on e.g. single neuron biophysics, plasticity models, etc. Definitely not as big of a field but we exist.