Brad Aimone
@jbimaknee.bsky.social
570 followers 150 following 260 posts
Computational neuroscientist-in-exile; computational neuromorphic computing; putting neurons in HPC since 2011; dreaming of a day when AI will actually be brain-like.
jbimaknee.bsky.social
We are trying hard to get Science Bluesky to the network effects of the good old days!
jbimaknee.bsky.social
Neuroscientists allowing other fields to define what details are relevant has been a total disaster. No other discipline would outsource a question of such great importance.
jbimaknee.bsky.social
Machine learning is kind of a hack here, because it says "we can't actually define what language processing/image classification/etc. is, but we can use data to give us that constraint. So let us just approximate this unknown function that we can't define well."

It works, but it is inefficient.
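A minimal sketch of that point, assuming a toy regression problem as a stand-in for the underspecified task (the function, data, and tiny network here are invented for illustration): we never write down what the task "is," we only fit a generic approximator to samples of it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend this is the task we "can't define well"; we only ever see samples of it.
def unknown_task(x):
    return np.sin(3 * x) + 0.5 * x

# The data is the only constraint we get.
X = rng.uniform(-2, 2, size=(200, 1))
y = unknown_task(X) + 0.1 * rng.normal(size=X.shape)

# Generic approximator: one hidden layer, tanh nonlinearity.
W1 = rng.normal(scale=0.5, size=(1, 32)); b1 = np.zeros(32)
W2 = rng.normal(scale=0.5, size=(32, 1)); b2 = np.zeros(1)

lr = 0.05
for step in range(2000):
    h = np.tanh(X @ W1 + b1)               # forward pass
    pred = h @ W2 + b2
    err = pred - y                          # squared-error residual
    # backprop through the two layers
    dW2 = h.T @ err / len(X); db2 = err.mean(axis=0)
    dh = err @ W2.T * (1 - h**2)
    dW1 = X.T @ dh / len(X); db1 = dh.mean(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

# The fitted function approximates the task without anyone ever defining it.
print(float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2)))
```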
jbimaknee.bsky.social
Incidentally, my personal favorite interpretation of advancements in computer science is that most of the progress we've made has been in defining the problems themselves. Many successes correlate with clearer definitions of what "good" means, or with abstracting the task away from weak definitions.
jbimaknee.bsky.social
This is a great thread, and I think it hits on one of the biggest challenges in neuroscience that I hope NeuroAI can impact.

Until we can rigorously define what computations are occurring in the brain, we can't make any real progress in our functional understanding. We need constraints. 🧠🤖🧪
gershbrain.bsky.social
I once saw a (very interesting) talk about sleep in which the speaker started by saying that we don't really know how to define sleep, and then proceeded to operationalize sleep in flies as basically periods when they are still for a long time. This got me thinking...
jbimaknee.bsky.social
I wouldn't call myself a Markramian but I am most certainly an anti-Marrian. The Marr tradition has allowed cog sci and AI to ignore biology and biologists to ignore computation. So while we know a lot about AI and a lot about the biology, we know little about the Brain. It's been a total disaster.
jbimaknee.bsky.social
I don't think it is quite a fair comparison. The vast majority of systems neuroscientists have been banging some version of the Marr drum for 50 years. Maybe a few dozen have really tried the Markram approach?

Setting aside Markram's personality clashes, I think the jury is still out on that approach.
jbimaknee.bsky.social
I was amazed when I learned a few years ago that many English academics have an "at work accent" and a "relaxed at home around family and local friends accent" that are extremely different, and that they can turn them on and off as needed.
jbimaknee.bsky.social
Wikipedia, which is probably the best internet consolidation of science for the masses, is heavily biased at the edges towards those self-serving scientists who want to hype themselves.
jbimaknee.bsky.social
So clearly they shouldn't replicate and sell copyrighted material. But if you read my paper and incorporate that into your thinking, that isn't something you need to ask permission for.

Accessing it illegally is a different story. But this isn't an assault on Science; I'd argue it is almost ideal...
jbimaknee.bsky.social
To be fair, isn't this why we publish? To get our ideas integrated into the universal human knowledge base?

It would be far worse if Meta's AI were trained *without* your papers, wouldn't it?
jbimaknee.bsky.social
We comp neuros can't make anyone happy can we?
jbimaknee.bsky.social
I'm just bitter about all of the AI crowd and their strawman about planes not having feathers.

The modern AI approach would be to put a nuclear fueled engine on the back of that wooden airplane. And then condescendingly mock neuroscientists by saying the wings aren't actually important either.
jbimaknee.bsky.social
Yeah, models don't have to be perfect. But details aren't inherently bad; they just may not be relevant to your task and may be needed elsewhere.

We see this acutely in AI. Turns out the brain's details weren't that important for image classification. So now we have LLMs that are 1,000,000x too big.
jbimaknee.bsky.social
So the value function often becomes something like "simpler is better" or "focus on what you understand," which are orthogonal to "important for the brain."
jbimaknee.bsky.social
This is always a dangerous point when it comes to the brain, as this assumes you have a clear and proper objective function

My opinion is that we have little idea what "flight" even means for neural computation, so we have little basis for determining which details are important.
jbimaknee.bsky.social
We aren't that far removed from neuroscientists celebrating the perceived failure of the Human Brain Project and its computational goals.
jbimaknee.bsky.social
I'm not sure; there is definitely a similarity between neuromorphic and FPGAs in terms of how algorithms should be represented, but I don't know much about the theory around FPGAs.
jbimaknee.bsky.social
From a serial-complexity standpoint on a conventional platform, I think you're right. For a truly event-driven dataflow architecture like neuromorphic, I don't think there should have to be a program counter or anything similar (there isn't a program per se). Not sure what is really under the hood on today's platforms.
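A minimal sketch of what that event-driven picture looks like, assuming a toy spiking network with invented weights and delays (not a description of any actual neuromorphic platform): work is pulled from a spike-event queue rather than stepped through by a program counter.

```python
import heapq

# Hypothetical tiny network: neuron -> list of (target, weight, delay)
synapses = {
    0: [(1, 1.2, 1.0), (2, 0.4, 2.0)],
    1: [(2, 0.7, 1.0)],
    2: [],
}
threshold = 1.0
potential = {n: 0.0 for n in synapses}

# Event queue holds (time, target_neuron, weight); seed it with an input spike.
events = [(0.0, 0, 1.5)]

while events:                        # "running" = draining the event queue
    t, neuron, w = heapq.heappop(events)
    potential[neuron] += w
    if potential[neuron] >= threshold:
        potential[neuron] = 0.0      # fire and reset
        print(f"t={t:.1f}: neuron {neuron} spikes")
        for target, weight, delay in synapses[neuron]:
            heapq.heappush(events, (t + delay, target, weight))
```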
jbimaknee.bsky.social
Sitting at the ModSim meeting this week, it is clear that every other scientific field has benefited more from advanced computing than neuroscience. Those of us who have tried to do large scale modeling know who has been dismissive in the past. The lack of serious computing in neuro is neuro's fault
jbimaknee.bsky.social
An interesting comment and discussion that gets at the deeper question of whether NeuroAI is a move towards something new or a repackaging of old, tired approaches with a shiny AI paint job.
jdcrawford.bsky.social
This statement is frustrating because systems neuroscientists (e.g. @gunnarblohm.bsky.social) have spent many years trying to build biologically realistic, mechanistic network models that are largely ignored outside a small community. And now NeuroAI is going to discover this is important?
dlevenstein.bsky.social
“NeuroAI should not remain limited to learning statistical relationships, but should also help in building mechanistic and causal models of neural activity. These models will incorporate biological properties of neural circuits, including cellular characteristics and network properties.”

💯💯💯
jbimaknee.bsky.social
Traffic in Seattle is really bad...
jbimaknee.bsky.social
I worry that serious neuroscientists avoid "emotion" as a rigorous area of study because it seems too close to the quack scientists who fight about consciousness

Yet emotion is central to the major neural health challenges we face as a society. People aren't on SSRIs because their V2 is misbehaving
jbimaknee.bsky.social
You can get really good store guac at a few places in Texas. Sometimes safer than buying avocados which can be random.

I feel this way about all pasta sauces, though. A clove of garlic and a can of tomatoes makes a better marinara than any jarred stuff.