Bradley Love
@profdata.bsky.social
4.1K followers 740 following 51 posts
Senior research scientist at Los Alamos National Laboratory. Former UCL, UTexas, Alan Turing Institute, Ellis EU. CogSci, AI, Comp Neuro, AI for scientific discovery https://bradlove.org
profdata.bsky.social
[email protected], send to me, or send directly to the Met (London police), who are investigating: www.met.police.uk. I could see this being super distressing for a vulnerable person, so I hope this does not become more common. For me, it's been an exercise in rapidly learning to not care! 2/2
Home
Your local police force - online. Report a crime, contact us and other services, plus crime prevention advice, crime news, appeals and statistics.
www.met.police.uk
profdata.bsky.social
Some UK dude is trying to extort me, demanding money to not spread made-up stories. I reported to the police after getting flooded with phone messages I never listen to, etc. @bsky.app has been good about deleting his posts and accounts. If contacted, don't interact, but instead report to...1/2
profdata.bsky.social
New blog w @ken-lxl.bsky.social, “Giving LLMs too much RoPE: A limit on Sutton’s Bitter Lesson”. The field has shifted from flexible data-driven position representations to fixed approaches following human intuitions. Here’s why and what it means for model performance bradlove.org/blog/positio...
Giving LLMs too much RoPE: A limit on Sutton’s Bitter Lesson — Bradley C. Love
Introduction Sutton’s Bitter Lesson (Sutton, 2019) argues that machine learning breakthroughs, like AlphaGo, BERT, and large-scale vision models, rely on general, computation-driven methods that prior...
bradlove.org
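For context on the post above, here is a minimal NumPy sketch of rotary position embeddings (RoPE), the fixed, human-designed positional scheme the blog contrasts with data-driven alternatives. This is an illustration only, not code from the blog; the dimensions and base frequency are the usual defaults, assumed here.

```python
# Minimal sketch of rotary position embeddings (RoPE); illustration only.
import numpy as np

def rope(x, base=10000.0):
    """Apply rotary position embeddings to x of shape (seq_len, dim), dim even."""
    seq_len, dim = x.shape
    # One fixed rotation frequency per pair of dimensions (nothing is learned).
    inv_freq = base ** (-np.arange(0, dim, 2) / dim)        # (dim/2,)
    angles = np.outer(np.arange(seq_len), inv_freq)         # (seq_len, dim/2)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, 0::2], x[:, 1::2]                         # paired dimensions
    out = np.empty_like(x)
    out[:, 0::2] = x1 * cos - x2 * sin                      # rotate each pair by position * freq
    out[:, 1::2] = x1 * sin + x2 * cos
    return out

q = rope(np.random.randn(8, 64))   # queries with position injected by rotation
k = rope(np.random.randn(8, 64))   # q @ k.T then depends on relative position only
```

Because the rotation angles are a fixed function of position, the positional representation follows human intuition rather than being learned from data, which is the tension with the Bitter Lesson that the blog discusses.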
profdata.bsky.social
New blog, "Backwards Compatible: The Strange Math Behind Word Order in AI" w @ken-lxl.bsky.social It turns out the language learning problem is the same for any word order, but is that true in practice for large language models? paper: arxiv.org/abs/2505.08739 BLOG: bradlove.org/blog/prob-ll...
https://bradlove.org/blog/prob-llm-consistency
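The identity behind "the same for any word order" is the chain rule applied in different orders; here is the forward/backward case as a sketch (my paraphrase, see the paper for the general scrambled-order statement):

```latex
P(w_1,\dots,w_n)
  \;=\; \prod_{t=1}^{n} P\!\left(w_t \mid w_1,\dots,w_{t-1}\right)
  \;=\; \prod_{t=1}^{n} P\!\left(w_t \mid w_{t+1},\dots,w_n\right)
```

So a perfectly trained next-token model and a perfectly trained previous-token model would assign identical probabilities to every full sequence; any disagreement reflects estimation error, not the learning problem itself.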
profdata.bsky.social
When LLMs diverge from one another because of word order (data factorization), it indicates their probability distributions are inconsistent, which is a red flag (not trustworthy). We trace deviations to self-attention positional and locality biases. 2/2 arxiv.org/abs/2505.08739
Probability Consistency in Large Language Models: Theoretical Foundations Meet Empirical Discrepancies
Can autoregressive large language models (LLMs) learn consistent probability distributions when trained on sequences in different token orders? We prove formally that for any well-defined probability ...
arxiv.org
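A toy illustration of the consistency check referred to above (not the paper's code): rebuild a small joint distribution from its forward and backward factorizations, confirm they agree exactly, then perturb one set of conditionals, as estimation error would, and watch the agreement break.

```python
# Toy consistency check over length-2 sequences; illustration only.
import numpy as np

rng = np.random.default_rng(0)
V = 3                                                  # toy vocabulary size
joint = rng.dirichlet(np.ones(V * V)).reshape(V, V)    # ground-truth P(w1, w2)

# Forward factorization: P(w1) * P(w2 | w1)
p_w1 = joint.sum(axis=1)
p_w2_given_w1 = joint / p_w1[:, None]
forward = p_w1[:, None] * p_w2_given_w1

# Backward factorization: P(w2) * P(w1 | w2)
p_w2 = joint.sum(axis=0)
p_w1_given_w2 = joint / p_w2[None, :]
backward = p_w1_given_w2 * p_w2[None, :]

print(np.allclose(forward, backward))   # True: the two factorizations must agree

# A trained LLM only estimates the conditionals; perturbing one factorization
# breaks the agreement, which is the red flag described in the post.
noisy = p_w1_given_w2 * np.exp(0.1 * rng.normal(size=joint.shape))
noisy = noisy / noisy.sum(axis=0, keepdims=True) * p_w2[None, :]
print(np.abs(forward - noisy).max())    # nonzero divergence
```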
profdata.bsky.social
"Probability Consistency in Large Language Models: Theoretical Foundations Meet Empirical Discrepancies"
Oddly, we prove LLMs should be equivalent for any word ordering: forward, backward, scrambled. In practice, LLMs diverge from one another. Why? 1/2 arxiv.org/abs/2505.08739
Probability Consistency in Large Language Models: Theoretical Foundations Meet Empirical Discrepancies
Can autoregressive large language models (LLMs) learn consistent probability distributions when trained on sequences in different token orders? We prove formally that for any well-defined probability ...
arxiv.org
profdata.bsky.social
"Coordinating multiple mental faculties during learning" There's lots of good work in object recognition and learning, but how do we integrate the two? Here's a proposal and model that is more interactive than perception provides the inputs to cognition. www.nature.com/articles/s41...
Coordinating multiple mental faculties during learning - Scientific Reports
Scientific Reports - Coordinating multiple mental faculties during learning
www.nature.com
Reposted by Bradley Love
eringrant.me
Last year, we funded 250 authors and other contributors to attend #ICLR2024 in Vienna as part of this program. If you or your organization want to directly support contributors this year, please get in touch! Hope to see you in Singapore at #ICLR2025!
iclr-conf.bsky.social
Financial Assistance applications are now open! If you face financial barriers to attending ICLR 2025, we encourage you to apply. The program offers prepay and reimbursement options. Applications are due March 2nd with decisions announced March 9th. iclr.cc/Conferences/...
ICLR 2024 Financial Assistance
iclr.cc
profdata.bsky.social
Thanks @hossenfelder.bsky.social for covering our recent paper, doi.org/10.1038/s415... Also, I want to spotlight this excellent podcast (19 minutes long) with Nicky Cartridge covering how AI will impact science and healthcare in the coming years, touchneurology.com/podcast/brai...
profdata.bsky.social
A 7B model is small enough to train efficiently on 4 A100s (thanks Microsoft), and at the time Mistral performed relatively well for its size.
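A rough back-of-envelope on why that fits, assuming LoRA fine-tuning in bf16 (the setup described later in this thread); the adapter size and byte counts below are illustrative assumptions, not the paper's exact configuration.

```python
# Back-of-envelope memory estimate for LoRA-tuning a 7B model in bf16.
params_base = 7e9
weights_gb = params_base * 2 / 1e9          # frozen base weights in bf16 (~14 GB)

lora_params = 40e6                           # assumed adapter size
# adapter weights + grads (bf16) + Adam moments + fp32 master copy
lora_gb = lora_params * (2 + 2 + 8 + 4) / 1e9

print(f"base weights: {weights_gb:.0f} GB, LoRA training state: {lora_gb:.1f} GB")
# ~14 GB + well under 1 GB; the rest of 4 x A100 (40 or 80 GB each) is left
# for activations, which is why a 7B model is practical on a small node.
```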
profdata.bsky.social
Yes, the model weights and all materials are openly available. We really want to offer easy-to-use tools people can use through the web without hassle. To do that, we need to do more work (will be announcing an open source effort soon) and need some funding for hosting a model endpoint.
profdata.bsky.social
While BrainBench focused on neuroscience, our approach is science-general, so others can adopt our template. Everything is open weight and open source. Thanks to the entire team and the expert participants. Sign up for news at braingpt.org 8/8
BrainGPT
This is the homepage for BrainGPT, a Large Language Model tool to assist neuroscientific research.
BrainGPT.org
profdata.bsky.social
Finally, LLMs can be augmented with neuroscience knowledge for better performance. We tuned Mistral on 20 years of the neuroscience literature using LoRA. The tuned model, which we refer to as BrainGPT, performed better on BrainBench. 7/8
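For readers who want the general shape of this step, here is a sketch of attaching LoRA adapters to Mistral-7B with Hugging Face peft. The rank, target modules, and dropout below are illustrative assumptions, not the paper's settings.

```python
# Sketch: add LoRA adapters to a frozen Mistral-7B base model.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
config = LoraConfig(
    r=8,                                  # low-rank adapter dimension
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # adapt attention projections only
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)      # base weights stay frozen
model.print_trainable_parameters()        # only a small fraction of 7B is trained
# The adapted model is then trained with the usual causal-LM objective on the
# domain corpus (here, two decades of neuroscience literature).
```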
profdata.bsky.social
In the Nature HB paper, both human experts and LLMs were well calibrated - when they were more certain of their decisions, they were more likely to be correct. Calibration is beneficial for human-machine teaming. 5/8
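A minimal sketch of what such a calibration check looks like (illustrative, not the paper's analysis): bin decisions by stated confidence and compare accuracy within each bin; a well-calibrated responder's accuracy tracks their confidence.

```python
# Reliability-style calibration check for a 2-choice task; illustration only.
import numpy as np

def calibration_bins(confidence, correct, n_bins=5):
    """confidence in [0.5, 1.0]; correct is 0/1 per decision."""
    edges = np.linspace(0.5, 1.0, n_bins + 1)
    edges[-1] += 1e-9                     # include confidence == 1.0 in the top bin
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidence >= lo) & (confidence < hi)
        if mask.any():
            print(f"conf [{lo:.2f}, {hi:.2f}): "
                  f"mean confidence {confidence[mask].mean():.2f}, "
                  f"accuracy {correct[mask].mean():.2f}, n={mask.sum()}")

# Simulated well-calibrated responder: accuracy rises with stated confidence.
rng = np.random.default_rng(0)
conf = rng.uniform(0.5, 1.0, size=2000)
correct = (rng.uniform(size=2000) < conf).astype(float)
calibration_bins(conf, correct)
```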
profdata.bsky.social
All 15 LLMs we considered crushed human experts at BrainBench's predictive task. LLMs correctly predicted neuroscience results (across all sub-areas) dramatically better than human experts, including those with decades of experience. 3/8
profdata.bsky.social
To test, we created BrainBench, a forward-looking benchmark that stresses prediction over retrieval of facts, avoiding LLMs' "hallucination" issue. The task was to predict which version of a Journal of Neuroscience abstract gave the actual result. 2/8
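The decision rule described in this thread is perplexity-based: score each candidate abstract under the LM and pick the lower-perplexity one. A sketch with Hugging Face transformers, using GPT-2 only as a small stand-in for the LLMs actually tested:

```python
# Sketch of perplexity-based choice between two abstract versions; illustration only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def perplexity(text):
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss   # mean token cross-entropy over the option
    return torch.exp(loss).item()

def choose(original_abstract, altered_abstract):
    # Lower perplexity = the model finds that version of the result more plausible.
    return "original" if perplexity(original_abstract) < perplexity(altered_abstract) else "altered"
```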
profdata.bsky.social
"Large language models surpass human experts in predicting neuroscience results" w @ken-lxl.bsky.social
and braingpt.org. LLMs integrate a noisy yet interrelated scientific literature to forecast outcomes. nature.com/articles/s41... 1/8
profdata.bsky.social
Thanks Gary! I have no idea because I don't see how we get anyone to learn over more than a billion tokens. Maybe one could bootstrap some estimate from the perplexity difference between forward and backward, assuming we can get a sense of how that affects learning? Just off the top of my head...
profdata.bsky.social
I am not seeing the issue. Every method is the same, but the text is reversed. We even tokenize separately for forward and backward to make them comparable. Perplexity is calculated over the entire option for the benchmark items. The difficulty doesn't have to be the same - it just turned out that way.
profdata.bsky.social
For backward: Everything is reversed at the character level, including the benchmark items. So, the last character of the last word for each passage is the first and the first character of the first word is last. On the benchmark, as in the forward case, the option with lower perplexity is chosen.
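A tiny sketch of that character-level reversal (illustration only; the example passage is made up):

```python
# Character-level reversal used for the backward condition; illustration only.
def reverse_chars(text: str) -> str:
    return text[::-1]   # last character of the last word becomes the first

passage = "Stimulation of area X increased firing rates."
print(reverse_chars(passage))   # ".setar gnirif desaercni X aera fo noitalumitS"

# The backward model is trained and evaluated entirely on such reversed text
# (with its own tokenizer fit to the reversed corpus); each benchmark option is
# reversed the same way and, as in the forward case, the lower-perplexity
# option is chosen.
```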