Gregory Marton
@gregory-marton.bsky.social
Reposted by Gregory Marton
🚨 New paper alert 🚨

Ever asked an LLM-as-Marilyn Monroe who the US president was in 2000? 🤔 Should the LLM answer at all? We call these clashes Concept Incongruence. Read on! ⬇️

1/n 🧵
May 27, 2025 at 1:59 PM
Reposted by Gregory Marton
Do LLMs Think Like Humans?

They find that while LLMs achieve broad categorical alignment with human judgment, they falter in capturing fine-grained semantic nuances such as typicality and, critically, exhibit vastly different representational efficiency profiles.
May 26, 2025 at 2:19 PM
Reposted by Gregory Marton
Kudos to @nytimes.com for covering ARC-AGI in such an exquisite example of interactive data journalism. Amazing spot for @fchollet.bsky.social as well.
www.nytimes.com/interactive/...
Are You Smarter Than A.I.?
Some experts predict that A.I. will surpass human intelligence within the next few years. Play this puzzle to see how far the machines have to go.
www.nytimes.com
April 1, 2025 at 12:41 AM
Reposted by Gregory Marton
The funny thing about multimodal image generation as released in the last week by Google and OpenAI is that now LLM image generation works like how most people using LLMs for the past two years always thought LLM image generation works.
March 26, 2025 at 1:17 AM
Reposted by Gregory Marton
If you have used LLM image generators, you know they are hard to control: the LLM had to send a prompt to a separate image-generation tool; it did not make the image itself.

Gemini is the first public release of a full multimodal LLM that can directly make images. This allows the system to do detail work
March 13, 2025 at 12:49 AM
Reposted by Gregory Marton
📊 #dataviz Putting the people 👨‍👨‍👧‍👦 back into charts that talk about them.

An interesting histogram of numeracy scores for U.S. vs. some other countries,
using Alberto Cairo's [weepeople font](github.com/propublica/w...) to show the people involved in these distributions.
Src: bit.ly/3FrDq0v
👇
March 9, 2025 at 5:47 PM
Reposted by Gregory Marton
As part of the Science Homecoming project I wrote an opinion piece for the Albuquerque Journal on the dangers of cuts to federal science funding:

www.abqjournal.com/opinion/arti...

@sciencehomecoming.bsky.social
@cantlonlab.bsky.social
@spiantado.bsky.social
COLUMN: Science funding cuts threaten economy
If you’ve ever been treated for a medical problem, used a cellphone or a computer, or been excited by robots exploring Mars or gene-editing therapy, then your life has been
www.abqjournal.com
March 9, 2025 at 6:05 PM
Reposted by Gregory Marton
The public really needs to understand this. Every university system in the world rests on public funding; there has never been an alternative model at any time in history.

We have universities for literally the same reason that we have roads and armies.
I don't disagree that the US university business model was/is fragile, but surely not for this reason?

There is no society in the history of humanity that has successfully built a good university system without massive govt subsidies. The US had already pushed the idea very far.
March 8, 2025 at 8:02 PM
Reposted by Gregory Marton
1/13 LLM circuits tell us where the computation happens inside the model—but the computation varies by token position, a key detail often ignored!
We propose a method to automatically find position-aware circuits, improving faithfulness while keeping circuits compact. 🧵👇
March 6, 2025 at 10:15 PM
Reposted by Gregory Marton
This is a crazy paper. Fine-tuning a big GPT-4o on a small amount of insecure code or even "bad numbers" (like 666) makes it misaligned in almost everything else. It becomes more likely to offer misinformation, spout anti-human values, and talk about admiring dictators. Why is unclear.
February 25, 2025 at 9:01 PM
Removing the rationales results in better performance, and that's surprising because it feels different from how humans learn. Perhaps relatedly, though anecdotally, telling e.g. an image generator what you didn't like about the previous response results in more, not less, of what you didn't like.
Do LLMs need rationales for learning from mistakes? 🤔
When LLMs learn from previous incorrect answers, they typically observe corrective feedback in the form of rationales explaining each mistake. In our new preprint, we find these rationales do not help, in fact they hurt performance!

🧵
February 14, 2025 at 10:10 AM
Reposted by Gregory Marton
you fucked up a perfectly good computer is what you did. look at it. it's got innumeracy
February 12, 2025 at 7:36 PM
«Kim pointed to newer introductory offerings such as “Python for Humanities and Social Sciences,” “AI for Future Presidents” and “C Programming Language and Linux.”» and it's still available free online www.edx.org/cs50
Love the homage to Richard Muller, too!
“This was CS50”: Yale ends largest computer science course
After a decade of partnership with Harvard, Yale’s CS50 course will no longer be offered starting in fall 2025 due to limited funding and an expanding computer science department.
yaledailynews.com
February 6, 2025 at 9:47 AM
Reposted by Gregory Marton
Are linguists paying a lot of attention to LLMs? Because this seems like a fascinating finding with large implications: LLMs share highly abstract grammatical concept representations, even across unrelated languages, so even models trained mostly on English do well in other languages.
February 6, 2025 at 2:24 AM
Reposted by Gregory Marton
Tech oligarchs made their fortunes thanks in large part to government funded research done by scientists based in universities. The tech industry’s complicity in dismantling these govt agencies and higher ed is not only immoral, it’s also shortsighted. Where will new science breakthroughs come from?
February 5, 2025 at 10:46 PM
Reposted by Gregory Marton
We launched a bunch of Gemini 2.0 models today. Compared to the 1.5 series models, each of the 2.0 models is generally better than the "one size up" model in the 1.5 series.

2.0 Flash & Flash-Lite set new standards in the quality/cost Pareto frontier.

More details:
blog.google/technology/g...
February 5, 2025 at 11:45 PM
Reposted by Gregory Marton
open-Deep-Research by huggingface
as posted by @aymeric-roucher.bsky.social

An entirely open agent that can: navigate the web autonomously, scroll and search through pages, download and manipulate files, run calculations on data...
February 4, 2025 at 8:47 PM
Reposted by Gregory Marton
Exa & Deepseek R1 Chat App

Exa & Deepseek Chat App is a free and open-source chat app that uses Exa's API for web search and Deepseek R1 LLM for reasoning.

github.com/exa-labs/exa...
GitHub - exa-labs/exa-deepseek-chat: A simple open-source chat app that uses Exa's API for web search and Deepseek R1 for reasoning
A simple open-source chat app that uses Exa's API for web search and Deepseek R1 for reasoning - exa-labs/exa-deepseek-chat
github.com
February 1, 2025 at 8:01 AM
Reposted by Gregory Marton
The Internet Archive has to date downloaded 500 terabytes of US government websites, which it crawls at the end of every presidential term. The whole archive is fully searchable. This effort's housed by a donation-funded nonprofit, not a branch of the US government. blog.archive.org/2024/05/08/e...
End of Term Web Archive – Preserving the Transition of a Nation | Internet Archive Blogs
blog.archive.org
February 1, 2025 at 12:58 AM
Reposted by Gregory Marton
Researchers claim Linux kernel tweak could reduce data center energy use by 30% https://www.techspot.com/news/106501-linux-kernel-upgrade-promises-up-30-energy-savings.html #AI #climate
January 30, 2025 at 1:31 AM
Reposted by Gregory Marton
As someone who has reported on AI for 7 years and covered China tech as well, I think the biggest lesson to be drawn from DeepSeek is the huge cracks it illustrates with the current dominant paradigm of AI development. A long thread. 1/
January 27, 2025 at 2:12 PM
Reposted by Gregory Marton
Explainer: What's R1 and Everything Else

This is an attempt to consolidate the dizzying rate of AI developments since Christmas. If you're into AI but not deep enough, this should get you oriented again.

timkellogg.me/blog/2025/01...
January 26, 2025 at 3:17 AM
Reposted by Gregory Marton
I'm not sure if people realize how quickly the Trumpzis can do enormous damage to US science, from basic research to translation. Really fast. REALLY fast. Labs with decades of irreplaceable domain and technique knowledge can break apart with a surprisingly short funding gap. When they're gone...1/
January 23, 2025 at 10:13 PM