Greg Veramendi
@greg-veramendi.bsky.social
1.5K followers 1.5K following 60 posts
Labor economist. Assoc. Prof. at @RHULECON. Previously experimental particle physicist @Fermilab. He/him/his. gregveramendi.github.io
Reposted by Greg Veramendi
trevondlogan.bsky.social
I read @itsafronomics.bsky.social's new book “The Double Tax” this afternoon. It’s engaging and one of the best popular ways to bring social science to the public and policy that I’ve read in a long while. Black women are uniquely disadvantaged in our economy. 😍 she gave Janelle James her flowers!
greg-veramendi.bsky.social
Saying the quiet part out loud, again.
mclem.org
This is the official spokesperson for the White House, stating that the President hopes to deploy the United States military to occupy every city controlled by the political party that opposes him. Explicitly.
atrupar.com
Leavitt: "These are the bad guys that we are picking up in Washington DC every day. The president would love to do this in every Democrat-run city across the country."
Reposted by Greg Veramendi
kaseybuckles.bsky.social
Please join me in signing and sharing:
tderyugina.bsky.social
The letter is ready, thanks to all those who helped out! Starting to gather signatures now, please consider signing (link at top of letter) & spread the word.

docs.google.com/document/d/1...
Reposted by Greg Veramendi
kaseybuckles.bsky.social
For Day 2 of our celebration of work using the Census Tree we highlight @lukestein.com & co's work on the gendered impacts of perceived skin tone. They find that among African Amer. sisters, women perceived to be darker-skinned were disadvantaged.

*They also cite a great QJE pub by Lisa Cook et al*
greg-veramendi.bsky.social
Worth a read if you don't already follow Phil. TLDR:
greg-veramendi.bsky.social
what gif pops up when you type your name?
greg-veramendi.bsky.social
Hey Luna, what do you mean when you say "i got too excited"?
Reposted by Greg Veramendi
ebharrington.bsky.social
"Young political vigilantes" have historically been very useful to dictatorships--from Berlin to Beijing--because they gleefully engage in mass violent destruction of social institutions whose value and purpose they do not understand.

Ex: Mao's Red Guard:
www.theguardian.com/world/2023/j...
Reposted by Greg Veramendi
economeager.bsky.social
i understand the american people on average not reading history books what with the failing education system etc but what the fuck is the university leadership's excuse
brendannyhan.bsky.social
Christina Paxson, Brown's president, signed the AACU letter in April denouncing "the unprecedented government overreach and political interference now endangering American higher education" www.browndailyherald.com/article/2025...

She took a "deal" with Trump anyway.
Reposted by Greg Veramendi
gillwyness.bsky.social
📝New @cepeo-ucl.bsky.social working paper

Why are students from elite high schools much more likely to go to high ranked university courses than equally qualified students from the state sector? 🤔

w @opmc1.bsky.social @lindseymacmillan.bsky.social & Jo Blanden

econpapers.repec.org/RePEc:ucl:ce...
greg-veramendi.bsky.social
When an English person shares how they feel about good news: "I feel, um, unnecessarily emotional about it."

from Thames Water documentary. Worth a watch.
greg-veramendi.bsky.social
This 👇
snig.bsky.social
The top skills for students today are critical thinking and fact checking, not just to deal with media misinformation, but also to verify AI outputs.
Reposted by Greg Veramendi
gillwyness.bsky.social
Recognising the growing difference between predicted grades and achieved grades over time, UCAS are piloting personalised reports to schools

These will show schools how their UCAS predicted grades compare to achieved results

www.ucas.com/corporate/ne...
UCAS pilots new reports to help teachers strengthen grade predictions and support student choice | UCAS
www.ucas.com
Reposted by Greg Veramendi
aaronsojourner.org
Author of "One Long Night: A Global History of Concentration Camps"
👇
andreapitzer.bsky.social
"The Nazis... imagined their targets would self-deport. Once the myth of self-deportation collapsed, they turned to more punitive measures. On Tuesday, Noem similarly noted the Everglades camp was meant to frighten immigrants into self-deporting. 'If you don’t,' she said, 'you may end up here.'"
Opinion | Don’t call it ‘Alligator Alcatraz.’ Call it a concentration camp.
This facility’s purpose fits the classic model, and its existence points to serious dangers ahead for the country.
www.msnbc.com
Reposted by Greg Veramendi
selfdz.bsky.social
Yesterday, @carlbergstrom.com presented his course "Modern-day Oracles or BS Machines? How to thrive in a ChatGPT world". Neat way to make students aware of the capacities and limitations of LLMs👇🎰

Recording: www.youtube.com/watch?v=TZC0...

Course: thebullshitmachines.com

@unswbabs.bsky.social
Modern Day Oracles or Bullshit Machines? Seminar with Prof. Carl Bergstrom, University of Washington
YouTube video by UNSW eLearning
www.youtube.com
Reposted by Greg Veramendi
gillwyness.bsky.social
Is it better to have standardised exams or teacher assessments?

My colleague @opmc1.bsky.social and I have read the literature so you don't have to. Our @iza.org World of Labor article summarises it below #econsky
greg-veramendi.bsky.social
This study by MIT authors shows students using LLMs exhibit decreased cognitive activity. I admit that it is a small sample and confirms my priors. Lots of crazy shit happening atm, but this still worries me a lot as an educator and when thinking about the future. www.brainonllm.com
arxiv.org
Reposted by Greg Veramendi
astrokatie.com
Chatbots — LLMs — do not know facts and are not designed to be able to accurately answer factual questions. They are designed to find and mimic patterns of words, probabilistically. When they’re “right” it’s because correct things are often written down, so those patterns are frequent. That’s all.
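To make the point concrete, here is a toy sketch (my own illustration, not any actual model's code) of next-word prediction driven purely by how often word patterns appear in a corpus, with no notion of facts:

```python
# Toy illustration: "predict" the next word purely from how often word pairs
# appear in a tiny corpus -- no knowledge of facts, only pattern frequencies.
import random
from collections import Counter, defaultdict

corpus = (
    "the sky is blue . the sky is blue . the sky is green . "
    "water is wet . water is wet ."
).split()

# Count how often each word follows each preceding word (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev: str) -> str:
    """Sample the next word in proportion to how often it followed `prev`."""
    counts = follows[prev]
    words = list(counts)
    weights = [counts[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

# "is" is usually followed by "blue" or "wet" only because those patterns are
# frequent in the corpus, not because the model knows anything about the world.
print(Counter(next_word("is") for _ in range(1000)))
```

Real LLMs condition on far longer contexts with learned weights rather than raw counts, but the underlying mechanism is the same: frequent patterns come out looking "right".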
Reposted by Greg Veramendi
carlbergstrom.com
This is the third story I've read in a month about how AI chatbots are leading people into psychological crises.

Gift link
They Asked an A.I. Chatbot Questions. The Answers Sent Them Spiraling.
www.nytimes.com
greg-veramendi.bsky.social
I wonder what this guy is thinking as he takes a potshot at a journalist. Where do they find people happy to violate their oath?
luckytran.com
LAPD fired rubber bullets at Australian journalist @laurentomasi.bsky.social
Reposted by Greg Veramendi
carlbergstrom.com
If I have time I'll put together a more detailed thread tomorrow, but for now, I think this new paper about limitations of Chain-of-Thought models could be quite important. Worth a look if you're interested in these sorts of things.

ml-site.cdn-apple.com/papers/the-i...
The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity
Parshin Shojaee, Iman Mirzadeh, Keivan Alizadeh, Maxwell Horton, Samy Bengio, Mehrdad Farajtabar (Apple)
Abstract
Recent generations of frontier language models have introduced Large Reasoning Models (LRMs) that generate detailed thinking processes before providing answers. While these models demonstrate improved performance on reasoning benchmarks, their fundamental capabilities, scaling properties, and limitations remain insufficiently understood. Current evaluations primarily focus on established mathematical and coding benchmarks, emphasizing final answer accuracy. However, this evaluation paradigm often suffers from data contamination and does not provide insights into the reasoning traces’ structure and quality. In this work, we systematically investigate these gaps with the help of controllable puzzle environments that allow precise manipulation of compositional complexity while maintaining consistent logical structures. This setup enables the analysis of not only final answers but also the internal reasoning traces, offering insights into how LRMs “think”. Through extensive experimentation across diverse puzzles, we show that frontier LRMs face a complete accuracy collapse beyond certain complexities. Moreover, they exhibit a counterintuitive scaling limit: their reasoning effort increases with problem complexity up to a point, then declines despite having an adequate token budget. By comparing LRMs with their standard LLM counterparts under equivalent inference compute, we identify three performance regimes: (1) low-complexity tasks where standard models surprisingly outperform LRMs, (2) medium-complexity tasks where additional thinking in LRMs demonstrates advantage, and (3) high-complexity tasks where both models experience complete collapse. We found that LRMs have limitations in exact computation: they fail to use explicit …