Agus 🔎🔸
@agucova.bsky.social
890 followers 750 following 400 posts
Accelerate AI safety. 🔗 agus.sh
Pinned
agucova.bsky.social
just created an EA/EA-adj starter pack: go.bsky.app/HYSG3Hr

(will grow with time)
Reposted by Agus 🔎🔸
eugenevinitsky.bsky.social
This site does unfortunately disabuse you of the notion that careless thinking is confined to a particular ideology
Reposted by Agus 🔎🔸
dystopiabreaker.xyz
anyway, here is 2024 Nobel Prize in Physics winner Geoffrey Hinton discussing what we know about large AI models on 60 Minutes.
agucova.bsky.social
clearly all part of an evil marketing move to… uhm… drain more water?
agucova.bsky.social
It’s ridiculous to pull credentials in this context when literally many if not most of the people who created modern AI disagree with you. Most people working at frontier labs would too.
Reposted by Agus 🔎🔸
dystopiabreaker.xyz
things we know about LLMs and large DL models in general:

- how they are trained (gradient descent)
- the structure into which they are placed (architecture)
- the base arithmetic (matmul, norm, batch norm, and so on)
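The three ingredients in that list can be made concrete in a few lines. This is a minimal illustrative sketch (a toy linear model, not any actual LLM): a fixed architecture, plain matrix arithmetic, and gradient descent on a squared-error loss.

```python
def matmul(A, B):
    """Plain matrix multiply: the base arithmetic of deep learning."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

# Toy data: targets come from the known map y = 2*x1 - 1*x2.
X = [[1.0, 2.0], [3.0, 1.0], [0.0, 4.0], [2.0, 2.0]]
true_w = [[2.0], [-1.0]]
Y = matmul(X, true_w)

# The "architecture" is fixed: y_hat = X @ w. Only w is learned.
w = [[0.0], [0.0]]
lr = 0.05
for _ in range(500):
    # Gradient descent: gradient of mean squared error w.r.t. w
    # is (2/n) * X^T (y_hat - y).
    Y_hat = matmul(X, w)
    resid = [[yh[0] - y[0]] for yh, y in zip(Y_hat, Y)]
    grad = matmul([list(col) for col in zip(*X)], resid)
    for i in range(len(w)):
        w[i][0] -= lr * 2 * grad[i][0] / len(X)

print([round(wi[0], 2) for wi in w])  # recovers the true weights [2.0, -1.0]
```

The point of the post stands either way: "how it is trained" and "what arithmetic it runs" are fully known, even when the learned weights themselves are hard to interpret.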
oxinabox.bsky.social
as a girl with a PhD in natural language processing and machine learning it's actually offensive to me when you say "we don't know how LLMs work so they might be conscious"

I didn't spend 10 years in the mines of academia to be told ignorance is morally equal to knowledge.

We know exactly how LLMs work.
agucova.bsky.social
Seeing you both here in the trenches makes me want to go back to bluesky to join in the noble fight
Reposted by Agus 🔎🔸
gracekind.net
Another victim of AI psychosis. Really sad 😔
Post from Terence Tao:

“I was able to use an extended conversation with an AI (link) to help answer a MathOverflow question (link). I had already conducted a theoretical analysis suggesting that the answer to this question was negative, but needed some numerical parameters verifying certain inequalities in order to conclusively build a counterexample.
Initially I sought to ask the AI to supply Python code to search for a counterexample that I could run and adjust myself, but found that the run time was infeasible and the initial choice of parameters would have made the search doomed to failure anyway. I then switched strategies and instead engaged in a step by step conversation with the AI where it would perform heuristic calculations to locate feasible choices of parameters. Eventually, the AI was able to produce parameters which I could then verify separately (admittedly using Python code supplied by the same AI, but this was a simple 29-line program that I could visually inspect to do what was asked, and which also provided numerical values in line with previous heuristic predictions).
Reposted by Agus 🔎🔸
dystopiabreaker.xyz
one thing that has remained true throughout time is that any assertion or evidence that runs counter to human uniqueness is invariably met with strong (often incoherent/misdirected) anger. Jane Goodall wrote about this wrt. chimpanzees and tool-making.
agucova.bsky.social
Wow, this seems great. Will try it out
agucova.bsky.social
a lot of silence from the stochastic parrots crowd
agucova.bsky.social
this is like my life philosophy at this point
agucova.bsky.social
(Arguably, the EU AI Act was mostly negative, though I’d say the specific sections of it that were inspired by AI policy work from EA circles were broadly good and important)
agucova.bsky.social
And finally on AI safety, the one with the biggest policy efforts, I’d say EA has been fairly successful. It’s hard to trace back some of the wins, but aspects of the EU AI Act, the US EO on AI, and many other ongoing legislative efforts were downstream from EA-funded policy work
agucova.bsky.social
On biosecurity, there have been efforts to lobby for pandemic prevention funding in the US, to prevent gain-of-function research from being approved, and to improve policy around antimicrobial resistance. There have been some major early wins, but most of it is still in progress.
agucova.bsky.social
On the animal suffering side, there were some nontrivial efforts (alongside other stakeholders) to push for better animal welfare laws, particularly in the US, but I think they were mostly unsuccessful
agucova.bsky.social
There are also a few projects working on public health policy in developing countries, for example lobbying for higher taxes on tobacco and alcohol where that would make a huge dent in the total disease burden
agucova.bsky.social
In GHD you can find things like the Lead Exposure Elimination Project, which lobbies for effective lead reduction policies in developing countries (and was wildly successful, recently evolving into a massive collaboration with USAID and UNICEF)
agucova.bsky.social
There isn’t much of an EA lobbying complex, but there are a handful of orgs that do lobbying for specific cause areas.
agucova.bsky.social
Yeah, it was the first time I was even hearing about the quoted account
agucova.bsky.social
Could you expand on “impedance mismatch between people who make abstract points and those who think every abstract point is really a veiled statement about the specific thing”?
agucova.bsky.social
I’ve had people message me days later saying “I’m glad to have talked to you, I’m not used to people taking my arguments seriously, and I think you were right about some things”