Marco Zocca
@ocramz.bsky.social
ML, λ • language and the machines that understand it • https://ocramz.github.io
Pinned
CERN for frontier AI >>>
boo
November 24, 2025 at 11:22 PM
November 24, 2025 at 12:19 AM
the new world of work
November 23, 2025 at 4:17 AM
semantic parsing is one of those ideas that seem obvious* but are fiendishly hard to get right**

* mapping (a subset of) natural language to logic
** LLMs doing it sort of well sometimes doesn't count

#nlp #nlproc #mlsky
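(To make the footnote concrete: a minimal, made-up Haskell sketch of the "obvious" part, mapping a three-word fragment of English to a first-order-logic-style term. The grammar, predicate names, and coverage are toy assumptions; the fiendish part is everything this leaves out: ambiguity, scope, real coverage.)

```haskell
-- Toy illustration only: a tiny "semantic parser" for sentences of the
-- form "<quantifier> <noun> <verb>", producing a logic-like term.

data Term = Var String | Pred String [Term]
  deriving Show

data Formula
  = Atom Term
  | ForAll String Formula
  | Exists String Formula
  | Implies Formula Formula
  | And Formula Formula
  deriving Show

-- "every N V"  ~>  forall x. N(x) -> V(x)
-- "some N V"   ~>  exists x. N(x) /\ V(x)
parse :: String -> Maybe Formula
parse s = case words s of
  ["every", n, v] ->
    Just (ForAll "x" (Implies (Atom (Pred n [Var "x"]))
                              (Atom (Pred v [Var "x"]))))
  ["some", n, v] ->
    Just (Exists "x" (And (Atom (Pred n [Var "x"]))
                          (Atom (Pred v [Var "x"]))))
  _ -> Nothing  -- i.e. almost all of natural language

main :: IO ()
main = do
  print (parse "every cat sleeps")
  print (parse "colourless green ideas sleep furiously")
```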
November 20, 2025 at 3:56 AM
if programming is theory-building, what is programming with an agent that doesn't necessarily care about it?
November 19, 2025 at 10:46 AM
"Trade your blood plasma for a monthly hour of ButlerBot, and we'll throw in credits for Weyland-Yutani breathable air* and mycoprotein!

*while supplies last
i don't think you can plan it all out, but i agree that there isn't even a vision or dream of what the end result ought to look like, in some coherent way
It really, really does not feel like many of the people pushing AI and robotics to go as fast as possible have a good model of how it's going to go well
November 19, 2025 at 3:58 AM
Reposted by Marco Zocca
I don't want AI generated art, I want AI emptied dishwasher
November 17, 2025 at 4:36 AM
Read and Show are dual
Quote-post with your favourite bit of #HaskellDisinformation. We'll start: It is well-grounded in Category Theory, and you must learn it in order to program with the language.
i really wish people that don't actively use haskell understood haskell, like, at all
November 17, 2025 at 8:54 AM
building with AI, with AI

still nowhere close to the singularity tho
November 17, 2025 at 12:50 AM
surfers on a rainy morning
November 16, 2025 at 8:54 AM
Reposted by Marco Zocca
dont sleep on fusion. fusion has the juice. look at this shit. impeccable sci fi vibe. alien curvature for reasons u wouldnt understand bc u dont know what poincare sections of the plasma beam are. it's even greebled with ports and domes and shit. u can tell the physics nerds are really cooking here
November 16, 2025 at 5:06 AM
is there any published evidence on this accident other than Anthropic's self reporting?
November 15, 2025 at 12:17 AM
ngl doing review cycles with an AI agent while on the go is nothing short of magic
November 13, 2025 at 1:47 AM
love to "as per my previous email" with copilot
November 13, 2025 at 1:40 AM
Reposted by Marco Zocca
We wrote The Strain on Scientific Publishing to highlight the problems of time & trust. With a fantastic group of co-authors, we present The Drain of Scientific Publishing:

a 🧵 1/n

Drain: arxiv.org/abs/2511.04820
Strain: direct.mit.edu/qss/article/...
Oligopoly: direct.mit.edu/qss/article/...
November 11, 2025 at 11:52 AM
not meant as a dunk, but this is a purely artificial "emergency", especially for ARR, which has submission cycles.
Hey :) I'm looking for 5 emergency reviewers for ARR submissions🚨📷 they are all in "Resources and Evaluation"!

The reviews need to be submitted within the next 4 days, i.e., Sun 16 Nov EoD 🙃. If you are interested, please DM or email me!

#NLProc #NLP #LLM
November 12, 2025 at 11:08 AM
the entire information infrastructure of the EU can be turned off by the US and they are c++pping themselves over Chinese buses
November 10, 2025 at 11:33 AM
a panoramic collage of waves rolling in trades time for space
November 10, 2025 at 8:37 AM
timeline cleanse 🐬
November 10, 2025 at 7:44 AM
trying to get typed output from LLMs is about as fun as chewing tinfoil

nonetheless, Qwen2.5 is pretty useful
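(A hypothetical sketch of what "typed output" can mean in practice: ask the model for JSON, then decode it against a fixed schema and reject anything that doesn't parse. The record, field names, and sample payload below are invented for illustration; the validation is just aeson's eitherDecode.)

```haskell
{-# LANGUAGE DeriveGeneric     #-}
{-# LANGUAGE OverloadedStrings #-}

-- Sketch: decode a (pretend) LLM completion into a fixed Haskell type,
-- so anything off-schema is rejected instead of silently propagated.

import Data.Aeson (FromJSON, eitherDecode)
import qualified Data.ByteString.Lazy.Char8 as BL
import GHC.Generics (Generic)

data Extraction = Extraction
  { title  :: String
  , year   :: Int
  , topics :: [String]
  } deriving (Show, Generic)

instance FromJSON Extraction

-- pretend this string came back from the model
modelOutput :: BL.ByteString
modelOutput =
  "{\"title\": \"Semantic Parsing\", \"year\": 2025, \"topics\": [\"nlp\", \"logic\"]}"

main :: IO ()
main = case eitherDecode modelOutput :: Either String Extraction of
  Left err -> putStrLn ("model output failed validation: " ++ err)
  Right e  -> print e
```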
November 9, 2025 at 12:59 AM
I don't think the parallel holds at all.

Not only can you unplug and reset an LM, you can also engineer its knowledge and steer it more or less at will, repeatably and without memory/hysteresis.

I agree a precautionary principle should hold in general, but the particular differences matter a lot.
When I started my doctorate, I had to complete the standard research ethics training. Particularly with prisoner populations, I saw many parallels to AI. #ai #artificialintelligence #llms #largelanguagemodels #aiethics
When We Decide Who Can Feel
Should AI be protected by ethical research guidelines?
open.substack.com
November 8, 2025 at 3:34 AM
gods forbid i consider chatgpt as having legal personhood or indeed the framing of the 1A to be as universal as USians make it out to be
November 8, 2025 at 3:26 AM
a nice autumn-themed japanese starter: marron glacé in an egg pudding #foodsky
November 3, 2025 at 2:55 PM
even your favorite author writes a platitude sometimes 🥶
Like all walls it was ambiguous, two-faced. What was inside it and what was outside it depended upon which side of it you were on.
November 3, 2025 at 2:45 PM
having deep dives on stuff I know close to nothing about, getting what look like well-structured, sensible answers I can study, with examples that make sense in turn

I’m still not close to being able to explain the output to someone else, but I guess this counts as progress?
October 29, 2025 at 5:59 AM