Lê Nguyên Hoang
banner
science4all.org
Lê Nguyên Hoang
@science4all.org
CEO of Calicarpa. President of Tournesol🌻. ML security researcher. Science4All.
@Polytechnique X07, @polymtl PhD, ex @mit @epfl.
Writer, @Orange AI ethics.
Vous voulez combattre MAGA ?

Cet article suggère qu'éclater la bulle de l'IA, en soulignant les faibles gains de productivité, les failles sévères de cybersécurité, les nombreux procès en cours, les montages financiers douteux ou la régulation, peut être utile.
www.theguardian.com/commentisfre...
This is Europe's secret weapon against Trump: it could burst his AI bubble | Johnny Ryan
Growth in the US economy – and the president’s political survival – rest on AI. The EU must use its leverage and stand up to him, says the Irish Council for Civil Liberties’ Johnny Ryan
www.theguardian.com
December 20, 2025 at 9:14 AM
Clairement ! Mais l'IA va leur ouvrir de nouvelles capacités insoupçonnées...
www.forbes.com/sites/tonybr...
Windows Is Becoming An Operating System For AI Agents
Microsoft’s Windows updates introduce a foundation for secure, governed AI agents with standardized interfaces, clear permissions, and isolated workspaces.
www.forbes.com
December 7, 2025 at 7:27 PM
L'article de Forbes cite ses sources :
blog.deadbits.ai/p/indirect-p...
mindgard.ai/blog/google-...
www.promptarmor.com/resources/go...

Y compris un blog de Google même :
bughunters.google.com/learn/invali...

L'attaque est triviale. Ça devrait être avant tout très embarassant pour son concepteur.
Google Antigravity Exfiltrates Data
An indirect prompt injection in an implementation blog can manipulate Antigravity to invoke a malicious browser subagent in order to steal credentials and sensitive code from a user’s IDE.
www.promptarmor.com
December 6, 2025 at 7:07 PM
Medical interventions are subject to very high regulation standards, including having to prove their compliance with the law to obtain a right of commercialization.

I'd argue that any (AI) company that claims to have medical benefits must be subjected to the demanding standards of medicine.
December 3, 2025 at 8:27 PM
This is a nice entry on the incident and the takeaways.
forum.cspaper.org/topic/191/ic...

I strongly recommend acceptance ;)
ICLR = I Can Locate Reviewer: How an API Bug Turned Blind Review into a Data Apocalypse
On the night of November 27, 2025, computer-science Twitter, Rednote, Xiaohongshu, Reddit and WeChat group lit up with the same five words: “ICLR can open t...
forum.cspaper.org
November 30, 2025 at 7:37 PM
It's scary how #AIHype is taking over academia.

This @nature.com paper found out that LLM suck at tabular tasks (which should not be surprising, unless they memorized the test set).

Yet the abstract is still phrased as if they were some kind of breakthrough (WTF?!?).
www.nature.com/articles/s41...
November 28, 2025 at 7:31 PM
I should stress that this is not limited to AI research though. Papers have been found to contain hidden instructions designed to hack AI-generated reviews.

(to be fair, I in fact believe that the publication standards in Computer Science are higher than elsewhere)
www.nature.com/articles/d41...
Scientists hide messages in papers to game AI peer review
Some studies containing instructions in white text or small font — visible only to machines — will be withdrawn from preprint servers.
www.nature.com
November 28, 2025 at 10:36 AM
Any powerful system should pay a lot of attention (& money) to corruption risks. It's costly, but essential.

AI research has become an extremely powerful system, as it is now affecting trillion-dollar valuations & geopolitical decisions.

But it has not given itself the means to prevent corruption.
November 28, 2025 at 10:31 AM
I believe that the AI research failures exposed by the #ICLRLeaks are illustrations of broader societal concerns.

While "innovation" is glorified, with their authors earning millions, regulation (here in the form of reviewing) is botched, automated and under-funded.

This cannot be sustainable.
November 28, 2025 at 10:31 AM
In 2025, I myself reviewed a submission, whose sole theorem and proof were clearly AI-generated.

Embarrassingly, the theorem was uninteresting, its assumptions were ill-justified and its proof was flawed.

The paper got rejected.

But could it have been accepted by AI-generated (or lazy) reviews?
November 28, 2025 at 10:31 AM
Fears include:
- Author retaliations against negative reviews.
- Bribery (evidence already emerging).

Findings (or rather, confirmed long-standing suspicions):
- Massive abuse of (fully) AI-generated reviews.
- Conflicts of interest (e.g. reviewers rejecting papers that compete with their own).
November 28, 2025 at 10:31 AM
At some point, we agreed that our most beautiful masterpiece was the 30-page proof of the strategyproofness guarantees of the geometric median.
proceedings.mlr.press/v206/el-mham...

I made two videos on this in French:
tournesol.app/entities/yt:...
tournesol.app/entities/yt:...
November 21, 2025 at 9:09 PM
I have been extremely lucky to be his closest collaborators during his first two years, with publications at the world's top #AI conferences (NeurIPS, ICML, AISTATS, AAAI).

I have extremely fond memories of the two of us pushing the frontier of mathematics for AI security.

Such an amazing time!
November 21, 2025 at 9:09 PM