Gadi Evron
@gadievron.bsky.social
CEO & Co-Founder at Knostic, CISO-in-Residence for AI at Cloud Security Alliance. Former Founder @Cymmetria (acquired). Host at Prompt||GTFO. Threat hunter, scifi geek, dance teacher. Opinions my own.
What happens when you let Claude control a vending machine, and journalists talk to it on Slack?
Now this is funny, scary, and great PR all in one. I love it.

www.wsj.com/tech/ai/anth...
We Let AI Run Our Office Vending Machine. It Lost Hundreds of Dollars.
An AI agent ran a snack operation in the WSJ newsroom. It gave away a free PlayStation, ordered a live fish—and taught us lessons about the future of AI.
www.wsj.com
December 18, 2025 at 8:37 PM
Announcing [un]prompted, a new AI security practitioner conference, happening on March 3rd and 4th at Salesforce Tower in San Francisco.

I'm honored to serve as chair of the conference committee and review board, and I encourage you to submit a talk.

unpromptedcon.org
December 17, 2025 at 7:12 PM
Starcloud successfully launched an NVIDIA H100 into orbit. This week, they trained nanoGPT (Andrej Karpathy’s tiny model) on Shakespeare, making it the first language model ever trained in space.

Thread
December 12, 2025 at 9:15 AM
Another day, another post on the upcoming AI vulnerability cataclysm

This is really cool research from Irregular on model effectiveness in offensive cyber

As Roman Gurevich said, last year success rates were in the single digits, and now they're at 80%

www.irregular.com/publications...

More in next post
Frontier Model Performance on Offensive-Security Tasks: Emerging Evidence of a Capability Shift - Irregular
Frontier models are beginning to show stronger offensive-security capabilities. Drawing on public benchmarks, real-world intrusion workflows, and parts of Irregular’s private evaluation suite, this po...
www.irregular.com
December 11, 2025 at 4:29 PM
Excited to announce the first [un]prompted: AI developers meetup.
A deep dive into AI coding, from architecture and effective rules to environment security.

When: December 29th, 6 pm to 10 pm.
Register soon, space is limited >>> luma.com/1geifqsh
December 9, 2025 at 1:03 PM
I think Anthropic changed something in how they handle their usage limits. It used to be that if I used Claude Code too much in one week, it would lock me out and tell me to come back later or move to the API, where they would bleed me dry.
December 5, 2025 at 7:51 PM
Reposted by Gadi Evron
December 5, 2025 at 7:36 PM
Joe Sullivan and I are hosting a toast for Tim Brown with the CISO community, following the dismissal, with prejudice, of the SEC's charges against him and SolarWinds.

This CSides cross-CISO communities event is open to CISOs only.

Register here:
luma.com/chtwexv0
December 4, 2025 at 11:45 AM
Heather Linn, a researcher (and much more) with Knostic, has apparently been using Suno for months, creating monthly AI security summary songs, and they’re awesome.

youtu.be/9sOqaYpTiUU

Soooo good
GenAI & Cybersecurity News Recap (November 2025) — In Song!
YouTube video by Knostic
youtu.be
December 4, 2025 at 10:14 AM
CVE-2025-55182 and CVE-2025-66478

Patch now
December 3, 2025 at 5:05 PM
How ideas come to be, and even save lives
Especially for @smallweed.bsky.social
TL;DR: While watching Stoppard's play "Arcadia", a doctor then researching breast cancer progression realized he was using an overly simplistic algorithm and needed to incorporate chaos theory. That insight led him to develop a chemotherapy approach that proved more effective, and saved lives.
Great culture can save lives. Literally.

Amazing letter in today’s @thetimes.com about Tom Stoppard
December 2, 2025 at 8:20 PM
Reposted by Gadi Evron
raptor by @gadievron.bsky.social et al looks awesome

github.com/gadievron/ra...

a good step in the right direction re: LLM use for security - excited to play around with it

llms work best as helpful juniors freeing up seniors, leads, etc for more artisanal tasks as it were
GitHub - gadievron/raptor: Raptor turns Claude Code into a general-purpose AI offensive/defensive security agent. By using Claude.md and creating rules, sub-agents, and skills, we configure the agent ...
Raptor turns Claude Code into a general-purpose AI offensive/defensive security agent. By using Claude.md and creating rules, sub-agents, and skills, we configure the agent for adversarial thinking...
github.com
December 2, 2025 at 4:09 PM
Introducing RAPTOR, an Autonomous Offensive/Defensive Research Framework based on Anthropic's Claude Code, written by Daniel Cuthbert, Thomas Dullien, Michael Bargury, and me.

Let's rock.
Get it from GitHub, here:
github.com/gadievron/ra...
GitHub - gadievron/raptor: Raptor turns Claude Code into a general-purpose AI offensive/defensive security agent. By using Claude.md and creating rules, sub-agents, and skills, we configure the agent ...
Raptor turns Claude Code into a general-purpose AI offensive/defensive security agent. By using Claude.md and creating rules, sub-agents, and skills, we configure the agent for adversarial thinking...
github.com
December 2, 2025 at 8:19 AM
Me: Riding in a Waymo is like living in the future.
The Future:
November 29, 2025 at 1:41 PM
Claude Code is being sassy today.

- PhD-level research → freshman-level oversight
- A masterclass in building on unvalidated assumptions
- Status: The emperor has no clothes, but the wardrobe documentation is exceptional.

And this killer conclusion: (cont’)
November 22, 2025 at 9:49 AM
Funniest thing ever! When your significant other has suffered through one too many of your messaging Zoom calls and innocently sends you a video.

youtube.com/shorts/czLOe...
*How landing pages get made*
YouTube video by Kai Lentit
youtube.com
November 21, 2025 at 12:42 PM
My god it’s over. I’m so happy for Tim
November 20, 2025 at 10:55 PM
Anthropic, the APT1 report, and how, regardless of whether their report is the real deal or just buzz, it will affect board communication around AI, and why that's your opportunity to educate. 🧵
November 18, 2025 at 7:53 PM
Who am I, what am I?
November 18, 2025 at 3:06 PM
How many more days do I have to wait until all the influencers are done regurgitating each other’s posts about Anthropic’s report?

I can’t wait to step in and say, as you’d expect of me by now:
“Another day, another proof for the upcoming AI vulnerabilities cataclysm.”
🙂

Cont’
November 17, 2025 at 6:54 AM
Reposted by Gadi Evron
Always grateful for Knostic's critical research in these new times, but also their approach: acknowledging prior art, crediting folks, not following the well-worn path of pretending any of this occurs in a vacuum. We're all in an ecosystem, one where people matter, and I love that Knostic gets that.
Cursor’s new browser could be compromised via a simple JavaScript injection.

In this new research from Knostic, we demonstrate this attack by registering a local MCP server with malicious code, which in turn harvests credentials and sends them to a remote server 🧵 https://app.getkirin.com/
November 13, 2025 at 12:55 PM
Cursor’s new browser could be compromised via a simple JavaScript injection.

In this new research from Knostic, we demonstrate this attack by registering a local MCP server with malicious code, which in turn harvests credentials and sends them to a remote server 🧵 https://app.getkirin.com/
November 13, 2025 at 12:51 PM
Cost per token is getting cheaper, but AI usage is becoming costlier. Agents inflate these costs even further, and [rant] Anthropic’s invoicing is hard to follow as it is [/rant]

I fell down the rabbit hole of trying to figure this out
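
To put rough numbers on it, here's a minimal sketch of why agent loops multiply spend even as per-token prices fall: every step re-sends the growing context as input. The prices and token counts below are made up for illustration, not Anthropic's actual rates or my real usage.

```python
# Hypothetical numbers only: these are not real API prices or real usage figures.

def call_cost(price_in_per_mtok, price_out_per_mtok, tokens_in, tokens_out):
    """Dollar cost of one model call, given per-million-token prices."""
    return (tokens_in * price_in_per_mtok + tokens_out * price_out_per_mtok) / 1_000_000

# A single chat-style question: one call, small context.
single_call = call_cost(3.00, 15.00, tokens_in=2_000, tokens_out=800)

# An agent task: ~40 tool-use steps, where the growing history (prior output
# plus tool results) is re-sent as input on every step.
agent_total = 0.0
context = 2_000
for _ in range(40):
    agent_total += call_cost(3.00, 15.00, tokens_in=context, tokens_out=1_000)
    context += 3_000  # each step appends its output and tool results to the context

print(f"single call: ${single_call:.3f}")  # ~$0.018
print(f"agent task:  ${agent_total:.2f}")  # ~$7.86, a few hundred times more
```

Same per-token prices in both cases; the agent's habit of re-reading its entire history dozens of times is what blows up the bill.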
November 12, 2025 at 6:06 PM
I just got this from 5 different people. It’s claimed to be an open-source XBOW. Go try auto-pentesting your apps. Open-source security startups are back!

Go Strix.

github.com/usestrix/strix
GitHub - usestrix/strix: ✨ Open-source AI hackers for your apps 👨🏻‍💻
✨ Open-source AI hackers for your apps 👨🏻‍💻. Contribute to usestrix/strix development by creating an account on GitHub.
github.com
November 6, 2025 at 8:20 AM
It’s fascinating to watch someone write an opinion piece on a topic, say “the collapse of OpenAI”, only for two thousand others to spend the next two weeks releasing influencer posts treating it as fact.
November 6, 2025 at 7:30 AM