AbuMuslim (أبومسلِم)
banner
m19o.bsky.social
AbuMuslim (أبومسلِم)
@m19o.bsky.social
Build stuff; break stuff; drink coffee++ \n Doing @BSidesABQ @CyberDose
November 7, 2025 at 3:32 PM
Summarizers gone wrong: m19o.github.io/posts/Phishi...
Phishing LLMs: Hacking email summarizers
Phishing LLMs with prompt injections
m19o.github.io
November 1, 2025 at 5:55 PM
Market/analytics tools that query databases often expose internal table and column names, and in some cases are vulnerable to SQL injection.

Agent implementations are getting more complex, and the attack surface is getting wider. I'll share more of my findings in this thread.
November 1, 2025 at 5:55 PM
Most summarization tools I touched have the same flaw: you can inject the summary template by planting instructions in docs, PDFs, web pages, links, even images. There are many vectors. You can also turn summarization into SSRF by making the tool "summarize" an internal document.
November 1, 2025 at 5:55 PM
When the app calls a tool, many setups leak every step of execution. You can see which tools fire, what parameters are passed, and how they’re passed. That opens the door for cross-tool attacks.
November 1, 2025 at 5:55 PM
Thank you!
October 31, 2025 at 8:12 PM
Now I’m on stage at DEF CON @cloudvillage-dc.bsky.social in Bahrain's AICS, the first DEF CON in the Middle East. Proud to be among the first, with Ziad Hammad.

If you’re in Manama, say hi.
October 31, 2025 at 7:36 PM
The rest of the system includes the prompt orchestration layer, embedding-based search, retrieval components, API interfaces, access controls, logging, and more.

So yes, prompt injection is real. And AI security isn’t just about the model it’s about everything built around it.
October 6, 2025 at 6:23 AM
The problem is, people throw around “AI” and “LLM” as marketing buzzwords. But what you’re actually interacting with is a complex system the LLM is just one piece.
October 6, 2025 at 6:23 AM
On the other hand, when you understand how the model tokenizes your input and use that knowledge to bypass guardrails, then yes, you are attacking the model’s behavior. You’re abusing how the model interprets your prompt to bypass the classifier.
October 6, 2025 at 6:23 AM
Prompt injection has many forms. If you override system instructions, that’s a prompt injection. In that case, you’re attacking the application’s logic that enforces constraints on the model not the model directly.
October 6, 2025 at 6:23 AM
But here’s the thing: AI red teaming or LLM penetration testing isn’t just about attacking the model it’s about attacking the entire system.
October 6, 2025 at 6:23 AM
Know your worth. Back yourself. Don’t let someone else’s opinion set your limits.
August 25, 2025 at 11:21 AM
INDEED
August 11, 2025 at 1:24 AM
Congratulations 🎉
August 10, 2025 at 7:47 AM