Three Controls To Curb Shadow AI, Impact of Indirect Prompt Injection, And Why Security Teams Need AI-Specific Governance
Diana Kelley explains prompt injection risks, AI data governance, and what makes staying ahead of adversaries a difficult endeavor.
www.technadu.com
October 17, 2025 at 2:03 PM
Everybody can reply
1 likes
Retrieval apps are prime targets.
A poisoned document in your knowledge base can silently inject commands every time it’s queried.
Scan and sign content before indexing.
#BugBounty #AIsecurity #PromptInjection #RAG
A poisoned document in your knowledge base can silently inject commands every time it’s queried.
Scan and sign content before indexing.
#BugBounty #AIsecurity #PromptInjection #RAG
September 27, 2025 at 4:10 PM
Everybody can reply
2 likes
Just dropped a guide: “Bug Bounty Hunting for GenAI.”
If you hunt bounties, prompt-injection, RAG leaks, and poor integrations are paying out now. Short checklist:
toxsec.com/p/bug-bounty-hunting-for-genai.
#bugbounty #promptinjection
If you hunt bounties, prompt-injection, RAG leaks, and poor integrations are paying out now. Short checklist:
toxsec.com/p/bug-bounty-hunting-for-genai.
#bugbounty #promptinjection
Bug Bounty Hunting for GenAI
ToxSec | How to deal with GenAI in bug bounty programs.
toxsec.com
September 16, 2025 at 2:25 PM
Everybody can reply
7 likes
Forscher haben gezeigt, wie KI-gestützte E-Mail-Clients zur Goldgrube für Datenklau werden können. HTML-Mails mit versteckten Anweisungen können das Modell manipulieren, auch ohne Nutzerinteraktion. Dauerhafte Manipulation durch persistente Injektionen. #KI #AI #PromptInjection #Phishing
September 7, 2025 at 2:57 PM
Everybody can reply
1 reposts
2 likes
💉 #Promptinjection: embed malicious instructions in the prompt.
According to #OWASP, prompt injection is the most critical security risk for LLM applications.
They break down this class of attacks in 2 categories: direct and indirect. Here is a summary of indirect attacks:
⬇️
According to #OWASP, prompt injection is the most critical security risk for LLM applications.
They break down this class of attacks in 2 categories: direct and indirect. Here is a summary of indirect attacks:
⬇️
November 25, 2024 at 7:08 AM
Everybody can reply
Prompt injection, evasione AI e abuso dei LLM: Check Point e Cisco Talos analizzano le nuove strategie cybercriminali, rischi e difese per i sistemi AI.
#AI #CheckPoint #CiscoTalos #evidenza #LargeLanguageModel #LLM #promptinjection
www.matricedigitale.it/2025/06/25/e...
#AI #CheckPoint #CiscoTalos #evidenza #LargeLanguageModel #LLM #promptinjection
www.matricedigitale.it/2025/06/25/e...
June 25, 2025 at 4:09 PM
Everybody can reply
Disconcerting. Zero-click indirect prompt injection via connectors underscores the risk in agentic AI. Until stronger guardrails land: enforce least-privilege scopes, disable unused connectors, treat ingested docs as untrusted, and monitor data egress. #AIsecurity #PromptInjection
August 11, 2025 at 2:49 PM
Everybody can reply
#MCP Horror Story: Hackers leaked sensitive data from a private GitHub repo by planting a prompt injection in a public #GitHub issue abusing GitHub MCP Server:
#AISecurity
#PromptInjection
👇
www.docker.com/blog/...
#AISecurity
#PromptInjection
👇
www.docker.com/blog/...
August 18, 2025 at 8:05 PM
Everybody can reply
1 reposts
1 likes
Οι AI agents που μπορούν να ελέγχουν και να διαβάζουν δεδομένα από ένα πρόγραμμα περιήγησης στο διαδίκτυο είναι επίσης ευάλωτοι σε επιθέσεις prompt injection. #promptinjection
Τα προγράμματα περιήγησης με ΑΙ αντιμετωπίζουν ένα σοβαρό πρόβλημα ασφάλειας που είναι δύσκολο να επιλυθεί
Οι AI agents που μπορούν να ελέγχουν και να διαβάζουν δεδομένα από ένα πρόγραμμα περιήγησης στο διαδίκτυο είναι επίσης ευάλωτοι σε επιθέσεις prompt injection, κατά τις οποίες υπακούουν σε κακόβουλο κε...
gr.pcmag.com
August 28, 2025 at 6:19 PM
Everybody can reply
Prompt injection leverages unified input processing and context windows to override system prompts; attackers can use direct payloads or hide instructions in external documents. Key fixes: input sanitization and strict prompt isolation. #PromptInjection #LLMSecurity https://bit.ly/4h1EkQp
September 26, 2025 at 8:45 AM
Everybody can reply
Attackers are hiding instructions in markdown, code blocks, or long context chains.
One crafty input can pivot the model from Q&A to exfiltration.
Strip or sandbox rich text before it reaches the LLM.
#BugBounty #AIsecurity #PromptInjection
One crafty input can pivot the model from Q&A to exfiltration.
Strip or sandbox rich text before it reaches the LLM.
#BugBounty #AIsecurity #PromptInjection
September 27, 2025 at 2:33 AM
Everybody can reply
5 likes
Diana Kelley, CISO at Noma Security, breaks down indirect prompt injection, shadow AI, and how trust grows when success is shared.
#AI #ShadowAI #PromptInjection #Cybersecurity
#AI #ShadowAI #PromptInjection #Cybersecurity
October 17, 2025 at 2:03 PM
Everybody can reply
1 likes
it’s happening in the wild, and adversaries are adapting faster than our controls.
📬 Full digest (TTPs, mitigations, and context): linktr.ee/itsmalware
#ThreatIntel #CVE202553770 #SharePoint #LinuxMalware #LLM #PromptInjection #BlueTeam #PurpleTeam #GovCyber #IndigoINT #CTI #AIThreats
📬 Full digest (TTPs, mitigations, and context): linktr.ee/itsmalware
#ThreatIntel #CVE202553770 #SharePoint #LinuxMalware #LLM #PromptInjection #BlueTeam #PurpleTeam #GovCyber #IndigoINT #CTI #AIThreats
July 29, 2025 at 1:01 PM
Everybody can reply
Woof. This is pretty rough for Sam. I haven't even gotten to play with.
https://www.securityweek.com/red-teams-breach-gpt-5-with-ease-warn-its-nearly-unusable-for-enterprise/
#genai #promptinjection
https://www.securityweek.com/red-teams-breach-gpt-5-with-ease-warn-its-nearly-unusable-for-enterprise/
#genai #promptinjection
August 9, 2025 at 3:43 AM
Everybody can reply
~Trendmicro~
PLeak attack can extract system prompts from LLMs, exposing sensitive data and bypassing guardrails across multiple models.
-
IOCs: (None identified)
-
#LLM #PromptInjection #ThreatIntel
PLeak attack can extract system prompts from LLMs, exposing sensitive data and bypassing guardrails across multiple models.
-
IOCs: (None identified)
-
#LLM #PromptInjection #ThreatIntel
PLeak: Algorithmic Method for System Prompt Leakage
www.trendmicro.com
May 1, 2025 at 9:56 PM
Everybody can reply
OpenAI are not serious people.🤦🏻♂️ Why bother with prompt injection when any user can do a prompt insertion by setting a custom name? https:// xcancel.com/LLMSherpa/status/1 959766560870195676 # LittleBobbyTables # PromptInjection # ChatGPT # jailbreak
Interest | Match | Feed
Interest | Match | Feed
Origin
mstdn.social
August 26, 2025 at 7:59 AM
Everybody can reply
Safeguarding VS Code against prompt injections.
buff.ly/vqQjL0F
#vscode #ai #promptinjection #security #githubcopilot
buff.ly/vqQjL0F
#vscode #ai #promptinjection #security #githubcopilot
Safeguarding VS Code against prompt injections
See how to reduce the risks of an indirect prompt injection, such as the exposure of confidential files or the execution of code without the user's consent.
buff.ly
August 26, 2025 at 2:00 PM
Everybody can reply
1 likes
Under the heading #MonthofAI Bugs he has been publishing one report per day across an array of different tools, all of which are vulnerable to various classic #promptInjection problems.
#simonwillison
#simonwillison
August 27, 2025 at 5:29 PM
Everybody can reply
Google Gemini’s Long-Term Memory Safeguards Are Easy To Hack #GeminiAI #AISecurity #PromptInjection #AIJailbreak #TechNews #ArtificialIntelligence #AIExploit #CyberSecurity #LLMs #GenerativeAI
Google Gemini’s Long-Term Memory Safeguards Are Easy To Hack - WinBuzzer
The long-term memory in Google’s Gemini AI can be compromised by embedding hidden prompts.
buff.ly
February 12, 2025 at 4:40 PM
Everybody can reply
1 likes
El lado del mal - Google DeepMind CaMeL: Defeating Prompt Injections by Design in Agentic AI www.elladodelmal.com/2025/04/goog... #PromptInjection #CAMEL #DeepMind #Google #LLM #Hardening #IA #AI #InteligenciaArtificial
Google DeepMind CaMeL: Defeating Prompt Injections by Design in Agentic AI
Blog personal de Chema Alonso (CDO Telefónica, 0xWord, MyPublicInbox, Singularity Hackers) sobre seguridad, hacking, hackers y Cálico Electrónico.
www.elladodelmal.com
April 14, 2025 at 2:43 AM
Everybody can reply
1 likes
OpenAI potenzia il proprio piano di sicurezza con grant, AI difensive e monitoraggio agentico per guidare in modo sicuro lo sviluppo dell’AGI.
#agentiAI #AGI #bugbounty #cybersecurity #grantAI #openai #promptinjection #redteaming #resilienti #SpecterOps
www.matricedigitale.it/tech/intelli...
#agentiAI #AGI #bugbounty #cybersecurity #grantAI #openai #promptinjection #redteaming #resilienti #SpecterOps
www.matricedigitale.it/tech/intelli...
March 29, 2025 at 8:30 AM
Everybody can reply
It's great that🇬🇧🇺🇸(+16 countries including 🇮🇹) endorse global guidelines x secure #AI👇The focus on adversarial ML + #promptinjection is great, we need more to develop adequate *threat models* and foster *vulnerability disclosure*...
November 28, 2024 at 4:07 AM
Everybody can reply
NeuralTrust: echo chamber, context poisoning e jailbreak sono nuove minacce per i modelli AI, con rischi su sicurezza, bias e affidabilità dei sistemi.
#AI #bias #jailbreak #LLM #NeuralTrust #promptinjection
www.matricedigitale.it/2025/06/23/e...
#AI #bias #jailbreak #LLM #NeuralTrust #promptinjection
www.matricedigitale.it/2025/06/23/e...
June 23, 2025 at 6:21 PM
Everybody can reply
Hackers Hijacked #Google’s Gemini #AI With a Poisoned Calendar Invite to Take Over a #SmartHome - www.wired.com/story/google... and so it begins. also, this shows why you should never thank a #chatbot... #promptinjection
Hackers Hijacked Google’s Gemini AI With a Poisoned Calendar Invite to Take Over a Smart Home
For likely the first time ever, security researchers have shown how AI can be hacked to create real world havoc, allowing them to turn off lights, open smart shutters, and more.
www.wired.com
August 6, 2025 at 2:05 PM
Everybody can reply
1 reposts
1 likes
Saturday Morning Breakfast Cereal on how to catch cheating #students with #llm #ai:
https://www.smbc-comics.com/comic/prompt 🤣
#promptinjection #fun #education #smbc
https://www.smbc-comics.com/comic/prompt 🤣
#promptinjection #fun #education #smbc
Saturday Morning Breakfast Cereal - Prompt
Saturday Morning Breakfast Cereal - Prompt
www.smbc-comics.com
July 15, 2025 at 9:51 PM
Everybody can reply
1 reposts
1 likes