#promptinjection
Retrieval apps are prime targets.

A poisoned document in your knowledge base can silently inject commands every time it’s queried.

Scan and sign content before indexing.

#BugBounty #AIsecurity #PromptInjection #RAG
September 27, 2025 at 4:10 PM Everybody can reply
2 likes
Just dropped a guide: “Bug Bounty Hunting for GenAI.”

If you hunt bounties, prompt-injection, RAG leaks, and poor integrations are paying out now. Short checklist:

toxsec.com/p/bug-bounty-hunting-for-genai.

#bugbounty #promptinjection
Bug Bounty Hunting for GenAI
ToxSec | How to deal with GenAI in bug bounty programs.
toxsec.com
September 16, 2025 at 2:25 PM Everybody can reply
7 likes
Forscher haben gezeigt, wie KI-gestützte E-Mail-Clients zur Goldgrube für Datenklau werden können. HTML-Mails mit versteckten Anweisungen können das Modell manipulieren, auch ohne Nutzerinteraktion. Dauerhafte Manipulation durch persistente Injektionen. #KI #AI #PromptInjection #Phishing
September 7, 2025 at 2:57 PM Everybody can reply
1 reposts 2 likes
💉 #Promptinjection: embed malicious instructions in the prompt.

According to #OWASP, prompt injection is the most critical security risk for LLM applications.

They break down this class of attacks in 2 categories: direct and indirect. Here is a summary of indirect attacks:

⬇️
November 25, 2024 at 7:08 AM Everybody can reply
Prompt injection, evasione AI e abuso dei LLM: Check Point e Cisco Talos analizzano le nuove strategie cybercriminali, rischi e difese per i sistemi AI.

#AI #CheckPoint #CiscoTalos #evidenza #LargeLanguageModel #LLM #promptinjection
www.matricedigitale.it/2025/06/25/e...
June 25, 2025 at 4:09 PM Everybody can reply
Disconcerting. Zero-click indirect prompt injection via connectors underscores the risk in agentic AI. Until stronger guardrails land: enforce least-privilege scopes, disable unused connectors, treat ingested docs as untrusted, and monitor data egress. #AIsecurity #PromptInjection
August 11, 2025 at 2:49 PM Everybody can reply
#MCP Horror Story: Hackers leaked sensitive data from a private GitHub repo by planting a prompt injection in a public #GitHub issue abusing GitHub MCP Server:
#AISecurity
#PromptInjection
👇
www.docker.com/blog/...
August 18, 2025 at 8:05 PM Everybody can reply
1 reposts 1 likes
Prompt injection leverages unified input processing and context windows to override system prompts; attackers can use direct payloads or hide instructions in external documents. Key fixes: input sanitization and strict prompt isolation. #PromptInjection #LLMSecurity https://bit.ly/4h1EkQp
September 26, 2025 at 8:45 AM Everybody can reply
Attackers are hiding instructions in markdown, code blocks, or long context chains.

One crafty input can pivot the model from Q&A to exfiltration.

Strip or sandbox rich text before it reaches the LLM.

#BugBounty #AIsecurity #PromptInjection
September 27, 2025 at 2:33 AM Everybody can reply
5 likes
Diana Kelley, CISO at Noma Security, breaks down indirect prompt injection, shadow AI, and how trust grows when success is shared.

#AI #ShadowAI #PromptInjection #Cybersecurity
October 17, 2025 at 2:03 PM Everybody can reply
1 likes
it’s happening in the wild, and adversaries are adapting faster than our controls.
📬 Full digest (TTPs, mitigations, and context): linktr.ee/itsmalware
#ThreatIntel #CVE202553770 #SharePoint #LinuxMalware #LLM #PromptInjection #BlueTeam #PurpleTeam #GovCyber #IndigoINT #CTI #AIThreats
July 29, 2025 at 1:01 PM Everybody can reply
August 9, 2025 at 3:43 AM Everybody can reply
~Trendmicro~
PLeak attack can extract system prompts from LLMs, exposing sensitive data and bypassing guardrails across multiple models.
-
IOCs: (None identified)
-
#LLM #PromptInjection #ThreatIntel
PLeak: Algorithmic Method for System Prompt Leakage
www.trendmicro.com
May 1, 2025 at 9:56 PM Everybody can reply
OpenAI are not serious people.🤦🏻‍♂️ Why bother with prompt injection when any user can do a prompt insertion by setting a custom name? https:// xcancel.com/LLMSherpa/status/1 959766560870195676 # LittleBobbyTables # PromptInjection # ChatGPT # jailbreak

Interest | Match | Feed
Origin
mstdn.social
August 26, 2025 at 7:59 AM Everybody can reply
Under the heading #MonthofAI Bugs he has been publishing one report per day across an array of different tools, all of which are vulnerable to various classic #promptInjection problems.

#simonwillison
August 27, 2025 at 5:29 PM Everybody can reply
OpenAI potenzia il proprio piano di sicurezza con grant, AI difensive e monitoraggio agentico per guidare in modo sicuro lo sviluppo dell’AGI.

#agentiAI #AGI #bugbounty #cybersecurity #grantAI #openai #promptinjection #redteaming #resilienti #SpecterOps
www.matricedigitale.it/tech/intelli...
March 29, 2025 at 8:30 AM Everybody can reply
It's great that🇬🇧🇺🇸(+16 countries including 🇮🇹) endorse global guidelines x secure #AI👇The focus on adversarial ML + #promptinjection is great, we need more to develop adequate *threat models* and foster *vulnerability disclosure*...
November 28, 2024 at 4:07 AM Everybody can reply
NeuralTrust: echo chamber, context poisoning e jailbreak sono nuove minacce per i modelli AI, con rischi su sicurezza, bias e affidabilità dei sistemi.

#AI #bias #jailbreak #LLM #NeuralTrust #promptinjection
www.matricedigitale.it/2025/06/23/e...
June 23, 2025 at 6:21 PM Everybody can reply