Umar Iqbal
@umariqbal.bsky.social
78 followers 130 following 14 posts
Assistant professor at Washington University in St. Louis. I research computer security and privacy.
umariqbal.bsky.social
There are a lot more details to our approach, including several open problems. If you’re interested in learning more, we encourage you to read the paper (arxiv.org/pdf/2403.04960)! You can also catch Yuhao's talk about IsolateGPT at NDSS next week in San Diego!
umariqbal.bsky.social
Our evaluation demonstrates that it is indeed feasible to isolate execution in the LLM agentic computing paradigm: IsolateGPT mitigated security and privacy issues without loss of functionality, and its performance overhead was under 30% for two-thirds of tested queries
umariqbal.bsky.social
IsolateGPT runs individual tools in isolated containers to ensure that tools cannot interact with components outside of their execution environments. Then, to enable interaction between sandboxed tools, it allows apps to exchange messages only via a central trusted module
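The hub-and-spoke mediation described above can be sketched as follows. This is a minimal illustration, not IsolateGPT's actual API: all names (`Hub`, `register`, `send`) are hypothetical.

```python
# Sketch of hub-mediated messaging between isolated tools.
# Hypothetical names; not IsolateGPT's real interface.

class Hub:
    """Central trusted module: the only channel between sandboxed tools."""

    def __init__(self):
        self.tools = {}

    def register(self, name, handler):
        self.tools[name] = handler

    def send(self, sender, recipient, message):
        # Tools never hold direct references to each other; every message
        # passes through the hub, which can inspect, log, or block it.
        if recipient not in self.tools:
            raise PermissionError(f"{sender} may not reach {recipient}")
        return self.tools[recipient](message)

hub = Hub()
hub.register("email", lambda msg: f"email tool got: {msg}")
print(hub.send("drive", "email", "fetch attachments"))
```

The point of the design is that cross-tool interaction becomes an explicit, inspectable event at the hub rather than an implicit side effect inside a shared context.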
umariqbal.bsky.social
Our security architecture, named IsolateGPT, tackles these challenges. IsolateGPT assumes an LLM-based digital assistant that supports third-party tools for tasks such as online shopping, email management, etc., and aims to secure it against adversarial manipulations between tools
umariqbal.bsky.social
Access control & isolation have existed in prior systems, but their application to the LLM computing paradigm is non-trivial: isolated environments need to be securely provided access to broader system context, & secure interfaces need to be defined for natural language interactions
umariqbal.bsky.social
To that end, our research has focused on adapting systems security principles to improve the security of LLM integrations and LLM-based agentic systems. We recently explored the feasibility of access control and isolation in securing LLMs interfacing with tools
umariqbal.bsky.social
While there is a serious emphasis on making LLMs robust, e.g., training LLMs to prioritize privileged instructions openai.com/index/the-in..., we believe that tried-and-tested systems security principles have not been given similar attention in securing LLM integrations
The Instruction Hierarchy: Training LLMs to Prioritize Privileged Instructions
Today's LLMs are susceptible to prompt injections, jailbreaks, and other attacks that allow adversaries to overwrite a model's original instructions with their own malicious prompts.
umariqbal.bsky.social
These issues can manifest in conventional computing systems where LLMs are getting deeply integrated, such as OSs and mobile apps, and also in agentic systems (or AI agents) which interface with various tools and content on the internet
umariqbal.bsky.social
Fundamentally, the key issue is that LLMs load instructions from various sources (system, user, tools) into a shared context window, where, without safeguards, LLMs treat them with the same privilege
umariqbal.bsky.social
At a high level, the interfacing between system components is determined at runtime based on instructions from system components – which can be untrustworthy, malicious, or compromised – such as tools developed by third-party services or arbitrary content hosted on the internet
umariqbal.bsky.social
While this execution paradigm has the potential to fundamentally transform computing, there are serious security, privacy, and safety risks!
umariqbal.bsky.social
For example, if a user prompts their LLM-based personal assistant to download email attachments and store them in a cloud drive, the LLM can predict the necessary interfacing between the email and cloud drive tools to carry out the task
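The email-to-cloud-drive example above can be sketched as a runtime plan. The tool names and plan format here are purely illustrative assumptions: the point is that which tools run, in what order, and how outputs feed into inputs is decided by the model at runtime, not fixed by a developer.

```python
# Sketch: an LLM turns a natural-language query into a tool-interfacing
# plan at runtime. All tool names and the plan schema are hypothetical.
query = "Download my email attachments and store them in my cloud drive"

# A model-produced plan might look like this: each step names a tool and
# an action, and "input_from" wires one step's output into the next.
plan = [
    {"tool": "email", "action": "list_attachments"},
    {"tool": "email", "action": "download", "input_from": 0},
    {"tool": "cloud_drive", "action": "upload", "input_from": 1},
]
for step in plan:
    print(step["tool"], step["action"])
```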
umariqbal.bsky.social
LLMs have enabled a new computing paradigm, where the system relies on ML models to resolve user queries expressed in natural language! In this paradigm, new features can be implemented via natural language specs, without requiring explicit implementation from software developers
Reposted by Umar Iqbal
httparchive.org
The Privacy chapter was written by a cornucopia of experts: Yash Vekaria, Benjamin Standaert, @maxostapenko.com , Abdul Haddi Amjad,
Yana Dimova, Shaoor Munir, Chris Böttger, and Umar Iqbal

almanac.httparchive.org/en/2024/priv...

Catch up on the latest on a very important topic for the web!
Privacy | 2024 | The Web Almanac by HTTP Archive
Privacy chapter of the 2024 Web Almanac covers the adoption and impact of online tracking, privacy preference signals, and browser initiatives for a privacy-friendlier web.