Torr Vision Group Oxford
@oxfordtvg.bsky.social
Torr Vision Group (TVG) In Oxford @ox.ac.uk

We work on Computer Vision, Machine Learning, AI Safety and much more

Learn more about us at: https://torrvision.com
Reposted by Torr Vision Group Oxford
www.scientificamerican.com/article/hack...

New article by Deni Bechard at Scientific American covering our work on hijacking multimodal computer agents, published on arXiv earlier this year. A massive effort by Lukas Aichberger, supported by myself, Yarin Gal, Philip Torr, FREng, FRS & Adel Bibi
September 4, 2025 at 3:32 PM
Reposted by Torr Vision Group Oxford
Excited to share our paper: "Chain-of-Thought Is Not Explainability"! We unpack a critical misconception in AI: models explaining their steps (CoT) aren't necessarily revealing their true reasoning. Spoiler: the transparency can be an illusion. (1/9) 🧵
July 1, 2025 at 3:41 PM
Reposted by Torr Vision Group Oxford
YouTuber Sabine Hossenfelder has picked up on our paper on how to fool AI agents; you can watch her describing our work here: www.youtube.com/watch?v=KY7_....
AI is becoming dangerous. Are we ready?
YouTube video by Sabine Hossenfelder
June 10, 2025 at 4:47 PM
Reposted by Torr Vision Group Oxford
🚨 New paper alert: Our recent work on LLM safety has been accepted to ICLR 2025 🇸🇬

We propose a new framework for LLM safety. 🧵

(1/7)

#LLM #AISafety #ICLR2025 #Certification #AdversarialRobustness #NLP #Shhhhhh #DomainCertification #AI
ALT: a man in a suit and tie is sitting at a desk in front of a computer screen that says founder of the office.
April 4, 2025 at 8:12 PM
Reposted by Torr Vision Group Oxford
Excited to be working with the UK AI Security Institute, even more important than normal in these turbulent times.

www.linkedin.com/posts/philip...
Strengthening AI Resilience | AISI Work | Philip Torr
April 3, 2025 at 9:52 AM
Reposted by Torr Vision Group Oxford
⚠️ Beware: Your AI assistant could be hijacked just by encountering a malicious image online!

Our latest research exposes critical security risks in AI assistants. An attacker can hijack them by simply posting an image on social media and waiting for it to be captured. [1/6] 🧵
March 18, 2025 at 6:25 PM
Reposted by Torr Vision Group Oxford

🚨 New Paper Alert: Open Problems in Machine Unlearning for AI Safety 🚨

Can AI truly "forget"? While unlearning promises data removal, controlling emergent capabilities is an inherent challenge. Here's why it matters: 👇

Paper: arxiv.org/pdf/2501.04952
1/8
January 10, 2025 at 4:58 PM
Reposted by Torr Vision Group Oxford
🧵 [1/3] Heading to #Vancouver 🇨🇦 tomorrow to present our latest work in @OxfordTVG #UniversityOfOxford at #NeurIPS2024 🧠:
- 💥 Improving on #StylizedImageNet with #IllusionBench: can you see the cat 🐈‍⬛ Hidden in Plain Sight in the picture 🖼️?

Paper: arxiv.org/abs/2411.06287
December 9, 2024 at 9:04 PM