Parastoo Abtahi
@parastooabtahi.bsky.social
300 followers 130 following 15 posts
Assistant Professor of Computer Science at Princeton | #HCI #AR #VR #SpatialComputing parastooabtahi.com hci.social/@parastoo
Pinned
parastooabtahi.bsky.social
Thrilled that Reality Promises received a best paper award at #UIST2025.

Come see Mo Kari’s talk on the last day of the conference!

📍Wed, at 11:00 AM, in the Sydney room
parastooabtahi.bsky.social
In #VR, users can experience “magical” interactions, such as moving distant virtual objects with the Go-Go technique. How might we similarly extend people’s abilities in the physical world? 🪄

Excited to share Reality Promises, our #UIST2025 paper, led by the amazing Mo Kari ✨
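For readers unfamiliar with the Go-Go technique mentioned above: a minimal sketch of the non-linear arm-extension mapping from Poupyrev et al. (UIST 1996). The parameter values below are illustrative choices, not taken from the Reality Promises paper.

```python
def gogo_mapping(r_real: float, d: float = 0.5, k: float = 0.167) -> float:
    """Go-Go non-linear mapping (Poupyrev et al., UIST 1996): map the
    real hand's distance from the body (meters) to the virtual hand's
    distance. Within d the mapping is one-to-one; beyond d it grows
    quadratically, letting users reach distant objects.
    Parameter values here are illustrative.
    """
    if r_real < d:
        return r_real
    return r_real + k * (r_real - d) ** 2

print(gogo_mapping(0.4))  # 0.4    (within d: direct, one-to-one mapping)
print(gogo_mapping(0.7))  # ~0.707 (beyond d: extended virtual reach)
```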
Reposted by Parastoo Abtahi
jonfroehlich.bsky.social
With the CHI deadline fast approaching, I'm resharing our lab's resource on making figures for HCI papers: docs.google.com/presentation...

New content suggestions always appreciated. Don't be shy to promote your own work!
Makeability Lab - How to Figures
Reposted by Parastoo Abtahi
uploadvr.com
Researchers made a robot that can make deliveries to VR users. They call it Skynet.

Details here: www.uploadvr.com/invisible-mo...
parastooabtahi.bsky.social
“Reality Promises: Virtual-Physical Decoupling Illusions in Mixed Reality via Invisible Mobile Robots”

Paper: hci.princeton.edu/wp-content/u...
Full Video: youtu.be/SdDXvIB79j0
Project Page: mkari.de/reality-prom...

See you in Busan! 🇰🇷

#HCI #HRI
parastooabtahi.bsky.social
In #AR, using real-time on-device 3D Gaussian splatting, we create the illusion that physical changes occur instantaneously, while a hidden robot fulfills the “reality promise” moments later, updating the physical world to match what users already perceive visually. 🤖
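A toy sketch of the "reality promise" flow as described in the post: the AR view updates instantly, while the hidden robot catches up asynchronously. The function names and timing below are hypothetical stand-ins, not the authors' implementation.

```python
import asyncio

async def robot_fulfill(obj: str, pose: tuple[float, float]) -> None:
    # Stand-in for the hidden mobile robot's pick-and-place (takes real time).
    await asyncio.sleep(2.0)
    print(f"[robot] physically moved {obj} to {pose}")

def render_splat(obj: str, pose: tuple[float, float]) -> None:
    # Stand-in for rendering the object's 3D Gaussian splat at the target pose.
    print(f"[AR]    user already sees {obj} at {pose}")

async def reality_promise(obj: str, pose: tuple[float, float]) -> None:
    render_splat(obj, pose)         # instant: the user perceives the change now
    await robot_fulfill(obj, pose)  # later: the robot makes reality match
    print(f"[AR]    swap the splat for the real {obj}")

asyncio.run(reality_promise("plant", (1.0, 0.5)))
```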
parastooabtahi.bsky.social
Even virtual agents’ actions can have physical effects, with motion paths that divert attention from the hidden robot. 🐝
parastooabtahi.bsky.social
Beyond materializing physical objects (seemingly out of thin air), users can manipulate out-of-reach objects via RealityGoGo — creating the illusion of telekinesis. 🪴
parastooabtahi.bsky.social
Check out Lauren Wang’s #UIST2025 poster on GhostObjects: life-size, world-aligned virtual twins for fast and precise robot instruction, with real-world lasso selection, multi-object manipulation, and snap-to-default placement.

This is the first piece in her ongoing work on #AR for #HRI 🤖👓
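A rough guess at what "snap-to-default placement" could look like: if a released virtual twin lands near a predefined pose, it snaps there. The function and threshold below are hypothetical, not code from the poster.

```python
from math import dist

def snap_to_default(pos: tuple[float, float],
                    defaults: list[tuple[float, float]],
                    radius: float = 0.15) -> tuple[float, float]:
    """If a released virtual twin lands within `radius` meters of a
    predefined default pose, snap it there; otherwise keep the free
    placement. A guess at the mechanism, not code from the poster."""
    nearest = min(defaults, key=lambda d: dist(pos, d))
    return nearest if dist(pos, nearest) <= radius else pos

print(snap_to_default((1.02, 0.48), [(1.0, 0.5), (2.0, 0.5)]))  # snaps to (1.0, 0.5)
```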
parastooabtahi.bsky.social
📢 Find Judy Fan (@judithfan.bsky.social) at #CogSci2025 during Poster Session 1 (⏰Tomorrow, 1–2:15 PM | 📍Salon 8) to learn about our work on understanding multimodal communication and how people form linguistic and gestural abstractions in collaborative physical tasks.
Poster for the Cognitive Tools Lab at CogSci 2025, scheduled for Thursday, July 31. The poster is titled “Using gesture and language to establish multimodal conventions in collaborative physical tasks.” It features an image of a hand pointing to a 2×2 grid, with an arrow indicating movement from the bottom-left square to the top-left square. A quote reads, “... the green block pointing this way,” and the gesture is labeled “Complementary position & orientation.” Headshots of the four authors, Maeda, Tsai, Fan, and Abtahi, appear at the bottom. The session is listed as Poster Session 1 at 1:00 pm.
Reposted by Parastoo Abtahi
sunniesuhyoung.bsky.social
📢 I successfully defended my PhD dissertation! Huge thanks to my committee (Olga @andresmh.com @jennwv.bsky.social @qveraliao.bsky.social @parastooabtahi.bsky.social) & everyone who supported me ❤️

📢 Next I'll join Apple as a research scientist in the Responsible AI team led by @jeffreybigham.com!
Sunnie standing in front of her presentation celebrating the successful defense 🎉
Vera, Andrés, Sunnie, Olga, and Jenn (on Sunnie’s laptop screen) celebrating
Group photo of everyone who joined Sunnie’s dissertation defense
Lauren, Sunnie, and Jeff (photo taken at CHI 2025)
Reposted by Parastoo Abtahi
sunniesuhyoung.bsky.social
Tue April 29: I'll be cheering Indu Panigrahi present our LBW on interactive AI explanations (w/ Amna, Rohan, Olga, Ruth, @parastooabtahi.bsky.social) in the 10:30-11:10am and 3:40-4:20pm poster sessions (North 1F)

🧵 bsky.app/profile/para...
📌 programs.sigchi.org/chi/2025/pro...
parastooabtahi.bsky.social
Check out Indu Panigrahi’s LBW at #CHI2025: “Interactivity x Explainability: Toward Understanding How Interactivity Can Improve Computer Vision Explanations.”

🔗 Project Page: ind1010.github.io/interactive_XAI
📄 Extended Abstract: arxiv.org/abs/2504.10745
Title: “Interactivity x Explainability: Toward Understanding How Interactivity Can Improve Computer Vision Explanations.” Authors: Indu Panigrahi, Sunnie S. Y. Kim*, Amna Liaqat*, Rohan Jinturkar, Olga Russakovsky, Ruth Fong, Parastoo Abtahi. Logos: Princeton University, NSF, OpenPhil, Princeton HCI, Open Glass Lab, and Princeton Visual AI Lab. CHI 2025, April 26–May 1, 2025, Yokohama, Japan, including illustrations of Yokohama’s skyline, ferris wheel, and a pink sailboat labeled “CHI.”
parastooabtahi.bsky.social
In collaboration with @sunniesuhyoung.bsky.social, Amna Liaqat, Rohan Jinturkar, Olga Russakovsky, and Ruth Fong.

Excited to share that Indu will be starting as a PhD student at UIUC this fall! 🎉
parastooabtahi.bsky.social
This is a qualitative study of how simple interactive mechanisms—filtering, overlaid annotations, and counterfactual image edits—might address existing challenges with static CV explanations, such as information overload, the semantic-pixel gap, and limited opportunities for exploration. (A toy sketch of the filtering idea follows the figure description below.)
A 3×4 grid showing bird images with visual explanations for Static, Filtering, Overlays, and Counterfactuals across three types: Heatmap, Concept, and Prototype.

Heatmap row:
Color heatmaps over birds with labels “More Important” and “Less Important.” Filtering separates “Most Important Areas” and “Least Important Areas” with a “Show More” slider. Overlays add a tooltip: “The bird part that you are hovering near is: grey bill.” Counterfactuals include prediction text—“‘Heermann’s gull’”—and editable attributes like “Back Pattern” and “Bill Color.”

Concept row:
Bar charts show the importance of features like “black bill” and “white tail.” Filtering splits “Positive” and “Negative Concepts” with sliders. Overlays label parts like “spotted belly” and “grey wing.” Counterfactuals show the prediction “pine grosbeak” with concept bars and edit options like “Tail Color.”

Prototype row:
Birds are overlaid with patches showing similarity scores (e.g., “0.98 similar”). Filtering compares “Prototypes” and “Criticisms.” Overlays highlight areas with tooltips like “grey crown.” Counterfactuals include the label “Eastern towhee” and editable features like “Belly Color” and “Wing Color.”
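As a toy illustration of the filtering mechanism, one plausible reading of the "Show More" slider is a quantile threshold over a saliency map. The code below is an assumption for illustration, not code from the paper.

```python
import numpy as np

def filter_saliency(importance: np.ndarray, keep_fraction: float) -> np.ndarray:
    """Keep only the top `keep_fraction` most-important pixels of a
    saliency map, zeroing the rest. One plausible reading of the
    filtering slider, not code from the paper."""
    threshold = np.quantile(importance, 1.0 - keep_fraction)
    return np.where(importance >= threshold, importance, 0.0)

heatmap = np.random.rand(224, 224)            # toy saliency map
print(filter_saliency(heatmap, 0.1).sum() > 0)  # only the top 10% survives
```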
Reposted by Parastoo Abtahi
pedrolopes.org
Boosting this up for a last chance to join us at #CHI2025 by serving as an associate chair (AC) for the @chi.acm.org Late-Breaking Work program! Please forward to anyone you know who might be interested.
pedrolopes.org
Help with the CHI Late-Breaking Work program! We are looking for associate chairs (ACs). Not only is LBW an excellent track for our community to show new work at CHI, but it is also an important step in training new ACs in our community: docs.google.com/forms/d/e/1F... #hci #CHI2025
CHI 2025 Late-Breaking Work: Application for AC roles (due: Nov 30)
The CHI 2025 Late-Breaking Work (LBW) Co-chairs invite you to volunteer as an Associate Chair (AC) for the CHI 2025 LBW Program Committee! Please use the form below to provide details about yourself...
Reposted by Parastoo Abtahi
comic-sans-soleil.bsky.social
HCI researchers starter pack. Lets you follow a bunch of HCI people at once (which the HCI list didn't let you do).

Again, ask to be added if I missed you.
go.bsky.app/p3TLwt
parastooabtahi.bsky.social
Thanks for putting this together! I recently joined, and this is very helpful—would love to be added!
parastooabtahi.bsky.social
I’m new here, so would be great to be added—thanks for putting this together!
Reposted by Parastoo Abtahi
andrikos.bsky.social
#HCI Starter Packs:

HCI Researchers:
go.bsky.app/p3TLwt

HCI + Games:
go.bsky.app/CTm8Qea

HCI + Accessibility Researchers:
go.bsky.app/PZxHNqP

XR + HCI Research:
go.bsky.app/8Tdtocp
parastooabtahi.bsky.social
Is there a way to join a starter pack?
Reposted by Parastoo Abtahi
cocoscilab.bsky.social
(1/5) Very excited to announce the publication of Bayesian Models of Cognition: Reverse Engineering the Mind. More than a decade in the making, it's a big (600+ pages) beautiful book covering both the basics and recent work: mitpress.mit.edu/978026204941...
Reposted by Parastoo Abtahi
cfiesler.bsky.social
Since #AcademicBluesky seems to be a bigger thing now, I wanted to share my PhD admissions advice YouTube resources. Please pass this on to anyone you think it might help! Probably most useful for STEM and especially CS adjacent fields, but broadly applicable. cfiesler.medium.com/phd-admissio...
PhD Admissions Advice
Sorry about the hidden curriculum. :(