Danqing Shi
@danqingshi.bsky.social
Human-Computer Interaction, Human-AI Interaction, Visualization · University of Cambridge · https://sdq.github.io
danqingshi.bsky.social
Thrilled to share our #UIST2025 research! We investigate how the decomposition principle can improve human feedback for LLM alignment. In a 160-participant study, our tool DxHF increased feedback accuracy by 4.7%.
👉 sdq.github.io/DxHF
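The decomposition idea — judging a long LLM response in smaller units rather than holistically, then aggregating the unit-level judgments — can be illustrated with a toy sketch. Everything below (the sentence-level split, the function names, the aggregation rule) is an invented illustration, not DxHF's actual pipeline:

```python
# Toy illustration of decomposition-based feedback (NOT the DxHF pipeline):
# instead of one holistic judgment per response, the rater judges each
# sentence-level claim, and the response score aggregates those judgments.

def decompose(response: str) -> list[str]:
    """Naively split a response into sentence-level claims."""
    return [s.strip() for s in response.split(".") if s.strip()]

def aggregate(judgments: list[bool]) -> float:
    """Fraction of claims judged correct -> overall feedback score."""
    return sum(judgments) / len(judgments) if judgments else 0.0

response = "Paris is in France. The Seine flows through it. It has 3 rivers."
claims = decompose(response)
# Simulated per-claim human judgments (the last claim is wrong).
judgments = [True, True, False]
score = aggregate(judgments)  # 2 of 3 claims judged correct
```

The point of the decomposition: a rater who would miss one wrong detail in a wall of text is more likely to catch it when each claim is judged on its own.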

Reposted by Danqing Shi
oulasvirta.bsky.social
📢The open access version of our book is available now via OUP's site: global.oup.com/academic/pro...
danqingshi.bsky.social
Our paper has been selected for #CHI2025 Best Paper Honorable Mention recognition 🥳🥳
danqingshi.bsky.social
Thanks for the invitation! I am honored to serve as an #ieeevis PC member.
danqingshi.bsky.social
7/ Check out the project page for more details:
🔗 typoist.github.io
👨‍💻 Danqing Shi @danqingshi.bsky.social, Yujun Zhu, Francisco Erivaldo Fernandes Junior, Shumin Zhai, and Antti Oulasvirta @oulasvirta.bsky.social
danqingshi.bsky.social
6/ Typoist marks a notable departure from today's popular data-driven approaches: by explicitly modeling the causes of errors, rather than merely “parroting” statistically plausible typos, it takes a glass-box rather than a black-box approach.
danqingshi.bsky.social
5/ Why does this matter?

🚀It makes it possible to evaluate designs before undertaking an empirical study.
🚀It affords new methods of data augmentation in conditions where empirical data may be hard to collect.
🚀It opens the door to a new way of theorizing about errors in HCI.
danqingshi.bsky.social
4/ Building on the model, we developed a visualization-based exploration tool that helps practitioners and researchers simulate error behaviors. It also lets users fine-tune the model manually.
danqingshi.bsky.social
3/ How does Typoist work?

Typoist extends the computational rationality framework ( crtypist.github.io ) for touchscreen typing. It simulates eye & finger movements and predicts how users detect & correct errors.
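The simulate-then-supervise loop described above can be caricatured in a few lines. This is a hypothetical sketch of the control flow only — the actual model learns gaze and finger policies under the computational rationality framework; the function and parameter names here are invented:

```python
import random

def type_with_supervision(intended: str, press_key, proofread_every: int = 3) -> str:
    """Toy control loop: press keys one by one; every few keystrokes the
    eyes move to the text field, any mismatch is detected, and the text
    is (idealistically) backspace-corrected."""
    typed = ""
    for n, ch in enumerate(intended, 1):
        typed += press_key(ch)  # finger movement; may be noisy
        if n % proofread_every == 0 and typed != intended[: len(typed)]:
            typed = intended[: len(typed)]  # detect & correct the error
    return typed

rng = random.Random(0)
noisy_key = lambda ch: ch if rng.random() > 0.5 else "x"  # 50% slip rate
out = type_with_supervision("hello world", noisy_key)
```

Even this caricature shows the core interaction the post describes: error rates in the final text depend jointly on motor noise and on how often the eyes return to proofread.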
danqingshi.bsky.social
2/ Typing errors are more than just “fat finger” errors. They come in three main forms:

🔹Slips - motor execution deviates from the intended outcome;
🔹Lapses - memory failures;
🔹Mistakes - incorrect or partial knowledge.

Typoist captures them all!
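A rough illustration of this taxonomy (not Typoist's generative model, which grounds each error type in perception, motor, and memory mechanisms) is to treat the three types as different transformations of an intended keystroke sequence — all names and rules below are hypothetical:

```python
import random

def slip(text: str, rng: random.Random) -> str:
    """Slip: motor execution deviates from intent (caricatured here as
    one character shifted to a neighboring codepoint)."""
    i = rng.randrange(len(text))
    return text[:i] + chr(ord(text[i]) + 1) + text[i + 1:]

def lapse(text: str, rng: random.Random) -> str:
    """Lapse: a memory failure -- a character is forgotten entirely."""
    i = rng.randrange(len(text))
    return text[:i] + text[i + 1:]

def mistake(text: str, misbeliefs: dict[str, str]) -> str:
    """Mistake: incorrect knowledge -- the user consistently types the
    spelling they (wrongly) believe is correct."""
    for correct, believed in misbeliefs.items():
        text = text.replace(correct, believed)
    return text

rng = random.Random(42)
slipped = slip("hello", rng)    # same length, one character off
lapsed = lapse("hello", rng)    # one character shorter
typed = mistake("definitely fine", {"definitely": "definately"})
```

The distinction matters for correction behavior: a slip can be caught by proofreading, but a mistake will survive any number of re-readings because the typist believes it is correct.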
danqingshi.bsky.social
1/ Why do people make so many errors in touchscreen typing, and how do people fix them?

Our #CHI2025 paper introduces Typoist, a computational model that simulates human typing errors across perception, motor control, and memory. 📄 arxiv.org/abs/2502.03560
Reposted by Danqing Shi
oulasvirta.bsky.social
Many thanks to SIGCHI for recognizing our work and to numerous brilliant colleagues. It is a great honor to join the Academy.
acm-sigchi.bsky.social
🎉 We're delighted to announce the recipients of the 2025 ACM SIGCHI Awards! Congratulations to these incredible awardees!
[Image: all SIGCHI awardees for 2025]
danqingshi.bsky.social
7/ Check out the project page for more details:
🔗 chart-reading.github.io
👨‍💻 Danqing Shi @danqingshi.bsky.social, Yao Wang, Yunpeng Bai, Andreas Bulling, and Antti Oulasvirta @oulasvirta.bsky.social
danqingshi.bsky.social
6/ Applications
🚀Visualization design evaluation → Identify design issues before user testing
🚀Visualization design optimization → Automate feedback on data visualizations
🚀Explainability in chart question answering → Understand how visualizations influence perception
danqingshi.bsky.social
5/ We tested Chartist against real human eye-tracking data. It outperformed existing models (UMSS, DeepGaze III, and a VQA model) in simulating task-driven gaze movements on visualizations.
danqingshi.bsky.social
4/ How does this advance the state-of-the-art?

Chartist integrates task-driven cognitive control and oculomotor control, making it better at simulating how humans actually read charts. Best part? Chartist doesn’t need human eye-tracking data for training!
danqingshi.bsky.social
3/ How does Chartist work?

Chartist uses a hierarchical gaze control model with:
🔹A cognitive controller (powered by LLMs) to reason about the task-solving process
🔹An oculomotor controller (trained via reinforcement learning) to simulate detailed gaze movements
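The two-level split can be sketched abstractly. The class names, the bar-chart data structure, and the "look at the tallest bar" rule below are all invented for illustration — in the real model an LLM does the cognitive reasoning and an RL-trained policy produces the eye movements:

```python
from dataclasses import dataclass

@dataclass
class Fixation:
    x: float
    y: float
    duration_ms: float

class CognitiveController:
    """High level: reasons about the task and picks the next target
    region to look at (an LLM plays this role in the real model)."""
    def next_target(self, task, chart, history):
        # Hypothetical rule: for a "find max" task, target the tallest bar.
        tallest = max(chart["bars"], key=lambda b: b["height"])
        return (tallest["x"], tallest["y"])

class OculomotorController:
    """Low level: turns a target into a saccade and fixation (an
    RL-trained policy with oculomotor noise in the real model)."""
    def saccade_to(self, target):
        x, y = target
        return Fixation(x, y, duration_ms=200.0)

chart = {"bars": [{"x": 0, "y": 5, "height": 5},
                  {"x": 1, "y": 9, "height": 9}]}
cog, oculo = CognitiveController(), OculomotorController()
target = cog.next_target("find max", chart, history=[])
fix = oculo.saccade_to(target)  # fixation lands on the tallest bar
```

The design point the thread makes is the separation of concerns: the cognitive level decides *where to look and why*, while the oculomotor level decides *how the eyes get there*.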
danqingshi.bsky.social
2/ Given a chart + a task,
🧐 Want to find a specific value?
🔍 Need to filter relevant data points?
📈 Looking for extreme values?
Chartist predicts human-like eye movements, simulating how people move their gaze to address these tasks.
danqingshi.bsky.social
1/ How do people read charts when they have a specific task in mind? Their gaze isn’t random!
Our #CHI2025 paper introduces Chartist, the first model designed to simulate these task-driven eye movements. 📄 arxiv.org/abs/2502.03575