Danqing Shi
danqingshi.bsky.social
Human-Computer Interaction, Human-AI Interaction, Visualization
University of Cambridge
https://sdq.github.io
Feedback in RLHF is collected from human raters who are prompted by the system to compare pairs of responses and express preferences. What advantages do we gain if humans can exercise higher agency? doi.org/10.1111/cgf....
December 1, 2025 at 9:42 AM
Thrilled to share our #UIST2025 research! We investigate how the decomposition principle can improve human feedback for LLM alignment. In a 160-participant study, our tool DxHF increased feedback accuracy by 4.7%.
👉 sdq.github.io/DxHF

Furui Tino
@oulasvirta.bsky.social @elassady.bsky.social
September 17, 2025 at 7:32 PM
Reposted by Danqing Shi
📢The open access version of our book is available now via OUP's site: global.oup.com/academic/pro...
August 20, 2025 at 4:57 AM
AI models are paving the way for explainable AI and better human-computer interaction fcai.fi/news/2025/4/...
AI that mirrors how humans behave can drive better designs for keyboards and charts — FCAI
The human-like performance of these AI models is also transparent, paving the way for explainable AI and better human-computer interaction.
May 14, 2025 at 6:54 AM
Our paper has been selected for #CHI2025 Best Paper Honorable Mention recognition 🥳🥳
1/ Why do people make so many errors in touchscreen typing, and how do they fix them?

Our #CHI2025 paper introduces Typoist, a computational model that simulates human typing errors across perception, motor control, and memory. 📄 arxiv.org/abs/2502.03560
March 27, 2025 at 9:48 AM
Reposted by Danqing Shi
Many thanks to SIGCHI for recognizing our work and to numerous brilliant colleagues. It is a great honor to join the Academy.
🎉 We're delighted to announce the recipients of the 2025 ACM SIGCHI Awards! Congratulations to these incredible awardees!
February 26, 2025 at 4:36 AM
1/ How do people read charts when they have a specific task in mind? Their gaze isn’t random!
Our #CHI2025 paper introduces Chartist, the first model designed to simulate these task-driven eye movements. 📄 arxiv.org/abs/2502.03575
February 24, 2025 at 11:53 AM