Social Computing Group - UZH
@scg-uzh.bsky.social
8 followers 7 following 8 posts
Research group on Social Computing led by Prof. Dr. Anikó Hannák at the University of Zurich. https://www.ifi.uzh.ch/en/scg.html
scg-uzh.bsky.social
Desheng gave a talk on “Auditing Google’s AI Overview and Featured Snippets: A Case Study on Baby Care and Pregnancy”
and @aurman21.bsky.social presented a poster about the fact-checking limitations of LLMs, a joint work with @joachimbaumann.bsky.social and colleagues.
scg-uzh.bsky.social
Elsa presented a poster on an interactive study design to unveil users’ underlying motivations in web search, and gave a talk on a conceptualization of web search intent through a user-centered taxonomy.
scg-uzh.bsky.social
About last week's conference 😏
Elsa Lichtenegger, Desheng Hu and @aurman21.bsky.social presented their ongoing work at the Search Engines and Society Network (SEASON) conference in Hamburg last week 🔍 🌐
scg-uzh.bsky.social
Dr. Meirav Segal is interested in various aspects of trustworthy AI, including fairness and robustness in dynamic, uncertain environments.
scg-uzh.bsky.social
🚨 New Postdoc joining our group starting today 🤠
Dr. Meirav Segal earned her PhD from the University of Oslo, where she worked on algorithmic fairness and recourse for allocation policies.
A very warm welcome to the team 😃
scg-uzh.bsky.social
While the outcomes of this one-day collective effort are currently kept secret 😈, we can still share our initial results, recently presented at EWAF 2025 www.canva.com/design/DAGrQ...
scg-uzh.bsky.social
About last week’s internal hackathon 😏
Last week, we -- the (Amazing) Social Computing Group -- held an internal hackathon to work on our project, informally called “Cultural Imperialism”.
scg-uzh.bsky.social
If you thought that LLMs are reliable annotators... we have bad news for you 🫣 🤷
Check out the new paper from our group members @joachimbaumann.bsky.social (freshly graduated 😜), @aurman21.bsky.social and colleagues 😎
joachimbaumann.bsky.social
🚨 New paper alert 🚨 Using LLMs as data annotators, you can produce any scientific result you want. We call this **LLM Hacking**.

Paper: arxiv.org/pdf/2509.08825
We present our new preprint titled "Large Language Model Hacking: Quantifying the Hidden Risks of Using LLMs for Text Annotation".
We quantify LLM hacking risk through systematic replication of 37 diverse computational social science annotation tasks.
For these tasks, we use a combined set of 2,361 realistic hypotheses that researchers might test using these annotations.
Then, we collect 13 million LLM annotations across plausible LLM configurations.
These annotations feed into 1.4 million regressions testing the hypotheses. 
For a hypothesis with no true effect (ground truth p > 0.05), different LLM configurations yield conflicting conclusions.
Checkmarks indicate correct statistical conclusions matching ground truth; crosses indicate LLM hacking -- incorrect conclusions due to annotation errors.
Across all experiments, LLM hacking occurs in 31-50% of cases even with highly capable models.
Since minor configuration changes can flip scientific conclusions from correct to incorrect, LLM hacking can be exploited to present anything as statistically significant.
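A toy sketch (in Python, not from the paper) of the mechanism: if an LLM's annotation errors correlate with a hidden text property that also drives the outcome, a regression on the LLM labels can look significant even though the true label has no effect.

import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(0)
n = 2000

# Hidden text property (e.g., length) that drives the outcome.
text_length = rng.normal(size=n)
# Ground-truth annotation, independent of the outcome (no true effect).
true_label = rng.integers(0, 2, size=n)
outcome = text_length + rng.normal(size=n)

# Regression on ground-truth labels: typically p > 0.05.
print("ground truth p =", round(linregress(true_label, outcome).pvalue, 3))

# Hypothetical LLM configuration whose errors depend on the hidden property:
# longer texts are more often mislabeled as 1.
flip_prob = 0.5 / (1 + np.exp(-3 * text_length))
llm_label = np.where((true_label == 0) & (rng.random(n) < flip_prob),
                     1, true_label)

# The same regression on LLM annotations now inherits the length effect
# and can report a spuriously significant coefficient (LLM hacking).
print("LLM config   p =", round(linregress(llm_label, outcome).pvalue, 3))

The paper stress-tests this at scale (thousands of hypotheses, millions of annotations); the sketch only illustrates why small configuration differences can flip a statistical conclusion.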