Stanford Tech Impact and Policy Center
@techimpactpolicy.bsky.social
Transforming research into real-world impact to advance human agency and well-being in the era of social media and AI.

🔗 tip.fsi.stanford.edu
Join us on February 10 to explore one of the key questions around tech impact and policy in today’s K-12 schools: the educational effects of #SchoolCellphoneBans.

Guilherme Lichand will present evidence that phone restrictions in schools *causally* boost K–12 learning outcomes.

Details & RSVP ⤵️
Guilherme Lichand | The Educational Impacts of School Phone Bans
Evidence from Brazil
stanford.io
February 2, 2026 at 4:43 PM
Will #AI replace human workers or will it empower them?

Join us for a talk by @robreich.bsky.social, who will examine the distinction between #automation & #augmentation and discuss how design choices, policy decisions, and adoption patterns will determine AI's effects on labor and society.

RSVP ⤵️
January 27 | AI, Automation, and Augmentation
cyber.fsi.stanford.edu
January 23, 2026 at 5:10 PM
The Stanford Report profiled the Empowering Diverse Digital Citizens research project, which is spearheaded by Tech Impact and Policy Center Director Jeff Hancock and supported by @stanfordimpactlabs.bsky.social.

#AILiteracy #DigitalLiteracy
Empowering users to discern fact from fiction in the age of AI
A new project will explore interventions that help individuals effectively use AI while building literacy to avoid scams and abuse.
stanford.io
January 22, 2026 at 4:41 PM
“That’s going to be the big challenge, is that for a while people are really going to not trust things they see in digital spaces.”

Jeff Hancock spoke with NBC News about the shifting dynamics of trust and media literacy as deepfakes and other AI-generated images flood digital ecosystems.

Read ⤵️
AI is intensifying a 'collapse' of trust online, experts say
From Venezuela to Minneapolis, the rapid rollout of deepfakes around major news events is stirring confusion and suspicion about real news.
www.nbcnews.com
January 21, 2026 at 5:32 PM
Is making more time and space for being *human* the key to making #AI work in the workplace?

In a follow-up to their groundbreaking article on AI #workslop, Jeff Hancock & co-authors share insights on what's driving the rise of workslop—and how organizational leaders can prevent it.

Read @hbr.org ⤵️
Why People Create AI “Workslop”—and How to Stop It
With the rise of gen AI tools, offices have had to contend with a new scourge: “workslop” or low-effort, AI-generated work that looks plausibly polished, but ends up wasting time and effort as it offl...
hbr.org
January 20, 2026 at 9:00 PM
Starting soon—join us online!

👉 stanford.io/49ott0x
January 20, 2026 at 7:29 PM
How can we shift from seeing #AI as a cheating tool to a pedagogical partner that fosters creativity, critical thinking, and personalization?

Join us in person or online for the next talk in our #WinterSeminarSeries, featuring Peter Norvig of @stanfordhai.bsky.social!

RSVP ⤵️
stanford.io/49ott0x
January 16, 2026 at 4:19 PM
Reposted by Stanford Tech Impact and Policy Center
Next was an excellent talk by Jon Krosnick on the history of tech-enabled survey research at @techimpactpolicy.bsky.social. I loved the review of industry and academia's fraught relationship with honestly communicating methodological limits. Highly recommend www.youtube.com/watch?v=8Y4B... (7/8)
January 13 | How Tech Has Enabled Survey Research and Undermined It
YouTube video by Stanford Tech Impact and Policy Center
www.youtube.com
January 14, 2026 at 3:53 AM
In light of Grok's ongoing deepfake nude scandal, Riana Pfefferkorn of @stanfordhai.bsky.social wrote an op-ed for @nytopinion.nytimes.com sharing research she published with the Tech Impact and Policy Center last year, which found that legal risk is impeding AI companies from better safeguarding their models.
Opinion | There’s One Easy Solution to the A.I. Porn Problem
www.nytimes.com
January 13, 2026 at 7:34 PM
Coming up today at 12PM PT — join us!

RSVP: stanford.io/4qHo626
January 13, 2026 at 4:29 PM
Join us on Tuesday for the launch of our #WinterSeminarSeries!

Award-winning Stanford professor, research psychologist, and public opinion expert Jon A. Krosnick will discuss the role of #SurveyResearch in modern life and its evolution in the digital era.

RSVP: stanford.io/4qHo626
January 8, 2026 at 7:58 PM
Join us today at 12pm PT as our #FallSeminarSeries wraps up with a discussion on school smartphone bans.

🔗 stanford.io/47Sb7o3
December 2, 2025 at 5:34 PM
@robbwiller.bsky.social, Director of our #AI and the Future of #SocialScience program, spoke with the Stanford Report about his recent research showing that AI-generated political messages can be as persuasive as those developed by humans.

Learn more ⤵️
AI rivals humans in political persuasion
New research reveals that people find AI-delivered political arguments convincing. This could help bridge political divides – or fuel polarization.
stanford.io
November 26, 2025 at 4:32 PM
What does the early evidence tell us about school #SmartphoneBans?

Don’t miss the final seminar in our fall series, featuring Hunt Allcott, professor at @stanforddoerr.bsky.social.

RSVP to join us in person or online on Tuesday, December 2!
🔗 stanford.io/47Sb7o3
November 25, 2025 at 5:00 PM
One week left! Submit your paper for the Journal of Online Trust & Safety’s Spring 2026 general issue. ⤵️

🗓 Key Dates

➤ Peer-reviewed research articles due: December 1, 2025
➤ Commentaries due: March 1, 2026
➤ Publication date: April 2026

👉 Submit Your Paper
bit.ly/4oi8b97

#JOTS #TrustAndSafety
November 24, 2025 at 5:25 PM
The Tech Impact and Policy Center is proud to celebrate Dr. Angela Lee, who successfully defended her doctoral dissertation, “Beyond the Digital Town Square: Identifying and Correcting Social Media Distortion Effects,” last week! 🎉
November 20, 2025 at 11:26 PM
Read about our workshop on #AICompanions, which brought together academic researchers, civil society experts, and industry members to consider guidelines for the responsible deployment of #AI roleplaying chatbots and companions. ⤵️
November 20, 2025 at 6:00 PM
Reposted by Stanford Tech Impact and Policy Center
I gave a talk at Stanford @techimpactpolicy.bsky.social on Tuesday, about my book "The Secret Life of Data" (written w @jesgilbert.bsky.social).

Check it out here:
November 13, 2025 at 1:15 PM
Many social media users are stuck in a social trap: they would prefer not to use social media, but feel they could quit only if others stopped using it too.

Join us to hear from Leonardo Bursztyn about his research on the #SocialMediaTrap and potential tools to address it.

🎟️ stanford.io/4qWz05d
November 12, 2025 at 9:54 PM
Reposted by Stanford Tech Impact and Policy Center
Cannot wait to meet with the Stanford folks to discuss “The Secret Life of Data"
November 5, 2025 at 8:52 PM
Join us on Tuesday, November 11 for a seminar with @aramsinn.bsky.social, co-author of "The Secret Life of Data," to explore the unpredictable and often surprising ways in which data surveillance, #AI, and algorithms impact our culture and society.

🎟️ RSVP: stanford.io/3X6b8hM
November 5, 2025 at 5:56 PM
New to the #TIPCenter team, 2025-26 Predoctoral Researcher Yuewen Yang spoke with us about her work, her journey into the world of Human-Computer Interaction, and what she hopes to accomplish through her research.

Read the interview ⤵️

#MeetOurScholars
Meet our Scholars: Yuewen Yang 2025-26 Pre-doctoral Fellow
We’re thrilled to introduce one of the newest members of our team, pre-doctoral researcher Yuewen Yang. RT Rogers sat down with Yuewen to learn more about her work, her journey into the world of Human...
stanford.io
November 4, 2025 at 4:48 PM
In the first in a series of posts about data scraping & researcher rights under the EU’s #DigitalServicesAct, Director of Platform Regulation @daphnek.bsky.social outlines who can take advantage of the DSA’s protections, comparing three categories of researchers.

Read @techpolicypress.bsky.social ⤵️
Determining Which Researchers Can Collect Public Data Under the DSA | TechPolicy.Press
The DSA opens important opportunities for researchers collecting publicly available data, but leaves key questions unresolved, writes Daphne Keller.
bit.ly
October 31, 2025 at 7:03 PM
Why do older adults engage more with #misinformation online, even when they often identify falsehoods correctly in surveys?

In our next #FallSeminarSeries talk, @benlyons.bsky.social of the University of Utah will investigate that paradox.

RSVP to join us in person or online on Tuesday, Nov. 4!
November 4 | Dubious News and the Aging American: Understanding Discernment
stanford.io
October 30, 2025 at 3:52 PM
Reposted by Stanford Tech Impact and Policy Center
My latest piece with @techpolicypress.bsky.social on a key theme from @techimpactpolicy.bsky.social's Trust and Safety Research Conference. It includes @dwillner.bsky.social's keynote and presentations from @gligoric.bsky.social, @thejusticecollab.bsky.social's Matthew Katsaros, and more.
Platforms are accelerating the transition to AI for content moderation, laying off trust and safety workers and outsourced moderators in favor of automated systems, writes Tim Bernard. The practice raises a host of questions, some of which are now being studied both in universities and in industry.
Researchers Explore the Use of LLMs for Content Moderation | TechPolicy.Press
The subject was a stand-out theme at the Trust and Safety Research Conference, held last month at Stanford, writes Tim Bernard.
www.techpolicy.press
October 30, 2025 at 2:36 PM