Daniel S. Schiff
@dschiff.bsky.social
Assistant Professor @purduepolsci & Co-Director of the Governance & Responsible AI Lab (GRAIL). Studying #AI policy and #AIEthics. Secretary for the @IEEE 7010 standard.
3/7 ✅ Results show AILIT-S delivers:
• ~5 minutes completion time (vs. 12+ minutes for the full version)
• 91% congruence with comprehensive assessment
• Strong performance for group-level analysis

Trade-off: slightly lower individual reliability (α = 0.61 vs 0.74)
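The reliability figures above are Cronbach's α, which compares summed item-level variance to total-score variance. A minimal pure-Python computation (the scores below are made up for illustration, not the study's data):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for k items scored by the same respondents.

    items: list of k lists, each holding one score per respondent.
    """
    k = len(items)

    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Perfectly consistent items give alpha = 1.0
print(cronbach_alpha([[1, 2, 3], [1, 2, 3]]))  # → 1.0
```

Fewer items generally means lower α, which is why a 10-item short form trades some reliability against the 28-item original.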
February 17, 2026 at 3:04 PM
2/7 📊 AILIT-S covers 5 core themes:
• What is AI?
• What can AI do?
• How does AI work?
• How do people perceive AI?
• How should AI be used?

Special emphasis on technical understanding—the foundation of true AI literacy.
February 17, 2026 at 3:04 PM
1/7 ⚡ The challenge: Existing AI literacy tests take 12+ minutes, making them impractical for large-scale assessment.

Our solution distills a robust 28-item instrument into 10 key questions—validated with 1,465 university students across the US, Germany, and UK.
February 17, 2026 at 3:04 PM
🚧 Major challenges we identified:
• Keeping up with AI's rapid evolution
• Teaching abstract concepts to diverse audiences
• Shortage of trained educators
• Misalignment between teaching goals & assessment methods
February 11, 2026 at 4:58 PM
🔑 Key finding: The best programs go beyond "rules for algorithms."

They tackle societal issues—bias, fairness, privacy, social justice. Higher-ed leads with comprehensive curricula, but K-12 efforts are still catching up.
February 11, 2026 at 4:58 PM
📚 We analyzed content, pedagogy & assessment practices across AI ethics education (2018-2023).

The results? A field full of promise but grappling with fundamental challenges in what to teach, how to teach it, and whether students are actually learning.
February 11, 2026 at 4:58 PM
3/7 Using clustering techniques, we identified 3 student profiles:

🚀 AI Advocates (48%): Tech-savvy, confident, excited
🤔 Cautious Critics (21%): Skeptical, low confidence, minimal use
⚖️ Pragmatic Observers (31%): Neutral attitudes, moderate interest
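The thread doesn't name the specific algorithm, so purely for intuition, here is a minimal k-means sketch in plain Python on toy 2-D points standing in for survey responses. The evenly-spaced initialization is a simplification (real analyses typically use random restarts):

```python
def kmeans(points, k, iters=100):
    """Minimal k-means: assign each point to its nearest centroid,
    then recompute centroids, until assignments stop changing.

    points: list of equal-length coordinate tuples.
    """
    # Simplified deterministic init: evenly spaced points as seeds
    centroids = points[::max(1, len(points) // k)][:k]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda i: sum((a - b) ** 2
                                            for a, b in zip(p, centroids[i])))
            clusters[nearest].append(p)
        new = [tuple(sum(dim) / len(cl) for dim in zip(*cl)) if cl else centroids[i]
               for i, cl in enumerate(clusters)]
        if new == centroids:  # converged
            break
        centroids = new
    return centroids, clusters
```

With well-separated groups of respondents, the recovered centroids play the role of the "profiles" described above.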
February 6, 2026 at 1:59 PM
2/7 Key findings suggest:

✅ Using AI tools (like ChatGPT) boosts interest
✅ Positive attitudes predict higher engagement
✅ Interest acts as the bridge connecting attitudes, literacy, and confidence

Our validated path model:
February 6, 2026 at 1:59 PM
4/7 📚 Over 70% of students across all countries haven't taken an AI-related course. Germany leads with 26.5% participation, but that's still inadequate. Most learning happens informally through tools like ChatGPT, but few dive deeper into foundational concepts.
January 27, 2026 at 8:26 PM
2/7 📊 Germany leads in actual AI knowledge, outscoring UK and US students. The reason? Better integration of AI coursework. But the US shows highest self-confidence in AI skills, while UK students express the most skepticism toward AI.
January 27, 2026 at 8:26 PM
Are university students ready for an AI-driven world? 🤔 Our study of 1,465 students across Germany, UK, and US reveals significant gaps in AI literacy among future professionals. Published in Computers in Human Behavior: Artificial Humans 🧵

www.sciencedirect.com/science/art...
January 27, 2026 at 8:26 PM
5/7 Baseball Hall of Fame (2009-2016): Writers debated PED-linked players.

SCA uncovered growing rift:
⚾ Pro-forgiveness coalition
⚾ Purist coalition

Schism reinforced decision silos over time
January 20, 2026 at 3:04 PM
4/7 Wikipedia 2013: Visual Editor rollout sparked editor rebellion.

SCA revealed two warring coalitions:
🛠️ Technical design focus
🌍 Cultural norms focus

Leadership missed these divisions—a strategy disaster followed
January 20, 2026 at 3:04 PM
2/7 SCA works like a smarter, preference-based clustering: it groups actors by maximizing shared utility.

Result? Clear maps of internal divisions, even with sparse data 📊
January 20, 2026 at 3:04 PM
8/9 🚨 By 2022, partisan cleavages hardened:

• Republicans: Pro-AI in defense, law enforcement
• Democrats: Focused on risks, equity concerns

This mirrored broader ideological divides over regulation and government intervention.
January 14, 2026 at 8:32 PM
4/9 🔑 Trigger 1—Problem definitions:

Early AI framing was broad: transparency, privacy, ethics. But when linked to racial equity or redistribution, partisan divides flared.

• Dems: Equity-focused reforms
• GOP: Industry self-regulation
January 14, 2026 at 8:32 PM
3/9 🤔 But by 2022, bipartisan windows narrowed.

We identify 4 key triggers of polarization:
1️⃣ Competing problem definitions
2️⃣ Divergent policy tools
3️⃣ Stakeholder dynamics
4️⃣ Strategic "subsystem shopping"
January 14, 2026 at 8:32 PM
2/9 💡 Early wins focused on "soft governance"—transparency requirements, research initiatives—mostly avoiding polarizing debates.
January 14, 2026 at 8:32 PM
4/4 The path forward 🛤️

We need to:
• Integrate ethics into core STEM curricula
• Leverage peer-based learning (it works!)
• Connect coursework to real challenges
• Rethink "STEM readiness"

Technical competence without social consciousness isn't enough.

@purduepolsci.bsky.social @GRAILcenter.bsky
January 9, 2026 at 2:04 PM
3/4 Students described their STEM courses as:
• Too technical (little ethics integration)
• Disconnected from societal dimensions
• Career-focused rather than impact-focused

Meanwhile, STEM professionals shape our future—from AI to climate tech ⚡
January 9, 2026 at 2:04 PM
1/4 📊 The data are stark:

• Professional Connectedness scores dropped significantly (5.65 → 5.43, p < 0.001)
• Students increasingly prioritized salary over societal impact
• Self-efficacy to drive social change declined

Published in International Journal of STEM Education
January 9, 2026 at 2:04 PM
5/7 📈 Finding 2 (The Catch): This public influence ONLY holds for the innovation frame. When the public discusses AI's economic potential, policymakers listen. When the public discusses AI ethics or security, we see no statistically significant influence on policymakers.
August 4, 2025 at 1:57 PM
4/7 📈 Finding 1: Public attention does predict policymaker attention.

The time-series analysis (ARIMA + VAR) shows that a one standard deviation increase in public tweets about AI is associated with a 22.4% increase in Congressional messaging on AI that same week.
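As a rough illustration of the VAR piece only (the published analysis also involves ARIMA components and specifications not reproduced here), a lag-1 vector autoregression can be fit by ordinary least squares; the variable framing below is a placeholder, not the paper's data:

```python
import numpy as np

def fit_var1(series):
    """OLS fit of a lag-1 VAR: y_t ≈ A @ y_{t-1} + c.

    series: (T, k) array — e.g. weekly public attention and
    Congressional messaging as the k columns (placeholder framing).
    Returns the (k, k) lag matrix A and the intercept vector c.
    """
    # Stack lagged values plus an intercept column as regressors
    X = np.column_stack([series[:-1], np.ones(len(series) - 1)])
    coef, *_ = np.linalg.lstsq(X, series[1:], rcond=None)
    return coef[:-1].T, coef[-1]
```

Off-diagonal entries of the recovered A are what carry the "public attention predicts policymaker attention" story: a nonzero A[1, 0] means last week's public series helps predict this week's Congressional series.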
August 4, 2025 at 1:57 PM
2/7 I focused on three dominant ways people frame AI:

📈 Innovation: AI as a driver of economic growth & productivity.
🙏 Ethics: AI's impact on fairness, rights, bias, and safety.
⚔️ Competition: AI in the context of the US-China race.
August 4, 2025 at 1:57 PM
8/12 Crucially, this latter effect was especially pronounced for non-White students, suggesting peer interaction can be a key driver of equity-related development.

Another note: women consistently score higher on SR than men
July 11, 2025 at 2:27 PM