Daniel S. Schiff
@dschiff.bsky.social
Assistant Professor @purduepolsci & Co-Director of the Governance & Responsible AI Lab (GRAIL). Studying #AI policy and #AIEthics. Secretary for the @IEEE 7010 standard.
7/7 Curious what you think—does this match what you're seeing in AI education assessment?

For researchers and educators working on AI literacy:

www.sciencedirect.com/science/art...
Development and validation of a short AI literacy test (AILIT-S) for university students
February 17, 2026 at 3:04 PM
6/7 🔬 Next steps: Validation beyond Western university samples, workplace applications, and cross-cultural AI literacy research.

With Arne Bewersdorff and Marie Hornberger. Thanks to Google Research for funding a portion of this work.

@purduepolsci.bsky.social @GRAILcenter.bsky.social
February 17, 2026 at 3:04 PM
5/7 🌍 Why this matters for AI governance:
Scalable assessment tools are essential for evaluating education programs, informing policy decisions, and ensuring citizens can navigate an AI-driven world.

AILIT-S makes systematic evaluation feasible.
February 17, 2026 at 3:04 PM
4/7 🎯 Best use cases:
✔️ Program evaluation
✔️ Group comparisons
✔️ Trend analysis
✔️ Large-scale research

❌ Avoid for individual diagnostics

The speed enables broader participation and better population-level insights.
February 17, 2026 at 3:04 PM
3/7 ✅ Results show AILIT-S delivers:
• ~5-minute completion time (vs. 12+ minutes for the full version)
• 91% congruence with the comprehensive assessment
• Strong performance for group-level analysis

Trade-off: slightly lower individual reliability (α = 0.61 vs 0.74)
February 17, 2026 at 3:04 PM
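(For readers curious about the reliability figure above: a minimal sketch of how Cronbach's alpha is computed for a short scale. The 0/1 scoring, item count, and data are simulated for illustration only, not the AILIT-S data.)

```python
# Sketch: Cronbach's alpha for a 10-item short scale (simulated responses).
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)"""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
ability = rng.normal(size=500)  # latent ability per simulated respondent
responses = pd.DataFrame(
    {f"item_{i}": (ability + rng.normal(size=500) > 0).astype(int) for i in range(10)}
)
print(f"alpha = {cronbach_alpha(responses):.2f}")
```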
2/7 📊 AILIT-S covers 5 core themes:
• What is AI?
• What can AI do?
• How does AI work?
• How do people perceive AI?
• How should AI be used?

Special emphasis on technical understanding—the foundation of true AI literacy.
February 17, 2026 at 3:04 PM
1/7 ⚡ The challenge: Existing AI literacy tests take 12+ minutes, making them impractical for large-scale assessment.

Our solution distills a robust 28-item instrument into 10 key questions—validated with 1,465 university students across the US, Germany, and UK.
February 17, 2026 at 3:04 PM
Published in Computers and Education: Artificial Intelligence, with my brilliant collaborators & PhD students Lucas Wiese and Indira Patil.

www.sciencedirect.com/science/art...

@purduepolsci.bsky.social @GRAILcenter.bsky.social
AI ethics education: A systematic literature review
February 11, 2026 at 4:58 PM
🌟 AI ethics education has grown rapidly but is still finding its footing.

By focusing on interdisciplinary teaching, hands-on learning & better assessments, we can prepare the next generation to build AI systems that serve humanity responsibly.
February 11, 2026 at 4:58 PM
🛠️ What needs to happen:

✅ Develop tools measuring behavioral impact of ethics education
✅ Integrate ethics across all levels (K-12 to university)
✅ Fund initiatives prioritizing formative assessments
✅ Align assessments with real-world skills
February 11, 2026 at 4:58 PM
🚧 Major challenges we identified:
• Keeping up with AI's rapid evolution
• Teaching abstract concepts to diverse audiences
• Shortage of trained educators
• Misalignment between teaching goals & assessment methods
February 11, 2026 at 4:58 PM
❌ The assessment gap: Programs aim to develop ethical reasoning & communication skills, but few measure whether students are actually learning.

Summative assessments dominate (grades), but formative feedback—the kind that drives growth—is rare.
February 11, 2026 at 4:58 PM
🎓 Pedagogy that works? Forget boring lectures.

Most impactful methods are hands-on:
• Case studies
• Group projects
• Gaming & storytelling

These engage students in real-world ethical dilemmas, making abstract principles tangible.
February 11, 2026 at 4:58 PM
🔑 Key finding: The best programs go beyond "rules for algorithms."

They tackle societal issues—bias, fairness, privacy, social justice. Higher-ed leads with comprehensive curricula, but K-12 efforts are still catching up.
February 11, 2026 at 4:58 PM
📚 We analyzed content, pedagogy & assessment practices across AI ethics education (2018-2023).

The results? A field full of promise but grappling with fundamental challenges in what to teach, how to teach it, and whether students are actually learning.
February 11, 2026 at 4:58 PM
🌐 AI is everywhere—your workplace, social feeds, doctor's office. With this power comes ethical responsibility.

Bias, misinformation, privacy risks are just the beginning. How do we teach future engineers, policymakers & citizens to navigate these complexities?
February 11, 2026 at 4:58 PM
7/7 How can educators better engage "Cautious Critics" and "Pragmatic Observers"?

For policy practitioners and educators working on AI literacy—curious what you're seeing?

www.sciencedirect.com/science/art...

#AIGovernance #ResponsibleAI #AILiteracy
AI advocates and cautious critics: How AI attitudes, AI interest, use of AI, and AI literacy build university students' AI self-efficacy
February 6, 2026 at 1:59 PM
6/7 Results suggest AI literacy isn't just about knowledge. It's about fostering interest, building confidence, and earning trust. Without addressing these factors, we risk leaving entire student groups behind.

@purduepolsci.bsky.social @GRAILcenter.bsky.social
February 6, 2026 at 1:59 PM
5/7 Implications: AI programs need tailored approaches:

🚀 Advocates: Encourage critical thinking about ethical AI
🤔 Critics: Demystify AI, make it relevant to non-technical fields
⚖️ Observers: Use hands-on experiences to spark engagement
February 6, 2026 at 1:59 PM
4/7 Demographics matter: AI Advocates are mostly male STEM students, while Cautious Critics are overrepresented in humanities and predominantly female.

Access to AI education varies widely—Critics report the least exposure 📈
February 6, 2026 at 1:59 PM
3/7 Using clustering techniques, we identified 3 student profiles:

🚀 AI Advocates (48%): Tech-savvy, confident, excited
🤔 Cautious Critics (21%): Skeptical, low confidence, minimal use
⚖️ Pragmatic Observers (31%): Neutral attitudes, moderate interest
February 6, 2026 at 1:59 PM
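(For readers interested in the method: a minimal sketch of how student profiles like these can be derived with k-means clustering on standardized scores. The variable set, cluster count, and data here are assumptions for illustration; the paper's exact clustering procedure may differ.)

```python
# Sketch: k-means profiling on standardized survey scores (simulated data).
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
# Assumed columns for illustration: attitude, interest, AI use, AI literacy
X = rng.normal(size=(1465, 4))

X_std = StandardScaler().fit_transform(X)
profiles = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X_std)

# Standardized cluster centers characterize each profile
# (e.g., high on all four ~ "Advocates"; low confidence/use ~ "Cautious Critics")
for label, center in enumerate(profiles.cluster_centers_):
    share = np.mean(profiles.labels_ == label)
    print(f"profile {label}: {share:.0%} of students, center {np.round(center, 2)}")
```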
2/7 Key findings suggest:

✅ Using AI tools (like ChatGPT) boosts interest
✅ Positive attitudes predict higher engagement
✅ Interest acts as the bridge connecting attitudes, literacy, and confidence

Our validated path model:
February 6, 2026 at 1:59 PM
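(A minimal sketch of what a path structure like the one described above can look like when estimated as a pair of regressions: attitudes and AI use predicting interest, which in turn, along with literacy, predicts self-efficacy. Variable names, paths, and data are illustrative assumptions; the paper's validated model and estimation method may differ.)

```python
# Sketch: a simple two-step path structure estimated with OLS (simulated data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 1465
df = pd.DataFrame({
    "attitude": rng.normal(size=n),
    "use": rng.normal(size=n),
    "literacy": rng.normal(size=n),
})
# Simulated outcomes consistent with "interest bridges attitudes/use and confidence"
df["interest"] = 0.5 * df["attitude"] + 0.3 * df["use"] + rng.normal(size=n)
df["self_efficacy"] = 0.4 * df["interest"] + 0.2 * df["literacy"] + rng.normal(size=n)

# Path 1: attitudes and AI use -> interest
m1 = smf.ols("interest ~ attitude + use", data=df).fit()
# Path 2: interest and AI literacy -> self-efficacy
m2 = smf.ols("self_efficacy ~ interest + literacy", data=df).fit()
print(m1.params, m2.params, sep="\n\n")
```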
1/7 We surveyed 1,465 students across the US, UK, and Germany to understand how cognitive (AI literacy), affective (interest/attitudes), and behavioral (usage) factors build AI self-efficacy (with Arne Bewersdorff and Marie Hornberger).

Published in Computers and Education: Artificial Intelligence 📊
February 6, 2026 at 1:59 PM