David Nelson
@thedavenelson.bsky.social
200 followers 30 following 170 posts
Associate Director at Purdue's Center for Instructional Excellence. I share resources about teaching, SoTL, & the impact of AI on learning.
Repression breeds resistance.
After multiple months using Copilot Pro on trial (normally $30/month per user), I would say its ability to search all your Microsoft content at once is the killer app. Its chatbot inference and output abilities are as bad as an airline phone bot that makes you say all your questions aloud.
Sycophancy in bots is an inimical part of AI in teaching and learning. When the bot wants to tell you that you are right, high dependence almost certainly means you will inculcate incorrect knowledge. Love papers like this that explore sycophancy within a discipline arxiv.org/abs/2510.04721
BrokenMath: A Benchmark for Sycophancy in Theorem Proving with LLMs
Large language models (LLMs) have recently shown strong performance on mathematical benchmarks. At the same time, they are prone to hallucination and sycophancy, often providing convincing but flawed ...
arxiv.org
I also brought them up multiple times, to multiple students in class and in reflection feedback.
I was surprisingly disappointed (after I placed landmark sci-fi books on AI on library reserve, and even bought copies, as suggested readings for my AI in Teaching and Learning course) when not a single student bothered with Murderbot, The Moon Is a Harsh Mistress, or others. Need to assign them next time. :(
How well does it work when prompted only to assess or evaluate the root cause? Is tuning or exemplar code helpful at all? Are the edge cases simply so varied that an LLM as root causal evaluator is doomed to fail?
Try Copilot 365 if that is not sufficiently inaccurate or opaque for you. That $30/user/month fee pays for itself when your goal is to project existential dread into rage against a machine. Bonus points - it chides you for expletive use in emails :)
what did it sound like?
So, we can just keep on thinking Isak and Wirtz just won't work out for some reason, right???
Love it. But I think you misspelled "Quadruple"
1) AI claims fast learning; but learning is slow
2) Inauthentic or irrelevant work -> students outsourcing thinking work
3) We have MUCH less institutional infrastructure conveying value, authenticity and utility of what we teach
4) Chatbots do not care if we learn
5) AI detectors are horrid judges
Our five big assumptions that shaped the week:
Purdue's AI Academy finished with 70+ instructors creating projects, plans, tools, or critical approaches around and in response to AI. I was particularly enthused when multiple participants said "I thought I was gonna learn the tech, but I learned about learning"
Reposted by David Nelson
Today on the podcast: Study Hall! @leaton01.bsky.social @michellemillerphd.bsky.social and @thedavenelson.bsky.social and I discuss three recent studies exploring the intersection of AI and teaching. Cognitive offloading, chatbot sycophancy, & more! intentionalteaching.buzzsprout.com/2069949/epis...
Democratizing prompt for LLMs:

Read and review new terms of service for X company. Compare and contrast with previous versions. What should I be aware of? What might any consumer be wary of or concerned about?
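The prompt above asks the model to compare a new terms of service against the previous version. One way to make that comparison concrete is to compute the diff yourself and hand it to the model alongside the question, so the bot reasons over actual changes rather than its memory of either document. Here is a minimal Python sketch using the standard library's `difflib`; the function name and sample ToS snippets are hypothetical, and you would paste the resulting prompt into whichever chatbot you use:

```python
import difflib


def tos_diff_prompt(old_tos: str, new_tos: str) -> str:
    """Build an LLM prompt that includes a unified diff of two ToS versions."""
    diff = "\n".join(
        difflib.unified_diff(
            old_tos.splitlines(),
            new_tos.splitlines(),
            fromfile="previous_tos",
            tofile="new_tos",
            lineterm="",
        )
    )
    return (
        "Read and review the new terms of service below, shown as a diff "
        "against the previous version. What should I be aware of? What might "
        "any consumer be wary of or concerned about?\n\n" + diff
    )


# Hypothetical before/after snippets for illustration.
old = (
    "We collect your email address.\n"
    "You may delete your account at any time."
)
new = (
    "We collect your email address and browsing history.\n"
    "You may delete your account at any time."
)

print(tos_diff_prompt(old, new))
```

Grounding the prompt in a real diff also makes sycophancy easier to catch: if the bot claims nothing changed, the `+`/`-` lines are right there to contradict it.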
Reminds me of the specter of internet throttling before net neutrality. Don’t want to pay us extra for our tech? Fine. You just might not like what you get. No transparency, varying quality of an information commodity on every use. I’d guess an upswing in paid subs.
Likely Microsoft wants to gain the benefits of chat without replacing any of its own software, which could otherwise hinder enterprise-level negotiations.