🚨 New episode! youtu.be/NFKAUv7zxwo What becomes newly possible (and newly questionable) when thinking is distributed across humans and machines? 🎧 My Robot Teacher Episode 2: “Higher Education in the Age of AI”
I've found that simply asking LLMs to be disagreeable in a prompt is a fairly reliable way of ensuring that they will be. As in, "What do you think of this tweet? Please, no sycophancy!"
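A minimal sketch of how one might bake that anti-sycophancy instruction into an automated call (the model name, SDK choice, and exact system-prompt wording here are my own assumptions for illustration, not anything stated in the post):

```python
# Minimal sketch: steering an LLM away from sycophancy via the system prompt.
# Assumes the OpenAI Python SDK (>=1.0) and an OPENAI_API_KEY in the environment;
# the model name and wording are illustrative choices, not prescribed by the post.
from openai import OpenAI

client = OpenAI()

def critique(text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        messages=[
            {"role": "system",
             "content": "Be a disagreeable critic. No sycophancy: lead with your strongest objections."},
            {"role": "user",
             "content": f"What do you think of this tweet?\n\n{text}"},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(critique("LLMs make everyone a better writer."))
```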
It is not the same. If you juggle while writing an essay, different parts of your brain will talk to each other. That does not mean juggling makes you smarter. Educators and learners haven't yet explored and reported on the possibilities of learning with LLMs in enough depth to draw that conclusion.
I agree that writing has been the most impactful technology so far, but I'd prefer to keep my mind open with respect to future technologies like AI, or things that haven't yet been invented.
That paper does not imply that using LLMs makes you dumber. Those are the hysterics of people who didn't actually read it. The paper's claim is that the different parts of your brain "talk less to each other" when you use LLMs to write essays. That is not the same as "dumber."
When literacy was new, similar arguments were made that writing made you "dumber"; see Plato's Phaedrus, for example. It's just that what counted as "intelligence" in purely oral cultures was reshaped by the advent of literacy. AI might do the same.
What happens when large language models meet public education? We’re @calstate faculty. We recorded the fallout. 🎧 My Robot Teacher — Watch Episode 1 → www.youtube.com/watch?v=H2Ta... @OpenAI @CA_LearningLab #MyRobotTeacher #AI #CSU