LLMs and the Social Science Classroom
My university's central administration gave up on having any meaningful policy on AI in university courses, particularly assessment, before the problem had even properly arrived, and has downloaded all responsibility onto individual lecturers. This would not be so bad if all faculty members were fully prepared and understood how LLMs (and generative AI systems more broadly) work, and what the implications are for their teaching and assessment. But the vast majority are not, and for those who are critical, the only recourse seems to be either AI detectors, which are as problematic as the products they so imperfectly detect, or "critically engaging" with AI, which in practice means using the products, with all the attendant harms involved, and then reflecting afterwards.
Instead I want to reaffirm the value of the process of researching and writing as thinking and the development of critical faculties in the face of the increasing automation and commodification of education. No individual lecturer can come up with an ideal response (which is just one of the reasons why I think the attitude of my university’s central administration sucks), but here is what I am putting in my syllabus for this term.
**_Note 1:_** This policy is for a 4th-year undergraduate specialism / option in a qualitative transdisciplinary social science field, and may not be appropriate for every other course. Because it is a course with theoretically smaller numbers (although in practice numbers are increasing, up to 50 these days), both full-class group writing workshops and group writing tutorials, to offer proper guidance away from LLMs, are _possible._ I will try to do both this term.
**_Note 2:_** This is an ongoing work-in-progress, and I will continue to update the further readings and make additions and changes in response to discussion and suggestions. Eagle-eyed readers will note that there is now a split between an outright prohibition of generative AI / LLMs in writing assignments and a strong discouragement of their use in researching assignments. This is simply because there is no practical way of detecting the use of such tools for research, even if I wanted to pursue that kind of surveillance (which I don't!).
**_Fair Use:_** This policy is made available under a CC BY-NC-SA 4.0 license: feel free to use it, share it, adapt it, and improve it, but please give me credit if you do, and if you share it further, it must be under the same terms. Also, please feel free to criticize it (in the comments or elsewhere).
https://creativecommons.org/licenses/by-nc-sa/4.0/
This is different to my general terms for re-use of material on this site (see here).
# AI Policy
The use of so-called “Artificial Intelligence” (AI) tools has become increasingly common in universities. uOttawa has been unable or unwilling to develop a central policy and has left it up to individual Module Leaders to decide what to do about it. This is my policy for this course.
**The use of "Generative AI" or "Large Language Models" (LLMs), including but not limited to ChatGPT, Gemini, Claude, Copilot, Grok, etc., for the writing of assignments is strictly prohibited. Written work that is submitted that has obviously used such tools will be given a failing grade. _Research using such tools is also strongly discouraged._**
There are several reasons for this policy:
* First and most importantly, social science education is about thinking and learning to think critically. This includes learning to do your own academic research, developing your own academic writing style, and being able to communicate through writing (and to be sure, in other courses, speaking and visual presentation). Research and writing _are_ thinking. **You aren’t thinking and you aren’t learning to think critically if you aren’t doing the research and writing**. We will be running group tutorials to help you do this in this course.
* **LLMs are not search engines, databases or indexes.** They are systems for generating sequences of words and phrases from a massive corpus of data, sequences that are statistically likely to appear comprehensible and coherent given the inputs (prompts). In other words, they produce patterns, which sometimes (increasingly frequently) appear good enough to fool pattern-recognizing brains like our own. Instead, please use the readings provided in the course, and learn to use the library catalogue, its many linked databases, and (academic) search engines.
* **LLMs are not "intelligent."** They have no reasoning capability or understanding. As purely stochastic systems, they are unable to differentiate between facts and falsehoods, reality and unrealities, truth and lies. If their output is factual, real or true, it is by statistical chance. The risk you take in using an LLM to write an essay is that the result could range from, _at best_, a generic, bland C-grade piece, not even a good imitation of what you could have done by making some effort yourself, to, _at worst_, **straight-up bullshit with nonsensical arguments and made-up references**, and if you haven't done the research and reading, you will have no way of knowing which you are handing in.
* The economic model of Generative AI / LLMs is to make profit for private capitalist corporations from plagiarized and stolen intellectual property and ideas. Therefore, I would argue that **the use of LLMs is _de facto_ benefitting from academic fraud** (see Academic Fraud policy).
* **The economic model of Generative AI also involves many exploited, low-paid workers, often in the global south** , who do much of the background work, particularly work on data quality, that is supposedly magically done by AI.
* **Generative AI / LLMs are contributing in outsize ways to the intensification of the climate crisis** through massive drains on energy and resources.
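For those who want a concrete sense of what "generating sequences of words by statistical probability" means, here is a deliberately tiny, hypothetical sketch (not how real LLMs are built, which use neural networks on vastly larger corpora): it counts which words follow which in a toy corpus, then strings words together by sampling from those counts. The output can look fluent, but there is no understanding anywhere in the process.

```python
import random
from collections import defaultdict

# Toy illustration only: count bigrams (word pairs) in a tiny corpus.
corpus = "the cat sat on the mat the cat saw the dog the dog sat".split()

counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def generate(start, length=6, seed=0):
    """Extend `start` by repeatedly sampling a statistically likely next word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:
            break  # no word ever followed this one in the corpus
        words, weights = zip(*followers.items())
        out.append(rng.choices(words, weights=weights)[0])
    return " ".join(out)

print(generate("the"))  # plausible-sounding word salad, chosen purely by frequency
```

Real LLMs do the same kind of next-token prediction, only with billions of learned parameters instead of a bigram table, which is why their output is so much more convincing while still carrying no guarantee of truth.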
**Further Resources, Reading and Viewing**
Jane Rosenzweig – Rules for Writing in the Age of AI: https://writinghacks.substack.com/p/four-rules-for-writing-in-the-age
Eryk Salvaggio – Challenging the Myths of Generative AI: https://www.techpolicy.press/challenging-the-myths-of-generative-ai/
If you want to watch and listen to a critical expert, check out this recent interview with Abeba Birhane (a former Mozilla and Deep Mind fellow): https://www.youtube.com/watch?v=416Rve8ZWeY
If you really want to understand this in more depth, you should try to appreciate how LLMs work. This is from Timothy B. Lee and Sean Trott: https://www.understandingai.org/p/large-language-models-explained-with
And if you are interested in AI, thinking and creativity, read this wonderful essay by the great science fiction author, Ted Chiang, who wrote the story that was turned into the film, _Arrival_ : https://www.newyorker.com/culture/the-weekend-essay/why-ai-isnt-going-to-make-art
For more on the environmental impacts of AI, see here: https://www.scientificamerican.com/article/ais-climate-impact-goes-beyond-its-emissions/ (h/t to Christabelle Sethna)
Black teens more likely to be accused of cheating using AI in homework: https://www.semafor.com/article/09/17/2024/black-teenagers-twice-as-likely-to-be-falsely-accused-of-using-ai-tools-in-homework
On the enshittification of Google Scholar, https://misinforeview.hks.harvard.edu/article/gpt-fabricated-scientific-papers-on-google-scholar-key-features-spread-and-implications-for-preempting-evidence-manipulation/
The growing shadow economy of fake citations for sale: https://www.nature.com/articles/d41586-024-01672-7
## Author: David
I'm David Murakami Wood. I'm currently Canada Research Chair at the University of Ottawa. I like reading, cycling, running, and I am slowly coming to the realization of the limits of getting older.