CALI Roundup, May 23, 2025
-
Posted by Luke Waltzer (he/him) on May 23, 2025 at 2:08 pm
Greetings, All,
Our semester is winding down, but the AI hot-takes machine churns on unabated. Many of the pieces I’ve shared in these roundups have focused on our responsibilities as educators toward our institutions, our disciplines, our communities, and, most of all, our students. That will continue to be a key question, but this week we’ll focus on the related question of rights, inside and outside of educational processes. What does the right to an education mean? Is there such a thing as a right to truth? And should teachers have the right to conditions that enable them to best meet their responsibilities as educators?
Marc Watkins, who directs the AI Institute for Teachers and is a Lecturer in Writing and Rhetoric at the University of Mississippi, responded to last week’s piece about professors using ChatGPT and the previous week’s piece claiming that “everybody is cheating their way through college” by foregrounding the students left out of dualistic framings that reify education as a set of contested transactions. He credits the NYT for documenting the labor imposed on students who have to take extraordinary measures to prove they’re not cheating, and centers students who are not completing their degrees due to factors that have long been present: cost, disability, complex lives, inadequate support systems.
Watkins settles upon transparency and disclosure as key values and pedagogical commitments necessary to break through the friction genAI has surfaced in educational spaces. These values are notably eschewed and evaded by the companies developing and peddling AI, and by a state apparatus actively hostile to regulation. So it’s left to citizens to assert the right to know what’s informing AI and if and how it’s being used, deepening the challenge educators face in working with students to develop their critical literacies.
Too much disclosure comes far too late these days. The most discussed AI story this week was the circulation of an insert in multiple major newspapers of a summer reading list featuring fake books by real authors. The journalist who created the list was contrite, but as Henry Grabar noted at Slate, we need to understand this fiasco within multiple, connected histories: the generational, labor-alienating decline of the news business following the emergence of the internet, and the accompanying growth of non-human readers as data-thirsty LLMs increasingly govern and distort search. This has massive implications for our information ecosystems, and for our right as citizens in a putatively democratic society to be informed.
Those information ecosystems aren’t just being flooded with AI slop and reporting about genAI and its social impacts; a stream of academic studies is arriving that generates splashy headlines about AI’s transformative potential but, upon scrutiny, fails to hold water. Benjamin Riley has an overview here, prompted by MIT’s withdrawal of a doctoral researcher’s study (falsely) demonstrating increased productivity in scientific research; he digs into the meta-analyses of ChatGPT’s impact on education I shared last week, and looks at multiple studies about genAI tutoring initiatives that overstate or misstate its impact.
Being informed is a necessary precondition to being an educator and, ultimately, to being educated. Education is enshrined in the United Nations’ “Universal Declaration of Human Rights.” This of course means different things in different cultures at different moments; that is, it’s contested. Researcher and educator Helen Beetham offers a powerful response to UNESCO’s call for reflections on AI and the Future of Education, tracing how data and algorithmic platforms may infringe upon the right to be educated, and articulating the risk of consolidating, through unreliable technologies, so much power in the hands of unaccountable corporate entities. She doesn’t focus only on risk (see here for a “Risk Table” she’s developing); she notes documented harms that range from exposure to bias, hate, and misinformation, to developmental and social impacts, to the monopolization of educational publishing and knowledge lock-in, to the undermining of expertise throughout the educational system. A criticality that affirms and protects the right to an education requires us, in Beetham’s words, to “Repurpose AI technologies, where possible, for projects of authentic learning and human flourishing;” to “Rebuild cultural systems, practices and archives with resilience to synthetic media;” and to “Refuse the impoverished, unjust and unsustainable educational future being offered by the AI industry.”
Such an approach is a hard slog, and requires discipline, inventiveness, principle, courage, wisdom, and commitment. Who else can do this but teachers and students together?
Best,
Luke
—
Watkins, Marc. “The Stories (and Students) Forgotten in the AI Panic.” May 18, 2025. https://marcwatkins.substack.com/p/the-stories-and-students-forgotten.
Holtermann, Callie. “A New Headache for Honest Students: Proving They Didn’t Use A.I.” The New York Times, May 17, 2025, sec. Style. https://www.nytimes.com/2025/05/17/style/ai-chatgpt-turnitin-students-cheating.html.
Edwards, Benj. “Chicago Sun-Times Prints Summer Reading List Full of Fake Books.” Ars Technica, May 20, 2025. https://arstechnica.com/ai/2025/05/chicago-sun-times-prints-summer-reading-list-full-of-fake-books/.
Grabar, Henry. “We’re Focused on the Wrong A.I. Problem in Journalism.” Slate, May 21, 2025. https://slate.com/technology/2025/05/ai-chatgpt-controversy-fake-books-chicago-sun-times-philadelphia-inquirer.html.
Riley, Benjamin. “Something Rotten in AI Research.” Cognitive Resonance (blog), May 20, 2025. https://buildcognitiveresonance.substack.com/p/something-rotten-in-ai-research?utm_medium=ios.
Pershan, Michael. “AI Is Maybe Sometimes Better than Nothing.” May 21, 2025. https://pershmail.substack.com/p/ai-is-maybe-sometimes-better-than?utm_medium=web.
Wang, Rose E., Ana T. Ribeiro, Carly D. Robinson, Susanna Loeb, and Dora Demszky. “Tutor CoPilot: A Human-AI Approach for Scaling Real-Time Expertise.” arXiv, January 26, 2025. https://doi.org/10.48550/arXiv.2410.03017.
Beetham, Helen. “How the Right to Education Is Undermined by AI.” Imperfect Offerings (blog), May 15, 2025. https://helenbeetham.substack.com/p/how-the-right-to-education-is-undermined.