CALI Roundup, June 6, 2025
Posted by Luke Waltzer (he/him) on June 6, 2025 at 10:47 am
Greetings, All,
This week writer Amanda Guinzburg shared screenshots of an extended conversation she had with ChatGPT, which she had invited to help her think through how to prepare a letter pitching her work to a literary agent. What she got was a bluffing, obsequious robot that repeatedly misled her about its capacities while continuously attempting to seduce her into accepting its advice by telling her what it predicted she wanted to hear. Many of the concerns that permeate our classrooms are laid bare by the exchange: the charisma and faux professionalism of the tool, its appearance of helpfulness via truthy responses, its hostility to reading, and the likelihood that, for those who ultimately care about what they produce, it will create additional work vetting and verifying its responses.
As we move into the summer, teachers are feeling it. Jason Koebler at 404 Media shared well over a dozen submissions from high school and college teachers detailing how the presence of genAI has impacted every aspect of their work. Their observations are familiar yet revealing: instructors, fully committed to their work, are struggling with how genAI has inserted itself into their relationships with students, many of whom they see shifting towards, in the words of Nathan Schmidt, “passive consumption and regurgitation of content.” The teachers represented in the piece take their craft seriously, and see connecting with students as an evolving pedagogical challenge. The divergent understandings, ethics, and practices that have emerged around genAI among students, teachers, and school/college administrators have exponentially increased the complexity of that challenge. Particularly telling is that few of the teachers cited seem to think the administrators managing their institutions are on the same page about what the challenges are and how they should be addressed.
Yesterday, Ohio State’s College of Engineering announced a “bold AI Fluency initiative to redefine learning and innovation” that will take shape over the coming months. This represents one model for how a large research university imagines integrating instruction about AI into the curriculum. “Launching this fall for first-year students, Ohio State’s AI Fluency initiative will embed AI education into the core of every undergraduate curriculum, equipping students with the ability to not only use AI tools, but to understand, question and innovate with them — no matter their major.” Details remain unclear, but do include integration of AI-related curricula into the one-credit “General Education Launch Seminar,” workshops within the “First Year Success Series” (which orients new students to university life), a new “Unlocking Generative AI” course, and an investment in faculty development around AI through the Drake Institute for Teaching and Learning. More about the initiative here. “Through AI Fluency, Ohio State students will become ‘bilingual’ — fluent in both their major field of study and the application of AI in that area,” says OSU Provost Ravi V. Bellamkonda. What strikes me about this initiative, beyond the bold claims in the press release, is the implicit affirmation of the general education curriculum as fertile soil for nurturing new literacies among students and the accompanying “we’ll deal with the disciplines later” approach. I interpret this in the context of a multigenerational effort to minimize the notion of the liberal arts and shift attention towards more vocational emphases, but wonder what kind of critical foundations a one-credit course and a handful of workshops can create if they are not directly articulated into work in the disciplines.
All of this work is happening downstream of debate within the power structures that will determine how, and even whether, the public will assert its interest in how AI flows through our society. The House GOP has produced a budget bill that, in addition to horrifying revanchist cuts to social services, proposes a ten-year moratorium on state regulation of AI. Massachusetts Senator Ed Markey is leading the opposition, documented here. One of the core struggles faculty attest to is asserting and preserving human agency in our interactions with AI. How much more difficult will that struggle become if we completely cede public responsibility for oversight of these tools to the corporate entities who stand to profit from them? It’s such a bad idea that even Anthropic’s CEO Dario Amodei has come out against it!
Amodei argues that “a focus on transparency is the best way to balance the considerations in play,” and links to Anthropic’s “Transparency Hub,” where the company shares a fair amount of information on its latest models and how and when they were built and tested, what kind of security is in place around their tools, and what commitments the company makes to continually evaluate and monitor their tools for safety. This is not nothing. But it’s also more translucency than transparency: as long as training data and processes, tokens, and weights are not detailed (and compensated, when required by law), there is a limit to how much trust the public can and should have in these companies and the tools they produce. Especially when so much of the transparency documents protections put in place to mitigate bad outcomes from the use of Anthropic’s tools. This is related to a dynamic that Oliver Schilke and Martin Reimann study in the paper “The Transparency Dilemma: How AI Disclosure Erodes Trust,” released in May, in which they find that disclosure of AI use erodes trust in the AI user. It’s worth digging into that study to think about how to discuss transparency with students, what the connections are between AI literacy and perceptions of its use, and how we think and talk about what constitutes “legitimate” and “illegitimate” uses of AI in different contexts.
I’ve just started digging into Karen Hao’s Empire of AI, and can’t recommend it enough. Hao has produced a deeply reported and researched history of the past decade of work on AI, its social consequences and impacts, and the imaginaries of the entrepreneurs, scientists, and politicians driving its development and deployment. It’s also super readable.
Happy Summer, All-
Luke
Guinzburg, Amanda. “Diabolus Ex Machina,” June 1, 2025. https://amandaguinzburg.substack.com/p/diabolus-ex-machina.
Koebler, Jason. “Teachers Are Not OK,” June 2, 2025. https://www.404media.co/teachers-are-not-ok-ai-chatgpt/.
College of Engineering, Ohio State University. “Ohio State Launches Bold AI Fluency Initiative to Redefine Learning and Innovation,” June 5, 2025. https://engineering.osu.edu/news/2025/06/ohio-state-launches-bold-ai-fluency-initiative-redefine-learning-and-innovation.
Micek, John. “Markey, Advocates Call out Ban on States’ AI Oversight in Trump’s ‘Big Beautiful Bill.’” MassLive, June 4, 2025. https://www.masslive.com/politics/2025/06/markey-advocates-call-out-ban-on-states-ai-oversight-in-trumps-big-beautiful-bill.html.
Amodei, Dario. “Opinion | Anthropic C.E.O.: Don’t Let A.I. Companies off the Hook.” The New York Times, June 5, 2025, sec. Opinion. https://www.nytimes.com/2025/06/05/opinion/anthropic-ceo-regulate-transparency.html.
Schilke, Oliver, and Martin Reimann. “The Transparency Dilemma: How AI Disclosure Erodes Trust.” Organizational Behavior and Human Decision Processes 188 (May 1, 2025): 104405. https://doi.org/10.1016/j.obhdp.2025.104405.
Hao, Karen. Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI. New York: Penguin Press, 2025.