Critical AI Literacy Interest Group

Public Interest Group associated with the GC TLC Critical AI Literacy Institute

CALI Roundup, May 16, 2025

    Greetings, All,

    Sharing a few pieces this week that span genAI’s impact on our work in universities, highlight some smart responses and intentional uses, and reveal much about how the federal government views the space.

    Kashmir Hill writes in the New York Times about college professors who use genAI, and students who are calling them on it. The piece offers a revealing inversion of the story we’ve been reading for two years now: faculty are taking shortcuts in preparing course materials or responding to student work, and students rightly see this as a violation of the compact they thought they had entered. Some are demanding tuition relief. Some faculty are sheepish when confronted; others assert that they have been able to bend genAI to their instructional purposes and are unbothered by the labor politics undergirding their approach: “‘Is there going to be a point in the foreseeable future that much of what graduate student teaching assistants do can be done by A.I.?’ she said. ‘Yeah, absolutely.’”

    Like other recent exposés, this piece reveals important truths while missing crucial context. The faculty member who most egregiously outsourced his prep to genAI is an adjunct instructor, raising questions about the working conditions of the majority of faculty and the kinds of pedagogy those constraints produce. One of the colleges profiled as evidence of genAI’s transformative potential is Southern New Hampshire University, which has been the most aggressively expansive online university in the nation. And at the heart of all of the arguments in the piece, for and against faculty use of genAI in their work, is the presumption that interactions between students and faculty should be primarily transactional. In seeking to expose unsustainable tensions within higher education, reporting like this risks reifying the very conditions that produce those problems. For more on those conditions and AI solutionism, read Juan Pablo Pardo-Guerra.

    John Warner, whose More than Words offers a bold call to defend the act of writing as thinking in the age of AI, has long railed against transactionalism in the classroom. In this post he responds to last week’s “everyone is cheating” piece in New York Magazine, arguing that its conclusion is overstated and stokes moral panic, and that the real issue is that many students have internalized a system that positions them as consumers. But people who actually interact with large numbers of students—you know, teachers—remain confident that a critical mass of students genuinely want to learn and have no desire to outsource their intellectual development to genAI. Teachers continue to find inventive ways to “attack thoughtless use of ChatGPT from the demand side.”

    One such example comes from a speculative fiction class at Princeton, where Mary Naydan designed a first-year literature seminar around the question of “What does it mean to be human in the age of artificial intelligence?” Students thought through the ethical responsibilities of those who make new technologies by reading Frankenstein and I, Robot; explored the capacities and limitations of AI writing assistance tools; toured a high-performance computing center to better understand the ecological impact of our use of these tools; and engaged with speculative fiction authors about how AI is influencing their work.

    At the Columbia Journalism Review, several journalists have detailed how they use (and resist) genAI in their work. Overall, they remain skeptical and concerned about AI’s impacts, even as many of the journalists interviewed share well-scoped, purposeful uses of AI that support and are subordinate to their expertise.

    At Nature, Jin Wang and Wenxiang Fan have published a meta-analysis of 51 research studies into the effectiveness of ChatGPT in “improving students’ learning performance, learning perception, and higher-order thinking.” I have not dug deeply into their methods or analysis, but on a cursory read I think it is mostly useful as a pathway to engaging with existing studies; their recommendation that “ChatGPT should be flexibly integrated into teaching as an intelligent tutor, learning partner, and educational tool” is in tension with the lack of consensus they themselves find. There is also some disagreement about approaches within the educational research community; this preprint from Joshua Weidlich and colleagues takes issue with many recent studies of ChatGPT’s impact on learning, finding methodological imprecision and hasty research design.

    I’m observing resonances between genAI research and the spread of LLMs through society and our economy: both are characterized by speed, imprecision, and what seems at times like motivated reasoning. I fear those dynamics will not change any time soon, given efforts by the Tr*mp administration and its allies in Congress to strip intellectual property holders of protections against the exploitation of their work by LLMs and to ban states from regulating AI.

    All the more reason to continue to engage with scholars like Emily Bender and Alex Hanna, whose The AI Con was released this week. There are lots of opportunities to hear them talk about their work; one I’m looking forward to is coming June 6th with Data and Society: “Challenging AI Hype and Tech Industry Power.”

    Best,

    Luke
