Critical AI Literacy Interest Group

Public Interest Group associated with the GC TLC Critical AI Literacy Institute

CALI Roundup, May 2, 2025

  • Greetings, All,

Sharing three readings we’ve been discussing this week, each profound and challenging in its own way.

Suffice it to say, British journalist and technology media critic Ed Zitron has had enough. This week he offered a forceful reaction against how the massive capitalization of the AI industry has distorted public discourse and warped expectations for the trajectory of genAI development, and warned that the bubble will inevitably burst, perhaps very soon. Come for the cheekiness and NSFW language, stay for the market analysis of moves by OpenAI, Anthropic, and Google, and take away a skepticism toward claims that artificial general intelligence is right around the corner (and toward the media outlets that promote that notion). We cannot separate our understanding of how AI is coming into the work of the university from the economic imperatives that inform the rhetoric around these technologies.

    In the New Yorker, D. Graham Burnett takes a different tack, wondering “Will the Humanities Survive Artificial Intelligence?” Burnett posits that “we’re in a bizarre interlude: everyone seems intent on pretending that the most significant revolution in the world of thought in the past century isn’t happening.” Scholarly monographs as we know them will become a thing of the past, and the labor of coming up with answers will be delegated to machines that can do that work much more efficiently than we can. Burnett argues that this will free humans up to spend more time deeply digging into what we do exceptionally well: come up with questions. He walks us through how his Princeton students are grappling with this tension, and says—with perhaps more decisiveness than the limited evidence he cites warrants—that the humanities will never look the same.

The technology undergirding all this Sturm und Drang remains, for the most part, a black box. We don’t know what datasets most large language models were trained upon, and though we can perceive biases in outputs, how those biases are encoded into the functionality of different models isn’t widely understood. In “Models All the Way Down,” Christo Buschek and Jer Thorp take us inside the process by which LAION-5B—a massive, open, foundational dataset—was derived (via algorithmic curation) from an even larger dataset and then problematically used as training data for Midjourney and Stable Diffusion, two AI image generators. The beautifully presented piece clearly describes the technology, statistical methods, and flawed thinking that enabled the creation of this dataset, and builds on efforts at Knowing Machines to make modern machine learning data and algorithmic processes more transparent.

    Best,
    Luke

Zitron, Edward. 2025. “Reality Check.” April 28, 2025. https://www.wheresyoured.at/reality-check/.

    Burnett, D. Graham. 2025. “Will the Humanities Survive Artificial Intelligence?” The New Yorker, April 26, 2025. https://www.newyorker.com/culture/the-weekend-essay/will-the-humanities-survive-artificial-intelligence.

Buschek, Christo, and Jer Thorp. “Models All the Way Down.” Knowing Machines. https://knowingmachines.org/models-all-the-way.

  • This week’s episode of the Time To Say Goodbye podcast digs into and rips the Burnett article for precisely the reason you mention, Luke.

    https://goodbye.substack.com/

Co-hosts Jay Caspian Kang (New Yorker staff writer) and Tyler Austin Harper (Bowdoin prof) speak with Zena Hitz (St. John’s College/great books program) about genAI’s many shortcomings. More interestingly, they point out that some people are impressed or concerned for narcissistic reasons: AI can answer my questions or write an impressive essay for my prompt, therefore the humanities are over.

    Take care,

    Ian (LaGuardia)
