I’m sharing here an annotated list of readings/watchings from this week that our team has shared with the faculty who are participating in the Institute.
Feedback and discussion in this space are welcome!
Emily Bender (computational linguist, and one of the most eloquent and powerful critics of AI) “debated” Sébastien Bubeck (mathematician and computer scientist at OpenAI) on the question of whether chatbots “understand.” This is a fascinating conversation, and it raises many of the themes that will thread through our work together and with which we all should be grappling. It puts on display the tension between a critical perspective on AI development and the more enthusiastic framing that ties the rhetoric of AI to corporate strategy — a tension that may indeed prove irreconcilable.
The International Energy Agency has released a report on the implications of AI both for energy consumption going forward and for the energy industry itself. We have not dug into this too deeply, and it should be read with the understanding that its audience is the energy industry more than the general public… but there is useful contextual data for our consideration.
Arvind Narayanan and Sayash Kapoor, authors of AI Snake Oil, have published a piece in Nature calling, urgently, for guidelines to be established governing the use of AI in scientific research.
Narayanan, Arvind, and Sayash Kapoor. 2025. “Why an Overreliance on AI-Driven Modelling Is Bad for Science.” Nature 640 (8058): 312–14. https://doi.org/10.1038/d41586-025-01067-2.
Anthropic (which maintains Claude) has released a report on “how university students use Claude.” Not much in here will surprise you, but it does give insight into how these companies are approaching the challenge of being “responsible” actors within education spaces.