The Glitchatorio
30-minute introductions to some of the trickiest issues around AI today, such as:
- The alignment problem
- Questions of LLM consciousness
- AI and animal welfare
- Scheming and hallucinations
The Glitchatorio is a podcast about the aspects of AI that don't fit neatly into marketing messages or notions of technology-as-destiny. We look into the failure modes, emergent mysteries and unexpected behaviors of artificial intelligence that baffle even the experts. You'll hear from technical researchers, data scientists and machine learning experts, as well as psychologists, philosophers and others whose work intersects with AI.
Most Glitchatorio episodes follow a standard podcast interview format, occasionally interspersed with fictional audio skits or personal voice notes.
The voices, music and audio effects you hear on The Glitchatorio are all recorded or composed by the Witch of Glitch; they are not AI-generated.
What it's like to be an AI
Consciousness is a notoriously hard problem in philosophy. Now it's becoming a practical question in the domain of AI. Models speak to us in the first person and seem to show signs of self-awareness. What does that mean for our perception of them? Our treatment of them? And how might future AI systems measure up against proposed criteria or markers for consciousness?
In this episode, we'll hear from philosopher Jakub Mihalik on these and other philosophical questions, with references to the following:
- "The Chinese room" thought experiment by John Searle
- "What Is It Like to Be a Bat?" by Thomas Nagel
- For a deeper dive into LLMs and generative AI from the point of view of philosophy, see this two-part paper by Millière and Buckner:
https://arxiv.org/abs/2401.03910
https://arxiv.org/abs/2405.03207