The Glitchatorio
30-minute introductions to some of the trickiest issues around AI today, such as:
- The alignment problem
- Questions of LLM consciousness
- AI and animal welfare
- Scheming and hallucinations
The Glitchatorio is a podcast about the aspects of AI that don't fit neatly into marketing messages or notions of technology-as-destiny. We look into the failure modes, emergent mysteries and unexpected behaviors of artificial intelligence that baffle even the experts. You'll hear from technical researchers, data scientists and machine learning experts, as well as psychologists, philosophers and others whose work intersects with AI.
Most Glitchatorio episodes follow the standard podcast interview format; these occasionally alternate with fictional audio skits or personal voice notes.
The voices, music and audio effects you hear on The Glitchatorio are all recorded or composed by the Witch of Glitch; they are not AI-generated.
Agent Learn
Continual learning is a hot topic in early 2026, in part because it holds out the possibility of AI becoming autonomous in its own growth and development. Meanwhile, AI agents are already showing us what autonomous behavior can look like.
Putting the two together (agents that learn the way humans do, with no humans involved) has serious implications for safety. In this episode, we'll hear from researcher Rohan Subramani (https://rohansubramani.github.io/home/) about different ways AI agents could learn continually, along with ideas for making that learning safer.
Find out more about Rohan's work with Aether, an independent LLM safety research group: https://aether-ai-research.org/#research