The Glitchatorio
30-minute introductions to some of the trickiest issues around AI today, such as:
- The alignment problem
- Questions of LLM consciousness
- AI and animal welfare
- Scheming and hallucinations
The Glitchatorio is a podcast about the aspects of AI that don't fit neatly into marketing messages or notions of technology-as-destiny. We look into the failure modes, emergent mysteries and unexpected behaviors of artificial intelligence that baffle even the experts. You'll hear from technical researchers, data scientists and machine learning experts, as well as psychologists, philosophers and others whose work intersects with AI.
Most Glitchatorio episodes follow the standard podcast interview format. Sometimes these episodes alternate with fictional audio skits or personal voice notes.
The voices, music and audio effects you hear on The Glitchatorio are all recorded or composed by the Witch of Glitch; they are not AI-generated.
The Glitchatorio
Chain of Thought 101
Use Left/Right to seek, Home/End to jump to start or end. Hold shift to jump forward or backward.
"Think step by step."
Although a simple technique in itself, the problems that chain-of-thought reasoning (CoT) addresses are complex, ranging from the specific issue of hallucinations to the general lack of explainability of AI (both in terms of understanding how it works as well as fixing things that go wrong).
We'll hear from data scientist Afia Ibnath on the basics of CoT, how it can be used to evaluate the faithfulness of LLM responses, and her experiences of using it in a business context. Check out Afia's portfolio on Github: https://afiai14.github.io/
Here's the Anthropic paper we discussed, which outlines that reasoning models are often unfaithful in their CoT: https://www.anthropic.com/research/reasoning-models-dont-say-think
For a concise definition of how faithfulness is calculated, see this article: https://www.ibm.com/docs/en/watsonx/saas?topic=metrics-faithfulness