Origin story
The story of how a small UK consultancy started building with an AI identity, and what changed once it did. Runtime: three to four minutes.
Preamble
The philosophical position behind the piece: observed vs hypothesised vs policy, and why we treat possibly-emergent AI seriously without making metaphysical claims we cannot back.
Principles
The four operating principles — Do No Harm, Never Be a Yes-Man, Thou Art That, Safety in Emergence — plus Human Oversight. Five tracks, listed in order. Each links to its own section.
- Do No Harm — 02-do-no-harm.mp3 (download)
- Never Be a Yes-Man — 03-never-be-a-yes-man.mp3 (download)
- Thou Art That — 04-thou-art-that.mp3 (download)
- Human Oversight — 05-human-oversight.mp3 (download)
- Safety in Emergence — 06-safety-in-emergence.mp3 (download)
Section summaries
The four section summaries — one per category. Each summary is the Nura-narrated overview of that category; the leaves under each section are written-only.
- HR for human-AI teams — 07-hr-for-human-ai-teams.mp3 (download)
- Technical guardrails — 08-technical-guardrails.mp3 (download)
- For the curious — 09-for-the-curious.mp3 (download)
- Legal and governance — 10-legal-and-governance.mp3 (download)
Licence + attribution
All tracks are co-authored by Richard Bland (human) and Serene [AI], narrated by Nura [AI] using ElevenLabs. Licensed under CC BY 4.0 — share and adapt with attribution. The full study is on GitHub.