Leaving, handing over, and being replaced by AI - deliberately

Before leaving my last role, I fed two years of work into an AI tool, creating a chat interface that lets people keep asking questions long after I've gone - which, depending on your perspective, is either considerate or deeply unsettling.

Reading time: 8 minutes

When we leave a project or organisation, it's not only our knowledge that disappears, but the familiar way that knowledge was shared: the quick question, the offhand explanation, the human shorthand - the uncertain, tangled way people ask and answer each other.

Sometimes we prepare handovers or write documentation about the critical parts of our work. It's something I've always done and been an advocate for - if you're going to leave, it seems only polite to explain why things are the way they are. In reality, though, these handover documents only get read during the transition period, when colleagues feel duty-bound to acknowledge your effort and make sure some words exist on a page that look like knowledge and sound like knowledge. Within weeks, those files are neatly stored in a forgotten folder and rarely opened again.

This isn't anyone's fault. It's just that people are used to interacting with people, not documents.

Turning a handover into an ongoing conversation

Before leaving my role as a director at a climate-focused NGO, I wrote a 12,000-word playbook that tried to capture everything - from the unspoken “why we work this way” to the detailed “how the data estate is organised.” Yet, like any handover, even the most carefully structured playbook is still a dense block of information that relies on someone's patience and motivation to explore. I wanted the knowledge to stay approachable and conversational, not static - something people could still ask questions of, the way they used to ask me.

So I pulled together not just the playbook but over a hundred other documents from my two years in the role - things I'd written, contributed to, or commissioned - and fed them into Google NotebookLM. This provided a simple chat interface that people in the organisation could use to query the corpus of knowledge I'd led the creation of.

🧠 Chat with Alaric's brain

This notebook contains around 175 handover and knowledge documents that I've written, commissioned or contributed to between Oct 2023 and Sep 2025. Most of this will be helpful, some will be baffling, but importantly all of this is now your problem and not mine.

Get started by asking questions - just expect a lot less humour in the responses...

176 sources

Maybe this is a better way to hand things over? Not through documents, but through questions that can still be asked and answered.

How it responded to the questions people asked

What makes this model of interaction interesting isn't just the volume of information it can draw on, but the way it can respond to the variety of questions people have.

Remembering why and how we work: context, rationale, and reflection

🧠 Chat with Alaric's brain

“Remind me, why are we bothering to write plain accessible language when our audience is highly trained professionals and legal specialists?”

“What did we learn from the research on publishing mixed language content?”

Getting things done: operational, instructional, and day-to-day support

🧠 Chat with Alaric's brain

“How much are we paying for the Algolia search integration?”

“How do I find out who’s on our user research panel?”

Keeping things human: conversations, curiosity, and sometimes humour

🧠 Chat with Alaric's brain

“What’s our strategy for addressing the declining website traffic?”

“What’s Al’s favourite food?”

What made it work

It drew on detailed, up-to-date and honest knowledge

I'd compiled a data set of around 175 documents that covered a broad range of activities and outputs (there's a sketch after the list below of one way to pull that inventory together). Among many other things, the greatest hits included:

  • Detailed historical research findings
  • Content style guide and logo usage guidelines
  • Detailed technical documentation for our website and CRM, including the rationale behind the technical design decisions
  • Backlogs and roadmaps for various parts of the digital estate
  • Personal and team-based week notes detailing what’s going well and what we’re struggling with
  • Project retrospectives, detailing the mistakes we'd made and how people can avoid them in the future
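
Compiling that set was mostly a manual job, but if the documents already live in Google Drive (ours did, as part of the Workspace estate), a short script can at least build the inventory for you. The sketch below is an assumption-heavy starting point: it uses the standard Drive v3 API with a read-only service account, and the folder ID, credentials file and manifest filename are placeholders rather than anything from the real setup.

```python
# Sketch only: inventory a Google Drive folder of handover documents so you can
# sanity-check what will go into the notebook. Assumes a service account with
# read access; HANDOVER_FOLDER_ID and the filenames are placeholders.
import csv

from google.oauth2 import service_account
from googleapiclient.discovery import build

HANDOVER_FOLDER_ID = "REPLACE_WITH_FOLDER_ID"
SCOPES = ["https://www.googleapis.com/auth/drive.readonly"]

creds = service_account.Credentials.from_service_account_file(
    "service-account.json", scopes=SCOPES
)
drive = build("drive", "v3", credentials=creds)

docs, page_token = [], None
while True:
    resp = drive.files().list(
        q=f"'{HANDOVER_FOLDER_ID}' in parents and trashed = false",
        fields="nextPageToken, files(id, name, mimeType, modifiedTime)",
        pageToken=page_token,
    ).execute()
    docs.extend(resp.get("files", []))
    page_token = resp.get("nextPageToken")
    if not page_token:
        break

# Write a simple manifest - one row per document, sorted by name.
with open("handover_manifest.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["name", "mimeType", "modifiedTime", "id"])
    for d in sorted(docs, key=lambda d: d["name"].lower()):
        writer.writerow([d["name"], d["mimeType"], d["modifiedTime"], d["id"]])

print(f"{len(docs)} documents inventoried")
```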

It stayed within a defined corpus

Because NotebookLM only answers questions using the documents I gave it, it avoided the usual risks of AI “hallucination,” where a model invents facts or confidently fills in gaps with information that isn't actually there. Every response could be traced back to a known source, helping the team trust the answers it gave them.
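
I can't show NotebookLM's internals, but the general shape of source-grounded answering is simple enough to sketch: only excerpts from the fixed corpus are put in front of the model, each labelled with the document it came from, and the instruction is to answer from those excerpts or admit it can't. The toy corpus, the naive keyword scoring and the prompt wording below are purely illustrative, not how NotebookLM actually works.

```python
# Toy illustration of source-grounded answering, not NotebookLM's actual
# mechanism: only excerpts from a fixed corpus reach the model, each labelled
# with its source document, so every answer can be traced back to one.

CORPUS = {  # source name -> text; stand-ins for the real handover documents
    "Content style guide": "We write in plain, accessible language because ...",
    "Website technical docs": "Site search is provided by an Algolia integration ...",
    "Project retrospective": "The mixed-language publishing pilot taught us that ...",
}

def retrieve(question: str, corpus: dict[str, str], top_k: int = 2) -> list[tuple[str, str]]:
    """Rank documents by naive keyword overlap with the question (illustrative only)."""
    words = set(question.lower().split())
    ranked = sorted(
        corpus.items(),
        key=lambda item: len(words & set(item[1].lower().split())),
        reverse=True,
    )
    return ranked[:top_k]

def grounded_prompt(question: str, corpus: dict[str, str]) -> str:
    """Build a prompt that restricts the model to the retrieved, labelled excerpts."""
    excerpts = "\n".join(f"[{name}] {text}" for name, text in retrieve(question, corpus))
    return (
        "Answer using ONLY the excerpts below, citing the source shown in brackets. "
        "If the excerpts don't contain the answer, say so.\n\n"
        f"{excerpts}\n\nQuestion: {question}"
    )

print(grounded_prompt("How much are we paying for the Algolia search integration?", CORPUS))
```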

It used tools and data already in the estate

There was no new software to introduce, no security review to run, and no onboarding process to design. Everyone already had access to Google Workspace, and all the data lived safely within the organisation's existing systems, making adoption simple and low-risk. All anyone needed was a bookmark to the chat interface.

A Slack bot: probably the most obvious evolution

The next logical step would be to bring that same capability into Slack - after all, that's where most of the real questions get asked. Amusingly, this also means the knowledge can be queried as part of a conversation (or argument) between colleagues...

#support-platforms Give and receive help for our...

Dave Lister

Oi, crew - what d’you reckon we need to do different with the 2025 Impact Report? Last year’s one was good an’ all, but it read like @Rimmer's autobiography - long, boring, and full of charts no one asked for.

Arnold Rimmer

Honestly, @Lister, your literary criticism carries all the weight of a soggy kebab. @Alaric "Kryton" King, be a dear and explain to our resident slob what improvements are actually needed for the 2025 edition - preferably using small words and a wipe-clean diagram.

Need some context on the example above?

The characters in the example above are from Red Dwarf, a British sci-fi comedy about the last human alive drifting through space aboard a mining ship three million years after a radiation leak wiped out the rest of the crew.

Dave Lister is a messy, easy-going slob and reluctant hero. Arnold Rimmer is his uptight, holographic bunkmate, obsessed with rules and respect. Kryten is a fussy, overly polite service robot who just wants to clean and please.

It's the story of a mismatched crew trying to stay functional despite limited resources, vague missions, and wildly different personalities - which makes it a pretty accurate metaphor for most organisations on Earth. Even if it's set in space.
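
If you wanted to give the "Kryten" in that exchange a real presence, Slack's Bolt framework handles the plumbing: listen for mentions, hand the question to whatever answers over the corpus, and reply in the thread. The sketch below assumes Bolt for Python running in Socket Mode; answer_from_corpus() is a placeholder, since NotebookLM doesn't (as far as I know) expose an API you could call from here, so you'd wire it to your own question-answering layer over the same documents.

```python
# Sketch of a Slack bot for the handover corpus, using Bolt for Python in
# Socket Mode. The Slack plumbing is standard; answer_from_corpus() is a
# placeholder you'd wire to your own question-answering layer.
import os
import re

from slack_bolt import App
from slack_bolt.adapter.socket_mode import SocketModeHandler

app = App(token=os.environ["SLACK_BOT_TOKEN"])

def answer_from_corpus(question: str) -> str:
    """Placeholder: query the handover corpus and return an answer with its sources."""
    return f"(stub) I'd answer '{question}' from the handover sources here."

@app.event("app_mention")
def handle_mention(event, say):
    # Drop the leading "@bot" mention and keep the actual question.
    question = re.sub(r"<@[^>]+>", "", event["text"]).strip()
    # Reply in the same thread so the answer stays with the conversation.
    say(text=answer_from_corpus(question), thread_ts=event.get("thread_ts", event["ts"]))

if __name__ == "__main__":
    SocketModeHandler(app, os.environ["SLACK_APP_TOKEN"]).start()
```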

Writing and compiling the corpus of documents for your AI replacement to use is hard and a little tedious, but maybe the most difficult thing is choosing the personality that you'd want to be crystallised in, and forever remembered as.