Leaving, handing over, and being replaced by AI - deliberately
Before leaving my last role, I fed two years of work into an AI tool, creating a chat interface that lets people keep asking questions long after I've gone - which, depending on your perspective, is either considerate or deeply unsettling.
Reading time: 8 minutes
When we leave a project or organisation, it's not only our knowledge that disappears but the familiar way that knowledge was shared: the quick question, the offhand explanation, the human shorthand - the uncertain, tangled way people ask and answer each other.
Sometimes we prepare handovers or write documentation about the critical parts of our work. It's something I've always done and advocated for - if you're going to leave, it seems only polite to explain why things are the way they are. In reality, though, these handover documents only get read during the transition period, when colleagues feel duty-bound to acknowledge your effort and make sure some words exist on a page that look like knowledge and sound like knowledge. Within weeks, those files are neatly stored in a forgotten folder and rarely opened again.
This isn't anyone's fault. It's just that people are used to interacting with people, not documents.
Turning a handover into an ongoing conversation
Before leaving my role as a director at a climate-focused NGO, I wrote a 12,000-word playbook that tried to capture everything - from the unspoken “why we work this way” to the detailed “how the data estate is organised.” Yet, like any handover, even the most carefully structured playbook is still a dense block of information that relies on someone's patience and motivation to explore. I wanted the knowledge to be approachable and stay conversational, not static; something people could still ask questions of, as they used to ask me.
So I pulled together the playbook, along with over a hundred other documents from my two years in the role - things I'd written, contributed to, or commissioned - and fed them into Google NotebookLM. This provided a simple chat interface that people in the organisation could use to query the corpus of knowledge I'd led the creation of.
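If you wanted to script the gathering step rather than do it by hand, a rough sketch using the Google Drive API might look like the following. This is an illustration, not the exact process I used: it assumes the documents live under a single Drive folder (FOLDER_ID is a placeholder) and that Application Default Credentials are already configured.

```python
# A sketch: list every Google Doc in a handover folder and export each as
# plain text, ready to add to NotebookLM as sources. FOLDER_ID is a placeholder.
import google.auth
from googleapiclient.discovery import build

FOLDER_ID = "your-handover-folder-id"

creds, _ = google.auth.default(
    scopes=["https://www.googleapis.com/auth/drive.readonly"]
)
drive = build("drive", "v3", credentials=creds)

# Find the Google Docs sitting in the handover folder.
result = drive.files().list(
    q=f"'{FOLDER_ID}' in parents and "
      "mimeType='application/vnd.google-apps.document'",
    fields="files(id, name)",
).execute()

# Export each document as plain text.
for doc in result["files"]:
    text = drive.files().export(fileId=doc["id"], mimeType="text/plain").execute()
    with open(f"{doc['name']}.txt", "wb") as fh:
        fh.write(text)
```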
🧠 Chat with Alaric's brain
This notebook contains around 175 handover and knowledge documents that I've written, commissioned or contributed to between Oct 2023 and Sep 2025. Most of this will be helpful, some will be baffling, but importantly all of this is now your problem and not mine.
Get started by asking questions - just expect a lot less humour in the responses...
Maybe this is a better way to hand things over? Not through documents, but through questions that can still be asked and answered.
How it responded to the questions people ask
What makes this model of interaction interesting isn't just the volume of information it can draw on, but the way it can respond to the variety of questions people have.
Remembering why and how we work: context, rationale, and reflection
🧠 Chat with Alaric's brain
Remind me, why are we bothering to write plain accessible language when our audience is highly trained professionals and legal specialists?
Using plain, accessible language is important for highly trained legal specialists because it improves clarity, efficiency, and accessibility.
- Clarity and Comprehension: Plain, everyday English makes even complex or technical material easier to understand. Research shows that people with high literacy levels — including lawyers — prefer plain language because it allows them to grasp information quickly and accurately. In one study, 80% of readers preferred sentences written in plain English, with that number rising to 97% for more complex material (e.g., preferring “among other things” over inter alia).
- Efficiency: Specialists often face large volumes of information and limited time. Plain English helps them absorb key points faster and with less cognitive effort — a critical advantage under time pressure.
- Accessibility and Internationalisation: Writing in accordance with accessibility standards, such as the Web Content Accessibility Guidelines (WCAG), benefits everyone: readers with disabilities, non-native English speakers, and even experts working under stress.
🧠 Chat with Alaric's brain
What did we learn from the research on publishing mixed language content?
Research conducted in July 2024 involving English and Japanese participants provided several key findings regarding the impact of publishing mixed-language content on the website:
Perception and Usage
- Positive impression among English users: The inclusion of Japanese clauses was generally seen as a positive, suggesting that The Chancery Lane Project (TCLP) is expanding and larger than previously thought.
- Ease of navigation for Japanese users: Japanese participants immediately found and used the jurisdiction filter to locate the Japanese clauses.
- Translations used as inspiration: Japanese participants viewed the translations as a “good enough starting point” — useful for refining and adapting clauses rather than copying them directly. Many also used English clauses for inspiration via browser translation tools.
Content and Technical Issues
- Copy and formatting inconsistencies: Japanese clauses sometimes used colloquial rather than legal language, non-standard numbering, confusing brackets, and unclear headings.
- Filter usability issues: Participants could only select one jurisdiction at a time, though many wanted to compare clauses from multiple jurisdictions (e.g., US and England & Wales).
- First impressions risk: A first-time visitor arriving via Google might misinterpret the site due to the mixed-language content.
Limitations
- Small and selective sample: The study was limited in size and subject to selection bias, meaning conclusions should be treated cautiously and validated with further research.
- Language proficiency caveat: Findings may not reflect the experience of Japanese lawyers who do not speak English.
Getting things done: operational, instructional, and day-to-day support
🧠 Chat with Alaric's brain
How much are we paying for the Algolia search integration?
The organization is on the Algolia Free Tier due to non-profit benefits, meaning there is no cost for the service. This arrangement requires the organization to add the Algolia logo to its search fields.
🧠 Chat with Alaric's brain
How do I find out who’s on our user research panel?
You can find the list of people on the user research panel by accessing a specific report within Salesforce.
The report you need is the "User research participants:date" report, which shows all contacts who have subscribed to be a user research participant.
- Location: The report can be viewed here: https://tclp.lightning.force.com/lightning/r/Report/9D8H273FH28E7GHASDUB/view
- Context: This panel has grown to a dedicated global panel of 450 research participants, with a rough engagement rate of about 20%. Consent to be a research participant is gathered through several of the organization's forms, such as the "Get involved" form.
- Access: If you cannot view this report or other dashboards, you need to talk to Haydn at the weekly Salesforce clinic to get permission.
Keeping things human: conversations, curiosity, and sometimes humour
🧠 Chat with Alaric's brain
What’s our strategy for addressing the declining website traffic?
The organization's strategy for addressing declining website traffic is focused on adapting to the shift toward content consumption through Artificial Intelligence (AI) tools, which has caused website visits to drop by over 50 percent between the first half of 2024 and the first half of 2025. This drop in traditional traffic makes measuring impact difficult because AI systems often repurpose content without providing attribution or feedback.
The response strategy involves five main areas:
1. Restructuring Content for Machines
The goal is to deliver and structure content so machines can read and use it efficiently. This involves developing technical infrastructure to ensure legal content is accessible and understandable to AI tools:
- Develop an API: Create an Application Programming Interface to allow machines to access legal content on demand, which should reduce the need for crawlers to manually harvest content.
- Develop a Knowledge Graph: This infrastructure adds semantic context, making the legal content logically navigable and understandable by machines.
2. Developing Licensing and Attribution
New, simple, practical ground rules are being developed for how machines interact with content:
- Legal Guidelines: Publish clear, accessible guidelines for automated tools, crawlers, and AI systems, covering permissions, content harvesting, responsible use, and attribution.
- Technical Implementation: Use C2PA provenance tagging (a digital label confirming origin and authorship) and smarter use of HTTP headers and robots.txt to guide crawlers and bots.
3. Broadening Web Analytics
The organization is expanding its analytics to track machine traffic as a meaningful signal of relevance and reach. This includes:
- Continuing to monitor bot and agent traffic.
- Putting systems in place to monitor API usage once developed, tracking request volumes and accessed endpoints.
- "Humans referred by LLMs" is considered an important metric for the 2025 impact report, as it shows content reaching people indirectly through AI systems.
4. Increasing User Conversations
Since traditional analytics data is declining, the focus must shift to speaking directly to users to understand their needs and how they are using the work.
5. Developing New Ways to Show Impact
The focus is on new methods that build a broader picture of impact in the changing digital landscape:
- Strengthening the theory of change: Ensure the logic of impact is clear even with limited direct data.
- Targeting evidence that matters most: Track how content is used by legal AI tools, cited by professionals, or embedded in legal workflows.
- Improving targeted qualitative feedback: Gather richer user stories, testimonials, and case studies to complement quantitative data.
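Two of those areas are concrete enough to sketch. On the licensing side, robots.txt can name the published AI crawler tokens explicitly rather than relying on a wildcard. The tokens below are real (GPTBot is OpenAI's crawler, ClaudeBot is Anthropic's, CCBot is Common Crawl's), but the paths and policy are hypothetical:

```
# Hypothetical robots.txt: permissions granted deliberately, per crawler.
User-agent: GPTBot
Allow: /clauses/
Disallow: /drafts/

User-agent: ClaudeBot
Allow: /clauses/
Disallow: /drafts/

User-agent: CCBot
Disallow: /

User-agent: *
Allow: /

Sitemap: https://example.org/sitemap.xml
```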
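On the analytics side, "Humans referred by LLMs" can start life as simple referrer classification. A minimal sketch follows - the hostname list is an assumption that would need maintaining as new assistants appear, and not every assistant passes a referrer at all:

```python
# A minimal sketch: classify a visit's HTTP referrer as "referred by an LLM".
from urllib.parse import urlparse

# Assumed hostnames of consumer AI assistants; extend as new ones appear.
LLM_REFERRER_HOSTS = {
    "chatgpt.com",
    "chat.openai.com",
    "perplexity.ai",
    "www.perplexity.ai",
    "gemini.google.com",
    "copilot.microsoft.com",
}

def referred_by_llm(referrer: str) -> bool:
    """Return True when the referrer URL points at a known AI assistant."""
    host = urlparse(referrer).netloc.lower()
    return host in LLM_REFERRER_HOSTS

# Example: count LLM-referred visits in (path, referrer) log rows.
def count_llm_referrals(rows) -> int:
    return sum(1 for _, referrer in rows if referred_by_llm(referrer))
```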
🧠 Chat with Alaric's brain
What’s Al’s favourite food?
Al's favourite cuisine is described as being from a country somewhere between Turkey and India. A specific favourite food is not listed, although he recommends combining avant-garde jazz with a KFC to create a challenging evening of culture.
What made it work
It drew on detailed, up-to-date and honest knowledge
I'd compiled a data set of around 175 documents covering a broad range of activities and outputs. Among other things, the greatest hits included:
- Detailed historical research findings
- Content style guide and logo usage guidelines
- Detailed technical documentation for our website and CRM, including the rationale behind the technical design decisions
- Backlogs and roadmaps for various parts of the digital estate
- Personal and team-based week notes detailing what’s going well and what we’re struggling with
- Project retrospectives, detailing the mistakes we'd made and how people can avoid them in the future
It stayed within a defined corpus
Because NotebookLM only answers questions using the documents I gave it, it avoided the usual risks of AI “hallucination,” where a model invents facts or confidently fills in gaps with information that isn't actually there. Every response could be traced back to a known source, helping the team trust the answers it gave them.
It used tools and data already in the estate
There was no need to introduce new software, security reviews, or onboarding processes. Everyone already had access to Google Workspace, and all the data lived safely within the organisation's existing systems, making adoption simple and low-risk. All anyone needed was a bookmark to the chat interface.
A Slack bot: probably the most obvious evolution
The next logical step would be to bring that same capability into Slack - after all, that's where most of the real questions get asked. Amusingly, this also means the knowledge can be queried as part of a conversation (or argument) between colleagues...
#support-platforms Give and receive help for our...
Dave Lister
Oi, crew - what d’you reckon we need to do different with the 2025 Impact Report? Last year’s one was good an’ all, but it read like @Rimmer's autobiography - long, boring, and full of charts no one asked for.
Arnold Rimmer
Honestly, @Lister, your literary criticism carries all the weight of a soggy kebab. @Alaric "Kryten" King, be a dear and explain to our resident slob what improvements are actually needed for the 2025 edition - preferably using small words and a wipe-clean diagram.
Alaric "Kryton" KingAI Generated response
Ah, sirs! Following Mister @Lister's spirited review, I've found a set of recommendations for the 2025 edition. The goal: fewer yawns, fewer charts, and absolutely no resemblance to Rimmer's autobiography.
Structure and Narrative
- Define the story first: Establish the overarching story of the year before beginning writing or requesting content contributions. The 2024 report was noted as being polished but reading like a "glossy brochure" instead of a compelling story.
- Use a story framework: Provide contributors with a narrative structure rather than just granular headings, as the previous approach resulted in content that felt like "fill in the blanks".
- Run early narrative workshops: Schedule sufficient time for workshops at the start to reduce late-stage revisions and improve coherence.
Data and Measurement
- Smaller, targeted surveys: Move away from a single large survey and design smaller ones focused on specific initiatives, online training, or sector projects.
- Bring in an impact consultant: Engage a consultant with expertise in impact measurement, rather than data analysis, to contextualise the survey results.
- Keep everything in Salesforce: Continue managing all surveys within Salesforce to maintain a single view of users and ensure data consistency.
- Combine multiple data sources: Build a report that draws from Salesforce, Fathom Analytics, and newsletter activity to streamline metrics and avoid double counting.
- Track AI-driven referrals: Include the metric "humans referred by LLMs" to capture how content reaches users indirectly through Artificial Intelligence systems.
Design and Publishing
- Go web-first: Explore publishing the report as a web-first experience; PDFs remain useful for funders but are less user-friendly for general readers.
- Reuse and outsource design: Reuse the Affinity Publisher template but outsource the layout to a freelance designer, as the 2024 layout required significant internal effort and technical skill.
Team Process
- Define roles early: Agree on a dedicated team with clear roles (research, writing, analysis, design/publishing) at the start to manage workload and accommodate staffing changes.
Need some context on the example above?
The characters in the example above are from Red Dwarf, a British sci-fi comedy about the last human alive drifting through space aboard a mining ship three million years after a radiation leak wiped out the rest of the crew.
Dave Lister is a messy, easy-going slob and reluctant hero. Arnold Rimmer is his uptight, holographic bunkmate, obsessed with rules and respect. Kryten is a fussy, overly polite service robot who just wants to clean and please.
It's the story of a mismatched crew trying to stay functional despite limited resources, vague missions, and wildly different personalities - which makes it a pretty accurate metaphor for most organisations on Earth. Even if it's set in space.
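For the curious: the plumbing for a bot like this is modest. Here's a minimal sketch using Slack's Bolt for Python framework in Socket Mode - query_corpus is a hypothetical stand-in for whatever retrieval backend holds the corpus, since NotebookLM doesn't (at the time of writing) expose a general-purpose query API:

```python
# A sketch of the Slack side only; query_corpus() is hypothetical and would
# need wiring to a real retrieval backend over the handover documents.
import os

from slack_bolt import App
from slack_bolt.adapter.socket_mode import SocketModeHandler

app = App(token=os.environ["SLACK_BOT_TOKEN"])

def query_corpus(question: str) -> str:
    """Hypothetical: answer a question from the handover corpus."""
    return f"(corpus lookup for: {question})"  # replace with real retrieval

@app.event("app_mention")
def answer_mention(event, say):
    # Drop the leading @-mention, send the rest to the corpus, and reply
    # in the same thread so the conversation (or argument) stays together.
    question = event["text"].split(">", 1)[-1].strip()
    say(text=query_corpus(question),
        thread_ts=event.get("thread_ts") or event["ts"])

if __name__ == "__main__":
    SocketModeHandler(app, os.environ["SLACK_APP_TOKEN"]).start()
```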
Writing and compiling the corpus of documents for your AI replacement is hard and a little tedious, but maybe the most difficult thing is choosing the personality you'd want to be crystallised in - and forever remembered as.