I Think I’m A Clone Now.

Elinor Hamilton whispering in Phil Sayer's ear. Both are smiling.
A few years before eLearning Voices was born, Elinor Hamilton and Phil Sayer were two of the most recognised voices on the rail network. Pictured here together in 2013.

Ethical AI could be considered the greatest oxymoron since Government Organisation or Marital Bliss, but like it or not, AI is here to stay. I believe there are ways of using it which support both the client and the artist, as long as there is clear legislation in place to protect the source.

My work relies upon understanding and respecting the needs of each of these parties. In recent months, I’ve faced a challenge from one of the UK’s rail operators – and together, we have created an ethical solution to a complex problem, which will never go away. That problem is death.

Gayanne Potter (definitely not dead at the time of writing), whose voice was cloned by ReadSpeaker and used in a robotic way on ScotRail – without her consent – became a reluctant poster girl for AI’s dirty secret back in May this year. Many more victims of similar theft are poised to come forward, or have already done so.

Gayanne is the wonderful, funny, sweary, intelligent, creative powerhouse behind “The Bubbling Toad”, whose spirit is in no way reflected by the robotic “Iona” on the tannoy.

“Iona” couldn’t write a comedy poem about tits for a friend going through breast cancer treatment, or counsel someone during a mental health crisis. But Gayanne could. And – as she said on the news – unlike “Iona”, she knows how to pronounce Milngavie correctly.

ScotRail have clearly thought about using a voice that sounds local. But the voiceover artist who lives literally down the road had her audio chewed up and regurgitated by a Swedish computer, and was made to sound as if she’d just landed from Mars. Well, that’s one way to alienate your audience.

When Gayanne recorded the original audio for ReadSpeaker five years ago, she couldn’t possibly have given permission for its use on a technology which didn’t then exist.

There are ethical ways of using an AI voice recording, and I know this because I cloned my dead husband.

Please don’t take that sentence out of context. Let me explain.

My late husband, Phil Sayer, and his “railway wife”, Celia Drummond, were two of the most prominent voices across the British railway network for over 30 years. After Phil’s death in 2016, and then Celia’s in 2021, South Western Railway were in need of a new voice. I’ve been heard alongside Phil on London Underground announcements for the last twenty years, so I was deemed a good choice to continue the family business. I’m familiar with the quite specific recording style for transport announcements, so I was able to record the script pretty much in real time, in the same studio that Phil and I had worked in together, with the same microphone, and the same cat occasionally getting in the way.

I asked that SWR keep a handful of announcements from each of my predecessors as a loving nod to the giants upon whose shoulders I stood.

My client was keen to explore the idea of an AI voice for the male announcements (meaning the job would never have gone to one of my voiceover colleagues, which is a crucial point). I talked them through the potential pitfalls of an “off the shelf” voice because friends in the industry were already beginning to fall foul of voice cloning. My client felt that using AI was arguably more risky and morally questionable than bringing their old pal back from the dead.

Phil’s voice is already all over the network – and the internet – so anyone could make a bad copy (and probably already has). The question then was not so much a moral one, but a legal one. If someone else copied it for commercial purposes, could I afford to fight them? Nope. Had he ever signed over a “perpetual usage licence” years before iPhones, let alone AI, which could be deliberately misinterpreted by lawyers? Quite possibly. Far better that his sons and I have total control over creating a studio-quality copy, and total control over how “he” is used.

The train announcement project was decades in the making, which is why I feel confident that it’s OK to use AI Phil to add to what already exists, rather than scrap forty years’ worth of audio. It’s not OK to use AI Phil to tweak corporate narrations or commercials which could be re-recorded by an artist who is still alive (and probably needs the work).

Given Phil’s legendary status among the train-loving fraternity, the boys and I even agreed to create a suite of announcements for UK Departure Boards (a family business for whom I’ve created audio for several years, as has my friend and colleague Janine Cooper-Marshall – the voice of GWR) to launch around Christmas 2025. It’s mainly in response to the fact that, ten years after his death, we still get weekly requests from enthusiasts for a copy of Phil’s audio to listen to at home, which has never been possible for us to share. It’s a lovely, but somewhat niche, product which Phil would have enjoyed. He often spoke about how cool it would be if he could somehow add his recordings to people’s model railway layouts, and this is as close as we’re likely to get to making that particular dream come true.

Is it morally right? I hope so. We spent hours debating it as a family, and shed many tears. But it all boils down to two questions. Is this part of an ongoing job that Phil had already started? And/or would he have recorded this himself, had he been alive?

Our son and I have been taking great care to make sure the new announcements are indistinguishable from the old. In fact, it took so bloody long that training a human being from scratch, and getting them to record it all, would have been immeasurably quicker. Our Phil would have known instantly how to say “text British Transport Police on 61016”, rather than “sixty one thousand and sixteen”. And his inflexion would have been bob-on, every time. Don’t even get me started on how long it took to get him to say Llwyngwril and Machynlleth – places he knew very well after his mother sent him off to work in Wales one summer as a teenager. It was incredibly important to us that we honoured his knowledge of those pronunciations, and what the technology thought they should sound like was wildly different from what we managed to create with phonetics, edits, and – crucially – love.

Phil’s voice was easy to clone using existing announcements, because the technology used to string the fragments together when the project began required a robotic and stilted delivery, but we still had to develop production techniques to mitigate the many deficiencies in the AI technology. Moreover, AI Phil couldn’t be used for corporate or commercial purposes anyway, because that work was done in a wholly different style. AI Phil was a pain in the arse. It’s quite possibly the most time-consuming and least cost-effective job we’ve ever done. But we cared enough to do it properly, to protect Phil’s artistic integrity. I knew him well enough that “not sounding like a twat” was of paramount importance, so nothing slipped through Quality Control if it didn’t sound exactly like him. And who better to know this than his wife and son?

Meanwhile, “Iona”’s regurgitated text-to-speech simply did not do justice to Gayanne or her talents, and if we made Phil sound that clunky, I know for a fact that he’d come back to haunt us. ScotRail’s customers were apparently worth so little that they didn’t need to hear their local stations announced correctly. Respecting customers enough to pronounce the name of their home town properly is a fundamental part of our job.

AI voices are understandably attractive to companies because they can be futureproofed in a way that real voices can’t. AI voices don’t catch colds, go on holiday, or inconveniently die. (They also can’t read a script with nuance and love and experience and care, without serious human intervention.) Technology is now available to mitigate the VO’s incapacity, though, and I believe we should use it. Not to take jobs away from voice artists, but to enable us to keep them.

The only way to make AI work for everyone is to be open to technology that is ethically and fairly created. We chose ElevenLabs to help us fulfil the project, because they were considered to be at the more ethical end of this technology. We had to submit Phil’s death certificate, prove I was his next of kin, and explain the reasons why I wanted to apply to clone his voice. They have kept his clone in a folder that only we can access, so it won’t ever be commercially available. Human beings can now own and control their own voiceprint, including what they say and how they say it, while clients can be fully aware of who the voice belongs to, and remunerate them accordingly.

People do business with people. That hasn’t changed, and I’m not sure it ever will. When was the last time your AI voice took you out to lunch, or took a bit of time to help you rework a script so it sounded more fluent? And I bet they’d never accidentally appear on a Teams call drinking from an inappropriately sweary mug. You can’t AI that kind of customer service.

Some kids inherit a house and what’s left of a pension, but my will states that the twins get full control of my voice clone (for rail announcements only) for an agreed length of time after I join the feathered choir. Lucky them.

AI Phil isn’t Real Phil. AI Phil will never be able to share in the achievements of his children, or counsel them through their losses, and he definitely won’t be able to recite the entirety of Monty Python’s Life of Brian – with the female characters an impressive speciality. He can’t say “I love you” to his boyos in the tender way he used to (believe me, that’s the first thing I tried) – but on the other hand, he doesn’t overshare his opinions, because he doesn’t have any. Anyone who knew Real Phil will almost certainly accept this small blessing.

There will always be apps out there that can clone any voice, song, or face, and make it do things that are immoral, stupid, or just plain wrong. Every time we send a video or a voicenote on WhatsApp there’s source material right there, for free. Let the smaller businesses and the kids on TikTok do their worst. Those were never our jobs anyway.

Reputable brands are beginning to steer away from the AI companies who have scoured the internet for sources, and from those who have hidden behind ill-defined laws to abuse an artist’s intellectual property.

Unregulated AI voices of dubious provenance are Ultra-Processed Food for the audio generation, and a vocal equivalent of the Horsemeat Scandal is coming ever closer. But used wisely, AI can be a real advantage to our industry by strengthening bonds between artists and clients.

Clients can still buy organically from the source, and artists can (and should) use new technologies wisely to futureproof the work we do: together.