Democratizing digital deception

Will bad actors use digital duplicates of our dead loved ones against us?

By Brad Berens

I recently shared a microfiction (a story of 1,000 words or fewer) called “Hacking the Dead,” about Trix, a corporate spy who influences the digital duplicate of an equity analyst’s beloved dead mother in order to change his mind about Trix’s company without his realizing it.

Image created using DALL-E

This time, I’ll explore how realistic the story is or isn’t. You don’t have to read “Hacking the Dead” (although it ain’t bad) to understand this week’s piece, but fair warning: Thar Be Spoilers Ahead!

Let’s dig in.

The idea behind “Hacking the Dead” is similar to that of the 2010 movie Inception, except that instead of the covert persuasion happening in dreams, it happens in the waking world with dream-like digital ghosts of departed family members.

Like Inception, “Hacking the Dead” features a small team of people (Trix and her unseen co-workers) who create a false reality. This is the kind of thing that used to require whole government agencies dedicated to fake narratives and propaganda, but technology has now democratized it. (See also this earlier piece about the movie Wag the Dog and deepfakes.)

Digital twins of the dead: The part of the story most obviously different from our world in 2024 is Alice, the digital twin of the analyst’s late mother. Outside of Harry Potter movies or the Haunted Mansion ride at Disneyland, we don’t see a lot of moving portraits, let alone pictures that chat with us as if they were flesh and blood people. Oliver (the analyst) chats with Alice whenever he is alone—at work, at home, wherever he has an internet connection. He also visits his mother’s grave, where her actual remains lie buried, but when he’s there he still talks with her digital twin: her headstone is a giant monitor.

How realistic is this? Very. Digital duplicates of dead people have been around for a while. Replika, a company that lets you create an AI confidant, started when its founder, Eugenia Kuyda, lost her best friend after a car hit and killed him. She used all of their text messages to create a chatbot version of her friend. (There’s an alarming 10-minute video about Replika from Quartz here.) Replika is far from the only such company; this article from Gizmodo describes several companies that create digital twins by absorbing massive amounts of data about people before or after they die.

In today’s digital world, gathering data about a person is relatively easy, but even people who died before the internet can be digitally duplicated. Famously, AI pioneer Ray Kurzweil is trying to create a chatbot version of his late father, Fred, who died in 1970, by digitizing paper records. Kurzweil’s daughter Amy lovingly describes this process in her graphic novel, Artificial: A Love Story.

MIT Technology Review’s recent article “Deepfakes of your dead loved ones are a booming Chinese business” ($) describes how a man named Sun Kai has a weekly video chat with his dead mother’s digital twin, created by a company called Silicon Intelligence. The article observes:

Avatars of the dead are essentially deepfakes: the technologies used to replicate a living person and a dead person aren’t inherently different. Diffusion models generate a realistic avatar that can move and speak. Large language models can be attached to generate conversations. The more data these models ingest about someone’s life—including photos, videos, audio recordings, and texts—the more closely the result will mimic that person, whether dead or alive.

Deepfakes abound with wildly varying degrees of quality and convincingness.

Maybe you think you could never be fooled by a digital duplicate? If so, then I urge you to look at some of the demos from last week’s OpenAI event announcing the arrival of GPT-4o, a new, more powerful, multimodal AI that you can chat with. The AI’s speech is more natural than Siri’s or Alexa’s, and the flirty personality of the default female persona can be just plain weird and sexist. Desi Lydic made delightful fun of this on The Daily Show, while on the Hard Fork podcast Kevin Roose and Casey Newton dug into what makes the AI so convincing and why it’s an “engagement hack” for a product that isn’t all that useful… yet.

However, key both to the microfiction and to the real-life companies that make digital twins of the dead is that nobody is trying to fool people into thinking the dead original of the digital twin is still alive. Even so, ethical questions about the use of digital twins of the dead remain.

In India, Vijay Vasanth, an actor running for office, recently released an endorsement video featuring his father, H. Vasanth Kumar, a well-known politician who died of COVID four years ago, and this is not the only such example. Can Indian voters be sure that the dead father would approve of his son’s policies? Not really.

The least realistic part of “Hacking the Dead” is the precision with which Trix influences digital Alice by bombarding her with positive articles about Trix’s employer, a company about to go public that desperately wants Oliver’s approval.

Generative AI is prone to hallucinations, which is a nice way of saying that it can tell plausible lies with a confident swagger that makes them seem true. Gen AI doesn’t know what is true and what is false, only what words are statistically likely given its training data. So, for example, if an AI’s training data includes a lot of romance novels, the Large Language Model will create prose that includes a lot more bodices and a lot more ripping than actually happens out here in the meat world.
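To make the “statistically likely” point concrete, here’s a deliberately tiny sketch of my own (a toy bigram model, nothing like a production LLM): it learns only which word most often followed which in its training text, and it will confidently continue a sentence the same way regardless of whether the continuation is true.

```python
from collections import Counter, defaultdict

# A toy "training corpus" skewed the way romance novels skew.
corpus = (
    "the duke ripped her bodice . "
    "the duke kissed her hand . "
    "the duke ripped her bodice ."
).split()

# Count how often each word follows each previous word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_likely_next(word):
    """Return the most frequent continuation seen in training."""
    return follows[word].most_common(1)[0][0]

# "ripped" follows "duke" in 2 of 3 training sentences, so the
# model predicts it -- not because it's true, just frequent.
print(most_likely_next("duke"))
print(most_likely_next("ripped"))
```

Real large language models use vastly richer statistics over vastly more text, but the underlying move is the same: continue with what the training data makes probable, with no internal check on what is real.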

The instructions and articles that Trix covertly feeds to digital Alice, therefore, might produce results far from Trix’s intended goal because of hallucinations.

Why does any of this matter?

For me, the most unnerving part of “Hacking the Dead” is its plausibility—and I wrote it!

The biggest challenge we humans face in the new AI era isn’t what will become of our jobs, although that’s a biggie, or whether AI will end humanity Terminator style. Instead, our challenge is determining what’s real, what’s mistaken, and what’s enemy action.


Brad Berens is the Center’s strategic advisor and a senior research fellow. He is principal at Big Digital Idea Consulting. You can learn more about Brad at, follow him on Post and/or LinkedIn, and subscribe to his weekly newsletter (only some of his columns are syndicated here).




May 24, 2024