What is real?

Generative AI makes it effortless to create photorealistic images (and soon videos), but sometimes the question is more complicated than whether something is fake.

By Brad Berens

Image created with Ideogram.ai.

I belonged to a fraternity in college. This often surprises folks until they learn that the house in question was a co-ed literary society that fans of the Revenge of the Nerds movies wouldn’t find credible. “Ah, that makes sense,” they say, eyebrows settling.

College—with its heady elixir of time, exploration, and other young people walking down similar paths before life focuses them—is a thicket of intense conversations. Recently, I remembered one such conversation with my fraternity sibling Chuck, a physicist.

We were in the living room debating the nature of reality, as students do. An English major, I said that reality depends on context.

This irked Chuck, the scientist, like sand in bedsheets. He was adamant that reality is reality: the truth is out there.

I replied, “OK, so what you’re saying is that the number 10 always means this many” and held up all my fingers and both thumbs.

“Yes,” Chuck said.

“But what if we change the base to 2 or 4 or something else?” I said. “In Base 2, 10 is this many,” and held up two fingers. “Right?”

“Oh,” Chuck said. “Is that all you’re saying?”

“Pretty much,” I replied.

“That’s fine, then.”

What struck me then, and what made this conversation memorable, was the intensity of our disagreement and the speed with which that disagreement evaporated.

At first, Chuck thought I was denying the existence of reality. Then he realized that I was making a milder claim: how we experience and represent reality depends on context. (I’ve quoted Richard Rorty on this idea before.)
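
The same point is easy to see mechanically. Here’s a quick Python sketch (my illustration, not part of the original conversation): the numeral “10” is just a representation, and the quantity it names depends on the base in which you read it.

    # The string "10" is one representation; the quantity it names
    # depends on context (the base used to interpret it).
    numeral = "10"
    for base in (2, 4, 10, 16):
        print(f'"{numeral}" in base {base} means {int(numeral, base)}')
    # Output:
    # "10" in base 2 means 2
    # "10" in base 4 means 4
    # "10" in base 10 means 10
    # "10" in base 16 means 16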

Flash forward. Generative AI has democratized access to images that look real but don’t come from cameras. OpenAI’s new video-generating Sora platform (the sample videos are amazing, which is troubling) will create real-seeming videos as easily as DALL-E makes static images. This is not a vague, science-fiction Neverland: Tyler Perry paused a planned $800M expansion of his movie studio in Atlanta because he wants to see what impact Sora will have on moviemakers.

These days, the question “what is real?” is no longer the domain of philosophers and college bull sessions. It’s an urgent question with real-life consequences, and it’s getting harder to answer.

Dealing with outright liars is challenging enough: deep fake revenge porn creators, purveyors of misinformation, fake kidnappers who use voice-cloning tech to trick families into paying ransoms for family members who are safely at home. On top of all that, there’s an unsettling gray area of AI-created things making legitimate truth claims.

What is real? If your phone rings and you hear President Biden’s voice on the other end, then unless you work at the highest levels of government, it’s a recording. If Biden recorded the message, then it’s real, but you don’t know whether he recorded it. If it’s an AI-created voice clone of Biden, but he authorized the clone to personalize the message, is that real? Yes and no. No, it’s not the flesh-and-blood Joe Biden calling you, but it’s an authentic message from Biden via his campaign, powered by technology.

In relevance theory (part of pragmatics), you mean both what you say and all the things implied by what you say (“implicatures”). By that logic, Biden means what his AI clone says, so it’s real… sort of.

But how can you tell if the voice you hear on the phone is a Biden-authorized AI voice clone? In the New Hampshire primary in January, an unknown disinformation group used an unauthorized Biden voice clone to try to suppress votes. That’s not real, but it’s hard to tell the difference.

Figuring all this out is a lot of work. Most of the time, we’re too busy with other things to do that work.

What is real? Last week in MIT Technology Review ($), an article by Will Douglas Heaven, “Generative AI can turn your most precious memories into photos that never existed,” described The Synthetic Memories project, which uses Generative AI to help people capture important memories that they never had the chance to photograph:

Maria grew up in Barcelona, Spain, in the 1940s. Her first memories of her father are vivid. As a six-year-old, Maria would visit a neighbor’s apartment in her building when she wanted to see him. From there, she could peer through the railings of a balcony into the prison below and try to catch a glimpse of him through the small window of his cell, where he was locked up for opposing the dictatorship of Francisco Franco.

There is no photo of Maria on that balcony. But she can now hold something like it: a fake photo—or memory-based reconstruction, as the Barcelona-based design studio Domestic Data Streamers puts it—of the scene that a real photo might have captured. The fake snapshots are blurred and distorted, but they can still rewind a lifetime in an instant.

Is this real? It’s a Generative-AI-created image of a real memory, and the image announces its synthetic nature via blurriness and distortion, so it’s not a “deep fake” because it’s not trying to fool the viewer.

Most deep fake creators aren’t as ethical as The Synthetic Memories project.

So what?

In earlier pieces, I’ve talked about how Trust is Analog and how we need to rely on others as Fair Witnesses to help determine whether something is a lie, so I won’t recapitulate those ideas here.

However, we need to start asking a new question before we ask, “is that real?” about anything we read, see, or hear. That question is: what do we mean by “real” in this context?

It’s a difficult question because it requires us to slow down and pause before acting or reacting. We humans aren’t good at doing that, particularly when it comes to juicy, spreadable internet content that algorithms have optimized to exploit our emotions, our prejudices, and our hair-trigger reflex to share outrage-provoking things we find in our feeds.

But we have to try.
__________

Brad Berens is the Center’s strategic advisor and a senior research fellow. He is principal at Big Digital Idea Consulting. You can learn more about Brad at www.bradberens.com, follow him on Post and/or LinkedIn, and subscribe to his weekly newsletter (only some of his columns are syndicated here).

See all columns from the Center.

April 19, 2024