My 2023 prediction. . .or prayer

Many thinkers end each year with a cluster of predictions for the next year. I have just one—and it’s more of a prayer than a prediction—about trust.

By Brad Berens

The pressing question of our age isn’t new. The Marx Brothers asked it in Duck Soup (1933): “Who ya gonna believe, me or your own eyes?”

In a recent Los Angeles Times article (thanks, Dad, for sharing it), movie director Scott Mann talked about seamless AI-powered film editing that can replace F-bombs with “freaking” or translate dialog into another language without awkward out-of-sync lips. “You can’t tell what’s real and what’s not,” Mann said, “which is the whole thing.” In that same article, computer science professor Robert Wahl observed, “We can no longer fully trust what we see.”

Both Mann and Wahl get things backwards: we’ve never been able to tell what’s real, and we’ve always trusted what we see. We are especially credulous with new technologies. We saw it with social media in the 2016 election cycle (e.g., Cambridge Analytica) and we’re seeing it now with AI.

Back when movies were new, audiences had no experience with the vocabulary of film because it was still developing. Telling the difference between that sort of illusion and reality was difficult. I once read a story (if anybody out there can find a source, I’d love it) that in the early 1900s, at one of the first movie showings, when an onscreen gun pointed directly at the audience, a man in the theater was so alarmed that he shot back!

In the earliest days of Virtual Reality, I watched something similar happen. At a conference I was producing, we had Oculus headsets in the lobby. My friend B strapped on the headset, took a virtual roller coaster ride, and screamed her head off because it seemed so real. Today, B might sway a bit, but she’d recognize the illusion.

Doctored photos didn’t start with Adobe Photoshop: they have been around as long as photos have. To our 21st-century eyes, the earliest examples in Dino A. Brugioni’s remarkable book Photo Fakery: The History and Techniques of Photographic Deception and Manipulation are obvious. Likewise, now that we’ve grown accustomed to Photoshopped images, they’re often easy to spot. The internet abounds with memes about comically bad Photoshop misadventures. But when photography was new, it was easy to believe that fakes were real.

The past weeks have seen numberless stories about remarkable, unnerving AI-created language and images that threaten the stock photography business (why pay Getty Images when DALL-E will create an image for free?) and university admissions procedures (did a human write that essay, or was it ChatGPT?), make horrible things like fake revenge porn dangerously easy to create and distribute, and reinforce our worst human biases in black-box algorithms.

As these technologies get better, calculated misuses of them will get worse. If somebody we don’t know or don’t know well shares a video of a world leader declaring war, should we believe it? Maybe not, but we will, because we humans are natural believers.

The Romantic poet Samuel Taylor Coleridge famously wrote about “the willing suspension of disbelief,” but as one of my grad school professors observed, that phrase makes it seem like we’re working to accept what’s put in front of us. Instead, the suspension is effortless. “Disbelief just flies up all by itself,” he said.

Effortless trust is the first part of a two-part problem. As a species, we’ve evolved to act fast, believe quickly, jump to conclusions, and not get bogged down in details. This is at the heart of Behavioral Economics: Daniel Kahneman’s nimble “System 1” improvisationally processes most of the tidal wave of information we surf moment to moment. It takes a lot to get methodical, plodding “System 2” off the cognitive couch to say, “wait a minute…” In particular, if we see something that fits our existing biases, it’s easy to believe and hard to ignore.

The second part of the problem is democratized access to reality-warping technologies. If only movie studios with multimillion-dollar budgets could afford AI-created words and images, as used to be the case, then we wouldn’t have much to worry about. But we have a lot to worry about, because soon everybody will be able to use free or inexpensive technologies to create simulacra that fool most people.

Which brings me to my 2023 prediction or prayer.

Professional trust proxies and Heinlein’s Fair Witnesses

One key selling point of cryptocurrency has been that it’s “trustless”—you don’t need a third party for the system to function.

How’s that working out?

In 2023, human-presence-powered trust will become more valuable than ever before. What I mean by “human-presence-powered trust” is that we will teach ourselves to be skeptical of things that we don’t see with our own eyes in real life.

It will be hard (all new muscles are hard to build), but we will start to question things that we see on screens, even videoconference screens. We already have practice questioning where the person on the other side of the screen is sitting. Is that Bob’s real living room, or is it a virtual background? Soon, we’ll ask ourselves, is that Bob or somebody else wearing a virtual Bob costume? Or is it a Bob bot, where a machine and a human puppeteer are collaborating in real time? Or has there ever really been a Bob?

As we start to question digital illusions more, a big challenge will be that what we see with our own eyes in real life does not scale. Life today is bigger than that.

We already have professional, third-party trust proxies when it comes to money: CPA stands for “Certified Public Accountant.” With documents we have different proxies: a Notary Public is a government appointee who witnesses when we sign documents.

We need trust proxies with wider portfolios than money and documents.

Robert A. Heinlein’s classic science fiction novel Stranger in a Strange Land (1961) imagines such a trust proxy: the Fair Witness, who watches and remembers only what she or he sees, observes without participating, has no skin in the game, and makes no inferences beyond direct observation.

In the novel, Anne, who works for Jubal Harshaw, is a certified Fair Witness. Jubal asks Anne to demonstrate how Fair Witnesses work in a conversation with Jill Boardman:

“Anne!…” Jubal called out. “That new house on the far hilltop—can you see what color they’ve painted it?”

Anne looked in the direction in which Jubal was pointing and answered, “It’s white on this side.” She did not inquire why Jubal had asked, nor make any comment.

Jubal went on to Jill in normal tones, “You see? Anne is so thoroughly indoctrinated that it doesn’t even occur to her to infer that the other side is probably white, too. All the King’s horses and all the King’s men couldn’t force her to commit herself as to the far side. . .unless she herself went around to the other side and looked.” (130)

Alas, we have no Fair Witnesses for hire yet… although wouldn’t it be great if we could summon one like an Uber through a handy smartphone app?

But we can and must act as Fair Witnesses for each other because trust is analog.

You know those “if you see something, say something” signs in airports and trains? I always worry that those signs bypass our consciences and talk directly to all our reflexes and prejudices.

We need a different sign: “if you see something, don’t say something.” Then in smaller print, “think about it first and talk with a reasonable other person in real life.”

The “in real life” part is critical. Social media is frictionless fertilizer for misinformation and disinformation. It’s too easy to comment or share. It’s harder to describe whatever yummy story we’ve found to a friend standing in front of us wearing a skeptical look. (In the absence of a nearby friend, make a phone call; don’t text… you don’t know if the person on the other end of a text is a person.)

So, here’s my prayer for 2023: please ask for help, and please offer help when asked. Before you share a tasty morsel that you found online, find a friend and ask, “Is this real?” or “Can you be my Fair Witness?” When you get that call, reply, “Where did you see it? Did you check out the source? Let’s look together.”

Because we’re all in this together.
__________

Brad Berens is the Center’s strategic advisor and a senior research fellow. He is principal at Big Digital Idea Consulting. You can learn more about Brad at www.bradberens.com, follow him on Post and/or LinkedIn, and subscribe to his weekly newsletter (only some of his columns are syndicated here).

See all columns from the Center.

January 4, 2023