Truth in a filtered world

We already live in a polarized society, so what happens when technology makes our filters even stronger?

By Brad Berens

Image created with ChatGPT-4

Recently, I shared “Filtered,” a short science fiction story (a microfiction of 1,000 words or fewer) about a man, Phil, who found his wife Cynthia’s pet name for him so irritating that he put hi-tech smart filters into his ears to transform “Sweetie” into “Darling” in real time.

You don’t have to read “Filtered” (although it ain’t bad) to understand this week’s piece, where I’ll dig into how plausible the world of the story is.

Synthetic Voice: The most realistic aspect of the microfiction is how convincingly the filters in Phil’s ears capture Cynthia’s voice and cadence. Today, “synthetic voice” refers to an algorithm’s ability to seamlessly duplicate a speaker’s voice.

Podcasts already use synthetic voice to make it easy to alter advertisements that hosts read out loud. But this technology goes far beyond podcasters: for example, Apple’s latest iPhone operating system offers “Personal Voice” as an option. After you spend 15 minutes reading sample phrases into the device, your phone churns through the recordings overnight and creates a digital clone of your voice.

The scary version of this technology is how criminals can use voice cloning to pretend to have kidnapped a loved one: the cloned voice (“Mom, they want $50,000… please help!”) helps them extort money from panicked relatives. If you ever get such a call, text your allegedly kidnapped loved one before you call the bank: it’s possible, even likely, that your loved one is happily sipping a latte at a cafe rather than sitting in the clutches of evildoers.

Real-Time Voice Changes: This, too, is realistic. It’s already happening, albeit in more controlled environments than in the story, where Phil’s filters change Cynthia’s voice anywhere he encounters it. Today’s real-time translation is a more complex endeavor than swapping out individual words the way Phil’s filters do.

Localize has a nice overview of real-time translation technology, and Skype (a Microsoft subsidiary) offers real-time translation that (like Apple’s Personal Voice) recreates your own voice when translating your words into another language.

A more primitive version of the technology is Voicemod, which gamers use to change their voices while gaming or using Discord, and there are many similar offerings.

Batteries and Compute: The least realistic aspects of the filters are 1) their tiny size (they fit invisibly inside Phil’s ears), and 2) the fact that I ignored how to keep them charged.

Even with a strong Bluetooth link to Phil’s main computing device (I didn’t specify it in the story, but think of a phone or an AI device) and his digital assistant, Spenser, the filters would need a large amount of computational power (or “compute,” as more techie people say) on board to do the real-time editing.

Despite Moore’s law and the ever-increasing number of transistors in computer chips, packing that much hardware and software into something so tiny that it fits inside Phil’s ear canal is beyond today’s technology, but not tomorrow’s.

Then there’s the question of how to power the filters. I can’t imagine that anybody would want to stick tiny versions of Lightning cables deep into their ears once a day to charge their smart filters, so without another way of charging, this hypothetical technology is both improbable and a non-starter from a user-behavior perspective.

My physician father, Stephen Berens, pointed out to me that Phil couldn’t possibly stick the filters on top of his eardrums because his fingers would be too big; he also reminded me that the human body generates electricity, so perhaps the filters could draw power from Phil’s body. That’s a neat idea. (Thanks, Dad!) Today, though, the power draw would be too great to run the filters. Plus, there’s the question of the heat the filters would generate with any onboard compute.

For filters to become a reality, most of the compute would need to live outside the ear so that the filters would need only minimal power and wouldn’t heat up Phil’s ear canals.

Implications: We’re already living in a polarized world of filter bubbles and “alternative facts,” but even the most hyperpartisan news consumer today still knows that there is another side to the issues, even if she or he entirely discounts the validity of that other side.

Filters like Phil’s change that. If you can silently replace offending words and ideas in what you hear moment to moment, then your tolerance for ideas that differ from yours will go down, because your ideas won’t encounter resistance in everyday life.

For example, these days there’s a lot of anti-“woke” rhetoric, not just from folks on the right but from anybody who resents it when one group tries to police how another group uses words. With filters like Phil’s, a racist could change “black” to the N-word every time anybody says “black.” Or filters could amplify and accelerate the antisemitic rhetoric that has risen on X (formerly Twitter) since the start of the Hamas/Israel war.

Both of those scenarios are scary, but what’s even scarier is that this sort of filtering technology could make it harder for us to agree about the most basic features of the world we all live in.

Richard Rorty famously wrote:

We need to make a distinction between the claim that the world is out there and the claim that truth is out there. To say that the world is out there, that it is not our creation, is to say, with common sense, that most things in space and time are the effects of causes which do not include human mental states.*

I wonder what Rorty would have made of the kind of filtering technology I explored in “Filtered,” because that technology renders Rorty’s common-sense statement obsolete. With filtering, human mental states can change the world that is out there.

When that is possible, what happens to truth?
__________


Brad Berens is the Center’s strategic advisor and a senior research fellow. He is principal at Big Digital Idea Consulting. You can learn more about Brad at www.bradberens.com, follow him on Post and/or LinkedIn, and subscribe to his weekly newsletter (only some of his columns are syndicated here).


* This passage comes from the chapter “The Contingency of Language” in Rorty’s Contingency, Irony, and Solidarity (pages 4-5), which I’ve quoted at greater length here.


See all columns from the Center.

November 22, 2023