What do Mister Rogers and artificial intelligence have to do with each other? Chief strategy officer Brad Berens explores the serendipitous connection.
By Brad Berens
This is a column about the nature of human expertise. That sounds like airy philosophy, but it’s actually an urgent practical question facing us as a species today because of the pressure that algorithms (artificial intelligence and machine learning) put on what we know, what we do, and who we are.
It starts with television.
A curious thing happened when I was leafing through our local PBS television station’s printed guide. This slim pamphlet covers evening shows, calling out special highlights. One such highlight arrives on Saturday, February 9: Won’t You Be My Neighbor, the hit 2018 documentary about Fred Rogers.
My wife Kathi and I missed seeing it in the theater, so we were delighted to learn that it was coming to our living room. As the Xfinity (Comcast) commercials instruct, I pressed the little microphone button on the Xfinity remote control and said, “Won’t you be my neighbor?” in order to program our DVR to record the show.
That’s when the curious thing happened: the onscreen guide revealed that the documentary would air in Spanish on HBO Latino at 5pm on February 9. That’s three hours earlier than the PBS showing, and in a different language. But why didn’t Xfinity highlight the fact that it would air in English later that same night? I searched again; this time I said, “won’t you be my neighbor PBS.” No dice.
It didn’t make sense.
I clicked to the Xfinity online guide and voice-searched “PBS.” The guide jumped to that channel. I clicked “Channel Info” and then manually advanced through the listings day after day (this took a while) until I arrived at Saturday, February 9.
Eureka! There it was: Won’t You Be My Neighbor. I hit record, satisfied but still puzzled. Why hadn’t the PBS showing popped up when I voice-searched? Two explanations came to mind.
#1. The paranoid explanation
Xfinity deliberately hid the free showing of the Mister Rogers documentary because the company makes money per subscriber on HBO. In contrast, it pays PBS what is known as a “retransmission consent” fee, which is one of the ways that broadcast channels make money.
But this interpretation doesn’t make a lot of sense because Xfinity pays the retransmission fee regardless, and the company is more worried about subscribers cutting or shaving the cable cord than it is about any single show. Plus, if English-speaking Xfinity subscribers want to watch the documentary, they can’t do so on HBO (at least this month), and Xfinity values its customer experience.
#2. The “whoops” explanation
When PBS shared its month-long programming information with Xfinity, it coded Won’t You Be My Neighbor as part of its “Independent Lens” series (you can see this in the callout, above).
I’ve run into this before. When the series “Sherlock” was a big deal, PBS ran it in the U.S. as part of Masterpiece, so searching “Sherlock” turned up nothing.
I dutifully searched “Independent Lens” on the Xfinity online guide, only to find that Won’t You Be My Neighbor is not listed as part of that series.
Neither explanation is satisfying. Fortunately, that’s not what this column is about.
Two different mindsets: search versus exploration
The PBS pamphlet sat on our kitchen counter for days before I found myself absently paging through it. The physical artifact had a persistent presence in my life, unlike something online that I’ll scroll past and never see again unless I make an effort to save it. After I’d passed by the pamphlet several times, the buildup of those repeated exposures prompted me to open it. When I read it, I wasn’t searching for something specific: I was browsing, exploring. Later, when I picked up the remote control to record the documentary, I was searching.
When searching, whether it’s in a digital or a physical environment, attention narrows and focuses. If you’re concentrating fiercely, things unrelated to your search drop out of your perception, which is why people counting basketball passes fail to see a man in a gorilla suit walking across the court.
When exploring, particularly in physical environments, attention expands. Wandering—even if it’s through a paper hodgepodge of different programs that only have “they’re on PBS” in common—activates what neuroscientists call the default mode, which involves daydreaming, noticing connections between things, remembering that thing you’ve been meaning to do but keep forgetting, having a-ha! moments.
Another way of putting this is that without the default mode, wandering, browsing, there is no serendipity.
Why serendipity matters
If you follow stories and research about Artificial Intelligence (AI) and Machine Learning (ML), then you know that what distinguishes algorithms from humans is their relentless efficiency, particularly with pattern recognition. Algorithms don’t get bored and zone out after looking at a million similar items in succession. IBM’s Watson taught itself about malignant tumors and was soon better than cancer doctors at identifying the vast majority of tumors… leaving to the human doctors the task of identifying the unusual tumors that didn’t fit the pattern or that combined different patterns.
That’s what humans do best: we deal with things that don’t fit patterns or that combine patterns in non-linear, difficult-to-explain ways. Frequently, the way we get to those combinations is serendipitous and via the default mode.
There is no algorithm for serendipity because by its very nature serendipity exceeds the kind of calculations that algorithms require.
Relatedly, experiences that are driven or amplified by algorithms tend to eliminate serendipity because algorithms always presume that you want stuff like the stuff that you liked before. This is why Amazon and Netflix show you books and shows that are similar to books and shows you’ve already read and watched.
Facebook does the same thing: if you interact with a Facebook friend’s postings in any way (liking, commenting, sharing), then Facebook’s algorithm presumes that you want to see more of your friend’s postings. The notion that by giving a thumbs-up to your college roomie’s post about what he made for dinner you have effectively scratched the itch of your curiosity about that person (to my old roommates: guys, I’m not talking about you) does not fall within the Facebook algorithm’s equations.
This is not a diatribe against algorithmic digital experiences or an end-of-days rant about AIs stealing our jobs.
Instead, more mildly, I’m suggesting that since we are starting to get a clear idea about what algorithms do better than humans we also need to figure out what humans do better than algorithms. Humans can recognize and exploit serendipity. Algorithms can’t. That’s a competitive advantage.
And Lord knows we need all the competitive advantages we can get.
Brad Berens is the Center’s Chief Strategy Officer.
See all columns from the Center.
February 6, 2019