AI has begun with fear

When boos rang out at talk of AI at SXSW (the most tech-optimist gathering on the planet), it showed that the birth of AI is the opposite of the birth of the internet 25 years ago, and that contrast helps us see where public sentiment about AI is headed.

By Jeffrey Cole


Last week at Austin’s South by Southwest (SXSW), several tech leaders extolled the virtues of AI and made positive predictions about how it would improve the world. SXSW is one of the leading tech conferences in the world, filled with fans, executives, and investors looking for the next new thing.

There is no audience anywhere that is more supportive of new tech. SXSW attracts enthusiasts who want to see, use, and invest in the newest technologies. Back in 2007, Twitter debuted in front of an SXSW audience that erupted in cheers at the new form of social media.

At this year’s SXSW, one tech panelist urged people “to be an AI leader.” And OpenAI’s Peter Deng shared his (self-serving) view, “I actually fundamentally believe that AI makes us more human.”

Rather than erupting in cheers and wanting to learn more, the audience started booing at all the positive talk of AI.

That’s the reaction we might expect when somebody starts boasting about AI at an organized labor convention, or at a Writers Guild meeting where people fear losing their jobs. It is extraordinary that it happened at a fanboy gathering like SXSW.

It is now becoming abundantly clear that AI is different. To a world that knew little or nothing about AI until last year (beyond fictional versions like HAL or Skynet), this is not just another great new technology that will enhance our lives (with a little collateral damage to other people who work on encyclopedias and phone books as happened with the internet).

AI is a game changer, but most of us don’t think it’s a good one — at least not at this early point.

Looking back: the internet in the late 1990s

The emergence of the internet about 25 years ago was different. While the internet itself is about 55 years old (first coming online at UCLA over Labor Day weekend in 1969), the public began using it in the mid-1990s, after Tim Berners-Lee’s creation of the World Wide Web and especially once popular browsers like Netscape emerged.

Millions of people were excited when experts forecast the ways the internet would change the world. It would bring people together, mean the death of distance, and let us communicate with anyone in the world at little or no cost. Everyone online would have access to all the information in the world regardless of where they lived. Previously, you had to be in New York, London, or another big city to access the world’s great libraries. Not with the internet! Teachers believed that the use of digital technology would mean their students would come to kindergarten already able to read (at least a little bit) and navigate information.

By the early 2000s, no matter where you looked you could see things getting less expensive, new services being developed, and the democratization of information and journalism. Back then, the idea that anyone could be a journalist was a good thing.

Although there were some who rejected going online for years, until it became difficult to remain a non-user, most of us shared the enthusiasm and hope of the digital future.

It took close to ten years of the digital revolution before we could see the other side.

The emergence of social media, which originally had such promise, was one of the leading reasons our optimism toward the internet faded. While it was true anyone could be a journalist, those self-taught journalists had neither training nor regard for ethics and truth, which led to extraordinary divisiveness. The downside of popular web use emerged as we witnessed the growth of screen addiction, bullying, hate speech, disinformation, scams, and the dark web.

A few years ago, I wrote about how — in the 50s and 60s — parents warned their kids to look both ways before crossing a street and never to get into a car with, or accept candy from, strangers. Today, parents still warn about those things and add a dozen more warnings about being safe online: don’t reveal personal information; don’t meet in person somebody you know only from social media; and so on. And we are still learning about the internet’s dangerous sides.


Recently, I shared a chart showing that, over twenty-five years (1999-2024), the belief that technology would make the world a better place started at a high level and has steadily dropped.

How AI today is different

So far, AI’s path has been the opposite of the internet’s a quarter century ago. Practically all the public’s attention has focused on fears. And those fears are huge. Few hear or care about the potential positive impact of AI. Instead, we focus on the negative ways it will change our lives, potentially even destroying life itself.

Ask people how they feel about AI, and the first thing they do is express fear that their jobs will not survive it. Already, people worry that drivers’ jobs will disappear and that cashiers will be a thing of the past. Many Americans look at their own jobs, concerned that much or all of what they do can be automated. Few fears cut closer to what’s important to people than this. It was one of the major reasons for the Writers’ and Actors’ strikes last year.

Teachers worry not only that their positions may evaporate, but also that their students will never develop fundamental skills or be able to compete in the marketplace. What happens when these kids grow up and comprise most of the work force?

Although it seems farfetched, we frequently hear about the worst fear of all: Artificial Intelligence will become sentient and decide that it no longer needs human beings, thus ending humankind. This is a cliché in science fiction, but in our reality well-known tech leaders like Elon Musk say AI is a greater threat to humankind than nuclear war. Despite Musk’s recent years of erratic behavior, his credentials as a visionary who sees things that others don’t are unassailable.

As AI develops, will we decide that our fears were unfounded? Will we decide that different forms of AI are extraordinary tools that make almost everything better?

It’s more complicated than that. Though the positive hopes for the internet turned out to be true, it also turned out to have more problems than anyone anticipated. With AI, we just don’t know what will happen. And that’s the scary part.
____________
Jeffrey Cole is the founder and director of The Center for the Digital Future at USC Annenberg.

March 20, 2024