Should we take seriously a recent study that shows people like AI-generated poetry? And, asks Center strategic advisor Brad Berens, what are the broader implications?

Image created using Adobe Firefly.*

A few days ago, La Profesora sent me an intriguing link to a Poetry Turing Test set up by a couple of philosophers at the University of Pittsburgh. The test is a simple Google Form that presents the visitor with five poems written by famous human poets and five poems written by ChatGPT 3.5 in the style of famous human poets.

These are the same 10 poems the philosophers (Brian Porter and Edouard Machery) used in their recently released, open-access Scientific Reports paper, “AI-generated poetry is indistinguishable from human-written poetry and is rated more favorably,” the title of which neatly summarizes its conclusions.

You can also read a clever November 14th article in The Washington Post ($)—”ChatGPT is a poet. A new study shows people prefer its verses. Shall I compare thee to a ChatGPT poem?”—where different interviewees gnash their teeth or shrug at the prospect of innocent people being taken in by algorithmically generated poetry.

The test is a fun exercise. You should take it immediately. Go ahead. I’ll wait.   (more)

A hotly contested election, a polarized nation on the brink of civil war… sounds familiar, doesn’t it? But which election are we talking about?  Center director Jeffrey Cole explains.

The future of the country — or more to the point, whether there would be a country — depended on who won the election. One candidate represented uncertainty, fragmentation, and chaos. The other offered the best hope, however unlikely, to keep the nation together. The entire American experiment rested on the election.

I am referring to 1860, when Abraham Lincoln was running against three other candidates, Stephen Douglas being the best known. Lincoln represented the recently formed Republican party, which had never won a presidential election. Douglas was the Democrat. Together, the two big parties got just 69.1% of the vote. The other two candidates split the rest: John Breckinridge of the Southern Democratic Party received 18.1% of the total, and John Bell of the Constitutional Union Party finished with 12.6%.

It was a mess.

Lincoln won the election with only 39% of the popular vote; all his electoral votes came from what we now call blue states. It was hardly a national consensus. The red states were solidly behind Breckinridge. Although Douglas received the second highest percentage of the vote, he only won one state: Missouri.  (more)

Can a 1983 movie thriller about computers and the military tell us anything about drone warfare today?  Center strategic advisor Brad Berens answers.

Image created using DALL-E.*

In 1984, my lifelong friend Juliet and I were watching a then-recent movie, War Games, at my parents’ house. This was in the early years of home video. The first Blockbuster store had yet to open, and Tim Berners-Lee was five years away from inventing the World Wide Web. Starring a very young-looking Matthew Broderick and Ally Sheedy (although both were in their early 20s during filming), the movie was about David, a high school videogame nerd, and his friend, Jennifer, who hacked into the North American Aerospace Defense Command (NORAD) by accident—they were trying to hack into a videogame company to poke around some upcoming releases.

At one point, Juliet observed of Ally Sheedy, “she has pretty hair.” This mystified me since it had never occurred to me that “pretty” was an adjective that one could use in reference to hair. Years later, when War Games came up, I mentioned Juliet’s evaluation to La Profesora, who at once sagely agreed that, yes, indeed, Ally Sheedy had pretty hair. I still don’t get it, but such is the feminine mystery.

I rewatched War Games on MAX last night. It’s still a lot of fun, and it’s a fascinating example of a retro future: a look forward from way back that tells us a lot about the daylight between where people thought we were going and where we landed.   (more)

Since the Center for the Digital Future first started tracking Internet use 26 years ago, trust in the reliability of online information has been in steady decline. That helps to explain our polarization today. Center Director Jeffrey Cole explains.

The Center for the Digital Future has been tracking the Internet continuously for 26 years through our World Internet Project (WIP). The work in the U.S. and over thirty other countries is the longest-running and most comprehensive digital project in the world. This column is part of an ongoing series looking at several of the critical Internet issues that only we can compare over time. Previously, we looked at 25 years of users’ perceptions of whether the Internet and smartphones have made the world a better or worse place.

In America, we are in the final weeks of a bitterly contested presidential election that is ending far differently than it began. One of the few constants has been the steady spread of misinformation. Now, it seems impossible for some Americans to look at their varied news sources and determine whether household pets are being eaten in Ohio, whether one of the candidates actually worked at a McDonald’s, and whether the other saved or tried to destroy the Affordable Care Act (Obamacare).

The days of having a Walter Cronkite who was “the most trusted man in America” to both sides of the political divide are long gone. The sources one side trusts are completely abhorrent to the other. In the past 26 years, online information has eradicated the business model for traditional news media, adding unverified—and often, unreliable—information to the news mix.

The worst possible case appears to be coming true: not being sure what to trust makes many of us suspicious of all information. We believe nothing unless it comes from our tribe (one of the worst possible sources).

In 1999, we started asking internet users how much of the information online they find reliable. It was difficult to generalize about a medium with so many sources. The question was (and is) designed to be in line with ninety years of Roper and Gallup surveys that asked consumers how much of the information in newspapers and on radio (and then, starting in the 1950s, television) they believed was credible.  (more)

Jonathan Haidt’s bestseller “The Anxious Generation” is a terrible book on which nobody should waste their money or attention.  Center strategic advisor Brad Berens elaborates.

Image created using ChatGPT*

Last week I had the privilege and pleasure of joining Joey Dumont on an episode of his True Thirty podcast in which we debated the merits of Jonathan Haidt’s bestselling nonfiction book The Anxious Generation: How the Great Rewiring of Childhood is Causing an Epidemic of Mental Illness.

Attentive readers will notice that I didn’t hyperlink to Haidt’s book, which I declined to do for two reasons:

#1: Nobody needs my help to learn about this book. As I write, it is #7 on The New York Times combined print and e-book nonfiction bestsellers list, and it has been on that list for 28 weeks.

#2: The book is terrible, and I don’t want to encourage anybody to spend their ducats on it… even with a measly hyperlink. (Sidebar: until I typed that last sentence it had never occurred to me that the word “measly” had to do with “measles,” the dangerous and unsightly disease.)

Want to know why Haidt’s book is ghastly? You should listen to our entire True Thirty conversation because we cover both sides (Joey started out as a fan, but I think I wore him down), and also because we remain civil and affectionate even while we are in complete disagreement. In these polarized times, that civility is a pearl of great price.

What’s wrong with the book? Representative examples follow, but before you go to them, please understand that I’m not just recapitulating a podcast in this Dispatch. I’ll make different points at the end.  (more)

Center director Jeffrey Cole digs into the oppressive omnipresence of iPads at every retail transaction.

I used to love the iPad.

Shortly after Steve Jobs announced the third of his great innovations in seven years (after the iPod and the iPhone), I rushed to the head of the line to place my order. My first iPad was $499, heavy, and somewhat clunky.

I loved it! It could do most of the things I had needed a laptop computer for. It was also the best thing imaginable for reading a newspaper or a book, or for playing games in bed or even in the bathtub.

Best of all, it was easy to use. There was no learning curve, and it was impossible to corrupt or damage unless you dropped it on the floor. I recommended it to friends who had a hard time learning operating systems and computers. It was also the ideal device for people who introduced viruses onto their computers, or corrupted registries, or encountered the blue screen of death.

The first iPads were inexpensive. Today, while there are versions with OLED and massive memory that cost more than a PC or MacBook, there are also well equipped versions that do almost anything starting at $349. Pretty amazing, but we are used to price drops with electronics.

The iPad was truly revolutionary. It was a computer for people who didn’t have to know they were using a computer. You just opened the case (no booting) and started exploring. In the unlikely event there was something too complex to understand, you could visit the Genius Bar.

These days I hate the iPad!

It has become an oppressive part of my life, annoying and angering me whenever I see one in public. My own iPad, which I continue to love, does not elicit those same feelings. I still see it as my traveling buddy, easier and more fun than turning on a computer (with crossed fingers hoping it boots correctly).  (more)

In the movie business, do sequels thrive or do audiences want original stories? Or both? Or neither? Center director Jeffrey Cole explores the scope of Hollywood’s confusion when it comes to what works.

While not thought of as a philosopher, Mike Tyson may eventually be considered one of the best alongside Yogi Berra (“It ain’t over till it’s over” and “Nobody goes there anymore; it’s too crowded”). Tyson’s great quote refers to boxing but fits equally well whether you are a startup, a successful incumbent business, or a movie studio trying to figure out what kinds of films to make: “Everybody’s got a plan until they get punched in the mouth.”

Every year, Hollywood has a perfect plan for what will succeed in getting audiences into the theater. Then it gets punched in the mouth.

The plan worked brilliantly in 2019, when nine films earned global revenues of $1 billion or more. The lesson of all those films was to build on familiar intellectual property (IP): audiences would be comfortable with what they liked in the past and knew what to expect. At the top of the 2019 list was Avengers: Endgame, which was the capstone of over 20 Marvel movies and continued a story from the film just before it. Also hugely successful were Toy Story 4, the live-action remake of The Lion King, Frozen II, a live-action Aladdin, Spider-Man: Far from Home, Star Wars: Episode IX, and Jumanji: The Next Level.

The remaining two also built on familiar IP: Joker featured a new actor and a fresh take on the Batman villain well known from earlier films and comic books, as did Captain Marvel.

There were no original characters or concepts in the top 10 films of 2019.

Original IP works... or does it?

“Give the people more of what they have shown they like” was the clear lesson of successful box office, and then came COVID. It took a while to get back to significant original production. The two most successful films, before something resembling a normal summer resumed, furthered the lessons of 2019. These films, importantly, were made before COVID — Avatar: The Way of Water and Top Gun: Maverick — both sequels.  (more)

Sometimes, when you know a change is coming, the anticipation itself can create other sorts of change.  Center strategic advisor Brad Berens illuminates.

Image created using Adobe Firefly.*

Regular readers of these columns might remember a few issues back — in Will Ozempic Kill Movie Theaters? — when I explored how the possibility of 10% of the U.S. population going onto GLP-1 weight-loss drugs like Ozempic might be the final nail in the collective coffin of moviegoing.

That piece was part of broader research we’re doing here at the Center for the Digital Future about how Ozempic (and other weight-loss drugs like it) will have second-order impacts (movie theaters, sporting events, concerts, restaurants, adult beverages, travel, fashion) that go far beyond their expected deep impact on healthcare.

We’re about to go into the field with a national survey, and I’ll report back soon.

This issue, though, I want to explore a more personal Ozempic journey: my own.

I have Type 2 diabetes as well as other maladies that are worse because I’m overweight. I’m lucky that I don’t need to give myself insulin shots, but it isn’t getting better. I tried a drug called Metformin that would have done more work to manage my blood sugar and also help me to lose weight, which would have been great, but it torched my stomach.

Since I took up swimming several times each week, my general health has improved (and my knees don’t hurt, which—after being a competitive fencer in my youth and slamming my knees into concrete over and over—is amazing), but my weight hasn’t budged, which means neither has the diabetes.  (more)

How realistic is the idea that an AI-driven “digital intimacy assistant” could help a shy man woo somebody he finds attractive?  Center strategic advisor Brad Berens takes a look.

Image created using Ideogram.ai.*

Recently in my weekly newsletter, I shared a microfiction (1,000 words or less), a short science fiction story called Flyrt about Chris, a shy man, Roxy, the woman he finds attractive, and Cyr, a snarky, AI-powered “digital intimacy assistant” who coaches Chris in how to woo Roxy.

This time, I’ll explore how realistic the story is or isn’t. You don’t have to read Flyrt (although it ain’t bad) to understand this week’s piece, but fair warning: Thar Be Spoilers Ahead!

Let’s dig in.

The digital Cyrano

For those readers who didn’t clock them, Flyrt parallels the plot of Edmond Rostand’s famous 1897 French play Cyrano de Bergerac, which I read in the delightful Brian Hooker translation as a boy. (You can see José Ferrer perform this translation on YouTube in the 1950 film; there are many other adaptations, of which the 1987 Steve Martin movie Roxanne is my favorite.)

In the play, Cyrano is a military man with a huge nose who loves Roxane, but Roxane loves Christian, who is handsome and tongue-tied. Cyrano helps Christian woo Roxane both because her happiness is important to him and because he thinks she could never accept him as a lover with his humongous honker.

In Flyrt, Christian becomes Chris, Roxane becomes Roxy, Cyrano becomes the snarky AI Cyr, and Cyrano’s best friend Le Bret becomes Chris’s pal Brett. The parallels are superficial because Cyr the AI does not love Roxy, but they were a hint to the recovering English majors who read these columns.

What is realistic? A lot. An August 30 Financial Times ($) story—“Dating apps develop AI ‘wingmen’ to generate better chat-up lines: Tinder, Hinge, Bumble and Grindr are racing to create chatbots that can coach Gen Z users to flirt”—was my direct inspiration.  (more)

It takes a high-quality crystal ball for an acquirer to spend $50-80 billion for a major studio. As Center director Jeffrey Cole points out, rarely does it succeed.

Finally, after seven years, Disney has something to crow about from its 2019 acquisition of 21st Century Fox. It has the highest grossing film of the year with Inside Out 2 at $1.66 billion. That came from Pixar (which it acquired in 2006).

But the current second-highest grosser, Deadpool & Wolverine, is already at $1.257 billion, still #1 at the box office in its sixth week, and sure to best Inside Out 2 very soon. Deadpool and two other top grossing films also came from Disney’s purchase of Fox: Kingdom of the Planet of the Apes ($397 million) and Alien: Romulus ($293 million and counting, currently #2 at the box office).

This is a delayed, much needed, and partial validation of the acquisition.

Deadpool was so conscious of being a Fox-turned-Disney movie that many of its inside references were based on being the first R-rated Disney Marvel film. Ryan Reynolds happily made jokes about words and subjects that had never been part of a Disney film before. He even called out Kevin Feige, the head of Marvel, by name when he used R-rated content.

Deadpool has become the highest grossing R-rated film of all time (passing Joker, another comic book adaptation). The three 2024 films were not the first IP dividends from Fox. The top film of 2022, Avatar: The Way of Water ($2.3 billion), also came from Fox.

I never thought Disney’s acquisition of 21st Century Fox in 2019 was a good idea. Unlike Disney’s other key acquisitions of Pixar, Marvel and Lucasfilm, it was difficult to see what Disney was paying $71.3 billion for that it didn’t already have.

More worrisome, the acquisition of Fox brought the number of major studios down from six to five. The number has been shrinking since the 1930s when there were eight big studios. The loss of each studio meant fewer films, less employment in entertainment, fewer choices for moviegoers, as well as consolidation of industry power.  (more)

On August 6, Twitter/X owner Elon Musk filed a frivolous lawsuit against an obscure advertising trade group; the timing, says Center strategic advisor Brad Berens, is suspicious.

Image created using Adobe Firefly*

As longtime readers of these columns know, I’ve written an intermittent series about Elon Musk’s acquisition of Twitter. You don’t have to read those older pieces, nor must you care about advertising or know anything about GARM (the Global Alliance for Responsible Media) to understand this week’s piece.

Throughout the Musk/Twitter series, my theses have been stable:

  • Elon Musk has no politics; he is amoral; he loves only money; Tesla is the foundation of his immense wealth.
  • His apparent political swing to the right was to sell electric cars to conservatives.
  • He never really wanted to buy Twitter.
  • But when the Delaware court forced him to do so, he decided to make the most of it as Twitter’s owner and absolute dictator.
  • He enjoys media attention and influence, but his goal is to sell more Teslas.

The present story in brief…

On August 6, X (the absurd new name for Twitter) filed a lawsuit against the Global Alliance for Responsible Media (GARM) alleging that the trade association (part of the World Federation of Advertisers, or WFA) coordinated a boycott of X. The suit also mentioned dozens of advertisers (brands) that are members of GARM.  (more)

Deep fakes, voice cloning, and other technologies are making fraud more convincing and widespread than ever, but there’s another threat to our ability to answer “what is real?”  Center strategic advisor Brad Berens explains.

Image created using DALL-E*

An ongoing topic here in these columns is how answers to the question “what is real?” keep changing as new technologies (Generative AI in particular) make it easier to create images, including photorealistic images of people and events that never happened.

Such images (or videos, or voices) aren’t inherently wrong: ethical issues pop up when people make truth claims about them maliciously (disinformation), innocently pass lies along (misinformation), or stupidly don’t think about the consequences of their actions.

Here are two recent examples.

Example #1, from the stupid side, comes from Elon Musk, about whom I’ve written many times before.

This time, Musk retweeted (because “re-Xed” sounds like you’ve broken up with somebody, gotten back together, and then broken up again) a parody video by @MrReaganUSA in which a clone of Kamala Harris’ voice parrots extreme right-wing claims about her.

While my POV is that the video is more mean-spirited than funny, I acknowledge @MrReaganUSA’s right to make it. I even think that @MrReaganUSA has the right to clone a public figure’s voice in a parody video. Plus, @MrReaganUSA says “Kamala Harris Campaign Ad PARODY” above the video post on X. The video has received over 24 million views so far.

The problem came when Musk retweeted the video under the caption “This is amazing” with a laughing emoji but stupidly did not include the fact that it was a parody. Musk’s retweet has received over 134 million views so far. (It may be my mistake to give Musk the benefit of the doubt that he was either not thinking or simply stirring things up rather than deliberately engaging in disinformation.)  (more)

Teaser: The social disruptions that new, injectable, weight-loss drugs like Ozempic will create go far beyond health and health care.  Center strategic advisor Brad Berens  looks at the issues.

Image created using DALL-E.*

We humans organize our mental worlds with categories and consideration sets, so it can be hard to see when trends from different categories collide.

Back in the day when I worked at EarthLink, a dial-up ISP, we were so focused on AOL and Microsoft that we didn’t realize broadband would wipe out most of the dial-up category. (There are still people on dial-up, believe it or not.) Likewise, Kodak was focused on FujiFilm and ignored its own treasure chest of digital photography patents, etc.

At the Center for the Digital Future at USC Annenberg, we’re thinking hard about the second-order disruptions coming from the new, weekly injectable, GLP-1 weight-loss drugs like Ozempic and Wegovy from Novo Nordisk, Mounjaro and Zepbound from Eli Lilly, and others. Far beyond dieting and weight loss, we predict that weight-loss drugs will change how customers behave across a broad swath of industries. These drugs constitute another disruption, as profound as what’s happening because of AI.

My friend Jeffrey Cole has written a three-part series about the non-obvious winners and losers coming from these new drugs (start here), and we’re going into the field soon with a mini-survey to learn how Americans feel about taking them. After that, we’ll start preparing a major survey. Stay tuned.

As we’ve been exploring different scenarios, I keep coming back to movie theaters. Will weight-loss drugs be the final nail in the coffin for American theatergoing?  (more)

When technology enables us to change our personalities to help us achieve our goals, what duty does the first personality have to the second and vice versa?  Center strategic advisor Brad Berens explains.

Image created using Ideogram.ai

Recently, I shared a microfiction (1,000 words or less), a short science fiction story called Mr. Hyde’s Letter about Tim and Timothy—two aspects of the same man—in which Timothy took powerful medications to become Tim, who was more hard-charging and successful, but Tim wasn’t happy with the life enabled by the meds.

This time, I’ll explore how realistic the story is or isn’t. You don’t have to read “Mr. Hyde’s Letter” (although it ain’t bad) to understand this week’s piece, but fair warning: Thar Be Spoilers Ahead!

Let’s dig in.

Question #1: Identity

Some neuroscientists, psychologists, and philosophers believe that our sense of a single self behind the wheel of our moment-to-moment consciousness is just an illusion. Instead, we have a complex gang of quarreling selves all vying for control. Sam Harris’ book, Waking Up: A Guide to Spirituality Without Religion is a cogent introduction to this idea. Or you can just watch the Pixar movie Inside Out. (I haven’t seen the sequel yet.)  (more)

Center director Jeffrey Cole explores what has to happen if Biden leaves the race.

This is one for the history books.

The most important presidential election since at least 1864, if not ever, is a little more than 100 days away. While everyone knows everything they need to know (if not too much) about the Republican candidate, it is likely that the Democratic candidate will be someone recognizable by name or face only to voters in one state and those addicted to politics.

The nominee (if not President Biden, as is looking increasingly likely) will make their acceptance speech in front of Democratic delegates in Chicago on August 22. He or she will then have 74 days to transform into a household name and face with a well-understood record as a VP, governor, or senator and make a case against perhaps the best-known opponent of all time.

Is it possible to go from being relatively or completely unknown outside of California, Michigan, Pennsylvania, or elsewhere (even Vice President Harris, while recognizable, is not well-known on her own) to picking up 270 or more electoral votes between now and the election?  (more)

Tomorrow’s AIs are both more embodied than HAL from 2001 and less robotic than Rosie or Data. Center strategic advisor Brad Berens describes how a better analogy comes from a surprisingly old story.

Image created using Ideogram.ai

Typically, in science fiction and popular understanding, AIs and robots (or drones) fall into distinct, although overlapping, categories.

Robots are autonomous single entities that move through the physical world the way humans and animals do. Robots don’t need human shapes—just think about the variety of droids in Star Wars—although many famous examples are humanoid (Rosie, Data, the Vision, Marvin).

AIs are disembodied voices that affect the physical world (“Alexa, turn on the front lights”) but do not inhabit the physical world: we hear but do not see them. These AIs tend to be tied to a place (a home, a space station or starship), and in some ways to be that place. HAL from 2001: A Space Odyssey is one of the first such AIs, but the helpful computers across the different incarnations of Star Trek (including Zora in the just-concluded Star Trek: Discovery) are where a lot of folks first encounter the idea of an invisible computerized helper. Siri, Alexa, ChatGPT, and all their cousins follow this model.

We need new AI analogies.  (more)

Can the reputations of Harvard University, NPR, and The Washington Post survive recent crises?

By Jeffrey Cole

It’s tough enough to watch institutions that you have long respected be subject to misinformation and intense partisan criticism. It is almost unbearable to witness three of the most trusted sources of information and knowledge in America set themselves on fire as they destroy their own reputations.

Abraham Lincoln, twenty-three years before he became president, warned in the Lyceum Address that the greatest threats come not from external enemies but from within: “If destruction be our lot, we must ourselves be its author and finisher.” Or, put another way, to quote one of the scariest lines in horror films: “the call is coming from inside the house.”

In a matter of months, Harvard University, National Public Radio (NPR), and The Washington Post have shown themselves unable to bandage reputational losses from self-inflicted wounds. All are highly respected, long-standing institutions (Harvard for 388 years) that are facing massive—perhaps permanent—damage from their own mismanagement and bad decisions.  (more)

Access to ChatGPT’s new voice interface turned into a long conversation while Center strategic advisor Brad Berens walked in the summer sun. The results were mixed.

Image created using Adobe Firefly

To paraphrase and tweak a famous quote usually attributed to H.L. Mencken, nobody has ever lost money by overestimating the laziness of the human mind. To put it more generously, we humans have a lot of decisions to make each day and a limited amount of decision-making energy, so we hoard our cognitive resources and take shortcuts when we can. This is the idea behind Daniel Kahneman’s fast but inaccurate System 1 and accurate but lazy System 2.

Generative AI programs like ChatGPT take advantage of our innate human laziness because they instantly generate plausible-sounding answers to our questions, although those answers may not be accurate. This was already a problem when ChatGPT was only text-based, but with its new voice interface, the combo-platter of plausibility and laziness has become even more acute.

That’s my topic today.  (more)

Weight loss drugs like Ozempic will disrupt far more than just the waistlines of the people taking them. As Center director Jeffrey Cole explains, entire industries will shift, and some will not survive.

Credit: freepik.com

Morgan Stanley recently predicted that by 2034 ten percent of Americans (35 million people) will be taking semaglutide weight loss medications, the best known of which is Ozempic.

At a cost of $900 to $1,300 per month (it’s as low as $200 in the UK), and with little insurance coverage except for diabetics and the very obese, the economic disruption to the healthcare system and to a growing list of companies (some obvious, others surprising) is already shaping up to be enormous.

In my last column, I looked at those who stand to gain as Americans lose weight. This time, I’ll look at those who will lose if tens or hundreds of millions around the world are able to move away from obesity safely and affordably. While there would seem to be some obvious losers, many of the companies most threatened have already taken significant steps to adapt to what they see as a rapidly shifting landscape.  (more)

Center strategic advisor Brad Berens explains how brands have different functions in our lives — some easy to understand, and some that deserve extra pondering.

Image created using Ideogram.ai, and yes, I know it’s 18.

A few days ago, my friend Om Malik reached out with these questions about brands:

I am thinking about something and wanted to get a better idea of what it means to be a music artist or a media company as a brand. What defines a brand? Any thoughts on that? I am wondering what happens to traditional brands over time. What makes them retain or lose their value?

I have a lot to say on this topic (which will surprise nobody), some of which I shared with Om and which has now evolved into this piece.


There is a lot of mystical, Three Card Monte misdirecting nonsense out there about brands.

Marketers tend to overplay brand loyalty quite a bit. Byron Sharp, in his magnificent book How Brands Grow: What Marketers Don’t Know, points out both 1) that even the most loyal customers will buy—without experiencing anguish—competitor brands when their preferred brand isn’t available, and 2) that brands grow simply when people are aware of their existence and able to purchase them. The awareness bit is the hard part.

I’ve written a lot about the roles that brands play in our lives, with this 2019 piece as one good example, but my thoughts have matured over the last few years.

Here are 13 overlapping ways of thinking about brands across a spectrum starting with the simplest and ending with the most complex.  (more)

Who are the big winners because of Ozempic? Beyond the people who lose weight, says Center director Jeffrey Cole, new drugs like Ozempic may benefit companies, industries, professionals, and families in unexpected ways.

Disney got it wrong with one of its most popular and enduring theme park attractions: it’s not such a small world after all!

Photo by Varnsi

The “Small World” ride premiered at the 1964 New York World’s Fair. Built by Disney’s imagineers, it was such a sensation the company quickly moved it to Disneyland where it opened in 1966. Visitors sit in a small boat that moves through a shallow channel—viewing dancing dolls from around the world while being serenaded by one of the catchiest songs ever. It’s impossible to get that song out of your head, no matter how hard you try.

In 1966, the channel was deep enough for a boat seating about twenty average-sized riders, a mixture of adults and kids. But as love for the attraction grew, so too did the weight of the visitors to the park. By 2006, twenty riders weighed an extra 500 to 600 pounds compared with their counterparts in the sixties. Often the ride would get stuck as the payload caused the boat to scrape the bottom. For a while, Disney dealt with the problem by only partially filling the boats. However, seeing empty seats angered people waiting in long lines.

In 2008, Disney shut down the ride to dig a deeper channel (among other renovations) that allowed the boat to move freely once again. Disney had to reconfigure “It’s a Small World” for the “average sized person” of the 21st century.  (more)

New weight-loss drugs such as Ozempic and Wegovy will do more than just shrink individual waistlines: the economic and social impact, says Center director Jeffrey Cole, may be incalculable.

Disruption, like people, comes in all shapes and sizes. But a new and extraordinarily powerful disruption is — for the first time — changing the actual shape and size of the human body. It is a true game changer, with profound impact on our health, life spans, psyches, social lives, and economy, as well as on how we relate to each other.

This disruption is just getting started.

Around the world, especially in prosperous countries, people are getting fatter. We in America, with our endless appetites and enormous portions, set a high bar, but the rest of the world seems determined to catch up with or surpass us (no easy task). More than two-thirds (69%) of adult Americans are overweight, and more than one-third (36%) are obese. Globally, over 1 billion people are obese, and that number is rising fast.

The tools for fighting (or preventing) obesity are difficult to stick to: eating less “fun” food, more healthy food, and smaller portions, plus regular exercise. Some diets call for starvation-level intake, or for consuming only carbs, only proteins, or something else equally unrealistic. That is why so many people quit diets and exercise, quickly gaining back more weight than they carried before they started.

Now disruption has come to our bodies. Although there are a number of drugs in the same class (GLP-1), the best known of them all is Ozempic. I’ll use “Ozempic” generically, even though others, such as Wegovy, Mounjaro, and Zepbound, are similar.

For generations, we have dreamed of taking a daily pill to make or keep us thin. Now it is happening, but it is even better than that. You only need to take some of these disruptive new medications once a week!

The results have been extraordinary.  (more)

Will bad actors use digital duplicates of our dead loved ones against us?  Center strategic advisor Brad Berens explains.

Image created using DALL-E

I recently shared a microfiction (1,000 words or fewer), a short science fiction story called “Hacking the Dead” about Trix, a corporate spy who influences the digital duplicate of an equity analyst’s beloved dead mother in order to change his mind about Trix’s company without him realizing it.

This time, I’ll explore how realistic the story is or isn’t. You don’t have to read “Hacking the Dead” (although it ain’t bad) to understand this week’s piece, but fair warning: Thar Be Spoilers Ahead!

Let’s dig in.

The idea behind “Hacking the Dead” is similar to the 2010 movie Inception, except instead of the covert persuasion happening in dreams it happens in the waking world with dream-like digital ghosts of departed family members.

Like Inception, in “Hacking the Dead” a small team of people (Trix and her unseen co-workers) create a false reality. This is the kind of thing that used to require whole government agencies dedicated to fake narratives and propaganda, but now technology has democratized it. (See also this earlier piece about the movie Wag the Dog and deep fakes.)  (more)

A recent Economist article about dying small towns inspired Center strategic advisor Brad Berens to think about Retro Futures, the failed promise of the hyperloop, and “sideshadows.”

Image created with Ideogram.ai

Typically, when I’ve written about retro futures, I’ve explored how old science fiction stories illuminate things happening today. This time, I’ll take a different angle.

One of the problems with being a futurist and seeing the transformative potential of new technologies is that, when those technologies fail, I’m still haunted by what might have been. In his brilliant 1996 book, Narrative and Freedom: The Shadows of Time, my friend Gary Saul Morson describes this sort of awareness as “sideshadowing”:

“Whereas foreshadowing works by revealing apparent alternatives to be mere illusions, sideshadowing conveys the sense that actual events might just as well not have happened. In an open universe, the illusion is inevitability itself. Alternatives always abound, and, more often than not, what exists need not have existed.” (117)

That’s abstract, so here’s a concrete example:

The April 20th issue of The Economist ($) had an alarming article, “Emptying and Fuming” about dying small towns in America. Cairo, Illinois was the test case:

“Cairo is on its way to becoming America’s newest ghost town. Its population, having peaked above 15,000 in the 1920s, had fallen to just 1,700 people by the 2020 census. Alexander County, Illinois, of which it is the capital, lost a third of its people in the decade to 2020, making it the fastest-shrinking place in America.”  (more)

New developments in Generative AI promise that we’ll all have digital BFFs to help us live our best lives.  But, asks Center strategic advisor Brad Berens, is this really possible?

Image created with Ideogram.ai

Last week, I read an intriguing Psychology Today piece about the next wave of Generative AI-powered digital assistants. In “The Emergence of Private LLMs,” John Nosta argues:

“The role of Large Language Models (LLMs) is about to change. Two groundbreaking advancements are set to redefine the way we interact with personal assistants: the rise of private LLMs and the expansion of prompt memory. This powerful combination of memory and privacy in LLMs is poised to create the most sophisticated and influential personal assistants ever conceived, offering unprecedented insights while safeguarding user confidentiality. In the simplest of terms, you just might be finding a new BFF—best friend forever.”

To translate: Generative AI will change in two powerful ways. First, instead of folks logging into ChatGPT or Gemini (etc.), we’ll each own programs that work with our data to help us do what we want to do. Second, instead of those programs forgetting that they’ve ever spoken to us after each interaction (digital amnesia), our new digital pals will remember our interactions (Ars Technica has a handy explanation), becoming ever more personalized helpers.

Nosta’s conclusion is optimistic.  (more)

Generative AI makes it effortless to create photorealistic images (and soon videos), but sometimes, says Center strategic advisor Brad Berens, the question is more complicated than whether something is fake.

Image created with Ideogram.ai.

I belonged to a fraternity in college. This often surprises folks until they learn that the house in question was a co-ed literary society that fans of the Revenge of the Nerds movies wouldn’t find credible. “Ah, that makes sense,” they say, eyebrows settling.

College—with its heady elixir of time, exploration, and other young people walking down similar paths before life focuses them—is a thicket of intense conversations. Recently, I remembered one such conversation with my fraternity sibling Chuck, a physicist.

We were in the living room debating the nature of reality, as students do. An English major, I said that reality depends on context.

This irked Chuck, the scientist, like sand in bedsheets. He was adamant that reality is reality: the truth is out there.

I replied, “OK, so what you’re saying is that the number 10 always means this many” and held up all my fingers and both thumbs.

“Yes,” Chuck said.

“But what if we change the base to 2 or 4 or something else?” I said. “In Base 2, 10 is this many,” and held up two fingers. “Right?”  (more)
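The point of the exchange is that the same digits name different quantities depending on the base used to read them. As a minimal sketch (my illustration, not part of the original conversation), Python’s built-in int can parse the numeral “10” in any base:

```python
# The numeral "10" denotes a different quantity depending on the base
# used to interpret it: the digits stay the same, the context changes.
for base in (2, 4, 8, 10, 16):
    value = int("10", base)  # parse the string "10" in the given base
    print(f'"10" in base {base} means {value}')
```

In every base, “10” simply means “one of the base, zero units,” which is why Chuck’s ten fingers and my two fingers could both honestly be called 10.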

Headlines about a new report draw the wrong conclusions about what caused Tesla’s weak sales quarter and stock decline.  Center strategic advisor Brad Berens reports.

Image created with Ideogram.ai

Whoever runs PR for Caliber, the brand reputation consultancy, deserves a raise or at least a nice bottle of Scotch. Last week, a new “In tech we trust?” report from Caliber came out that contextualizes Tesla’s reputation among those of its Big Tech peers: Amazon, Apple, Facebook, Microsoft, Uber, etc.

It provoked a flurry of headlines (hence the Scotch), including:

  • “Would-be Tesla buyers snub company as Musk’s reputation dips” (Reuters)
  • “Elon Musk’s reputation is probably turning buyers off Tesla, analysts say” (Business Insider)
  • “Buyers are avoiding Teslas because Elon Musk has become so toxic” (Futurism)

The Reuters article came first; the others piled on. All either contained or alluded to this provocative (albeit carefully worded) sentence:

“It’s very likely that Musk himself is contributing to the reputational downfall,” Caliber CEO Shahar Silbershatz told Reuters, saying his company’s survey shows 83% of Americans connect Musk with Tesla.

This is nonsense.  (more)

When boos rang out about AI at SXSW (the most tech-optimist gathering on the planet), it showed that the birth of AI is the opposite of the birth of the internet 25 years ago. That contrast helps us see, says Center director Jeffrey Cole, where public sentiment about AI is headed.

Image by Freepik

Last week at Austin’s South by Southwest (SXSW), several tech leaders extolled the virtues of AI and made positive predictions about how it would improve the world. SXSW is one of the leading tech conferences in the world, filled with fans, executives, and investors looking for the next new thing.

There is no audience anywhere that is more supportive of new tech. SXSW attracts enthusiasts who want to see, use, and invest in the newest technologies. Back in 2007, Twitter debuted in front of a SXSW audience who erupted in cheers at the new form of social media.

At this year’s SXSW, one tech panelist urged people “to be an AI leader.” And OpenAI’s Peter Deng shared his (self-serving) view, “I actually fundamentally believe that AI makes us more human.”

Rather than erupting in cheers and wanting to learn more, the audience started booing at all the positive talk of AI.

That’s the reaction we might expect when somebody starts boasting about AI at an organized labor convention, or at a Writers Guild meeting where people fear losing their jobs. It is extraordinary that it happened at a fanboy gathering like SXSW.

It is now becoming abundantly clear that AI is different. To a world that knew little or nothing about AI until last year (beyond fictional versions like HAL or Skynet), this is not just another great new technology that will enhance our lives (with a little collateral damage to other people who work on encyclopedias and phone books as happened with the internet). (more)

At this year’s Consumer Electronics Show, AI was unavoidable, even if many exhibits only used AI superficially.  Center director Jeffrey Cole asks: How will AI change in the coming months?

Early this month, I spent a couple of days at the Consumer Electronics Show (CES) in Las Vegas. In past years, most of the crowds (135,000 this year) were at the Central Hall looking at the latest big screens from Samsung, LG, and Sony. That hall was still packed and as hard to move through as always, but many of the attendees made their way to the Automotive Hall, as well as the Health and Beauty pavilion, where they could see smart toasters, smart toilets, mirrors that diagnosed your health, and more.

I had also been at CES exactly a year ago. Twelve months made one massive difference. At CES 2023, you would have struggled to find any mention of artificial intelligence. While AI was in some of the devices a year ago, the words were barely heard in Vegas.

This year, AI was everywhere.

Every product slapped on a “Powered by AI” label, whether warranted or not. The hotdogs in the snack booths were conceived and cooked by AI (almost!). A booth would have been naked had it not promised a new AI innovation.

All this in 12 months.  (more)

The trillion-dollar ecommerce giant is adding ads to its Prime Video streaming service because everybody else is, but the secret story, reports Center strategic advisor Brad Berens, is about Amazon’s growing AI capabilities.

On September 22, Amazon announced that it would follow Netflix, Max, and Disney+ and start running ads on Amazon Prime Video. Ad-averse Prime subscribers can cough up another $36 per year to protect their tender eyes.

Here’s a snippet from the announcement:

“To continue investing in compelling content and keep increasing that investment over a long period of time, starting in early 2024, Prime Video shows and movies will include limited advertisements. We aim to have meaningfully fewer ads than linear TV and other streaming TV providers.”

The first sentence fails to persuade. Amazon can afford Prime Video without adding ads. At the time I’m writing this, Amazon has a $1.3 trillion market cap. Amazon spent $16.6 billion on entertainment content in 2022, but even if it more than tripled that to $50 billion, the sum would still be a rounding error in Amazon’s total business.   (more)

Jeff Cole describes change and disruption in an age of technology and science, speaking to the Imagine Solutions 2024 conference in Naples, Florida.

See the video of Jeff’s presentation here.

For all Center columns, go here.