Who am I this time?

New developments in Generative AI promise that we’ll all have digital BFFs to help us live our best lives. But is this really possible?

By Brad Berens

Last week, I read an intriguing Psychology Today piece about the next wave of Generative AI-powered digital assistants. In “The Emergence of Private LLMs,” John Nosta argues:

The role of Large Language Models (LLMs) is about to change. Two groundbreaking advancements are set to redefine the way we interact with personal assistants: the rise of private LLMs and the expansion of prompt memory. This powerful combination of memory and privacy in LLMs is poised to create the most sophisticated and influential personal assistants ever conceived, offering unprecedented insights while safeguarding user confidentiality. In the simplest of terms, you just might be finding a new BFF—best friend forever.

To translate: Generative AI will change in two powerful ways. First, instead of folks logging into ChatGPT or Gemini (etc.), we’ll each own programs that work with our data to help us do what we want to do. Second, instead of those programs forgetting that they’ve ever spoken to us after each interaction (digital amnesia), our new digital pals will remember our interactions (Ars Technica has a handy explanation of prompt memory), becoming ever more personalized helpers.

Nosta’s conclusion is optimistic:

As LLMs continue to evolve, we can anticipate a future where AI becomes a seamless extension of ourselves, helping us navigate life’s challenges, optimize our potential, and discover new frontiers of personal growth. The synergy between memory and privacy will give rise to remarkably powerful tools that not only support us but also help us uncover hidden aspects of our own capabilities.

I’m skeptical. My doubts are not about the ever-increasing power of Gen AI (more on this towards the end). Instead, I’m skeptical because of how incoherent we humans are.

How can even the most powerful algorithm help us “optimize our potential” (such a bloodless phrase) when we don’t have a stable sense of who we are or what we want?

Sam Harris, in Waking Up: A Guide to Spirituality without Religion (2014), has a terrific account of how our sense of a consistent self behind the wheel of our ongoing consciousness is an illusion. But, if you don’t want to read 200+ pages, Walt Whitman has a succinct, three-line version in his poem “Song of Myself” from Leaves of Grass (1892):

Do I contradict myself?
Very well then I contradict myself,
(I am large, I contain multitudes.)

Many thinkers (e.g., Bernard Beckerman, Daniel Kahneman, Charan Ranganath) have made the distinction between the experiencing self (what’s happening right now) and the remembering self (what happened before). I believe we should also consider a distinct predicting self (what we think is about to happen).

Image created with Ideogram.ai

So… which self is our new digital BFF supposed to (ick) optimize? Right now, I’m trying to lose a few pounds, but I also love vanilla ice cream with fresh blueberries. Should my digital pal be loyal to my future-focused, predicting self and lock the freezer after dinner or somehow distract me from dessert (“Look, Brad! Comic books!”)? Or should my pal be loyal to my right-now self and remind me that I also like Nestlé Carnation malted milk powder in ice cream and that a fresh container is in the pantry?

Which of our wishes should get priority?

This sort of dilemma isn’t only about dessert. When my beloved grandmother, Edith Berens, died in 2010, we discovered a problem. Shortly before her death, Grandma told my parents that she did not want a funeral service, particularly one conducted by a rabbi. She confessed that she had always secretly been an atheist.

However, as Mom and Dad started to make the arrangements with the cemetery, they realized something. When my grandfather died seven years before, Grandma had prepaid for an hour of chapel time after her death. (It was a twofer.)

It’s unlikely that our coming-real-soon Gen AI BFFs will have complete access to everything we do in the world—that’s a lot of sensors, cameras, and microphones!  But even if that terrifying total information awareness happens, that won’t give an algorithm access to our internal landscapes: the shifting, improvisational, multitudinous throngs of ideas, emotions, impulses, and considered thoughts in our heads. How is an algorithm going to optimize that?

Which wish should we respect? The 2010 wish not to have a funeral or the 2003 prepaid wish to have a funeral? My grandmother was of sound mind until the moment of her death, no dementia, so we couldn’t explain away the “no funeral” request.

It was like the moment in Oscar Wilde’s The Importance of Being Earnest (1895) when Gwendolen and Cecily both believe themselves to be engaged to Ernest Worthing. “I have the prior claim,” Gwendolen says, because the man she thinks is Ernest proposed to her the day before. Cecily, to whom the man she thinks is Ernest proposed just a few minutes earlier, ripostes, “since Ernest proposed to you he clearly has changed his mind!”

With my grandmother, in the end we decided that a) funerals exist to help the living grapple with the death of a loved one, and b) as a child of the Great Depression, she would have flinched at wasting a perfectly good hour of prepaid chapel time. So we had the funeral.

“But she didn’t want a rabbi,” I said to my parents. “So who is going to do the service?”

There was a pause on the phone, then Dad said, “You are.” (A story for another column.)

Practically speaking, it’s unlikely that our coming-real-soon Gen AI BFFs will have complete access to everything we do in the world—everywhere we go, everybody we interact with, everything we see and hear—that’s a lot of sensors, cameras, and microphones!

But even if that terrifying total information awareness happens, that won’t give an algorithm access to our internal landscapes: the shifting, improvisational, multitudinous throngs of ideas, emotions, impulses, and considered thoughts in our heads. How is an algorithm going to optimize that?

Last thoughts: Nosta’s optimistic conclusion also ignores practical issues. What individual will be able to afford and program a personal LLM? The skyrocketing power consumption of Generative AI alone means that most mere mortals wouldn’t be able to pay the electricity bill for their new digital BFFs.

Instead, I predict that Gen AI will make digital assistants like Siri, Alexa, and the Google Assistant more powerful and helpful. But their loyalties will be to the trillion-dollar companies that create them (Apple, Amazon, and Alphabet), not to us.
__________

Brad Berens is the Center’s strategic advisor and a senior research fellow. He is principal at Big Digital Idea Consulting. You can learn more about Brad at www.bradberens.com, follow him on Post and/or LinkedIn, and subscribe to his weekly newsletter (only some of his columns are syndicated here).

See all columns from the Center.

May 1, 2024