Retro futures and who counts as human

What lessons does a 1985 Isaac Asimov novel have to teach us about AI and algorithmic bias today?

By Brad Berens

After months of failed attempts and carting the book around the planet, I finished reading Yuval Noah Harari’s magnificent and challenging Nexus: A Brief History of Information Networks from the Stone Age to AI. (Don’t take the word “brief” in the title seriously.) I admire Nexus and have plenty to say about it, but in this column, I’m going to dig into one passage about work on algorithmic bias by MIT researcher Joy Buolamwini:

When Buolamwini—who is a Ghanaian-American woman—tested another facial-analysis algorithm to identify herself, the algorithm couldn’t “see” her dark-skinned face at all. In this context, “seeing” means the ability to acknowledge the presence of a human face, a feature used by phone cameras, for example, to decide where to focus. The algorithm easily saw light-skinned faces, but not Buolamwini’s. Only when Buolamwini put on a white mask did the algorithm recognize that it was observing a human face. (Page 293)

I created this image using ChatGPT.*
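To make “seeing” concrete: in software, a face detector either returns bounding boxes for the faces it finds or returns nothing at all, and in the second case a phone camera has nothing to focus on. Here is a minimal sketch using OpenCV’s stock Haar-cascade detector; it is an illustration only, not the commercial system Buolamwini tested, and the photo filename is hypothetical.

```python
# Minimal face-detection sketch with OpenCV's pretrained frontal-face Haar cascade.
# Illustrative only: not the algorithm Buolamwini audited.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

image = cv2.imread("portrait.jpg")              # hypothetical input photo
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)  # the detector works on grayscale
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

if len(faces) == 0:
    print("No face detected -- nothing for autofocus to lock onto.")
else:
    for (x, y, w, h) in faces:
        print(f"Face at x={x}, y={y}, size {w}x{h}")
```

Whether a detector like this finds a face at all depends on the examples it was trained on and on how it handles contrast and lighting, which is exactly where the problem Buolamwini documented enters.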

At first it might seem that the people who programmed the algorithm were racists and misogynists, but Harari argues that it’s more subtle: algorithms pick up “racist and misogynist bias all by themselves from the data they were trained on” (Page 294).

Algorithmic bias is disturbing because it means that AIs trained on bad data can reinforce and amplify institutional racism and sexism. For example, a black couple—with the same jobs, savings, and other assets as a white couple—might unfairly find themselves unable to get a home loan because the program the mortgage broker uses to assess risk relies on racist data.
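Here is a toy sketch of how that can happen. It assumes nothing about any real lender’s software: it trains a simple classifier on synthetic “historical” approvals that encode discrimination, then scores two applicants whose finances are identical.

```python
# Toy illustration: a model trained on biased historical decisions reproduces the bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic "historical" applicants: income, savings, and a group indicator (0 or 1).
income = rng.normal(70, 15, n)
savings = rng.normal(40, 10, n)
group = rng.integers(0, 2, n)

# Past approvals depended on finances AND on group membership -- the baked-in bias.
score = 0.05 * income + 0.05 * savings - 2.0 * group
approved = (score + rng.normal(0, 1, n) > 4.5).astype(int)

X = np.column_stack([income, savings, group])
model = LogisticRegression().fit(X, approved)

# Two couples with identical jobs, savings, and assets; only the group flag differs.
print(model.predict_proba([[70, 40, 0]])[0, 1])  # high approval probability
print(model.predict_proba([[70, 40, 1]])[0, 1])  # much lower: the model "learned" the bias
```

In real systems the sensitive attribute usually isn’t an explicit column; models pick up the same pattern through proxies such as zip codes or credit histories shaped by past discrimination, which makes the bias harder to see and harder to remove.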

That’s bad enough. Worse is our human tendency to trust the judgment of AIs because AIs don’t have grumpy days when a sour stomach influences decisions.

This does happen with humans. The famous 2011 “Hungry Judge” study demonstrated that parole boards in Israel became stricter just before lunch: “the granting of parole was 65% at the start of a session but would drop to nearly zero before a meal break.” After lunch, the judges would once again become more lenient. The effect has been replicated many times.

Most of the time, AI logic is a black box: we don’t know how the AI arrives at an answer, merely that it does, so we’re tempted to treat AI decisions like those of the ancient Oracle at Delphi: inscrutable but mystically true. This is a mistake.

Retro Futures and Asimov’s “Robots and Empire”

Over the past few years, I’ve explored several Retro Futures: how accurately (or not) older science fiction imagined the future.

As I read the Harari passage, I thought of a plot twist from an Isaac Asimov story. (I knew it was from his Elijah Baley and R. Daneel Olivaw mysteries, but I couldn’t remember which one; I wound up re-reading the entire series until I found it.)

The twist comes from Asimov’s 1985 novel Robots and Empire. (Spoiler Alert… but if you haven’t managed to read this book in 40 years maybe you won’t?)

In the novel, humans have abandoned the planet Solaria, leaving only robots behind. The robots are valuable, so different factions have visited Solaria to get them. Each spaceship that has landed on Solaria has been destroyed with all hands lost.

Trader Captain D.G. Baley (a descendant of Elijah from the earlier books) visits Gladia, a close friend of his ancestor, who lives on the planet Aurora. Solarians and Aurorans are longer-lived than most humans. Gladia has resided on Aurora for 230 years, but she was born on Solaria. D.G. convinces Gladia to come with him to Solaria because she is the last Solarian that anybody can find.

When the crew lands on Solaria, hostile robots nearly kill them and destroy their ship—in defiance of the three laws of robotics—until Gladia speaks in her Solarian accent. The robots recognize Gladia as human because of how she talks and stop their attack.


In Asimov’s stories, the first and most important law of robotics is “a robot may not injure a human being, or, through inaction, allow a human being to come to harm.” The Solarians found a loophole by narrowly defining who counts as human: people who speak with a Solarian accent.
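To see how deliberate that loophole is, here it is rendered as a few hypothetical lines of Python. Asimov wrote no code, so this is purely an illustration of how a narrow definition poisons every rule built on top of it.

```python
# The First Law, with a maliciously narrow definition of "human" underneath it.
# Purely hypothetical: a sketch of the Solarian loophole, not anything from Asimov's text.

class Speaker:
    def __init__(self, accent: str):
        self.accent = accent

def is_human(speaker: Speaker) -> bool:
    # The loophole lives here: humanity is defined by accent, not by personhood.
    return speaker.accent == "Solarian"

def may_harm(speaker: Speaker) -> bool:
    # First Law: a robot may not injure a human being.
    return not is_human(speaker)

print(may_harm(Speaker("Solarian")))  # False: protected by the First Law
print(may_harm(Speaker("Auroran")))   # True: "not human," so the Law never applies
```

The rule itself never changes; the definition beneath it does all the damage.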

Should we give AIs the benefit of the doubt?

We usually treat algorithmic bias as a mistake, a failure of accuracy: computer scientists in a hurry train their AIs on immense amounts of data without guidance about what kind of data it is or how to distinguish reliable data from biased or incorrect data. At worst, we blame the scientists for letting their own biases mis-program the AIs.

What Robots and Empire shows us is that algorithmic bias doesn’t have to be a mistake. It can also be the product of human malice. That frightens me.

When it comes to being human, everybody counts.
__________

Brad Berens is the Center’s strategic advisor and a senior research fellow. He is principal at Big Digital Idea Consulting. You can learn more about Brad at www.bradberens.com, follow him on Blue Sky and/or LinkedIn, and subscribe to his weekly newsletter (only some of his columns are syndicated here).

* Image Prompt: “Create an image with a cluster of human faces of different races, genders, and ages. Include one face that is a robot but still looks human.”

See all columns from the Center.

September 3, 2025