Face: The final frontier
A decade after Google Glass, are smart glasses finally becoming a thing?
By Brad Berens
Although we’ve all had our individual journeys with the internet, the journey of the internet itself has been one of increasing intimacy. At first, we had to go somewhere else to get online: a lab at a business or university. Then home dial-up arrived. For many, we still had to go somewhere: the disused guest room or home office near a phone outlet where the computer was.

I created this image using Google Gemini / Nano Banana.*
Then came home broadband and wireless, at which point we had laptops that could be online anywhere in the house. Then mobile internet access allowed us to bring the internet with us just about everywhere. Then smartwatches established a beachhead on our wrists and earbuds did the same on our heads.
Sometimes it feels like we’ve been sinking deeper and deeper into the internet, like a mammoth’s slow decline into a tar pit, with our faces looking up with desperate expressions as the tar covers our eyes at the last.
Our eyes are the penultimate stop of the internet’s journey into our lives and onto our bodies. The last stop will be when we plug our brains directly online, which sounds scary but is already happening with Neuralink and other Brain-Computer Interfaces (BCIs); those won’t be common for a while, though.
For the last decade or so, smart glasses have been too expensive, too limited, or too dorky for most people either to try them or to consider buying them. The Apple Vision Pro, for example, costs $3,500. Back in 2014, Google Glass was $1,500, and folks would call you a “glasshole” for wearing it.
Two recent stories suggest that this is changing.
1. Alibaba’s Quark AI Glasses
John Tomase, an editor at LinkedIn, posted an intriguing roundup story about a new Heads-Up Display (HUD) or smart glasses device coming out of China:
Chinese ecommerce giant Alibaba launched its line of AI-powered smart glasses on Thursday, introducing an affordable rival to Meta in the burgeoning market for wearable tech. The Quark AI Glasses will debut in China and cost between $268 and $536, making them cheaper than Meta’s latest Ray-Bans, which start at $799. They include integration with Alibaba’s Qwen AI chatbot and Taobao shopping app. The market for AI glasses is expanding, with shipments expected to double next year to 10 million.
In the roundup, Karina Taveras noted that a more interesting point of comparison than with Meta’s Ray-Bans is with the Vision Pro glasses that cost $3,500. Taveras further observed that the implications of this are:
- AR/AI wearables could finally hit mainstream adoption
- Chinese tech companies are leading in affordable innovation
- The “luxury tech” model might not be the winning strategy
This reminds me of how smartphones evolved—the real winners weren’t the most expensive devices, but the ones that made powerful technology accessible to everyone.
The “luxury tech” model Taveras mentions is how Tesla approached electric vehicles: start with the $200,000 Roadster and gradually introduce cheaper and cheaper models, although the company has never gotten close to its long-promised, affordable-to-the-middle-class EV.
Apple follows Tesla’s luxury tech approach with the Vision Pro.
Alibaba is going for scale: growing its visual ecosystem as fast as it can.
The Alibaba story is about how smart glasses might become easily affordable and then jump to mass scale. The next story is about how smart glasses can change everyday interactions.
2. The Dutch “Candid Camera” exercise
Even though the video came out a year ago, a clip of Dutch entrepreneur and journalist Alexander Klöpping talking with people while wearing smart glasses that identified them without using proprietary data has become a minor news story in Europe.
Klöpping’s two-minute video on X is worth watching: he asks people for directions, giving the glasses enough time to surface their names, jobs, and LinkedIn profiles. When he then reveals that he knows their names, their reactions range from pleased and curious (“have we met?”) to mildly alarmed. When he explains that he got their names from his smart glasses, people find it interesting (“holy shit!”) or disturbing. “Gathering information about people that way, that doesn’t feel okay to me yet,” says a lawyer.
From the comments surrounding different versions of the video, it isn’t clear whether the combo-platter of existing facial recognition technology and smart glasses is real or a stunt, but the reactions of the people with whom Klöpping chats are genuine.
(By the way, if you haven’t ever heard of the old TV show “Candid Camera,” it has a fascinating history.)
Information asymmetry implications, plus…
It’s already scary enough dealing with organizations ranging from governments to the tech giants to the local supermarket that have more information about us than we do about them, but with smart glasses we will increasingly need to be skeptical about individual people standing in front of us. Does this person really know me? Did I really have dinner with this person at that conference that time? Does this attractive person of the desired gender really want to have a drink with me, or am I about to become an unwilling organ donor?
This sort of skepticism is just not how humans are wired. If somebody seems to like us, most of us believe it. Cults have weaponized this for years (I wrote about cults and persuasion here).
Two other unsettling implications.
First, and this is a theme I’ve explored a lot, smart glasses will contribute to the erosion of shared reality: with different amounts of information and different levels of filtering, we’ll no longer be able to trust that two people who seem to be looking at the same thing actually are.
Second, a ubiquitous internet environment is also a ubiquitous advertising environment. Unless we are paying for a service (and sometimes even when we are), we cannot presume that service is working for us. It isn’t. It’s usually working for advertisers. The nightmare scenario is that with ad-supported smart glasses we won’t be able to tell whether the images we see through our lenses are real or inserted by advertisers.**
Don’t get me wrong: there’s nothing wrong with advertising, nor is there anything wrong with advertising-supported services and media. There are, however, differences between advertising and reality; what frightens me is the possibility that with smart glasses those differences will be harder to see—literally.
Today, many people won’t leave the house without their smartphones. As smart glasses become cheaper and more widespread, will we be willing to take them off to see what’s really there?
__________

Brad Berens is the Center’s strategic advisor and a senior research fellow. He is principal at Big Digital Idea Consulting. You can learn more about Brad at www.bradberens.com, follow him on Blue Sky and/or LinkedIn, and subscribe to his weekly newsletter (only some of his columns are syndicated here).
* Image Prompt: “Please create a photorealistic image. We are looking through a pair of stylish smart glasses at another person: a white guy in his mid 30s. Information about the man surrounds him, all pulled onto the smart glasses display. His name is Jacob. He is 6′ 1″. His eyes are blue. He is unmarried, Heterosexual. Has a dog named Butch. He is a registered Democrat. He has an iPhone. He is a media executive currently searching for a new job.”
** I explored this theme in a microfiction, “Leaving the Emerald City,” and its follow-up analysis, “The End of Filter Failure.”
November 10, 2025