Meta sold AI glasses. What people got was ambient surveillance.
People bought Meta's AI glasses thinking they were buying a little bit of the future.
What they apparently bought was a face-mounted camera whose footage could end up in front of human reviewers.
That is the part that should make your stomach turn. According to reporting from Ars Technica, the BBC, and TechCrunch, footage from Ray-Ban Meta smart glasses was reviewed by workers at a Kenya-based subcontractor, and the material reportedly included people using the toilet, having sex, and undressing, along with visible financial information. Meta now faces a class-action lawsuit in the US, and the UK's Information Commissioner's Office has written to the company over the report.
If that reporting is accurate, then Meta did not sell people smart glasses. It sold them an ambient surveillance device with human reviewers in the loop.
The product promise and the real product
This is what keeps happening with consumer AI. The product promise is always personal convenience. Ask a question out loud. Capture a memory. Get a little help from the machine. The real product, one layer down, is data extraction at industrial scale.
That gap matters more with glasses than with phones. A phone is something you pick up. Glasses sit on your face and point at the world by default. They do not just record the user. They record everyone around the user, including people who never agreed to be part of Meta's training and review pipeline.
Once human review enters the picture, the marketing falls apart.
A lot of AI features still rely on people behind the scenes: annotators, reviewers, contractors, safety teams. I understand why that happens technically. Models need labels. Systems need debugging. Companies need quality control. Fine. But if your product depends on strangers being able to watch deeply intimate footage, then privacy is not a feature of the product. It is a casualty of how the product works.
The word “private” hides the real operating model
The strongest defense for Meta is the familiar one: human review is limited, disclosed somewhere in the terms, and necessary to improve the system. Maybe. But that defense depends on people understanding what they actually bought.
I do not think most people hear “AI glasses” and imagine offshore contractors watching clips of them in the bathroom.
That is not because users are naive. It is because companies are very good at burying the real operating model behind language like “improve our services” and “help train AI.” Those phrases are technically informative in the same way a skull-and-crossbones icon is technically a label. They are not honest about the lived reality.
The lived reality is this: once you normalize always-on cameras plus cloud AI plus human review, you have built a system that can absorb the most intimate parts of ordinary life and turn them into corporate training data.
That is a scandal, not an implementation detail.
Wearables make surveillance social
What makes this story worse than the usual privacy outrage is that smart glasses push the surveillance burden onto everyone else too.
If I post too much on social media, that is my bad decision. If I wear AI glasses into a café, on a train, into a store, or around my family, I am making that decision for everybody in view. Their image, their voice, their bad hair day, their kid in the background, their credit card on the counter. It all becomes potential input.
That is why this category makes me more uneasy than the usual “your app collected too much data” story. Wearables turn surveillance into a social default. The wearer gets the novelty. Everyone nearby gets drafted in.
And this is exactly how surveillance gets normalized in tech: not through one dramatic betrayal, but through a hundred convenience features that train people to stop expecting privacy in ordinary life.
The ugly truth about AI hardware
The industry wants AI hardware to feel magical, effortless, and inevitable. But the ugly truth is that a lot of these products work because there is a hidden labor system and a hidden data pipeline underneath them.
That hidden system is where the harm lives.
The people selling AI glasses want you thinking about the assistant in your ear. They do not want you thinking about the contractor reviewing footage, the retention policies, the training datasets, or the fact that the camera is always one product update away from becoming more invasive than it already is.
This is why I keep coming back to the same point with AI: the technology is real, and some of it is genuinely exciting. But the business model around it keeps dragging us somewhere awful. We are told these systems are here to help us see, remember, and understand more. In practice, they keep becoming better ways for companies to collect, analyze, and normalize access to human life.
Meta did not stumble into this tension. It built a product category where the value comes from turning the world into model input.
If you sell people “AI glasses” and the result is that strangers may end up reviewing clips of their most private moments, then the honest name for the product is not smart glasses.
It is surveillance you wear on your face.