
People bought Meta's AI glasses thinking they were buying a little bit of the future.

What they apparently bought was a face-mounted camera whose footage could end up in front of human reviewers.

That is the part that should make your stomach turn. According to reporting from Ars Technica, BBC, and TechCrunch, footage from Ray-Ban Meta smart glasses was reviewed by workers at a Kenya-based subcontractor, and the material reportedly included people using the toilet, having sex, and undressing, along with glimpses of financial information. Meta now faces a class-action lawsuit in the US, and the UK's Information Commissioner's Office has written to the company over the report. (Ars Technica; BBC; TechCrunch)

If that reporting is accurate, then Meta did not sell people smart glasses. It sold them an ambient surveillance device with human reviewers in the loop.

The product promise and the real product

This is what keeps happening with consumer AI. The product promise is always personal convenience. Ask a question out loud. Capture a memory. Get a little help from the machine. The real product, one layer down, is data extraction at industrial scale.

That gap matters more with glasses than with phones. A phone is something you pick up. Glasses sit on your face and point at the world by default. They do not just record the user. They record everyone around the user, including people who never agreed to be part of Meta's training and review pipeline.

Once human review enters the picture, the marketing falls apart.

A lot of AI features still rely on people behind the scenes: annotators, reviewers, contractors, safety teams. I understand why that happens technically. Models need labels. Systems need debugging. Companies need quality control. Fine. But if your product depends on strangers being able to watch deeply intimate footage, then privacy is not a feature of the product. It is a casualty of how the product works.

The word “private” hides the real operating model

The strongest defense for Meta is the familiar one: human review is limited, disclosed somewhere in the terms, and necessary to improve the system. Maybe. But that defense depends on people understanding what they actually bought.

I do not think most people hear “AI glasses” and imagine offshore contractors watching clips of them in the bathroom.

That is not because users are naive. It is because companies are very good at burying the real operating model behind language like “improve our services” and “help train AI.” Those phrases are technically informative in the same way a skull-and-crossbones icon is technically a label. They are not honest about the lived reality.

The lived reality is this: once you normalize always-on cameras plus cloud AI plus human review, you have built a system that can absorb the most intimate parts of ordinary life and turn them into corporate training data.

That is a scandal, not an implementation detail.

Wearables make surveillance social

What makes this story worse than the usual privacy outrage is that smart glasses push the surveillance burden onto everyone else too.

If I post too much on social media, that is my bad decision. If I wear AI glasses into a café, on a train, into a store, or around my family, I am making that decision for everybody in view. Their image, their voice, their bad hair day, their kid in the background, their credit card on the counter. It all becomes potential input.

That is why this category makes me more uneasy than the usual “your app collected too much data” story. Wearables turn surveillance into a social default. The wearer gets the novelty. Everyone nearby gets drafted in.

And this is exactly how surveillance gets normalized in tech: not through one dramatic betrayal, but through a hundred convenience features that train people to stop expecting privacy in ordinary life.

The ugly truth about AI hardware

The industry wants AI hardware to feel magical, effortless, and inevitable. But the ugly truth is that a lot of these products work because there is a hidden labor system and a hidden data pipeline underneath them.

That hidden system is where the harm lives.

The people selling AI glasses want you thinking about the assistant in your ear. They do not want you thinking about the contractor reviewing footage, the retention policies, the training datasets, or the fact that the camera is always one product update away from becoming more invasive than it already is.

This is why I keep coming back to the same point with AI: the technology is real, and some of it is genuinely exciting. But the business model around it keeps dragging us somewhere awful. We are told these systems are here to help us see, remember, and understand more. In practice, they keep becoming better ways for companies to collect, analyze, and normalize access to human life.

Meta did not stumble into this tension. It built a product category where the value comes from turning the world into model input.

If you sell people “AI glasses” and the result is that strangers may end up reviewing clips of their most private moments, then the honest name for the product is not smart glasses.

It is surveillance you wear on your face.

TikTok says encryption is the threat

TikTok told the BBC it will not add end-to-end encryption to direct messages because it thinks encryption would make people “less safe.” The thing that keeps hackers, the company, and governments out of your private conversations is being framed as a threat.

Its reasoning, as given to the BBC: end-to-end encryption “prevents police and safety teams from being able to read direct messages if they needed to.”

“Safety” is the one word that can bulldoze privacy debates, especially when kids are involved. But if a platform says it needs the ability to read your messages to keep you safe, it is asking for a master key.

And once that master key exists, it is not just for TikTok. It becomes an access path for law enforcement. You can call that “lawful access” if you want. It is still domestic surveillance.

End-to-end encryption is a design choice

Most people hear “encrypted” and assume that means “private.” It usually does not. A lot of services encrypt messages in transit, then decrypt them on the server.

End-to-end encryption (E2EE) means only the people in the conversation can read it. The service that delivers the message cannot. There is no server-side copy employees can browse, and no plaintext to hand over.

TikTok says its DMs are still protected with “standard encryption,” and that only authorized employees can access messages in limited situations, like responding to a valid law enforcement request or a user report.

That is the whole story. If employees can read the content, your DMs are not end-to-end encrypted. The company can read them, which means the government can potentially get them too.
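
To make the distinction concrete, here is a toy sketch in Python using PyNaCl. The names and flow are illustrative only, not how TikTok or any real messenger is built; the point is who holds the keys.

```python
# pip install pynacl -- a toy contrast, not a real messaging protocol
from nacl.public import PrivateKey, Box
from nacl.secret import SecretBox
from nacl.utils import random

# --- "Standard encryption": the server holds the key -------------------
server_key = random(SecretBox.KEY_SIZE)
server_box = SecretBox(server_key)
stored = server_box.encrypt(b"meet at 6")
# Encrypted at rest, but the operator can decrypt on demand -- and so can
# anyone who compels or compromises the operator.
print(server_box.decrypt(stored))

# --- End-to-end: only the endpoints hold private keys ------------------
alice, bob = PrivateKey.generate(), PrivateKey.generate()
ciphertext = Box(alice, bob.public_key).encrypt(b"meet at 6")
# The server relays `ciphertext` as opaque bytes. Without a private key it
# has no plaintext to browse, leak, or hand over.
print(Box(bob, alice.public_key).decrypt(ciphertext))
```

Same cipher strength in both halves. The entire difference, and the entire policy fight, is about where the keys live.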

Safety teams do not need a master key to do safety work

TikTok's argument is the oldest one in tech policy: if the company cannot read messages, it cannot help when something bad happens.

Bad things do happen in DMs. Grooming and harassment are real. Child safety groups like the NSPCC and the Internet Watch Foundation praised TikTok's choice for that reason.

But “we need to be able to read everything” is a lazy, high-risk solution to a hard problem. First, no safety team is proactively reading DMs at scale anyway. What platforms actually do is respond to reports, look for patterns of abuse, and use product design to reduce exposure in the first place.

Second, E2EE does not prevent a platform from doing the stuff that actually matters for protecting minors. You can make DMs from strangers opt-in by default for teens, put friction and rate limits in the path of unknown accounts, block obvious spam and grooming patterns, and let users report a thread by sharing what they choose to share.

You can build strong guardrails without giving the company a standing ability to open every private envelope.
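
Here is a minimal sketch of what that can look like, with entirely hypothetical names (build_report, allow_stranger_dm) and thresholds. The point is that both mechanisms run on user consent and metadata, not on the company reading content.

```python
from dataclasses import dataclass, field

@dataclass
class Thread:
    # Messages as already decrypted on the user's own device. Under E2EE
    # the endpoints always hold plaintext; only the server does not.
    messages: list[str] = field(default_factory=list)

def build_report(thread: Thread, selected: list[int], reason: str) -> dict:
    """User-initiated report: the client forwards only the messages the
    user chooses to disclose. No server-side master key is required."""
    return {
        "reason": reason,
        "evidence": [thread.messages[i] for i in selected],
    }

def allow_stranger_dm(recipient_is_teen: bool, opted_in: bool,
                      sender_new_threads_today: int, limit: int = 5) -> bool:
    # Metadata-only guardrail: default-closed for teens, rate-limited for
    # unknown senders. The server never needs message content for this.
    if recipient_is_teen and not opted_in:
        return False
    return sender_new_threads_today < limit
```

WhatsApp's report flow works roughly this way today: the reporter's own device forwards recent messages with the report, so moderation happens even though the service cannot read chats in transit.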

The Internet Society's piece “Encryption keeps kids safe online” makes a point worth repeating: encryption protects kids too, including from breaches and profiling.

The real risk is the access path

Once you decide the company must be able to read DMs, you have created a permanent high-value target. Even with good controls, that access eventually leaks or gets used in ways users never agreed to.

And it aligns perfectly with what governments have asked for, over and over: a world where providers can be compelled to produce message content when served with legal process (see EFF on the long-running push for encryption backdoors).

The safest messages are the ones the platform cannot read. You cannot leak what you cannot access. You cannot be compelled to hand over plaintext you never had.

If you keep DMs readable, you are keeping an access path open for domestic surveillance. TikTok is explicit that it can access DMs in response to a “valid law enforcement request.” End-to-end encryption removes that option by design.

This matters for any company, but it matters even more for TikTok because of its “combustible optics,” as analyst Matt Navarra put it to the BBC. TikTok is already fighting suspicion about its ownership and state pressure. Keeping DMs decryptable just adds another “trust us.”

And this is where the “for safety” framing really annoys me. If TikTok wants to differentiate itself from rivals by keeping DMs readable, it should say that plainly. It should tell users: we will read DMs when we decide it is necessary, and we will keep them available for law enforcement requests.

That is the deal, and the rest is marketing.

The sharp part: stop calling encryption controversial

End-to-end encryption is the baseline now. Signal and WhatsApp do it, and iMessage has for years (Apple’s iMessage security overview). “We cannot read this” is the cleanest security promise a messaging service can make.

If TikTok wants to keep DMs in the “we can read it” category, fine. Users should know what they are signing up for.

But calling E2EE a safety risk flips the world upside down. The risk is not that criminals will hide from law enforcement. The risk is that regular people will keep being trained to accept surveillance as the default setting for communication.

If a platform tells you it needs to read private messages to keep you safe, believe the quiet part: it is building a system where somebody, somewhere, can read them. And once that door exists, it will get used.

Call it what it is: surveillance.