Meta Smart Glasses Lawsuit Proves Your Photos Aren't as Private as You Think
Quick take: Meta is being sued after a Swedish investigation revealed that contract workers in Kenya reviewed intimate footage captured by Ray-Ban Meta smart glasses - including nudity, sexual activity, and credit card details. Over 7 million people bought these glasses in 2025, trusting marketing that called them "designed for privacy." The lawsuit and a UK regulatory investigation show that when you trust a tech company with visual content, you have no real control over who sees it. If contractors can view your most private moments, "private" means nothing.

What happened with Meta's smart glasses
In early March 2026, a class action lawsuit was filed in federal court in San Francisco against Meta. The trigger was a Swedish newspaper investigation that uncovered what was actually happening behind the scenes with footage from Ray-Ban Meta smart glasses.
Workers at Sama, a Kenya-based data annotation company contracted by Meta, were reviewing video and audio captured by the glasses. These workers encountered intimate content - nudity, sexual activity, financial information like credit card numbers visible on screen. The workers weren't Meta employees. They were third-party contractors in a different country, reviewing footage that millions of users assumed was private.
Over 7 million people purchased Meta smart glasses in 2025 alone. The marketing described them as "designed for privacy, controlled by you." That messaging is now central to the lawsuit.
How your footage ends up with strangers
This isn't a Meta-specific problem. It's how AI training works across the entire tech industry. Nearly every AI feature - voice assistants, image recognition, content moderation - requires human review of real user data at some point. The process typically works like this:
- AI models need training data: To improve features like scene detection, object recognition, or content moderation, companies need humans to label and review real-world footage.
- Third-party contractors do the work: Companies like Sama, Scale AI, and Appen employ workers - often in lower-cost countries - to review, label, and annotate user content. These workers see the actual footage.
- Privacy controls are minimal: Workers may sign NDAs, but they're still humans looking at your content on their screens. There's no technology that can prevent a person from remembering what they saw.
- Scale makes oversight impossible: With millions of users generating content daily, companies can't meaningfully monitor what every contractor sees or does.
Amazon's Ring had a similar issue in 2019 when it was revealed that employees watched customer doorbell footage. Apple contractors listened to Siri recordings that captured intimate moments. Google contractors listened to Assistant recordings. The pattern keeps repeating.

What "designed for privacy" actually means
Meta's marketing for the Ray-Ban smart glasses emphasized privacy controls: a recording indicator LED, voice commands to start and stop capture, and the ability to delete footage. These are real features. They're also completely irrelevant to the actual privacy problem.
The LED tells people around you that you're recording. It doesn't protect you from the company that processes your footage. The voice commands let you control when you record. They don't control who reviews the recording afterward. The delete button removes footage from your device. It doesn't guarantee deletion from the company's training pipelines.
This gap between user-facing privacy controls and actual data handling is the core of the lawsuit. Users believed "designed for privacy" meant their footage stayed private. It didn't.
The UK investigation
The UK's Information Commissioner's Office (ICO) has launched a separate investigation into the smart glasses footage review. Under UK GDPR, processing biometric data and intimate imagery requires explicit consent and clear disclosure. If Meta's privacy policy didn't adequately explain that human contractors would review footage, the company could face enforcement action in the UK as well.
The bigger picture for photo and video privacy
Smart glasses are just the most visible example of a broader issue. Whenever you upload photos or videos to a tech platform, you're trusting that company's entire supply chain - employees, contractors, subcontractors, AI training partners - with your content.
Most cloud photo services include language in their terms of service that allows them to process your content for service improvement, which can include human review. Google Photos, iCloud, Amazon Photos, and Meta's platforms all have some version of this.
The question isn't whether a company says it values privacy. Every company says that. The question is: how many people can potentially see your photos between the moment you upload them and the moment you delete them?
- On a social media platform: Content moderators, AI trainers, data annotators, trust and safety teams, potentially law enforcement
- On a major cloud service: AI training teams, content moderation systems, and anyone with backend access
- On a privacy-first platform: Ideally, nobody except the people you explicitly share with

What you can do to protect your visual privacy
You can't prevent governments from using facial recognition on your passport photo, and you can't control how tech companies handle content once it's on their servers. But you can choose what content you give them in the first place.
- Read the actual terms: Look for phrases like "improve our services," "train our models," or "quality assurance." These usually mean human review of your content is possible.
- Check where data is stored: Photos held in EU data centers by GDPR-covered providers get stronger legal protections than those sitting on US-based servers. GDPR requires explicit consent for processing sensitive content like intimate images.
- Minimize what you upload to big tech: The fewer photos on platforms that use your data for AI training, the smaller your exposure surface.
- Use private sharing instead of cloud galleries: When you share photos through private links rather than uploading to a social or cloud platform, you keep your content out of AI training pipelines entirely.
- Be skeptical of "privacy-first" marketing: Smart glasses were marketed as private too. Look at what a company actually does with your data, not what it says in ads.
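The "read the actual terms" step above can be partly automated. A minimal Python sketch that scans a policy saved as plain text for red-flag phrases (the phrase list is illustrative, not exhaustive):

```python
# Phrases that typically signal human review or AI training on your content.
# This list is illustrative - real policies use many more variations.
RED_FLAGS = [
    "improve our services",
    "train our models",
    "training our models",
    "model training",
    "quality assurance",
    "machine learning",
    "human review",
]

def scan_policy(text):
    """Return the red-flag phrases found in a privacy policy or ToS text."""
    lowered = text.lower()
    return [phrase for phrase in RED_FLAGS if phrase in lowered]

policy = (
    "We may use your content to improve our services, "
    "including training our models and quality assurance reviews."
)
print(scan_policy(policy))
```

A hit doesn't prove your photos are reviewed by humans, but it tells you the terms permit it - which is the question that matters.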
Private sharing vs. cloud storage
There's a fundamental difference between uploading your photos to a company's cloud and sharing them through a private platform. When you upload to Google Photos, iCloud, or Meta's platforms, your content enters their ecosystem - subject to their terms, their AI training needs, and their contractor supply chain.
Viallo stores photos in EU data centers (Germany), protected by GDPR. There's no AI training on your content, no human review pipeline, no contractor workforce annotating your images. Photos are organized using GPS metadata, not facial recognition or image analysis. When you share an album, only people with the link (and password, if you set one) can see it.
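Organizing photos by GPS metadata rather than image content is a simple idea: group shots whose coordinates fall on the same grid cell. A generic Python sketch with hypothetical photo records (hard-coded coordinates stand in for real EXIF parsing, which would need an image library):

```python
from collections import defaultdict

# Hypothetical photo records: filename plus EXIF-style GPS coordinates.
photos = [
    {"file": "IMG_001.jpg", "lat": 52.5200, "lon": 13.4050},  # Berlin
    {"file": "IMG_002.jpg", "lat": 52.5205, "lon": 13.4049},  # Berlin
    {"file": "IMG_003.jpg", "lat": 48.8566, "lon": 2.3522},   # Paris
]

def group_by_location(photos, precision=1):
    """Group photos into albums by rounding GPS coordinates to a grid.

    No image analysis or face recognition is involved - only metadata
    the camera already wrote into the file.
    """
    albums = defaultdict(list)
    for p in photos:
        key = (round(p["lat"], precision), round(p["lon"], precision))
        albums[key].append(p["file"])
    return dict(albums)

print(group_by_location(photos))
```

The point of the design: the grouping works without the service ever "looking at" the pixels, so there's nothing for a human reviewer or training pipeline to inspect.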
The Meta lawsuit is a reminder that "cloud storage" doesn't just mean your photos live on a server somewhere. It means they're part of a company's data ecosystem. If you want your photos to stay genuinely private, you need a platform where privacy is the architecture, not just the marketing.
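Link-based sharing as described above is an instance of a well-known pattern: capability URLs. An unguessable token is the only key to the album, optionally combined with a password stored only as a salted hash. A generic Python sketch (the example.com URL and field names are hypothetical, not any real service's API):

```python
import hashlib
import secrets

def create_share_link(album_id, password=None):
    """Create an unguessable share link, optionally password-protected.

    A generic sketch of capability-URL sharing, not any vendor's code.
    """
    token = secrets.token_urlsafe(32)  # ~256 bits: infeasible to guess
    record = {"album_id": album_id, "url": f"https://example.com/a/{token}"}
    if password is not None:
        # Store only a salted, slow hash of the password - never the password.
        record["salt"] = secrets.token_bytes(16)
        record["pw_hash"] = hashlib.pbkdf2_hmac(
            "sha256", password.encode(), record["salt"], 100_000
        )
    return record

def check_password(record, attempt):
    """Check a viewer's password attempt using a constant-time comparison."""
    attempt_hash = hashlib.pbkdf2_hmac(
        "sha256", attempt.encode(), record["salt"], 100_000
    )
    return secrets.compare_digest(attempt_hash, record["pw_hash"])

link = create_share_link("summer-album", password="hunter2")
print(link["url"])  # only people holding this URL can reach the album
print(check_password(link, "hunter2"))  # True
```

In this architecture the access control lives in the link itself, not in an account system or a moderation backend - which is what "privacy as architecture" means in practice.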
Frequently Asked Questions
Did Meta employees see private smart glasses footage?
Third-party contractors at Sama, a Kenya-based company, reviewed footage captured by Ray-Ban Meta smart glasses. They encountered intimate content including nudity and financial details. The class action lawsuit alleges this contradicted Meta's privacy marketing.
Do other tech companies have people review user photos?
Yes. Human review of user content is standard practice for AI training and content moderation. Amazon, Apple, Google, and Meta have all been caught having contractors review user recordings, photos, or footage at various points.
How do I know if my photos are being used for AI training?
Check the platform's terms of service and privacy policy. Look for language about "improving services," "machine learning," or "model training." Under GDPR, EU companies must explicitly state if they use your data for AI. In the US, disclosures are often buried in lengthy policies.
Does Viallo use contractors to review photos?
No. Viallo doesn't run AI analysis, facial recognition, or content labeling on user photos. There is no human review pipeline. Photos are stored encrypted in EU data centers and only accessible to you and the people you share with.
Are smart glasses a privacy risk?
Any always-on camera connected to a cloud service is a privacy risk - both for the wearer and the people around them. The Meta lawsuit highlights that even footage the wearer considered private ended up being viewed by strangers.