Is Big Tech Training AI on Your Photos? How to Check and Opt Out

9 min read · By Viallo Team

Last updated: March 10, 2026

Quick take: Yes, big tech is training AI on your photos - and most of them make it deliberately hard to stop. Meta openly uses Instagram and Facebook content for AI training. Google's new "Ask Photos" feature in Gemini scans your entire library. Amazon quietly removed voice data opt-outs from Echo devices. If you're in the EU or UK, you can file objection forms under GDPR. If you're in the US, you mostly can't opt out at all. The only reliable way to keep your photos out of AI training is to not store them on platforms that train AI - which means smaller services like Viallo, Ente, or self-hosted solutions.

Server racks in a vast warehouse with cool blue industrial lighting, representing big tech data infrastructure

Which companies are actually training on your photos?

I spent a couple of weeks digging through privacy policies, terms of service updates, and regulatory filings from every major tech company. The short version: almost all of them use your photos in some form of AI pipeline. The differences are in how transparent they are about it and whether they give you any way to say no.

Here's the current state as of early 2026, broken down by company.

Meta (Instagram and Facebook)

Meta is the most brazen about this. In June 2023, Meta updated its privacy policy to explicitly state that public posts, photos, and captions on Facebook and Instagram are used to train its AI models, including the Llama family of language models and Meta AI's image generation features.

In the EU and UK, Meta was forced to pause AI training on user data after privacy regulators pushed back. They resumed in late 2024, but now EU/UK users can submit an objection form through their account settings. Meta is legally required to honor these objections under GDPR's Article 21. In practice, the form is buried several menus deep and takes about ten clicks to reach.

In the US? There's no opt-out. Period. If you've posted photos on Instagram or Facebook, Meta can and does use them for AI training. The only way to stop it is to delete the content or delete your account.

Google

Google's situation is more complicated. Their Terms of Service grant a broad license to "use, host, store, reproduce, modify, create derivative works" from your content to "operate, promote, and improve" services. Google has repeatedly said it doesn't use Google Photos content for advertising. But "improving services" is doing a lot of heavy lifting in that sentence.

The big development in 2025 was the expansion of Gemini's integration with Google Photos through the "Ask Photos" feature. This lets you ask natural language questions about your photo library - "show me photos from last Christmas" or "find pictures of my dog at the beach." To do this, Google's AI has to analyze every photo in your library. Google says this processing is used only to power the feature, not to train foundation models. But the distinction between "analyzing your photos to answer questions" and "training AI on your photos" feels pretty thin.

Amazon

Amazon made headlines in 2025 when it updated Echo device policies to remove the option for users to opt out of having their voice recordings used for product improvement. Ring doorbell footage has faced similar scrutiny - Amazon has shared Ring video with law enforcement without user consent in emergency situations, and the company's terms allow the use of uploaded content for "developing and improving" services.

If you use Amazon Photos (included with Prime), the terms are similar to Google's: broad licenses to process, analyze, and use your content for service improvement. Amazon's AI ambitions with Alexa+ and its generative AI push make it likely that user content feeds into training pipelines.

OpenAI and ChatGPT

OpenAI doesn't store your photos the way Google or Meta does, but there's a different privacy concern. In early 2025, a widely reported incident showed that shared ChatGPT conversation links could expose uploaded images and documents to anyone with the URL. The conversations were supposed to be private, but a bug made some shared links publicly indexable.

OpenAI's default policy also allows content submitted through ChatGPT (including uploaded images) to be used for model training unless you explicitly opt out through settings or use the API with data usage controls. If you've ever uploaded a photo to ChatGPT to ask about it, that image may have contributed to training future models.

Google Photos and AI: what's actually happening to your library

Google Photos deserves its own section because it's the most widely used photo storage service in the world, with over 1 billion users and more than 4 billion photos uploaded daily.

Here's what Google Photos already does with your images, whether you realize it or not:

  • Facial recognition grouping: Google's AI identifies faces in your photos and clusters them by person. This happens automatically unless you turn off face grouping in settings. The facial recognition model was trained on user photos.
  • Object and scene detection: Every photo is analyzed to identify objects, locations, activities, and text. That's how you can search "birthday cake" or "beach" and find relevant photos.
  • Ask Photos (Gemini): Launched in 2025, this feature uses Google's Gemini AI to understand your entire photo library contextually. You can ask complex questions like "what did I wear to Sarah's wedding?" and the AI will search through your photos to answer. This requires deep AI analysis of every image.
  • Memories and suggestions: Google's AI selects photos for automatic highlight reels, suggests edits, and creates collages. All of this requires content analysis.

The key question people ask is: does Google use my specific photos to train Gemini? Google's official answer is that Google Photos data is used to "improve Google Photos" but not to train general-purpose AI models. But the Ask Photos feature literally runs Gemini on your photos. Whether that counts as "training" or just "inference" is a distinction most users don't care about - the result is the same: an AI is analyzing your personal photos.

Meta and Instagram: your photos are the training data

With Meta, there's less ambiguity. Meta's AI training on user photos is not speculation - it's stated policy.

Meta's 2023 privacy policy update added explicit language that content shared on its platforms can be used for "AI experiences." This was followed by the launch of Meta AI across Instagram, Facebook, WhatsApp, and Messenger in 2024, powered partly by models trained on user content.

What does this mean practically? Every public Instagram photo you've posted since the policy change is fair game for AI training. Even private posts may be used if they're shared with other users who then interact with Meta AI features. Meta's generative AI image tools - the ones that let you create stickers and edit photos within Instagram - were trained on images from the platform.

The EU forced Meta to provide an objection mechanism, but Meta initially made it require manual form submissions and provided no bulk opt-out. After regulatory pressure in 2025, the process improved slightly, but it's still not a simple toggle. And again - US users have no equivalent right.

Magnifying glass resting on printed photographs, representing scrutiny of how tech companies use personal photo data

Try Viallo Free

Share your photo albums with a single link. No account needed for viewers.

Start Sharing Free

How to actually opt out (where you can)

I went through the opt-out process for each major platform. Here's what actually works.

Google Photos

  • Go to Google's "My Activity" page and turn off "Web & App Activity" for Google Photos. This limits what Google stores about your interactions.
  • In Google Photos settings, turn off "Face grouping" to prevent facial recognition processing.
  • Disable "Ask Photos" if you don't want Gemini analyzing your library.
  • Note: none of these fully prevent Google from using your photos under the broad ToS license. They limit features, not the underlying data rights.

Instagram / Facebook (Meta)

  • EU/UK users: Go to Settings → Privacy → Data policy → Right to object. Submit the AI training objection form. Meta must honor this under GDPR. You'll get a confirmation email.
  • US users: There is no opt-out for AI training. Your options are to delete individual posts, set your account to private (which limits but doesn't eliminate data use), or delete your account entirely.
  • Everywhere: Downloading your data via "Download Your Information" doesn't remove it from training datasets. Data that was already used for training cannot be un-trained.

Amazon

  • For Alexa: Go to the Alexa app → Settings → Alexa Privacy → Manage Your Alexa Data. You can review and delete voice recordings, but you can no longer opt out of Amazon using your data for "product improvement."
  • For Amazon Photos: There's no specific AI training opt-out. Your options are limited to deleting photos or closing your Amazon Photos account.

ChatGPT (OpenAI)

  • Go to Settings → Data Controls → toggle off "Improve the model for everyone." This prevents your conversations (including uploaded images) from being used for training.
  • Use the API instead of the web interface if you need stronger data controls.
  • Delete shared conversation links if you've shared conversations that contain personal photos.

The honest truth: opt-out settings give you some control, but they're band-aids. The platforms are designed to use your data by default, and the opt-outs are intentionally limited. If privacy is your priority, the real solution is to not put your photos on these platforms in the first place.
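One thing you control completely, regardless of platform policy, is the metadata embedded in the files themselves. JPEGs carry EXIF data (camera model, timestamps, GPS coordinates) in APP1 segments, and you can strip those segments before uploading anywhere. Here's a minimal, stdlib-only Python sketch - illustrative only, not any platform's official tooling, and it assumes a well-formed baseline JPEG:

```python
def strip_jpeg_app1(data: bytes) -> bytes:
    """Return a copy of a JPEG byte stream with APP1 (EXIF/GPS) segments removed."""
    assert data[:2] == b"\xff\xd8", "not a JPEG (missing SOI marker)"
    out = bytearray(data[:2])
    i = 2
    while i < len(data):
        if data[i] != 0xFF:
            # Not a marker - copy the remainder verbatim and stop.
            out.extend(data[i:])
            break
        marker = data[i + 1]
        if marker == 0xDA:
            # Start of Scan: image data follows, copy everything verbatim.
            out.extend(data[i:])
            break
        seglen = int.from_bytes(data[i + 2:i + 4], "big")
        if marker == 0xE1:
            # APP1 segment holds EXIF (including GPS) - skip it entirely.
            i += 2 + seglen
        else:
            out.extend(data[i:i + 2 + seglen])
            i += 2 + seglen
    return bytes(out)
```

Dedicated tools like `exiftool` do this more robustly (and handle other formats), but the principle is the same: the fewer coordinates and timestamps you upload, the less a platform can extract, whatever its policy says.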

Locked filing cabinet with a key in the lock under warm lamp light, representing taking control of personal photo privacy

Privacy-first alternatives that don't touch your photos

If you want to store and share photos without any AI involvement, here are the options that actually deliver on that promise.

Viallo

Viallo doesn't run any AI or machine learning on your photos. No facial recognition, no object detection, no content scanning. The only automated processing is reading GPS coordinates from EXIF metadata for location grouping and generating thumbnails for fast loading. Your photos are stored on Cloudflare R2 servers in Europe under full GDPR protection.
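For context on what "reading GPS coordinates from EXIF" actually involves: EXIF stores latitude and longitude as degree/minute/second rationals plus a hemisphere reference (N/S/E/W), and converting them to the decimal degrees a map needs is simple arithmetic. A hedged illustration - this is not Viallo's actual code, just the standard conversion:

```python
def dms_to_decimal(degrees: float, minutes: float, seconds: float, ref: str) -> float:
    """Convert EXIF-style GPS degrees/minutes/seconds plus a hemisphere
    reference ('N', 'S', 'E', or 'W') into signed decimal degrees."""
    decimal = degrees + minutes / 60 + seconds / 3600
    # South and West hemispheres are negative in decimal notation.
    return -decimal if ref in ("S", "W") else decimal

# Example: 48 deg 51' 29.6" N is roughly 48.8582 decimal degrees.
```

This is the entirety of what location grouping requires - no image content analysis, no machine learning, just arithmetic on metadata the camera already wrote.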

Where Viallo stands out is sharing. You can create gallery links that don't require recipients to create an account, add password protection, see view analytics, and display photos on an interactive map - all without any AI processing. It's built for people who want to share photos privately with family and friends, not store them in a corporate data pipeline. For a deeper look at how we handle photo data, see our detailed breakdown of AI training practices.

Ente

Ente offers end-to-end zero-knowledge encryption, meaning even Ente's own servers can't see your photos. This is technically stronger privacy than Viallo. The trade-off is that Ente's sharing features are more limited - recipients need accounts, and there's no gallery view or location mapping because those features require server-side access to the photos.

Immich (self-hosted)

Immich is an open-source Google Photos alternative you run on your own server. You get complete control over your data. It even has optional local AI features (face recognition, object detection) that run entirely on your hardware. The downside is that you need technical skills to set it up and maintain it, and sharing with non-technical family members requires exposing your server to the internet.

Proton Drive

Proton Drive offers end-to-end encrypted file storage from the makers of Proton Mail. It's solid for encrypted storage but lacks photo-specific features like galleries, albums, or location mapping. It's more of a secure file locker than a photo sharing platform.

Frequently Asked Questions

Can companies train AI on my private photos?

Yes, depending on the platform's terms of service. Meta's policy covers content shared on Facebook and Instagram, including content shared with limited audiences. Google's ToS grants broad usage rights to content stored on its services. "Private" on these platforms means other users can't see it - it doesn't mean the company can't process it.

Does Google's Ask Photos feature mean Google is training AI on my photos?

Google says Ask Photos uses Gemini to analyze your library but doesn't use the results to train foundation models. The distinction between "running AI on your photos" and "training AI on your photos" is technically real but practically meaningless to most users. Either way, an AI is looking at your personal photos.

Can I get my photos removed from an AI training dataset?

Not really. Once a model has been trained on your data, there's no way to "un-train" it. You can request deletion of your source data, and GDPR gives EU residents the right to erasure, but the learned patterns remain in the model's weights. Prevention is more effective than removal.

Is iCloud Photos safer than Google Photos for AI privacy?

Apple has been more restrained about AI training on user photos. Most Apple Photos analysis happens on-device, and Apple's terms are narrower than Google's or Meta's. Apple Intelligence does use server-side processing for some features, but Apple claims this runs on dedicated Apple Silicon servers with privacy guarantees. It's better than Google or Meta, but not as strong as platforms that do zero AI processing.

What's the best way to share photos without AI scanning?

Use a platform that doesn't run AI on your content. Viallo lets you create shareable gallery links with no AI processing, no account required for viewers, and EU-hosted storage. For maximum encryption, Ente offers zero-knowledge sharing but requires accounts for recipients. Avoid sharing through Instagram, Facebook, or Google Photos if AI involvement concerns you.

Do messaging apps like WhatsApp and iMessage scan photos I send?

WhatsApp and iMessage both use end-to-end encryption for messages, meaning the platforms can't access photo content in transit. However, WhatsApp is owned by Meta, and metadata (who you share with, when, how often) is still collected. iMessage is more private, but iCloud backups of messages are not end-to-end encrypted unless you enable Advanced Data Protection.