AI Assistants Can Now See Your Photos - Here's Why That Matters

8 min read · By Viallo Team

Quick take: Apple chose Google's Gemini AI to power the new Siri, giving Google's model access to on-screen content including photos. Apple's new App Store rule (Guideline 5.1.2(i)) now requires all iOS apps to disclose when they send user data to external AI services. Meanwhile, YouTube just expanded its deepfake detection tool to politicians and journalists, and 61 data protection authorities worldwide issued a joint warning about AI-generated imagery. The message is clear: AI systems are getting more access to your visual data than ever, and the rules are scrambling to keep up.

[Image: Smart speaker on a kitchen counter next to a stack of printed family photographs]

Apple gave Siri to Google

In early 2026, Apple confirmed that Google's Gemini AI model powers the redesigned Siri. The new Siri can understand on-screen content, respond to personal context, and process complex multi-step requests. Apple says processing happens on-device or through its Private Cloud Compute infrastructure.

Here's the part that matters for your photos: Siri can now see what's on your screen. If you're looking at a photo, browsing your camera roll, or viewing a shared album, the AI assistant can process that visual content. Apple's privacy architecture is designed to limit what data leaves your device, but the model doing the understanding is Google's.

This isn't the same as Google Photos training on your images - in some ways it's more intimate. AI training uses your photos as data points in a massive dataset. An AI assistant actively looks at your photos in real time, understanding their content, context, and the people in them.

Apple's new AI disclosure rule

Apple clearly knows this is a sensitive area. In 2026, they introduced Guideline 5.1.2(i) for the App Store, which requires any iOS app that sends user data to external AI services - whether OpenAI, Google Gemini, Anthropic, or others - to explicitly disclose this and get user consent.

Full enforcement starts May 1, 2026. After that date, any app that quietly pipes your data to an AI model without telling you risks removal from the App Store.

This is a positive step, but it also tells you something: the problem is widespread enough that Apple felt compelled to make a rule about it. If most apps were already transparent about AI data sharing, the guideline wouldn't be necessary. Consider how many categories of apps the rule touches:

  • Photo editing apps: Many popular photo editors now use cloud-based AI for features like background removal, enhancement, and style transfer. Your photos get sent to external servers for processing.
  • Cloud storage apps: Google Photos, Amazon Photos, and others run AI models on your stored photos for search, face grouping, and suggested edits.
  • Social media apps: Instagram, TikTok, and others analyze your photos for content moderation, ad targeting, and recommendation algorithms.
  • Messaging apps: Some messaging platforms now offer AI features that analyze images shared in conversations.
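
To make the requirement concrete, here's a minimal Swift sketch of the kind of consent gate the guideline pushes apps toward. The endpoint URL, client function, and storage key are all hypothetical - this illustrates the pattern, not Apple's required implementation:

```swift
import Foundation

// Hypothetical consent gate in the spirit of Guideline 5.1.2(i).
// The endpoint URL and UserDefaults key are illustrative only.
struct AIConsent {
    private static let key = "didConsentToExternalAIProcessing"

    static var granted: Bool {
        UserDefaults.standard.bool(forKey: key)
    }

    static func record(_ consent: Bool) {
        UserDefaults.standard.set(consent, forKey: key)
    }
}

func enhancePhoto(_ imageData: Data) {
    // Refuse to upload until the user has explicitly opted in.
    guard AIConsent.granted else {
        print("Upload blocked: user has not consented to external AI processing.")
        return
    }

    // Only after disclosure and consent does the photo leave the device.
    var request = URLRequest(url: URL(string: "https://ai.example.com/v1/enhance")!)
    request.httpMethod = "POST"
    request.setValue("image/jpeg", forHTTPHeaderField: "Content-Type")
    request.httpBody = imageData

    URLSession.shared.dataTask(with: request) { _, _, _ in
        // Handle the enhanced image or the error here.
    }.resume()
}
```

The point of the pattern is the guard clause: the upload path is unreachable until the user has seen a disclosure and said yes.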

YouTube's deepfake problem just went political

On March 10, 2026, YouTube expanded its AI-powered deepfake detection tool to government officials, political candidates, and journalists. Previously only available to YouTube Partner Program creators, the tool lets participants upload a video and government ID so YouTube can scan for AI-generated content that mimics their likeness.

The expansion happened for a reason. Deepfake videos of politicians have been appearing with increasing frequency, and the existing tools couldn't keep up. YouTube's response - letting public figures register their likeness for automated scanning - acknowledges that photos and videos of real people are now raw material for AI manipulation.

The Grok scandal set the tone

The urgency behind YouTube's expansion traces back to the Grok deepfake scandal. In late 2025, users of X's AI chatbot mass-generated non-consensual sexualized images, with researchers documenting over 15,000 such images in just two hours. The fallout was global - Malaysia and Indonesia blocked Grok, the UK made creating non-consensual AI intimate images a criminal offense in February 2026, and multiple EU regulators launched investigations.

The lesson: if your photos are accessible to AI systems - whether through a social media platform, a cloud storage service, or a messaging app - they can potentially be used to generate content you never consented to.

[Image: Robot hand reaching toward a printed photograph on a desk]


61 privacy regulators just warned about AI and your photos

On February 23, 2026, 61 data protection authorities from around the world issued a joint statement through the Global Privacy Assembly specifically about AI-generated imagery. The signatories included the UK's ICO, the European Data Protection Supervisor, and dozens of national regulators.

Their demands were direct: implement child protections in AI image systems, create accessible processes for people to request removal of AI-generated images of themselves, and build stronger safeguards against misuse. This wasn't a suggestion - it was a coordinated warning from the organizations that enforce privacy law globally.

The joint statement specifically called out the risks of non-consensual intimate imagery, defamatory content, and child exploitation. When 61 regulators agree on something, it's because the problem is already serious.

What this means for how you store photos

The shift in 2026 is from AI analyzing your photos passively (search, face grouping) to AI having active, real-time access to your visual content (assistants that see your screen, apps that process images through cloud AI, platforms where your likeness can be replicated).

This changes the calculus for photo storage. It's no longer just about whether a company trains on your photos. It's about whether AI systems can access them at all.

  • Assistants see what you see: If an AI assistant can view your screen, it can see the photos you're looking at - including private ones you haven't shared with anyone.
  • Editing means uploading: AI-powered photo editing features typically require sending your photo to external servers. Every enhancement is also a data transfer.
  • Storage means scanning: Major cloud providers run AI models on stored photos for features like search and organization. Opting out of these features often isn't possible.
  • Sharing means exposure: Once a photo exists on a platform with AI capabilities, the platform's AI can process it - regardless of your privacy settings for human viewers.

[Image: Closed photo album with a lock on a clean white surface]

How to keep your photos away from AI

You don't have to accept that every photo you take gets processed by AI models. Some photo platforms are designed specifically to avoid this.

  • Choose platforms that don't run AI on your photos: Viallo organizes photos using GPS metadata and timestamps - no image recognition, no facial scanning, no AI processing of your actual images. Your photos stay as files, not as inputs to a model (the sketch after this list shows what metadata-only organization looks like).
  • Store in EU jurisdictions: GDPR provides the strongest legal framework for controlling how your data is processed. EU-hosted storage means your photos are subject to regulations that explicitly limit automated processing without consent.
  • Disable AI features on your phone: Review which apps have permission to access your photo library. Turn off AI assistant features that analyze on-screen content if you're uncomfortable with the tradeoff.
  • Be intentional about sharing: Every platform you share a photo on is a platform that can process it. Share through links that don't require uploading to another person's cloud account.
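
As a rough illustration of metadata-only organization - a sketch of the general technique, not Viallo's actual code - here's a short Swift example using Apple's ImageIO framework. It reads the capture date and GPS coordinates from a photo's EXIF data without decoding the image pixels, so no vision model ever sees the picture:

```swift
import Foundation
import ImageIO

// Minimal sketch: read capture date and GPS coordinates straight from a
// photo's metadata. No pixel data is decoded and no AI model is involved.
func metadataSummary(for url: URL) -> String? {
    guard let source = CGImageSourceCreateWithURL(url as CFURL, nil),
          let props = CGImageSourceCopyPropertiesAtIndex(source, 0, nil)
            as? [CFString: Any] else {
        return nil
    }

    let exif = props[kCGImagePropertyExifDictionary] as? [CFString: Any]
    let gps = props[kCGImagePropertyGPSDictionary] as? [CFString: Any]

    let taken = exif?[kCGImagePropertyExifDateTimeOriginal] as? String ?? "unknown date"

    if let lat = gps?[kCGImagePropertyGPSLatitude] as? Double,
       let lon = gps?[kCGImagePropertyGPSLongitude] as? Double {
        return "\(taken) at (\(lat), \(lon))"
    }
    return taken
}

// Example usage with a hypothetical file path:
// print(metadataSummary(for: URL(fileURLWithPath: "IMG_0042.jpg")) ?? "no metadata")
```

Timestamps and coordinates are enough to group photos into trips and events, which is why a platform can offer organization features without ever running image recognition.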

The trend is clear: AI is getting hungrier for visual data, and photos are the richest source available. The platforms that treat your photos as private content - not as AI training material or assistant context - are the ones worth using.

Frequently Asked Questions

Can Siri see my photos now?

The redesigned Siri powered by Google Gemini can understand on-screen content, which includes photos you're viewing on your device. Apple says processing happens on-device or through its Private Cloud Compute, but the AI model analyzing your content is Google's. You can control Siri's access in your device privacy settings.

What is Apple's Guideline 5.1.2(i)?

It's a new App Store rule requiring iOS apps to disclose when they send user data to external AI services like OpenAI, Google Gemini, or others. Apps must get explicit user consent before sending data. Full enforcement starts May 1, 2026.

Does Viallo use AI to process my photos?

No. Viallo organizes photos using GPS metadata and timestamps embedded in the photo file. It doesn't use image recognition, facial scanning, or any AI model that analyzes the visual content of your photos. Your images are stored as files, never fed into a model.

How can I protect my photos from deepfakes?

Limit where your photos are publicly available. Use private sharing platforms instead of social media for personal photos. Store originals in services that don't expose them to AI processing. The fewer places your photos exist online, the harder they are to use for deepfake generation.

Are EU-hosted photo platforms safer from AI scanning?

Generally, yes. GDPR requires a lawful basis - often consent - before personal data, including photos, can be processed, and it gives users rights to transparency and objection. EU-hosted platforms operating under GDPR therefore have stronger legal obligations to disclose AI processing and obtain consent than platforms based in countries with weaker privacy laws.