The Ghibli AI Filter Went Viral - Here's What It Taught Us About Photo Privacy
Last updated: March 10, 2026
Quick take: The Studio Ghibli AI filter trend had millions of people uploading personal photos to AI tools without reading a single line of the privacy policy. Most of these services store your images, use them for model training, and keep them long after you forget you uploaded them. The Ghibli filter was fun - but it was also a masterclass in how easily we hand over personal photos to companies we know nothing about. If you want to share photos without feeding them into AI systems, platforms like Viallo never process your images through any AI.

The Ghibli filter phenomenon
In early 2025, the internet collectively decided to turn itself into a Studio Ghibli movie. ChatGPT's image generation got an update, and within days, millions of people were uploading selfies, family photos, pet pictures, and baby portraits to see themselves rendered in Hayao Miyazaki's distinctive animation style.
The results were genuinely charming. Your toddler as a Spirited Away character. Your dog looking like it wandered out of My Neighbor Totoro. Your wedding photo reimagined as a scene from Howl's Moving Castle. It was the kind of viral moment that felt harmless and wholesome.
But underneath the cute illustrations was something worth paying attention to. Every one of those uploads was a personal photo - often containing faces of children, family members, and friends - being sent to an AI company's servers. And almost nobody stopped to ask what happens to those photos after the Ghibli version gets saved to your camera roll.
The scale was staggering. OpenAI's CEO Sam Altman posted that ChatGPT was generating roughly a million images per hour at the trend's peak. That's millions of personal photos flowing into OpenAI's infrastructure in a single day.
What actually happens to photos you upload to AI tools
I went and read the privacy policies that most people skip. Here's what I found across the major AI image tools.
OpenAI (ChatGPT, DALL-E)
OpenAI's terms state that inputs - including uploaded images - may be used to "improve" their services. For free-tier users, this effectively means your photos can be used as training data unless you specifically opt out in settings. Paid users get slightly better terms, but the data still passes through OpenAI's servers and is retained for up to 30 days for abuse monitoring. That selfie you uploaded? It sat on OpenAI's servers for up to a month.
Other AI image generators
The pattern is consistent across the industry. Midjourney's terms grant them a broad license to use your inputs. Lensa AI - the app behind the earlier AI portrait trend - was found to retain uploaded photos and use them for model improvement. Many smaller AI filter apps that pop up during viral trends have even vaguer policies, or no meaningful privacy policy at all.
The common thread: when you upload a photo to an AI tool, you're almost certainly granting that company some right to use your image. The specifics vary, but the default across the industry is "we can use your uploads to make our AI better."
The metadata problem
It's not just the pixels in your photo. Most phone photos contain EXIF metadata - GPS coordinates, timestamp, device information, camera settings. When you upload a photo to an AI tool, you're often sending all of this along with it. That Ghibli-fied family photo also told the AI company where you live, when the photo was taken, and what phone you use.
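For the curious, EXIF data lives in a JPEG file's APP1 segment, which means you can inspect or remove it without any special software. Here's a minimal stdlib-only Python sketch that strips EXIF segments from a JPEG - purely illustrative; in practice your phone's built-in "remove location" option or a dedicated tool like exiftool handles far more edge cases:

```python
def strip_exif(jpeg: bytes) -> bytes:
    """Remove EXIF (APP1) segments from a JPEG byte stream.

    A simplified sketch: real tools also handle XMP, multiple APP1
    blocks, and malformed files. JPEG files are a sequence of
    segments, each tagged with a 2-byte marker and a length.
    """
    assert jpeg[:2] == b"\xff\xd8", "not a JPEG (missing SOI marker)"
    out = bytearray(b"\xff\xd8")
    i = 2
    while i < len(jpeg):
        if jpeg[i] != 0xFF:
            # Reached non-segment data; copy the rest verbatim.
            out += jpeg[i:]
            break
        marker = jpeg[i + 1]
        if marker == 0xDA:
            # SOS marker: compressed image data follows; keep it all.
            out += jpeg[i:]
            break
        length = int.from_bytes(jpeg[i + 2:i + 4], "big")
        segment = jpeg[i:i + 2 + length]
        # APP1 (0xE1) segments starting with "Exif" hold the metadata
        # (GPS, timestamps, device info). Drop those, keep everything else.
        if not (marker == 0xE1 and segment[4:8] == b"Exif"):
            out += segment
        i += 2 + length
    return bytes(out)
```

The image itself is untouched - only the metadata segments are dropped - so the stripped file still opens normally in any viewer.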

The real privacy risks most people miss
The Ghibli filter trend revealed something bigger than one viral moment. It showed how normalized it's become to upload personal photos to AI services without a second thought.
Children's photos are the biggest concern
A huge portion of the Ghibli trend was parents uploading photos of their kids. These are minors who can't consent to having their faces processed by AI systems. Once those images are in a training dataset, there's no practical way to remove them. Your child didn't agree to become part of an AI model's understanding of what children's faces look like - but that's potentially what happened.
Aggregate data is powerful
One photo might seem harmless. But AI companies are collecting millions of photos with associated metadata. At scale, this creates detailed datasets about how people look, where they gather, what their homes look like, and how families are structured. The individual photo is a data point. The collection of millions of photos is a surveillance dataset.
You can't un-train a model
This is the part that makes AI uploads fundamentally different from, say, posting on social media. If you delete an Instagram post, the photo is gone (at least from public view). But if your photo was used to train an AI model, that influence is baked into the model's weights permanently. There's no "delete my photo from your AI" button that actually works. Model unlearning is an active research area, but it's nowhere close to being practical at scale.
The terms can change
Even if a company's current privacy policy seems reasonable, terms of service change. Companies get acquired. Startups run out of money and sell their data assets. The AI filter app that promised not to use your photos today might update its terms tomorrow - or get bought by a company with very different values.

How to use AI photo filters safely
I'm not saying you should never use AI filters. They're fun, and sometimes the results are genuinely great. But there are ways to participate in these trends without giving away more than you need to.
- Strip metadata first: Before uploading any photo to an AI tool, remove the EXIF data. On iPhone, you can do this in the share sheet by tapping "Options" and toggling off location. On Android, most gallery apps have an option to remove location data before sharing. This at least prevents the AI company from getting your GPS coordinates.
- Use paid tiers when possible: Paid plans on services like ChatGPT and Midjourney generally have better data retention policies. OpenAI's paid tier, for example, doesn't use your inputs for training by default. It's not perfect, but it's a meaningful improvement over free tiers.
- Avoid uploading children's faces: This is my strongest recommendation. Whatever you decide about your own photos, think twice before uploading photos of your kids to AI tools. They can't consent, and the data can't be removed from trained models. Use photos of objects, landscapes, or pets instead.
- Read the opt-out settings: Most major AI platforms have settings to opt out of training data collection. On ChatGPT, go to Settings and turn off "Improve the model for everyone." On Midjourney, check the privacy settings in your account. These options are often buried, but they exist.
- Use locally-running AI when available: Some AI image tools run entirely on your device. Apple's Image Playground, for example, processes images locally on newer iPhones and Macs. No upload means no data sharing. The results might not be as impressive as cloud-based tools, but the privacy cost is zero.
- Check the company behind the app: During viral trends, dozens of copycat apps appear. Many are from unknown developers with no track record and no meaningful privacy policy. Stick with established companies where you at least know who has your data, even if their terms aren't perfect.
A different approach to sharing photos
The Ghibli trend highlighted something interesting about why people share photos. The appeal wasn't really the AI transformation - it was sharing a fun version of a meaningful photo with people who care about it. The AI was just the excuse.
But there's a growing gap between how people want to share photos and what actually happens to those photos behind the scenes. You want to show grandma a cute photo of the kids. You don't want that photo training an AI model. You want to share vacation pictures with friends. You don't want those photos stored on servers indefinitely for "service improvement."
This is exactly why platforms like Viallo exist. Viallo doesn't process your photos through any AI - no facial recognition, no object detection, no style transfer, no content analysis. The only automated processing is reading GPS metadata for location-based organization and generating thumbnails for fast loading. Your photos are stored on EU servers (Cloudflare R2, Europe), and they're never accessed, analyzed, or used for anything beyond showing them to the people you choose to share with.
You can share albums through links that don't require recipients to create an account, add password protection, and see who's viewed your shared albums. It's photo sharing without the AI tax. For more on how mainstream platforms handle your data, see our deep dive into Google Photos and AI training.
To be clear, Viallo isn't end-to-end encrypted - services like Ente and Proton Drive offer stronger technical privacy guarantees. But if your main concern is sharing photos without feeding them into AI systems, Viallo gives you that while still providing a good sharing experience with gallery views, interactive maps, and no-account access for recipients.
Frequently Asked Questions
Does ChatGPT save the photos I upload for the Ghibli filter?
Yes, temporarily. OpenAI retains uploaded images for up to 30 days for safety and abuse monitoring, even after you've received your result. For free-tier users, uploaded images may also be used to train and improve OpenAI's models unless you specifically opt out in your account settings. Paid users have training disabled by default, but the temporary retention still applies.
Can AI companies use my face without permission?
When you upload a photo to most AI tools, you're granting permission through the terms of service you agreed to. The legal frameworks around this are evolving - the EU's GDPR gives users more control, and some US states are introducing biometric privacy laws. But practically speaking, once you click "upload," you've typically agreed to let the company use that image within the scope of their terms.
Is the Ghibli AI filter safe to use with kids' photos?
I'd recommend against it. Children can't consent to having their biometric data processed by AI systems, and once a photo is used in model training, that data can't be practically removed. If you want to create Ghibli-style versions of family photos, consider using locally-running AI tools like Apple's Image Playground that don't upload photos to external servers.
How do I opt out of AI training on ChatGPT?
Go to Settings in ChatGPT and look for the "Data controls" section. Toggle off "Improve the model for everyone." This prevents your future inputs - including uploaded photos - from being used for model training. Note that this doesn't retroactively remove data from already-trained models. Upgrading to a paid plan also disables training by default.
What's the difference between AI processing and AI training?
Processing means the AI analyzes your photo to produce a result - like generating a Ghibli-style version. Training means your photo becomes part of the dataset used to teach the AI model itself. Processing is temporary (your photo is used once to create output). Training is permanent (your photo's patterns are embedded in the model's weights). Both happen when you upload to most AI tools, but they're different concerns.
Are there AI photo filters that don't collect my data?
Yes, but they're less common. On-device AI tools like Apple's Image Playground process photos locally without uploading them. Some open-source tools like Stable Diffusion can run entirely on your computer. The trade-off is usually lower quality or more technical setup compared to cloud-based services like ChatGPT or Midjourney. But your photos never leave your device.
Does Viallo use AI on uploaded photos?
No. Viallo doesn't run any AI, machine learning, facial recognition, or content analysis on your photos. The only automated processing is extracting GPS coordinates from EXIF metadata for location grouping and generating thumbnails at multiple sizes for fast loading. Your photos are stored on Cloudflare R2 servers in Europe and are never used for any purpose beyond displaying them to people you share with.