Deepfakes in 2026: How to Protect Your Photos from AI Manipulation
Last updated: March 10, 2026
Quick take: Deepfake incidents have surged 257% since 2024, and AI-generated CSAM is up 400%. Two major US federal laws now target this - the TAKE IT DOWN Act (signed May 2025) forces platforms to remove non-consensual intimate imagery within 48 hours, and the DEFIANCE Act (January 2026) lets victims sue for up to $250,000 in damages. Over 45 states have their own deepfake legislation too. The single most effective thing you can do right now is stop making your photos publicly available. Deepfake tools need source material, and every public photo is potential fuel.

The deepfake crisis in 2026
The numbers are hard to process. Deepfake incidents have increased 257% compared to 2024, driven by AI tools that are now cheap, fast, and disturbingly easy to use. You don't need technical skills anymore. Some of these tools run entirely in a browser.
The most disturbing trend is AI-generated child sexual abuse material, which has risen 400% according to the National Center for Missing & Exploited Children. Predators are using publicly available photos of children - school photos, sports team pictures, family vacation shots posted on social media - and feeding them into AI tools that generate explicit content.
This isn't hypothetical. In 2025, multiple school districts across the US dealt with incidents where students used AI tools to create fake explicit images of classmates using nothing but their Instagram and Snapchat photos. The source material was just regular selfies and group shots - the kind of thing millions of people post every day without a second thought.
Adults aren't immune either. Deepfake pornography targeting women has become a tool for harassment, extortion, and revenge. A single clear photo of someone's face is often enough for these tools to generate convincing fake imagery. The more public photos available, the more realistic the output.
How deepfakes are made from your photos
Understanding how this works helps explain why limiting public photos matters so much. Modern deepfake tools work in a few different ways, and they all start with the same input: photos of a real person.
Face swapping
The most common technique. An AI model maps the geometry of someone's face from source photos - the distance between eyes, nose shape, jawline, skin texture - and then overlays that face onto another person in a video or image. Tools from 2024-2025 need as few as 3-5 clear photos to produce a convincing swap. Higher quality and more angles give better results.
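To make concrete why a handful of photos is enough, here's a minimal, purely illustrative sketch using the open-source face_recognition Python library. This is an assumption for illustration - actual deepfake tools use their own pipelines - but they start from the same kind of per-face descriptors this produces. The file names are placeholders.

```python
# Illustrative only: how little input face-mapping needs.
# Assumes the open-source face_recognition library (pip install face_recognition).
# File names below are hypothetical placeholders.
import face_recognition

# A handful of clear photos is enough to build usable face descriptors.
photo_paths = ["selfie1.jpg", "selfie2.jpg", "selfie3.jpg"]

encodings = []
for path in photo_paths:
    image = face_recognition.load_image_file(path)   # load photo as a numpy array
    faces = face_recognition.face_encodings(image)   # 128-dimensional face descriptors
    encodings.extend(faces)

print(f"Built {len(encodings)} face descriptors from {len(photo_paths)} photos")
```

Three ordinary selfies yield a numerical model of a face. That's the entire input a face-swap pipeline needs to get started.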
Full body generation
Newer models can generate entirely synthetic images of a person in any pose or setting. They use reference photos to learn what someone looks like, then generate new images from scratch. These aren't copy-paste jobs - they're entirely new images that never existed, making them harder to detect and debunk.
Where attackers get source photos
- Social media profiles: Instagram, Facebook, TikTok, and LinkedIn are the primary sources. Public profiles are scraped in bulk by automated tools.
- Google Image search: Any photo indexed by search engines is easily discoverable and downloadable.
- School and organization websites: Team photos, staff directories, and event galleries posted on public websites.
- Shared album links: Public photo sharing links on Google Photos, iCloud, or other platforms that don't require authentication. You can test whether your own links are exposed - see the sketch below.
- Data breaches: Leaked databases from hacked platforms sometimes include profile photos and personal images.
The takeaway is simple: every publicly accessible photo of you or your family is a potential input for deepfake generation. You can't control what someone does with AI tools, but you can control how much source material you give them.
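If you've ever created a shared album link, it's worth confirming whether it's reachable without logging in. Here's a minimal sketch, assuming the requests library; the URL is a placeholder you'd swap for your own link.

```python
# Defensive check: is your shared-album link reachable without logging in?
# Assumes the requests library (pip install requests). The URL is a placeholder.
import requests

share_link = "https://photos.example.com/share/abc123"  # replace with your own link

# A fresh request carries no cookies or credentials, simulating a stranger's browser.
response = requests.get(share_link, timeout=10, allow_redirects=True)

if response.status_code == 200:
    print("Link is publicly accessible: anyone (or any bot) with the URL can fetch it.")
else:
    print(f"Link returned HTTP {response.status_code}: likely requires authentication.")
```

A 200 response means the album is effectively public, whether or not the link was ever posted anywhere.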

New laws that protect you
The legal landscape has shifted dramatically in the past year. For the first time, there are real federal laws in the US specifically targeting deepfakes - and they have teeth.
The TAKE IT DOWN Act (May 2025)
Signed into law in May 2025, this is the first US federal law that directly criminalizes the non-consensual publication of intimate images, including AI-generated deepfakes. The key provisions:
- It's now a federal crime to publish or threaten to publish non-consensual intimate imagery, whether real or AI-generated.
- Platforms are required to remove flagged content within 48 hours of receiving a valid takedown request from a victim.
- The law applies to both the creators and distributors of deepfake intimate content.
- Minors receive additional protections, with harsher penalties for content involving anyone under 18.
Before this law, victims had to rely on a patchwork of state laws, many of which didn't explicitly cover AI-generated content. The TAKE IT DOWN Act closes that gap at the federal level.
The DEFIANCE Act (January 2026)
This law, which took effect in January 2026, gives victims a civil remedy. Where the TAKE IT DOWN Act is criminal, the DEFIANCE Act is about money damages. Key points:
- Victims can sue creators of non-consensual deepfake intimate imagery for statutory damages up to $250,000.
- You don't have to prove exact financial losses - the statutory damages exist precisely because the harm from deepfakes is hard to quantify in dollars.
- The statute of limitations is 10 years, recognizing that victims often don't discover deepfakes immediately.
- Courts can order the destruction of all copies of the offending material.
State-level legislation
Over 45 states now have some form of deepfake legislation. The coverage varies - some states focus on election-related deepfakes, others on intimate imagery, and some cover both. States like California, Texas, and New York have particularly strong laws with both criminal penalties and civil remedies. If you're targeted, you likely have both federal and state legal options available.

How to protect your photos
Laws are important, but prevention is better than prosecution. Here's what actually works to reduce your risk.
1. Audit your public photos
Go through your social media accounts and honestly assess how many clear face photos are publicly visible. Instagram, Facebook, TikTok, LinkedIn - check each one. Search your own name on Google Images. You'll probably find more public photos than you expected.
2. Lock down social media
- Set Instagram and Facebook profiles to private. A public Instagram profile is an open photo library for anyone with a scraping tool.
- Disable the "Allow others to share to stories" option on Instagram to prevent your photos from spreading beyond your followers.
- Review tagged photos regularly. Other people's public posts can expose your face even if your own profile is locked down.
- Turn off facial recognition tagging on Facebook (Settings → Face Recognition → No).
3. Use private sharing instead of public posting
This is the biggest change you can make. Instead of posting family photos to a social media feed, share them through a private platform where photos aren't indexed by search engines and aren't accessible without authorization.
Viallo was built for exactly this use case. Photos shared through Viallo aren't publicly indexed, aren't scraped by bots, and can be protected with passwords or limited to specific people. Recipients can view beautiful gallery layouts without needing to create an account - but the photos stay private and off search engines.
Other privacy-focused options include Ente (for encrypted storage) and Signal (for encrypted messaging with photo sharing). The specific tool matters less than the principle: share privately instead of publicly.
4. Be cautious with children's photos
This deserves its own section because the stakes are higher. Children can't consent to having their photos posted publicly, and the rise in AI-generated CSAM makes this a genuine safety concern, not just a privacy preference.
- Don't post clear face photos of children on public social media profiles.
- Ask schools and organizations about their photo policies - many now have opt-out options for website and social media photos.
- Share children's photos only through private channels - family group chats, private photo albums, or direct sharing with specific people.
- Talk to other parents and family members about not posting your children's photos publicly. This is an awkward conversation but an important one.
5. Check reverse image search regularly
Use Google's reverse image search or tools like TinEye to periodically check where your photos appear online. Upload a clear headshot and see what comes back. This won't catch AI-generated content, but it will show you if your photos have been scraped and reposted somewhere you didn't expect.
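When reverse image search surfaces a suspected copy, a perceptual hash comparison can confirm it, since perceptual hashes survive resizing and recompression. Here's a minimal sketch assuming the Pillow and imagehash libraries; the file names are placeholders, and the distance threshold of 8 is a rough rule of thumb, not a standard.

```python
# Sketch: compare an image found online against your original using a
# perceptual hash. Assumes Pillow and imagehash (pip install Pillow imagehash).
# File names are placeholders.
from PIL import Image
import imagehash

original = imagehash.phash(Image.open("my_headshot.jpg"))
found = imagehash.phash(Image.open("downloaded_copy.jpg"))

# Hamming distance between the hashes: small values mean near-duplicates,
# even after the copy was resized or recompressed.
distance = original - found
verdict = "likely a copy" if distance <= 8 else "probably different images"
print(f"Hash distance: {distance} ({verdict})")
```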
6. Use watermarks or reduced quality for anything public
If you do need to share photos publicly - for a business, portfolio, or professional profile - consider posting lower resolution versions with visible watermarks. These won't stop a determined attacker, but they make your photos less useful as deepfake source material, since higher-quality source images produce better deepfakes.
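Here's a minimal sketch of that downscale-and-watermark step, assuming the Pillow library; the file names, watermark text, and 800px cap are placeholder choices.

```python
# Sketch: downscale a photo and stamp a visible watermark before posting it
# publicly. Assumes Pillow (pip install Pillow); file names and watermark
# text are placeholders.
from PIL import Image, ImageDraw, ImageFont

img = Image.open("portfolio_shot.jpg").convert("RGB")

# Cap the longest side at 800px - enough for a profile, less useful as
# deepfake training material than a full-resolution original.
img.thumbnail((800, 800))

draw = ImageDraw.Draw(img)
font = ImageFont.load_default()
text = "© Jane Doe - do not reuse"

# Place the watermark in the lower-left corner with a small margin.
draw.text((10, img.height - 20), text, fill=(255, 255, 255), font=font)

# Lower JPEG quality further degrades the image as source material.
img.save("portfolio_shot_public.jpg", quality=70)
```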
What to do if you're targeted
If you discover deepfake content of yourself or someone you know, here's the immediate action plan:
- Document everything: Screenshot the content, URLs, user profiles, and timestamps before anything gets deleted. This is your evidence (a minimal logging sketch follows this list).
- Report to the platform: Every major platform now has specific reporting mechanisms for non-consensual intimate imagery. Under the TAKE IT DOWN Act, they must remove it within 48 hours.
- File a report with the FBI's IC3: The Internet Crime Complaint Center (ic3.gov) handles federal cybercrime reports. Deepfake intimate imagery is now a federal crime, so this matters.
- Contact a lawyer: The DEFIANCE Act gives you the right to sue for up to $250,000 in statutory damages. Many attorneys now handle these cases on contingency.
- Use takedown services: Organizations like the Cyber Civil Rights Initiative and NCMEC (for minors) can help with content removal across platforms.
- Contact StopNCII.org: This free tool, run by the Revenge Porn Helpline, creates a digital fingerprint (hash) of intimate images so platforms can proactively detect and block them from being shared.
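As referenced in the first step above, here's a minimal evidence-logging sketch, assuming the requests library; the URL is a placeholder. This supplements screenshots and platform reports rather than replacing them.

```python
# Sketch: log basic evidence about an offending URL - fetch time, HTTP status,
# and a SHA-256 of the response body - before the content disappears.
# Assumes the requests library (pip install requests); the URL is a placeholder.
import hashlib
import json
from datetime import datetime, timezone

import requests

url = "https://example.com/offending-post"  # replace with the actual URL

response = requests.get(url, timeout=15)
record = {
    "url": url,
    "fetched_at_utc": datetime.now(timezone.utc).isoformat(),
    "http_status": response.status_code,
    "content_sha256": hashlib.sha256(response.content).hexdigest(),
}

# Append to a local evidence log you can hand to a lawyer or investigator.
with open("evidence_log.jsonl", "a") as f:
    f.write(json.dumps(record) + "\n")

print(record)
```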
The psychological impact of being deepfaked is real and serious. The Crisis Text Line (text HOME to 741741) and RAINN (1-800-656-4673) both support victims of image-based abuse. There's no shame in seeking help.
Frequently Asked Questions
How many photos does someone need to create a deepfake of me?
Current tools can produce a recognizable face swap from as few as 3-5 clear photos. More photos from different angles and lighting conditions produce better results. This is why public social media profiles with dozens or hundreds of selfies are such easy targets - they provide exactly the variety of angles that deepfake models need to work well.
Can I tell if a photo or video is a deepfake?
It's getting harder. In 2024, you could often spot deepfakes by looking for blurry edges around the face, inconsistent lighting, or weird teeth. In 2026, the best models produce output that's nearly indistinguishable from real photos to the human eye. Detection tools exist (like those from Microsoft and Intel), but they're in an arms race with generation tools. Don't rely on being able to spot fakes visually.
Does the TAKE IT DOWN Act apply to AI-generated content?
Yes. The law explicitly covers AI-generated intimate imagery, not just real photos or videos. It criminalizes the non-consensual publication of intimate imagery regardless of whether it depicts a real event or was generated by AI. Platforms must remove flagged content within 48 hours regardless of whether it's real or synthetic.
Are deepfake creators actually being prosecuted?
Yes, prosecutions have increased significantly since the TAKE IT DOWN Act took effect. Several high-profile cases in late 2025 and early 2026 resulted in criminal charges and convictions. The DEFIANCE Act has also led to civil lawsuits with substantial settlements. The legal framework is new, but enforcement is real and growing.
How does Viallo help protect against deepfakes?
Viallo helps by keeping your photos out of public reach in the first place. Photos shared through Viallo aren't indexed by search engines, aren't accessible to web scrapers, and can be password-protected or restricted to specific recipients. This doesn't make your photos immune to misuse by someone you've shared with, but it eliminates the bulk scraping that powers most deepfake attacks. Prevention through privacy is the most effective defense available.
Should I delete all my public photos to protect against deepfakes?
You don't have to go that far, but reducing the number of publicly available clear face photos is the single most impactful thing you can do. Set social media profiles to private, review tagged photos, and consider removing high-resolution face photos from public profiles. For new photo sharing, use private platforms instead of public social media posts.
What about photos already scraped from the internet?
Unfortunately, photos that have already been scraped and downloaded can't be un-scraped. This is why prevention matters more than cure. However, reducing publicly available photos still helps because it limits the attacker's ability to get more angles and higher quality source material. And if deepfake content is created, the new federal laws give you real legal remedies that didn't exist before.