What the TAKE IT DOWN Act Means for Your Shared Photos
Last updated: March 10, 2026
Quick take: The TAKE IT DOWN Act, signed into law in May 2025, makes it a federal crime to publish non-consensual intimate images - including AI-generated deepfakes. Platforms must remove reported content within 48 hours or face legal consequences. Violators face up to 3 years in prison and fines up to $150,000. The DEFIANCE Act, which followed in January 2026, adds civil remedies with damages up to $250,000. Together, these laws give victims real legal tools for the first time. Private photo sharing platforms like Viallo reduce exposure by keeping your photos off public platforms entirely.

What is the TAKE IT DOWN Act?
The TAKE IT DOWN Act (Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks) was signed into law by President Trump on May 19, 2025. It's the first US federal law to directly criminalize the non-consensual distribution of intimate images, and it explicitly covers AI-generated deepfakes - not just real photos.
Before this law, victims of non-consensual intimate imagery had a patchwork of state laws to rely on. Some states had strong protections. Others had almost none. There was no federal standard, and AI deepfakes fell into a gray area because the images weren't technically "real." The TAKE IT DOWN Act closes that gap.
The law passed with broad bipartisan support - one of the few things both parties agreed on in 2025. It was championed by survivors of image-based abuse and gained momentum after several high-profile cases of AI-generated deepfakes targeting public figures and teenagers.
Key provisions
- Criminal penalties: Publishing non-consensual intimate images is now a federal crime punishable by up to 3 years in prison and fines up to $150,000. This applies whether the images are real photographs or AI-generated deepfakes.
- 48-hour takedown requirement: Online platforms must remove reported non-consensual intimate imagery within 48 hours of receiving a valid takedown request from the victim or their representative.
- Covers AI-generated content: The law explicitly applies to "digital forgeries" - AI-generated or manipulated images that depict a real, identifiable person in intimate situations, even if the image itself is entirely synthetic.
- Minors get additional protection: Images depicting minors carry enhanced penalties. The law works alongside existing child exploitation statutes to create additional enforcement tools.
How the law works in practice
The TAKE IT DOWN Act creates a two-track system: criminal prosecution for perpetrators and mandatory takedown obligations for platforms.
For perpetrators
If someone publishes or threatens to publish non-consensual intimate images - whether real photos, manipulated images, or AI-generated deepfakes - they can be prosecuted under federal law. Prosecutors must show the person knowingly distributed the images without the depicted person's consent and with intent to cause harm. This covers the obvious cases: revenge distribution, extortion, and targeted harassment.
The AI deepfake provision is particularly significant. Before TAKE IT DOWN, creating a realistic deepfake of someone in intimate situations was arguably legal at the federal level as long as it wasn't distributed as "real." Now it doesn't matter whether the image is real or synthetic - if it depicts an identifiable person in intimate situations without their consent, distributing it is a crime.
For platforms
The 48-hour takedown requirement puts direct obligations on online platforms. When a victim submits a valid takedown request, the platform has 48 hours to remove the content. If the platform fails to comply, it can face regulatory action from the Federal Trade Commission.
This is a meaningful shift. Before TAKE IT DOWN, platforms could drag their feet on removal requests with limited legal consequences. Some social media companies took weeks or months to respond to reports of non-consensual imagery. The 48-hour clock changes that dynamic entirely.
The law doesn't require platforms to proactively scan for non-consensual imagery - it only kicks in after a victim reports content. This was a deliberate compromise to avoid First Amendment concerns and the technical challenges of automated content detection.
Filing a takedown request
Victims can submit takedown requests directly to platforms. The request must identify the specific content, confirm it depicts the victim, and state that it was shared without consent. Most major platforms have created dedicated reporting flows since the law took effect. The FTC provides guidance and a complaint process for platforms that don't comply with valid requests.
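The required elements of a valid request, and the 48-hour clock it starts, can be pictured as a simple record. This is an illustrative sketch only - the field names and `TakedownRequest` type are invented for this example, not any platform's actual API:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class TakedownRequest:
    """Hypothetical record of the elements a valid takedown request must contain."""
    content_url: str          # identifies the specific content
    depicts_requester: bool   # requester confirms the content depicts them
    consent_given: bool       # requester states whether they consented to sharing
    received_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def is_valid(self) -> bool:
        # Actionable when it points at specific content, the requester
        # attests it depicts them, and consent was not given.
        return bool(self.content_url) and self.depicts_requester and not self.consent_given

    def removal_deadline(self) -> datetime:
        # The 48-hour clock starts when the platform receives the report,
        # not when the content was originally posted.
        return self.received_at + timedelta(hours=48)
```

The key detail the sketch captures: the deadline is measured from receipt of a properly filed report, which is why dedicated reporting flows matter - a report that never reaches the right queue never starts the clock.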

The DEFIANCE Act: civil damages up to $250,000
While the TAKE IT DOWN Act handles criminal prosecution, the DEFIANCE Act (Disrupt Explicit Forged Images and Non-Consensual Edits Act), signed into law in January 2026, gives victims a civil path to justice.
The DEFIANCE Act allows victims of AI-generated non-consensual intimate imagery to sue for civil damages. The numbers are significant: up to $150,000 in statutory damages per instance, and up to $250,000 if the perpetrator acted with malicious intent. Victims can also recover attorney's fees, making it financially viable to pursue cases even without deep pockets.
How DEFIANCE differs from TAKE IT DOWN
- Criminal vs. civil: TAKE IT DOWN is enforced by prosecutors and can result in prison time. DEFIANCE is pursued by victims directly and results in monetary damages. They complement each other - a perpetrator could face both criminal prosecution and a civil lawsuit.
- Focus on AI deepfakes: DEFIANCE specifically targets digitally altered or AI-generated intimate imagery. It was designed to address the explosion of deepfake technology that makes it trivially easy to create realistic fake intimate images of anyone from a handful of social media photos.
- Statutory damages: Victims don't need to prove specific financial harm. The law provides fixed damage amounts, which is crucial because the harm from non-consensual imagery is often emotional and reputational rather than directly financial.
International context
The US isn't alone in addressing this. Denmark passed a law specifically protecting citizens' "own body, facial features and voice" from unauthorized AI reproduction. The EU's AI Act includes provisions about deepfakes and transparency requirements. The UK's Online Safety Act 2023 criminalized the sharing of deepfake intimate images. Australia's Online Safety Act gives the eSafety Commissioner power to order content removal within 24 hours.
The trend globally is clear: lawmakers are catching up to the technology. But enforcement remains the challenge. Perpetrators can operate across borders, and content can be hosted on platforms in jurisdictions with weaker protections.
What platforms must do now
The TAKE IT DOWN Act creates specific obligations for any platform that hosts user-generated content. Here's what's required and what the industry response has looked like.
- 48-hour removal: Platforms must remove reported non-consensual intimate imagery within 48 hours of receiving a valid request. This applies to images, videos, and AI-generated content.
- Reporting mechanisms: Platforms must provide a clear, accessible way for victims to report non-consensual intimate imagery. Burying the report button five menus deep doesn't cut it anymore.
- No re-upload: Platforms should take reasonable steps to prevent the same content from being re-uploaded after removal. This doesn't require perfect detection, but it does mean platforms need some form of hash-matching or content fingerprinting for removed images.
- Preservation of evidence: While platforms must remove content from public view, they need to preserve copies for potential law enforcement investigations. This creates a tension between victim privacy and prosecution needs that platforms must navigate carefully.
Major platforms - Meta, Google, X, TikTok, Snap - have all updated their policies and reporting systems in response to the law. Smaller platforms and newer services are still catching up. The FTC has issued guidance but hasn't yet brought enforcement actions, so the practical teeth of the law for platform compliance are still being tested.
Protecting your photos from misuse
Laws like TAKE IT DOWN and DEFIANCE give victims recourse after the damage is done. But prevention is always better than prosecution. Here's how to reduce the risk of your photos being misused in the first place.
Reduce your public photo footprint
AI deepfakes are created from publicly available images. The more high-quality photos of you that exist on public platforms - Instagram, Facebook, LinkedIn, dating apps - the easier it is for someone to create a convincing deepfake. This doesn't mean you should never post a photo online, but it's worth being selective about where your photos are publicly accessible.
Use private sharing instead of public posting
When you want to share photos with family and friends, you don't need to post them publicly. Private sharing platforms let you share albums with specific people through direct links, without making your photos discoverable by search engines, AI scrapers, or random strangers.
Viallo is built specifically for this. You create an album, upload your photos, and share it through a private link. Recipients can view the photos in a gallery without creating an account. You can add password protection for extra security and see analytics on who's viewed your shared album. Your photos never appear on any public feed, aren't indexed by search engines, and aren't accessible to anyone who doesn't have the link.
This approach fundamentally reduces your exposure. Deepfakes require source material. If your personal photos are shared privately rather than posted publicly, there's simply less material available for someone to misuse. For more on how this works, see our guide on private photo sharing.
Practical steps if you're targeted
- Document everything: Screenshot the content, the platform it's on, any associated accounts, and timestamps. You'll need this evidence for both takedown requests and potential legal action.
- File platform takedown requests immediately: Under TAKE IT DOWN, platforms have 48 hours to remove content after you report it. Don't wait. Use the platform's dedicated reporting flow for non-consensual intimate imagery.
- Contact law enforcement: Non-consensual intimate imagery is now a federal crime. File a report with your local police and with the FBI's IC3 (Internet Crime Complaint Center). The criminal case is separate from your platform takedown rights.
- Consult a lawyer about civil action: Under the DEFIANCE Act, you can sue for up to $250,000 in damages. Many attorneys now specialize in image-based abuse cases, and the fee-recovery provision means you can pursue cases without paying attorney fees upfront.
- Use the Cyber Civil Rights Initiative: The CCRI (cybercivilrights.org) provides resources, a crisis helpline, and referrals to attorneys who handle image-based abuse cases. They've been instrumental in advocating for the laws that now protect victims.

Frequently Asked Questions
Does the TAKE IT DOWN Act cover AI-generated deepfakes?
Yes. The law explicitly covers "digital forgeries" - images that are AI-generated, digitally altered, or otherwise synthetically created to depict an identifiable real person in intimate situations. It doesn't matter that the image itself isn't a real photograph. If it depicts a real person without their consent, distributing it is a federal crime.
What penalties does the TAKE IT DOWN Act impose?
Criminal penalties include up to 3 years in federal prison and fines up to $150,000. Enhanced penalties apply when the victim is a minor. The DEFIANCE Act adds civil liability of up to $150,000 in statutory damages per instance, or $250,000 for malicious conduct, plus attorney's fees.
How quickly must platforms remove reported content?
Platforms must remove non-consensual intimate imagery within 48 hours of receiving a valid takedown request. This clock starts when the platform receives a properly filed report - not when the content was originally posted. Platforms that fail to comply can face FTC enforcement action.
What is the DEFIANCE Act and how is it different?
The DEFIANCE Act (signed January 2026) creates civil remedies for victims of AI-generated non-consensual intimate imagery. While TAKE IT DOWN provides criminal penalties enforced by prosecutors, DEFIANCE lets victims sue directly for monetary damages up to $250,000. A victim could pursue both criminal charges and a civil lawsuit simultaneously.
Does the law apply to content posted before it was enacted?
The criminal provisions apply to conduct occurring after the law's enactment in May 2025. However, the 48-hour takedown obligation applies to any non-consensual intimate imagery currently hosted on a platform, regardless of when it was posted. So even if content was uploaded in 2020, a victim can file a takedown request today and the platform must remove it within 48 hours.
How does private photo sharing reduce deepfake risk?
AI deepfakes are created from publicly accessible photos. When you share photos privately through a platform like Viallo - with direct links, no public indexing, and optional password protection - those photos aren't available to AI scrapers or bad actors browsing public profiles. It doesn't eliminate all risk, but it significantly reduces the source material available for creating deepfakes.
What should I do if someone creates a deepfake of me?
Document everything immediately - take screenshots with timestamps. File takedown requests with every platform where the content appears (they have 48 hours to comply). Report to local law enforcement and the FBI's IC3. Contact the Cyber Civil Rights Initiative (cybercivilrights.org) for support and legal referrals. Consider consulting an attorney about civil damages under the DEFIANCE Act - you could recover up to $250,000 plus attorney fees.