AI Image Labeling Laws: Photos Must Carry Digital Watermarks (2026)

9 min read · By Viallo Team

Utah and Washington have become the first US states to pass laws requiring AI-generated images to carry embedded provenance data - digital watermarks that tell you whether a photo was created or altered by AI. Utah's HB 276 takes effect in 2026 and applies to any AI system with over 1 million monthly users operating in the state. If you've ever wondered whether a photo shared with you was real or AI-generated, these laws are designed to make that question answerable. Here's what the provenance requirements actually mean for everyday photo sharing.

What these laws actually require

Utah's Digital Content Provenance Standards Act (HB 276) and Washington's E2SHB 1170 both target the same problem: people can't tell whether photos and videos they see online were created by AI or captured by a camera. The laws require AI providers to embed provenance data - metadata that records how an image was created or altered - directly into the file.

The requirement isn't a visible watermark like a logo stamped on the corner. It's invisible metadata baked into the image file itself, following standards like C2PA (Coalition for Content Provenance and Authenticity). Think of it as a digital chain of custody: the file carries cryptographically signed information about its origin that's difficult to strip or forge.

Both laws mandate that providers use "commercially and technically reasonable methods" to make the provenance data difficult to remove. This means it's not just an EXIF tag someone can delete in two clicks - it's embedded deeper into the file structure.

Who has to comply

The laws apply to "covered providers" - companies that create, code, or produce generative AI systems meeting two criteria: the system has more than 1 million monthly users, and it is publicly accessible to people within the state's geographic boundaries.

That means every major AI image generator is in scope: Midjourney, DALL-E, Stable Diffusion (through hosted services like DreamStudio), Adobe Firefly, Google's Imagen, and Meta's image generation tools. If it generates or materially alters images using AI and has over a million users, it needs to embed provenance data for users in Utah and Washington.

Utah's compliance timeline begins August 2, 2026, with staggered deadlines based on provider category. Washington's law takes effect February 1, 2027. Given the logistics of implementing provenance across all generated content, most major providers are likely building compliance now.

How provenance data works in practice

The C2PA standard - which both laws reference as the technical framework - works by attaching a signed manifest to an image file. The manifest records:

  • Whether the image was generated, edited, or captured by a camera
  • Which tool or model created it (e.g., "DALL-E 3" or "Adobe Firefly")
  • When it was created
  • Whether it was subsequently modified and by what
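As a mental model, those pieces of information map onto a simple structure. Here is a loose Python sketch of what a manifest records; the field names are approximations for readability, not the exact C2PA schema, and a real manifest is a cryptographically signed binary (JUMBF) structure rather than a plain dict:

```python
# Illustrative sketch of the information a C2PA manifest carries.
# Field names are approximate; real manifests are signed binary
# structures defined by the C2PA specification, not Python dicts.
manifest = {
    "claim_generator": "DALL-E 3",          # which tool or model made it
    "created": "2026-03-14T09:30:00Z",      # when it was created
    "assertions": [
        # generated vs. captured vs. edited, and by what
        {"label": "c2pa.actions",
         "data": {"actions": [{"action": "c2pa.created"}]}},
    ],
    "signature": "<signature over the claim>",  # makes it hard to forge
}
```

The signature is what separates this from ordinary EXIF data: editing any field without re-signing invalidates the manifest, which is how the "difficult to strip or forge" requirement is met.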

When you receive a photo with C2PA provenance, you can verify it using tools like Content Credentials (contentcredentials.org) - you upload the image, and the tool reads the signed manifest to tell you its history. Sony, Nikon, and Leica already ship cameras that embed C2PA data in photos at capture time, proving the image came from a physical camera sensor rather than a generative model.
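For a rough local check before reaching for a full verifier, you can scan a JPEG's application segments for the markers these standards use: EXIF lives in an APP1 segment, while C2PA manifests travel in APP11 (JUMBF) segments. The Python sketch below is a heuristic only - it detects the presence of provenance-shaped data, not its validity; actual verification means checking the cryptographic signature with a C2PA-aware tool like Content Credentials:

```python
# Rough heuristic: check whether a JPEG carries provenance-shaped
# metadata. EXIF lives in an APP1 segment; C2PA manifests travel
# in APP11 (JUMBF) segments. This only detects presence - it does
# not validate the signed manifest.
def scan_jpeg_segments(data: bytes) -> dict:
    found = {"exif": False, "c2pa_like": False}
    if data[:2] != b"\xff\xd8":              # SOI marker: not a JPEG
        return found
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:
            break
        marker = data[i + 1]
        if marker == 0xDA:                   # SOS: image data starts here
            break
        length = int.from_bytes(data[i + 2:i + 4], "big")
        payload = data[i + 4:i + 2 + length]
        if marker == 0xE1 and payload.startswith(b"Exif\x00\x00"):
            found["exif"] = True
        if marker == 0xEB and b"c2pa" in payload:   # JUMBF heuristic
            found["c2pa_like"] = True
        i += 2 + length                      # jump to the next segment
    return found
```

In practice you would feed it the raw bytes of a file; a `c2pa_like` hit means the file claims provenance, which you would then verify properly through a tool that checks the signature.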

The practical upshot: once these compliance deadlines arrive, an AI-generated image of a family event that never happened, a fake vacation photo, or a manipulated news image should carry metadata declaring it was generated by AI. The absence of provenance data doesn't prove an image is real - but its presence gives you a verifiable answer.

What this means for your photos

If you share real photos taken with a camera, these laws don't add any burden to you. They target AI providers, not individuals sharing family snapshots. But they change the trust landscape around photos you receive.

Consider this scenario: someone in a family group chat shares a photo that looks off - maybe too perfect, maybe the lighting doesn't match. Today, you'd have to rely on gut feeling or reverse image search. Once provenance laws are enforced, a legitimately AI-generated image should carry metadata declaring its origin. A real photo from a modern camera might carry C2PA data proving it came from a sensor. The gap between "verified real" and "verified synthetic" narrows.

Viallo is a private photo sharing platform that stores photos at full resolution with all original metadata intact - including C2PA provenance data if your camera embeds it. When you share an album through Viallo, recipients can view the full gallery with lightbox view, location grouping, and map view without creating an account. Because Viallo doesn't recompress or re-encode uploads, any provenance data in your photos survives the sharing process.

The deepfake protection angle

Utah's HB 276 doesn't just require labeling. It also creates the Digital Voyeurism Prevention Act, which specifically targets non-consensual intimate images generated by AI. Under the new law, AI services are prohibited from generating counterfeit intimate images without obtaining and verifying consent from the depicted individual.

Violations carry civil liability including actual damages, punitive damages, attorney's fees, and injunctive relief. This pairs with the federal TAKE IT DOWN Act (signed May 2025), which requires platforms to remove non-consensual intimate imagery within 48 hours. Utah's law goes further by targeting the generation tools themselves, not just the distribution platforms.

For parents sharing photos of children, this is relevant. AI tools can take publicly visible photos and generate manipulated versions. The new laws make the generation step - not just the sharing step - legally actionable.

What these laws won't fix

Provenance laws have real limitations. The metadata can be stripped by converting files, screenshotting, or using tools that don't preserve it. Social media platforms that recompress uploads (Instagram, Facebook, X) typically strip metadata in the process - meaning provenance data added by AI tools may not survive redistribution on those platforms.
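To make that concrete, here is a minimal Python sketch of what re-encoding effectively does to a JPEG: the rebuilt file keeps the image data but drops the application (APPn) segments, which is exactly where EXIF, XMP, and C2PA data live. This illustrates the effect under the assumption of a well-formed baseline JPEG; it is not how any particular platform is actually implemented:

```python
# Simulate the metadata loss caused by platform re-encoding:
# copy a JPEG but drop every APP0-APP15 segment (where EXIF,
# XMP, and C2PA/JUMBF data are stored). Illustration only.
def strip_app_segments(data: bytes) -> bytes:
    if data[:2] != b"\xff\xd8":
        return data                          # not a JPEG; leave untouched
    out = bytearray(data[:2])                # keep the SOI marker
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:
            break
        marker = data[i + 1]
        if marker == 0xDA:                   # start of scan: copy the rest
            out += data[i:]
            return bytes(out)
        length = int.from_bytes(data[i + 2:i + 4], "big")
        if not (0xE0 <= marker <= 0xEF):     # keep every non-APP segment
            out += data[i:i + 2 + length]
        i += 2 + length
    out += data[i:]
    return bytes(out)
```

Run on a camera photo, the output still opens and displays like the original - but any C2PA manifest it carried is gone, which is why provenance surviving a platform depends entirely on whether that platform re-encodes uploads.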

The laws also only apply within state borders to services operating there. A smaller AI tool with under 1 million users, or one operating from outside the US, isn't covered. And bad actors specifically trying to deceive won't voluntarily comply - they'll use tools that strip provenance or generate content through non-compliant services.

The realistic benefit is for the "ambient" problem: the flood of AI content circulating on social media, in marketing, and in casual sharing where the creator isn't actively trying to deceive but also isn't disclosing. For those cases - which are the vast majority - provenance requirements provide a meaningful signal.

How to protect your photos from AI manipulation

While provenance laws address labeling, protecting your own photos from being used as source material for AI manipulation is a separate concern. Some practical steps:

  • Share photos through private channels rather than public profiles - AI scraping tools primarily target publicly accessible images
  • Use platforms that don't strip metadata, so provenance from your camera survives sharing
  • Check your social media privacy settings - photos set to "public" on Facebook or Instagram are accessible to anyone, including AI training datasets
  • For professional work, embed C2PA data at capture time using cameras that support it (Sony Alpha series, Nikon Z series with firmware updates, Leica M11)

Platforms like Viallo that use password-protected sharing links and don't expose albums publicly provide a layer of protection that public social media profiles can't match. If your photos aren't publicly crawlable, they're significantly harder for AI tools to access.

The bigger pattern: where this is heading

Utah and Washington aren't acting in isolation. The EU AI Act already requires AI-generated content to be labeled - with enforcement beginning in 2026. China's deepfake regulations mandate similar disclosures. California's AI transparency law requires companies to reveal if they trained on user photos.

The trajectory is clear: within 2-3 years, most major markets will require some form of AI content labeling. The C2PA standard is emerging as the likely technical backbone. If you care about photo authenticity - whether as a photographer, a parent, or someone who wants to trust what they see - the provenance infrastructure being built now will eventually become as routine as HTTPS certificates on websites.

For now, the most practical thing you can do is share photos through channels that preserve their original data. Every time a platform strips metadata, it removes the very signals these laws are trying to protect. Choosing platforms that respect your files as-is - keeping resolution, keeping metadata, keeping provenance - is how you participate in the authenticity ecosystem being built.

Frequently Asked Questions

What is the best way to verify if a photo is AI-generated?

Upload the image to Content Credentials (contentcredentials.org) to check for C2PA provenance data. If the image carries a signed manifest, it will tell you whether it was created by AI, which tool made it, and when. Viallo preserves all embedded metadata including C2PA data when you share photos, so provenance information survives the sharing process. Google Photos and most social media platforms strip this data during upload compression.

How do I share photos in a way that proves they're real?

Use a camera that supports C2PA provenance (Sony Alpha, Nikon Z series, Leica M11) and share through a platform that preserves original file data. Viallo stores photos at full resolution without re-encoding, so C2PA manifests embedded by your camera remain intact for recipients to verify. Platforms like Instagram and WhatsApp strip this data by recompressing uploads, breaking the chain of authenticity.

Is it safe to share photos publicly when AI can copy them?

Publicly shared photos are accessible to AI training crawlers and manipulation tools. The safest approach is private sharing through link-based platforms where content isn't indexed by search engines. Viallo's albums are private by default with optional password protection - recipients view through a direct link without creating accounts, and the photos aren't crawlable by AI scraping tools. For photos you do post publicly, these new provenance laws at least ensure AI-generated derivatives should carry labels identifying them as synthetic.

What is the difference between AI image labeling and a visible watermark?

Visible watermarks are logos or text stamped on an image surface - they're obvious but easily cropped or edited out. AI image labeling under the new Utah and Washington laws uses invisible cryptographic metadata (the C2PA standard) embedded in the file structure. It's harder to remove than a visual mark, and it carries verifiable information about how the image was created. Google's SynthID takes a similar invisible approach, but it's proprietary and limited to content generated by Google's own tools.

Can I check if someone used my photos to create AI deepfakes?

Detecting if your photos were used as source material for AI generation is extremely difficult with current technology. What the new laws help with is identifying the output: if someone creates a manipulated image using a compliant AI tool, that output should carry provenance data identifying it as AI-generated. Utah's law also creates civil liability for generating non-consensual intimate imagery, giving victims legal recourse. For prevention, keep personal photos in private channels - Viallo's private links aren't indexable by search engines or accessible to AI crawlers.
