The AI Industry Is Spending $100M+ to Shape Your Privacy Laws
Quick take: AI companies and their founders are pouring over $100 million into the 2026 US midterm elections. One group tied to Trump advisors is spending at least $100 million. OpenAI co-founder Greg Brockman and his wife gave a combined $25 million. Anthropic committed $20 million to a pro-regulation group. The stakes are your privacy - whether photo platforms, AI companies, and data brokers will face real oversight or continue operating with minimal rules.

The money flowing into the midterms
The numbers are staggering. Innovation Council Action, a political group tied to two of President Trump's advisors, announced it would spend at least $100 million on the 2026 midterm elections. The group's goal is to support candidates who favor light-touch AI regulation.
On the other side, OpenAI co-founder Greg Brockman and his wife each contributed $12.5 million to Leading the Future, a group that says it supports candidates who "champion policies that harness the economic benefits of AI and reject attempts to hinder American innovation." That's $25 million from one household.
Anthropic, the company behind Claude AI, took a different approach. It announced a $20 million donation to Public First Action, explicitly saying it agrees with most Americans that "not enough was being done to regulate AI" and that the technology comes with "considerable risks."
These aren't campaign donations in the traditional sense. They're bets on whether the US will regulate AI at all - and if so, how. The outcome directly affects whether companies can use your photos for AI training, sell your data to advertisers, or deploy facial recognition without your knowledge.
The industry is divided - but not on privacy
FEC filings show that AI money is flowing to both parties. This isn't a left-versus-right issue. It's a fight within the AI industry itself about how much regulation is acceptable.
On one side: companies that want to move fast with minimal oversight. They argue regulation will slow innovation and push AI development to China. On the other: companies that want baseline rules, partly because clear regulations create a level playing field and reduce the legal uncertainty that makes investors nervous.
What neither side is pushing for is strong consumer privacy protection. The pro-regulation camp wants AI safety rules - preventing AI from causing large-scale harm. The anti-regulation camp wants no rules at all. Neither group is prioritizing your right to control what happens to your personal photos and data.

Why election spending matters for your photos
Right now, the US has no federal privacy law. Your photo privacy depends on a patchwork of state laws - California's CCPA, Colorado's AI Act, and biometric privacy statutes like Illinois' BIPA. These laws have real teeth where they apply, but they leave most Americans unprotected.
The candidates who win in November 2026 will decide whether a federal privacy law gets passed, and what it looks like. A strong law could require companies to get consent before using your photos for AI training, force platforms to delete your data when asked, and ban the kind of hidden tracking that Perplexity was just sued over.
A weak law - or no law at all - means the current situation continues. Companies like Meta can change their privacy policies to monetize your AI conversations. Google can use your photos to train AI models under vague "service improvement" language. And data brokers can buy and sell photo datasets with no meaningful oversight.
"The stakes are really high because once a regulatory system gets entrenched, it's really hard to change it," a University of Rochester professor told ABC News. That's exactly why the AI industry is spending nine figures to shape it.
State laws are filling the gap - for now
While federal action stalls, states are moving. California's AI Transparency Act, which took effect January 1, 2026, requires companies to disclose when content is AI-generated and publish summaries of their training datasets. Colorado's AI Act targets algorithmic discrimination with documentation and transparency requirements.
Indiana's new privacy law entered its enforcement grace period on April 1, 2026. New York enacted bills requiring advertisers to disclose when ads contain "synthetic performers" created with AI. The California Age-Appropriate Design Code was partially upheld by the Ninth Circuit in March 2026, adding new protections for minors online.
These laws matter, but they only protect residents of those specific states. And the AI industry's midterm spending is partly aimed at preventing these state-level efforts from becoming the basis for stronger federal legislation.
The EU is years ahead
The contrast with Europe is striking. The EU AI Act is already in enforcement, with rules covering prohibited AI practices, transparency requirements, and penalties up to 7% of global revenue. GDPR continues to protect European users from the kind of ad targeting that Meta just rolled out in the US.
European photo platform users benefit from real rights: the right to know how their data is processed, the right to object to automated decision-making, and the right to have their data deleted. American users have almost none of these protections at the federal level.
This is why platform choice matters. A photo sharing service that stores your data in the EU under GDPR protection gives you stronger privacy guarantees than one hosted in the US under current law - regardless of what the platform's own privacy policy says.
What you can do right now
Choose platforms with strong privacy commitments. Don't wait for Congress to protect your photos. Use services that store data in GDPR-compliant jurisdictions, don't use your photos for AI training, and don't sell your data to advertisers. Viallo stores photos on EU servers and never processes them with AI.
Pay attention to your state's privacy laws. If you're in California, Colorado, Indiana, or another state with new privacy legislation, learn what rights you have. Exercise your opt-out options, and send Global Privacy Control (GPC) signals through your browser - the sketch below shows what that signal looks like to a website.
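If you're curious what GPC actually is under the hood: supporting browsers expose it to page scripts as the navigator.globalPrivacyControl property and attach a Sec-GPC: 1 header to every request. Here's a minimal TypeScript sketch of how a site might detect the signal. The property and header names are part of the real GPC proposal; the response logic is a hypothetical illustration, not any specific platform's implementation.

```typescript
// Global Privacy Control (GPC) is a browser-level opt-out signal.
// Supporting browsers expose it to page scripts as
// navigator.globalPrivacyControl and send a `Sec-GPC: 1` header
// with every request.

// The property isn't in all TypeScript DOM typings yet, so read it
// defensively through a narrowed type.
type NavigatorWithGPC = Navigator & { globalPrivacyControl?: boolean };

const gpcEnabled: boolean =
  (navigator as NavigatorWithGPC).globalPrivacyControl === true;

if (gpcEnabled) {
  // Hypothetical handler: under laws like California's CCPA/CPRA,
  // sites must treat GPC as a "do not sell or share my data" request.
  console.log("GPC opt-out detected - skip ad trackers and data sharing.");
}
```

Firefox, Brave, and DuckDuckGo can send the signal natively; in other browsers, a privacy extension can add it. In states whose laws recognize GPC, such as California, sites are legally required to honor it.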
Vote. This isn't usually advice you'd find in a photo privacy article, but it's relevant. The candidates who win in 2026 will shape AI and privacy regulation for years. Whether you want stronger rules or less government intervention, the midterms are where that gets decided.

The bottom line
Over $100 million in AI industry money is being spent to influence who writes the rules for AI and privacy. The companies that have your photos, your messages, and your search history are the same companies funding campaigns on both sides of the regulatory debate.
The best immediate protection is practical: keep your personal photos in services that have strong privacy commitments today, regardless of what Congress does tomorrow. Choose EU-hosted platforms where GDPR applies. Avoid free services that monetize your data through advertising. And don't assume that the companies building AI assistants for you are also fighting to protect your privacy - because the FEC filings tell a more complicated story.
Frequently asked questions
Which AI companies are spending the most on the midterms?
Innovation Council Action, tied to Trump advisors, announced at least $100 million. OpenAI co-founder Greg Brockman and his wife contributed a combined $25 million to Leading the Future. Anthropic gave $20 million to Public First Action, a pro-regulation group. These are the largest disclosed amounts, but more filings are expected as the election approaches.
Will the 2026 midterms produce a federal privacy law?
It's uncertain. The midterms determine the composition of Congress, which sets the legislative agenda. A strong privacy law has bipartisan support in concept but faces opposition from industry groups. The AI spending makes it harder to predict the outcome.
How does AI regulation affect my photos?
AI regulation determines whether companies need your consent to use photos for training AI models, whether they must disclose what data they collect, and whether you can demand deletion of your data. Without regulation, platforms can change their terms at any time - as Meta recently did with AI chat data.
Are my photos safer in the EU?
Yes. GDPR provides stronger legal protections for personal data than any current US federal law. EU-hosted platforms need a lawful basis - typically explicit consent - for AI processing, must honor deletion requests, and face significant fines for violations. Viallo stores all photos on EU servers under GDPR jurisdiction.
What state privacy laws should I know about?
California's CCPA and AI Transparency Act are the strongest. Colorado's AI Act targets algorithmic discrimination. Indiana's privacy law entered enforcement in April 2026. New York requires disclosure of AI-generated content in ads. Check your state's attorney general website for current consumer privacy rights.