Iran Used Facial Recognition to Track Down Protesters From Their Photos
Quick take: A joint investigation led by Forbidden Stories revealed that Iran secretly purchased Russian facial recognition software called FindFace and has been using it to identify, track, and detain protesters. The system cross-references faces from protest footage, CCTV cameras, and social media photos. It's a real-world example of how publicly shared photos can be weaponized by governments - and the risk is not limited to authoritarian regimes.

What the investigation found
In March 2026, a consortium of journalists led by Forbidden Stories published findings from a months-long investigation into Iran's surveillance infrastructure. The central revelation: Iranian authorities secretly purchased FindFace, a facial recognition system developed by Russian company NtechLab, in 2019 through an Iranian front company called Rasadco.
FindFace can identify a face in a crowd within seconds. It runs recorded footage - from CCTV cameras, street recordings, drone footage, even social media videos - through an algorithm that matches faces against government databases. The software was then distributed to multiple Iranian state entities, including the Ministry of Intelligence and the Islamic Revolutionary Guard Corps.
The system was used extensively during and after the 2022 Mahsa Amini protests. As authorities gradually restored internet access after the protests, they detained people identified as having attended them. Detainees reported being confronted with facial recognition matches during hours-long interrogations.
How your social media photos feed the system
FindFace doesn't just work with government-issued ID photos. It can cross-reference faces against any photo database - including images scraped from social media platforms. If you've posted photos publicly on Instagram, Facebook, or any indexed platform, those images are potentially part of the matching pipeline.
This isn't theoretical. The investigation documented cases where authorities matched protest footage against social media profiles to identify individuals. A photo you posted of yourself at a cafe two years ago could be the data point that connects your face to footage from a protest you attended.
The technology doesn't require a perfect photo, either. Modern facial recognition works with partial face captures, low-resolution CCTV stills, and oblique angles. A single clear photo of your face on a public social media profile gives the system what it needs to match you across thousands of other images.
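The matching step these systems rely on can be illustrated with a few lines of code. Modern pipelines convert each face into a numeric embedding vector and compare vectors by cosine similarity; faces of the same person cluster together even across different photos. The sketch below is illustrative only - the function name, vector dimensions, and threshold are assumptions, not FindFace's actual implementation:

```python
import numpy as np

def face_match(query_embedding, reference_embeddings, threshold=0.6):
    """Return indices of reference faces whose cosine similarity to the
    query exceeds the threshold. Real systems produce embeddings with a
    neural network (typically 128- or 512-dimensional); here we just
    assume the vectors already exist."""
    refs = np.asarray(reference_embeddings, dtype=float)
    q = np.asarray(query_embedding, dtype=float)
    # Normalize so the dot product equals cosine similarity.
    refs = refs / np.linalg.norm(refs, axis=1, keepdims=True)
    q = q / np.linalg.norm(q)
    sims = refs @ q  # one similarity score per reference face
    return [i for i, s in enumerate(sims) if s > threshold]
```

With toy 3-dimensional vectors, `face_match([1.0, 0.0, 0.0], [[0.9, 0.1, 0.0], [0.0, 1.0, 0.0]])` returns `[0]`: the slightly perturbed copy of the query still matches, which is why a single clear reference photo is enough to link you to noisier surveillance stills.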

This isn't just an Iran problem
It's tempting to dismiss this as something that only happens in authoritarian regimes. But the same technology operates in democracies. Clearview AI has scraped over 50 billion photos from social media and the open web, building a facial recognition database used by law enforcement agencies across the US, UK, and EU.
In the UK, police secretly searched over 150 million passport photos using facial recognition for six years before the practice was revealed. In the US, the FBI and ICE have access to driver's license and passport photo databases. Ring's doorbell cameras now offer facial recognition features, creating neighborhood-level surveillance networks.
The scale differs, and the intent differs. But the underlying mechanism is the same: publicly accessible photos of your face become data points in systems you never consented to and may not even know exist.
The direct connection to photo privacy
Every facial recognition system depends on reference photos. The more photos of you that exist on the open web, the easier it is for any system - government, corporate, or criminal - to identify and track you.
This creates a practical calculus that most people have never considered. When you post a family photo publicly on Facebook, you're not just sharing a memory with friends. You're adding data points to a global pool of facial reference material that can be accessed by anyone with a web scraper and a facial recognition API.
The Iran investigation makes this abstract risk concrete. Real people were detained and interrogated because their faces were matched against photos that were never intended for government surveillance. The photos were shared for personal reasons. The surveillance was an unintended consequence.
What you can do to protect yourself
- Audit your public-facing photos. Check what's publicly visible on every social media platform you use. Most platforms let you restrict past posts or change profile photo visibility. Even limiting a few key photos reduces your facial recognition footprint.
- Share photos privately instead of publicly. When sharing photos with family and friends, use private links rather than public posts. Private photo sharing platforms serve images through authenticated links that aren't indexed by search engines or accessible to scrapers.
- Request deletion from known databases. If you're in the EU, GDPR gives you the right to request deletion from facial recognition databases like Clearview AI. In the US, residents of Illinois, Texas, and Washington have stronger protections under state biometric privacy laws.
- Be selective about group photos. Group photos at events, protests, or gatherings are particularly valuable for facial recognition systems because they show multiple identifiable faces with location and time context.
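To make the "private links" advice above concrete, here is one common way such links work: the server signs the album ID and an expiry time with a secret key, so a link can't be guessed, enumerated by scrapers, or used after it expires. This is a minimal sketch of the general technique - the secret, URL, and function names are hypothetical, not any specific platform's API:

```python
import hashlib
import hmac
import time
from urllib.parse import urlencode

# Hypothetical server-side secret; in practice this would come from a
# secrets manager and never be exposed to clients.
SECRET = b"server-side-secret"

def make_private_link(album_id, ttl_seconds=3600, now=None):
    """Build an expiring, signed album URL."""
    expires = (int(time.time()) if now is None else now) + ttl_seconds
    payload = f"{album_id}:{expires}".encode()
    token = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    query = urlencode({"album": album_id, "expires": expires, "token": token})
    return f"https://example.com/view?{query}"

def verify_link(album_id, expires, token, now=None):
    """Reject expired links and any link whose signature doesn't match."""
    if (int(time.time()) if now is None else now) > expires:
        return False
    payload = f"{album_id}:{expires}".encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(expected, token)
```

Because the token is derived from a secret only the server knows, a scraper that discovers the URL pattern still can't mint valid links to other albums, and an old link stops working once `expires` passes.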
Private sharing as a practical defense
You can't control what governments do with surveillance technology. But you can significantly reduce your exposure by keeping personal photos off the public web.
When photos are shared privately - through password-protected albums, authenticated links, or platforms that don't index content publicly - they exist outside the reach of facial recognition scrapers. The photos are accessible to the specific people you share them with and nobody else.
This isn't paranoia. It's a rational response to a documented reality. The same governments and companies that operate facial recognition systems today will have even more powerful tools tomorrow. The photos you share publicly today become permanent entries in databases you can't control.
Frequently Asked Questions
What is FindFace and how does it work?
FindFace is a facial recognition system developed by Russian company NtechLab. It can identify faces in real time from CCTV footage, recorded video, or photographs by matching them against reference databases. Iran secretly purchased it in 2019 through a front company and distributed it to intelligence agencies.
Can facial recognition match me from a social media photo?
Yes. Modern facial recognition systems can match faces across different photos, angles, lighting conditions, and even years apart. A single clear photo of your face on a public social media profile is sufficient as a reference point for matching against surveillance footage.
Does this happen in the US and Europe too?
Yes, though in different forms. Clearview AI scraped over 50 billion photos from the open web and sells facial recognition to US law enforcement. UK police searched 150 million passport photos for years without public disclosure. The EU AI Act restricts but doesn't fully ban these practices.
How does private photo sharing protect against facial recognition?
Photos shared through private, authenticated links aren't indexed by search engines or accessible to web scrapers. Facial recognition databases are built primarily from public web scrapes, so keeping your photos private keeps them out of those datasets entirely.
Can I remove my face from facial recognition databases?
EU residents can use GDPR rights to request deletion from databases like Clearview AI. In the US, Illinois, Texas, and Washington have biometric privacy laws that provide some protections. However, government databases are generally not subject to individual deletion requests.