UK Police Paused Facial Recognition After It Proved Racially Biased

9 min read · By Viallo Team

Quick take: Essex Police suspended its live facial recognition cameras in March 2026 after a Cambridge University study found the system was significantly more likely to flag Black people. The UK government had just announced plans to expand the technology to 40 more police vans across England and Wales. If your face is in a photo - on social media, in a cloud album - or you ever walk past a camera, this story matters.


What happened in Essex

Essex Police became one of at least 13 UK police forces using live facial recognition (LFR) technology in public spaces. The system works by comparing faces captured by cameras against a watchlist database in real time. Walk past a camera, get scanned, get matched - or not.
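
Under the hood, "comparing faces in real time" usually means turning each captured face into a numeric embedding and measuring its similarity against embeddings of everyone on the watchlist. Here's a minimal sketch of that matching step in Python - the embedding size, the random stand-in data, and the 0.6 threshold are all illustrative assumptions, not details of any actual police system:

```python
import numpy as np

# Hypothetical setup: each face has already been converted to a fixed-length
# embedding vector by a neural network (the model and 128-dim size are assumptions).
rng = np.random.default_rng(0)
watchlist = rng.normal(size=(500, 128))   # 500 enrolled faces (stand-in data)
watchlist /= np.linalg.norm(watchlist, axis=1, keepdims=True)

def match(face_embedding: np.ndarray, threshold: float = 0.6):
    """Compare one captured face against the whole watchlist in a single pass.

    Returns the best-matching watchlist index, or None if no similarity clears
    the threshold. The 0.6 cutoff is illustrative: lowering it catches more
    people but flags more innocents - and that trade-off is exactly the dial
    that can behave differently across demographic groups.
    """
    face_embedding = face_embedding / np.linalg.norm(face_embedding)
    similarities = watchlist @ face_embedding   # cosine similarity per entry
    best = int(np.argmax(similarities))
    return best if similarities[best] >= threshold else None

captured = rng.normal(size=128)   # stand-in for a face cropped from a camera frame
print(match(captured))            # random vectors rarely clear 0.6, so: None
```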

In March 2026, a study led by Cambridge University researchers found something the technology's critics had been warning about for years: the system was statistically more likely to correctly identify Black participants than people of other ethnic groups. That sounds like it might be a good thing until you think about what an unequal match rate means in practice: the group the system flags more readily gets more surveillance, more stops, and more false positives.

Essex Police paused all LFR deployments immediately. The Information Commissioner's Office, which regulates this technology, confirmed the suspension and issued warnings to other forces about potential accuracy and bias risks.

The UK's facial recognition expansion plan

Here's the part that makes this pause significant: it came just weeks after the UK government announced plans to dramatically scale up facial recognition across England and Wales. A Home Office white paper outlined spending more than 26 million pounds on a national facial recognition system and 11.6 million pounds on LFR capabilities, including 40 new camera-equipped police vans on top of the 10 already in use.

The plan calls for deploying these vans in town centers and high-crime hotspots. That's a lot of faces being scanned - and if the underlying technology has bias baked into it, scaling up just means scaling up the problem.

This expansion plan came on the heels of revelations that police forces had spent years quietly running facial recognition searches against 150 million passport and immigration database photos. The number of searches grew from 2 in 2020 to over 400 in 2023, all without public knowledge or parliamentary debate.


Why facial recognition keeps getting bias wrong

The bias problem in facial recognition is well-documented and not new. A 2019 US government study by the National Institute of Standards and Technology found that many facial recognition algorithms had higher error rates for Black and Asian faces compared to white faces. In 2020, a Detroit man was wrongfully arrested based on a faulty facial recognition match.

The core issue is training data. Most facial recognition models are trained on datasets that overrepresent lighter-skinned faces. The algorithms learn to detect patterns they've seen the most, and they struggle with faces that look different from their training set.
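
Audits like NIST's surface this skew by breaking error rates out per demographic group instead of reporting a single aggregate accuracy number. Here's a minimal sketch of that bookkeeping in Python, with entirely made-up evaluation records:

```python
from collections import defaultdict

# Hypothetical evaluation log: (group, system flagged them, actually on watchlist).
# Real audits use thousands of records; these six are purely illustrative.
results = [
    ("group_a", True, False), ("group_a", False, False), ("group_a", True, True),
    ("group_b", True, False), ("group_b", True, False), ("group_b", False, False),
]

false_alarms = defaultdict(lambda: [0, 0])   # group -> [false positives, innocent scans]
for group, flagged, on_watchlist in results:
    if not on_watchlist:                     # only innocent people can be false positives
        false_alarms[group][1] += 1
        false_alarms[group][0] += int(flagged)

for group, (fp, innocent) in false_alarms.items():
    print(f"{group}: false positive rate {fp / innocent:.0%}")
```

A single headline accuracy figure can hide exactly this kind of gap, which is why per-group reporting matters.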

The Essex case adds a new wrinkle. The National Physical Laboratory found no statistically significant bias in controlled testing of the same system, achieving an 89% true positive rate with a 0.017% false positive rate. But the Cambridge researchers found bias in real-world deployment. The gap between lab conditions and street conditions is where bias hides.
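
Even the lab figure is worth running through deployment-scale arithmetic. The van count comes from the article's 10 existing plus 40 planned vans; the per-van foot traffic is a pure assumption:

```python
false_positive_rate = 0.00017     # NPL's lab figure: 0.017% per scanned face

# Illustrative deployment assumptions (not from the white paper):
faces_per_van_per_day = 20_000    # a busy town-center deployment
vans = 50                         # 10 in use + 40 planned

daily_false_matches = false_positive_rate * faces_per_van_per_day * vans
print(f"~{daily_false_matches:.0f} wrongly flagged people per day")   # ~170
```

Under those assumptions, a false positive rate that sounds negligible still flags well over a hundred innocent people a day - and that's before any real-world degradation from lighting, angles, or bias.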

What this means for your photos

You might think facial recognition cameras on UK streets have nothing to do with your photo library. But the technology is all connected. The same algorithms that scan faces on the street are used to identify faces in photos. And the databases they search against are built from photos - passport photos, immigration photos, and potentially any images accessible through government databases.

When you share photos publicly on social media, those images become training data for facial recognition companies. Clearview AI built its entire business by scraping billions of public photos from Facebook, Instagram, and other platforms. Your holiday photos, group shots, and selfies all contribute to databases that can be used to identify and track people.

The question isn't just whether facial recognition is accurate. It's whether your photos should be part of a system that identifies people without their knowledge or consent - and that has been proven to work differently depending on the color of your skin.

How other countries handle this

The EU's AI Act, which starts major enforcement in August 2026, restricts real-time biometric identification in public spaces to narrow exceptions like terrorism threats and missing persons cases. Most EU member states won't have police vans scanning faces in town centers.

Several US cities - San Francisco, Boston, Minneapolis - have banned government use of facial recognition entirely. Others have imposed moratoriums or require warrants for police use.

The UK is going the other direction. Post-Brexit, it's not bound by the EU AI Act, and the government's white paper signals that expansion is the plan, not restriction. The Essex pause is a speed bump, not a policy change.

How to limit your facial recognition exposure

You can't control cameras on the street, but you can control which photos of your face end up in databases that feed these systems. Here's what actually makes a difference:

  • Stop sharing photos publicly on social media. Public Instagram, Facebook, and X posts are scraped constantly by facial recognition companies. Every public photo of a face is potential training data.
  • Use private sharing instead of public posting. Photos shared through private links rather than public feeds aren't indexed by search engines and are far harder for third-party scrapers to reach.
  • Check your social media privacy settings. Set your profiles to private. It's not foolproof, but it makes bulk scraping much harder.
  • Be selective about which apps access your camera roll. Many apps request photo library access they don't need. Every app with access is a potential data pipeline.
  • Choose photo storage that doesn't scan your face. Google Photos runs facial recognition on every photo you upload. Apple's processing stays on-device. Viallo doesn't analyze your photos at all.

The bigger picture

The Essex suspension is noteworthy because the failure got caught at all - and it took an independent academic study, not the system's own safeguards, to catch it. Most facial recognition deployments don't come with independent research measuring bias. They just run. People get flagged, stopped, and sometimes arrested based on algorithms that may not work equally for everyone.

The UK government's response will tell us a lot about where this technology goes next. If the answer is 'fix the algorithm and keep expanding,' we're heading toward a country where every trip to the high street involves having your face checked against a police database. If it triggers a broader conversation about proportionality and consent, there's a chance for meaningful guardrails.

Either way, the photos you share online today build the surveillance databases of tomorrow. Where and how you share them is one of the few things you can actually control.

Frequently Asked Questions

Is facial recognition legal in the UK?

Yes. Unlike in some US cities and under the EU AI Act, there is no blanket ban on police use of facial recognition in the UK. The government's 2026 white paper actively encourages expansion. Individual forces are supposed to conduct data protection impact assessments, but oversight is limited.

Can facial recognition identify me from my social media photos?

Yes. Companies like Clearview AI have scraped billions of public photos from social media platforms to build facial recognition databases. If you've posted photos of your face publicly on Instagram, Facebook, or X, those images may already be in a facial recognition database that can be searched by police or private companies.

Does sharing photos privately prevent facial recognition scraping?

Private sharing through link-based platforms like Viallo is significantly harder for scrapers to access. Your photos aren't indexed by search engines or visible on public profiles. While no method is completely immune, private sharing drastically reduces your exposure compared to posting on public social media feeds.

How does Viallo handle facial recognition?

Viallo doesn't run facial recognition or any AI analysis on your photos. Your images are stored on EU servers, shared through private links, and aren't accessible to scrapers or search engines. No one - including Viallo - scans your photos for faces or any other biometric data.
