CBP Clearview partnership: Facial recognition deal raises operational and legal questions


By Agustin Giovagnoli / February 11, 2026

CBP is moving to incorporate Clearview AI’s massive facial recognition database into “tactical targeting,” a step that would extend the agency’s already sophisticated identity‑matching and analytics capabilities across travel, immigration, and enforcement records. The CBP Clearview partnership matters because it could accelerate field identification—from border checkpoints to protest photos—while intensifying legal and civil‑liberties concerns over biometric surveillance at scale [1][2].

Quick summary: What’s happening and why it matters

Reporting indicates CBP aims to connect Clearview AI’s face search to broader intelligence workflows. CBP already runs platforms like IRS‑NG that let analysts upload photos, conduct face comparisons, and map associations across travel and enforcement records—ingesting commercial and open‑source data, including social media. Integrating Clearview’s scraped image corpus would expand the ability to identify individuals from photos captured in the field or near the border and then link them to existing government data [1]. Civil‑liberties experts warn such integration heightens risks of misidentification, mission creep, and chilling effects on protest and political activity [1][2].

What the CBP Clearview partnership changes operationally

CBP’s IRS‑NG has been described as a hub for photo‑driven queries and network mapping across agency datasets. Analysts can upload an image, run face comparisons, and connect the result to travel histories and enforcement records. Reports say DHS components, including CBP and ICE, also use mobile tools—such as Mobile Fortify—for field identity checks. Clearview’s face search could plug into these workflows to accelerate photo‑to‑person identification in both fixed and mobile contexts [1][2].

  • Faster identification from photos taken in the field, at or near the border, or sourced from social media.
  • More comprehensive linking of faces to travel, trade, and immigration data already held by CBP.
  • Expanded network analysis capabilities, given IRS‑NG’s correlation features across datasets [1][2].

Where Clearview’s data comes from—and why it’s controversial

Clearview built its system by scraping billions of images from public websites and social media without individual consent. Scholars and litigants argue that biometric scraping at this scale conflicts with privacy regimes that treat facial templates as highly sensitive, requiring explicit consent or narrow legal bases for processing. These tensions persist even when vendors claim content was “manifestly made public” online [3][4][5].

The ACLU’s lawsuit under Illinois’ Biometric Information Privacy Act (BIPA) produced a settlement that restricts Clearview’s sales to private entities, while law‑enforcement use has largely remained available. That outcome underscores a widening divide between consumer privacy protections and permissive law‑enforcement access to biometric tools [5].

Legal and regulatory implications

  • BIPA: Illinois’ strict biometric law has been central to litigation over facial recognition, reinforcing consent and disclosure obligations for private entities. Although the ACLU settlement constrained Clearview’s private‑sector business, questions remain about oversight and boundaries for government use [5].
  • GDPR: In the EU, biometrics are treated as special‑category data and subject to heightened controls. Scholarship highlights GDPR limits on decisions based solely on automated processing (Article 22), raising concerns about Clearview‑style profiling absent consent or narrow legal justifications. These debates are likely to influence advocacy and potential transatlantic scrutiny, especially as biometric databases grow in scope [3][4]. For the legal text, see GDPR Article 22.

Given CBP’s integration of commercial and open‑source data, any operational use of Clearview could attract oversight inquiries, FOIA requests, and litigation strategies focused on data provenance, retention, auditing, and the role of automated outputs in enforcement decisions [1][3][4][5].

Civil‑liberties and operational risks

Analysts and advocates warn that broad access to surveillance and biometric data can be weaponized absent robust safeguards. Leaked DHS intelligence records have described some immigration protesters as “domestic terrorists,” with IRS‑NG cited as a reporting platform—an example fueling fears that face search could be used to map protest activity and associates. Misidentification risks, disparate impact, and the chilling effect on speech and assembly are central concerns as CBP layers facial recognition atop already extensive travel and immigration datasets [1][2].

Industry trends and governance best practices

Industry analyses show biometric use at borders trending toward more pervasive deployment, coupled with calls for transparency, data minimization, and fairness safeguards. Improving algorithmic performance with diverse datasets and routine bias testing, setting clear retention and purpose‑limitation policies, and establishing independent auditing are widely cited practices to reduce harm while maintaining operational utility [6].
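To make the "routine bias testing" idea concrete, here is a minimal sketch of a disparate‑impact check comparing false‑match rates across demographic groups. The scores, group labels, and 0.90 threshold are hypothetical placeholders, not values from any vendor or agency evaluation:

```python
# Illustrative bias test: compare false-match rates (FMR) across groups.
# A false match occurs when an impostor (different-person) pair scores
# at or above the decision threshold.

def false_match_rate(impostor_scores: list[float], threshold: float) -> float:
    """Fraction of impostor-pair scores wrongly at or above the threshold."""
    return sum(s >= threshold for s in impostor_scores) / len(impostor_scores)

# Toy impostor-pair scores for two groups; a real audit would use large,
# demographically labeled evaluation sets.
scores_by_group = {
    "group_a": [0.12, 0.30, 0.95, 0.20, 0.41],
    "group_b": [0.10, 0.91, 0.93, 0.25, 0.38],
}

fmr = {g: false_match_rate(s, threshold=0.90)
       for g, s in scores_by_group.items()}
# A large FMR gap between groups signals disparate impact and a need for
# corrective action (retraining, threshold tuning, or restricted use).
```

In practice this comparison is run per algorithm and per capture condition, since error rates vary with image quality as well as demographics.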

For technology and compliance teams, best practices increasingly emphasize:

  • Documented legal bases for data collection and use.
  • Data‑minimization and strict retention schedules.
  • Bias and performance testing, with corrective action.
  • Independent oversight, audit logs, and redress channels [6].

What businesses and technology teams should watch next

  • Scope and safeguards: Whether CBP publicly details how Clearview outputs are validated, logged, and limited to avoid automated, unreviewed decisions [1][3][4].
  • Litigation and policy shifts: How BIPA jurisprudence and GDPR‑aligned scrutiny shape vendor contracts and government procurement of facial recognition [3][5].
  • Vendor risk: Contracts should specify data provenance, accuracy benchmarks, human‑in‑the‑loop review, and auditing—consider adopting playbooks similar to those used in high‑risk AI deployments. For implementation guidance, see our AI tools playbooks [6].
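The validation and human‑in‑the‑loop points above can be sketched as a simple gating function. The disposition labels and the 0.90 benchmark are hypothetical, not CBP policy:

```python
# Hypothetical gating of a face-search result behind mandatory human review.
CANDIDATE_THRESHOLD = 0.90  # assumed accuracy benchmark, set by policy

def triage_match(score: float, human_confirmed: bool) -> str:
    """Return a disposition; an unreviewed match never becomes an action."""
    if score < CANDIDATE_THRESHOLD:
        return "discard"          # below benchmark: not a candidate at all
    if not human_confirmed:
        return "pending-review"   # queue for an analyst; no automated decision
    return "lead"                 # confirmed lead -- still not proof of identity
```

The design choice worth noting is that the function can only ever emit a "lead", never an enforcement decision: treating every output as investigative rather than conclusive is the guardrail most directly aimed at the misidentification and automated‑decision risks discussed above.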

As the CBP Clearview partnership evolves, the key questions will center on transparency, legal fit under biometrics rules, and whether operational guardrails meaningfully mitigate civil‑liberties risks [1][3][4][5][6].

Sources

[1] CBP intelligence platform sits at intersection of border enforcement …
https://www.biometricupdate.com/202602/cbp-intelligence-platform-sits-at-intersection-of-border-enforcement-and-domestic-surveillance

[2] Department of Homeland Security intensifies surveillance in … – PBS
https://www.pbs.org/newshour/politics/department-of-homeland-security-intensifies-surveillance-in-immigration-raids-sweeping-in-citizens

[3] The Great Scrape: The Clash Between Scraping and Privacy
https://www.californialawreview.org/print/great-scrape

[4] Clearview AI, TikTok: Collection of Facial Images in International Law
https://cjil.uchicago.edu/print-archive/clearview-ai-tiktok-and-collection-facial-images-international-law

[5] Your Face Is Not Your Own
https://www.nytimes.com/interactive/2021/03/18/magazine/facial-recognition-clearview-ai.html

[6] Facial Recognition Trends in Border Control & Travel
https://blog.hidglobal.com/facial-recognition-border-control-and-travel-2025-trends-and-insights
