Clearview AI claims it has scraped more than ten billion photos from social media to improve its facial recognition tools. The firm is also developing new features to strengthen those capabilities further.
Clearview AI has garnered criticism for its facial recognition technology in the past. Law enforcement agencies across the US have used the tech in the course of investigations.
It works much like a people search site, but with a face instead of a name: the system scans a face in real time and matches it against any photos of that person in the database.
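Clearview has not published its internals, but systems like this typically reduce each face to a numeric "embedding" vector and then look up the closest stored vector. The sketch below is a generic, hypothetical illustration of that lookup step using random vectors in place of a real face-recognition model:

```python
import numpy as np

# Hypothetical sketch of face search as nearest-neighbor lookup over
# embedding vectors. Real systems derive embeddings from a trained
# face-recognition model; random vectors stand in for them here.
rng = np.random.default_rng(0)

# 1,000 stored "face embeddings", normalized to unit length
database = rng.normal(size=(1000, 128))
database /= np.linalg.norm(database, axis=1, keepdims=True)

# A noisy re-capture of the face stored at index 42
query = database[42] + 0.05 * rng.normal(size=128)
query /= np.linalg.norm(query)

# Cosine similarity against every stored embedding; highest wins
similarities = database @ query
best_match = int(np.argmax(similarities))
print(best_match)
```

The query still matches entry 42 despite the added noise, which is the property that makes a larger database more likely to contain a hit, as Ton-That argues below.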
Its biometric database was declared illegal in Europe. Throughout the US, civil rights groups have raised the alarm over the database and its potential to infringe on privacy. These groups have targeted Clearview in multiple lawsuits.
In spite of this, the company appears set on pushing forward. In a conversation with Wired, co-founder and CEO Hoan Ton-That explained that more photos mean a higher likelihood of users finding the person they’re searching for.
Clearview’s scraping technique is similar to the methods employed by background check services, but its scope seems to be ballooning. Moreover, it’s working on two developments that are stoking even more concern.
Deblur and Unmask
The company is looking for a way to "deblur" images to try to identify faces in lower-resolution photos. It's also developing an unmasking tool that, in theory, will pick up on the visible features of a masked face and fill in the blanks.
These tools could significantly expand law enforcement's ability to identify suspects, but they also raise the risk of misidentifying people.
In the short to medium term, such tech will only be guessing at what a blurry or masked face looks like. And if the unmasking feature ever becomes truly reliable, it would threaten activists who wear masks at lawful protests to conceal their identities.
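The "guessing" point can be made concrete: blurring discards information, so many different originals map to nearly the same blurred image, and any deblurring tool must pick one of them. This toy example (not Clearview's method) blurs two clearly different 1-D "edge patterns" and shows the results become almost indistinguishable:

```python
import numpy as np

# Toy illustration of information loss under blurring: two distinct
# signals become nearly identical after a simple box blur, so
# inverting the blur cannot tell them apart with certainty.
def box_blur(x, k=9):
    kernel = np.ones(k) / k
    return np.convolve(x, kernel, mode="same")

a = np.zeros(64)
a[30] = 1.0                     # one sharp feature
b = np.zeros(64)
b[29] = 0.5
b[31] = 0.5                     # two half-strength features nearby

original_gap = np.abs(a - b).max()              # large: signals differ
blurred_gap = np.abs(box_blur(a) - box_blur(b)).max()  # tiny after blur
print(original_gap, blurred_gap)
```

The originals differ by a full unit at their peak, yet after blurring the maximum difference shrinks to a small fraction of that. A face deblurrer faces the same ambiguity, which is why its output is an educated guess rather than a recovered identity.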
Still, Clearview AI doesn't seem concerned about this. Ton-That insists the tools serve the interests of law enforcement and that the company is working to make sure the tech isn't misused.
However, the company can't guarantee that. Once technology is out in the wild, it is often put to uses its makers never intended. That makes regulation of facial recognition technology all the more important.