
Controversial AI software raises privacy concerns

Julia Broughton
22 July 2020 at 12:15


Clearview AI has been making international headlines again this month as Britain’s Information Commissioner’s Office and Australia’s Information Commissioner launched a joint investigation into its privacy practices. The company has also recently withdrawn from the Canadian market following an ongoing investigation in Canada. 

What is Clearview AI?

Clearview AI has primarily marketed its technology to law enforcement agencies. The software allows a user to take a picture of a person, upload it, and find publicly available photos of that individual, along with links to where those photos appeared. Clearview scrapes images from sources such as Facebook, YouTube and Venmo, and has approximately 2.8 billion faces in its system.

Clearview AI says its technology has helped law enforcement track down hundreds of perpetrators of crime in relation to child sexual exploitation cases, terrorism, and sex trafficking. It also says its technology has been used to help exonerate the innocent and identify the victims of crimes including financial fraud and child sexual abuse cases.

Concerns about the technology

Commentators have raised concerns about Clearview AI that could raise live issues under New Zealand’s privacy legislation. Websites such as Facebook have questioned whether the technology violates their terms of service by “scraping” images from their content.

Several agencies have questioned the accuracy of the software and the potential for false matches, particularly in relation to ethnic minorities.

Commentators have also hypothesised about the potential for harm if the software advances without proper regulation or if it gets into the wrong hands. For instance, if the technology advanced to the stage where individuals could be identified on the street, this could be used as a tool for blackmailing, stalking and/or harassing those individuals.

New Zealand context

New Zealand Police contacted Clearview AI in January this year and undertook a trial. It did so without consulting our office or briefing the incoming Police Commissioner, Andrew Coster.

Police national manager of criminal investigations Tom Fitzgerald said the trial was limited to about 150 searches of police volunteers and roughly 30 searches of persons of interest. The latter involved about five suspects, each of whom generated several searches.

Fitzgerald said police had only one successful match, and that was for a person whose photo was already in the media. He said the dataset is too small to be useful in a New Zealand context, and that the software had difficulty identifying people of Māori and Pacific Island descent.

Police have said they will establish new processes to ensure the Police Commissioner and our office are consulted in future. 

Moving forward

The development of facial recognition technology is inevitable, and the technology will become more advanced over time.

It’s important that any new technology be introduced with proper consideration. Our advice is that any artificial intelligence should be used with careful reference to our principles for the safe and effective use of data analytics, and that the data:

  • has a clear public benefit
  • is fit for purpose
  • is used ethically and with due consideration for the effect it will have on people
  • is used transparently, and that its introduction is in line with legislation and guidelines
  • is used with an understanding of the technology’s limitations, such as bias against specific groups
  • has proper human oversight and review of automated processes to ensure robust decision making and accuracy.

Image credit: Samantha Lee / Business Insider
