
49th APPA Forum (Part 3): The accountable AI workshop Jane Foster
30 August 2018 at 10:33

An international think tank, the Centre for Information Policy Leadership (CIPL), hosted a workshop on accountable artificial intelligence (AI) at Google headquarters.

AI’s current and future role in society

Attendees heard from industry experts on current and future applications of AI, including:

  • Google’s “Glassbox” project, which aims to give software more human-like common sense so that it can discount misleading examples;
  • Accenture’s AI Fairness Tool, which helps companies ensure their AI is fair by evaluating and correcting bias in algorithms, and visually demonstrates any trade-off between an algorithm’s overall accuracy and its fairness; and
  • Avoiding “black-box models” and the importance of transparency in machine learning, so that bias is not hidden but localised (given that all datasets are biased in some way and AI and machine learning models are trained on data).

Panellists in the following session discussed how dependent AI is on data and the need for large datasets to ensure accuracy. That dependence raises numerous issues, such as data monopolies, who controls the data, and the value of publicly available datasets. One panellist emphasised the need for a human to be incorporated into every AI tool, together with built-in accountability mechanisms and safeguards.

Privacy regulators' panel

A second panel of privacy regulators and industry officials discussed the challenges and data protection risks of AI, including legal and ethical risks and a lack of public trust.

The UK Information Commissioner, Elizabeth Denham, gave some examples of the practical implications the ICO has considered:

  • The Royal Free Hospital Trust’s provision of the confidential full medical records of 1.6 million individuals to Google DeepMind. The ICO found that, while this was an innovative measure, there was no legal basis or transparency. The ICO did not fine the hospital, but noted how disappointing this was, as the project could have been done transparently without making people lose trust;
  • The use of facial recognition technology by UK police, which is currently under investigation to determine whether that use is lawful, given the known bias and the negative impact of false positives; and
  • The Data Analytics investigation, which highlighted how micro-targeting tools can have a significant impact and the need to really understand how data is used.

The UK Commissioner commented that the GDPR is much needed given wider concerns about profiling and automated decision-making, and noted that accountability and transparency are defining themes of the GDPR. In the Commissioner’s view, innovation and data privacy can be reconciled, and she noted the ICO’s new research, with the Alan Turing Institute, on a framework for auditing algorithms.

Microsoft’s Julie Brill noted that the US Federal Trade Commission is reconsidering its fundamental principles, including in relation to AI, and described how AI can fit into compliance areas:

  • consent and purpose limitation (scientists don’t know where the data will end up);
  • data minimisation (which can harm accuracy, though pseudonymisation can help if done properly); and
  • transparency and the right to an explanation.
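The “done properly” caveat for pseudonymisation turns largely on how identifiers are replaced. As one illustrative sketch (not something presented at the workshop; the function and key names are hypothetical), a keyed hash such as HMAC-SHA256 replaces a direct identifier with a stable token that cannot be reversed without the key, which is why the key must be stored separately from the data:

```python
import hmac
import hashlib

def pseudonymise(identifier: str, secret_key: bytes) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).

    Unlike a plain hash, a keyed hash cannot be reversed by
    brute-forcing common identifier values unless the key is also
    compromised, so the key must be held apart from the dataset.
    """
    return hmac.new(secret_key, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()

# Illustrative key only; in practice this would live in a key vault,
# separate from the pseudonymised records.
key = b"example-key-held-separately"

token = pseudonymise("patient-1234", key)
# The same identifier and key always yield the same token, so records
# can still be linked across a dataset without exposing the identifier.
assert token == pseudonymise("patient-1234", key)
```

Because the mapping is deterministic per key, the data remains usable for linkage and analysis, while re-identification requires access to the separately held key.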

Image credit: Display dummy binary via Pixabay (Creative Commons Licence)
