
Privacy Commissioner speech to EMA about AI: 25 September 2023

Tēnā koutou katoa

Ko ahau te Kōmihana Matatapu

Ko Michael Webster ahau

Thanks so much everyone for joining this EMA Forum where I’m going to talk about AI, business, and privacy.

I’m Michael Webster, New Zealand’s Privacy Commissioner, and over the next 30 minutes I’m aiming to talk about AI and privacy, how it relates to business, and what it all means for you.

Before I get into talking about AI, though, I think it’s useful for me to share with you all some of my priorities as Privacy Commissioner and give you a sense of the ‘mood of the citizen/customer’ at the moment, when it comes to the digital world.

People often ask me, “How will you know whether you’re doing a good job as Privacy Commissioner?” In other words, what will have changed if my Office is succeeding in its role?

My answer is that I will know we have made the needed change in approach and culture when privacy is treated as a core focus for agencies, as much as health and safety, or good financial reporting, or achievement of key financial and non-financial targets.

I want to see it as a core focus because I believe that will contribute to three desirable outcomes.

I want to ensure privacy is a core focus for agencies in order to:

  1. protect the privacy of individuals
  2. enable agencies to achieve their own objectives, and
  3. safeguard a free and democratic society.

Of course, getting to this point will require effort across several fronts.

My Office will be focusing on achieving four objectives over the medium term:

  • We will engage and empower people and communities that are more vulnerable to serious privacy harm.
  • We will work in partnership with Māori to take a te ao Māori perspective on privacy.
  • We will set clear expectations to provide agencies with greater certainty about their responsibilities.
  • We will promptly use our full range of investigation and compliance powers as necessary to hold agencies accountable for serious privacy harm.

And just a reminder, under the Privacy Act, as Commissioner I must, in performing any function or duty, and in exercising any powers, have regard to government and businesses being able to achieve their objectives efficiently.

My Office is determined to be a modern regulator, working with a smart regulatory framework.

If my Office makes good progress on these objectives, we should see in New Zealand a future where:

  • Individuals are more confident – for good reason – that their privacy is protected;
  • Agencies can better achieve their own objectives, financial and non-financial, through respecting the privacy rights of New Zealanders; and
  • The right to privacy and the protection of personal information is valued in New Zealand.

 

So, let’s turn now to AI.

The development of AI offers great potential to make our lives easier, to drive improved productivity, and help us solve some of the problems facing society.

In business circles, and media, AI has been touted as a resource for eliminating manual errors and increasing the accuracy and precision of tasks.

It can write letters or business papers! It can take meeting notes with accuracy! It can create business efficiencies that save time and money! Quicker, yes.

But is it worth the hype?

I totally get the appeal of an attractive new tool that can help make business more efficient.

But I’m also the kind of person who will keep telling you that in your business, keeping personal information protected and respected should be as fundamentally important as health and safety or prudent financial reporting; it’s not an optional extra.

In my world, we no longer say IF you experience a privacy breach but WHEN, so as businesspeople, leaders, and decision-makers you really need to be on top of this.

I know from our Office stats that criminals are becoming an even greater threat online.

Cybersecurity breaches are increasing and they’re resulting in costly hacks of personal information.

Just this year we started an investigation into New Zealand’s biggest data breach, which was the result of a hack, and I see no reason for criminals to slow down their activities.

AI can also be used to supercharge these criminals, making the threat an even trickier proposition.

Hopefully this isn’t news to you, but in New Zealand, if you’re using AI in your business then it’s your responsibility to make sure you’re operating within the Privacy Act, including protecting the information you hold.

Regardless of which staff are using AI, the size of your business, or whether you’re in the public, private, or not-for-profit sector, it’s your responsibility to work within our privacy laws.

If you only remember one thing today, then make it this: thinking about privacy is vital to using AI tools effectively.

The Privacy Act is technology neutral, giving everyone the same privacy rights and protections whether their personal information is recorded on paper, held digitally, collected by facial recognition technology, or other means.

The Act is currently New Zealand’s only legislation that regulates AI, through its rules on the collection, use, and disclosure of personal information.

In this sense, AI is just another new technology.

However, there is deep concern out there, expressed by both tech experts and ordinary people, about the potential risks of AI making processes less transparent, reinforcing biases in data, and disconnecting people from important decisions.

Let me give a quick example here.

It will give you a feel for some of the privacy questions you should be thinking about before you decide to incorporate AI tools into your business.

Then we’ll get into the detail.

Let’s say that you’re looking to add a generative AI-based chatbot to your website.

You might be looking to answer common questions, provide information, and take details to follow up enquiries from both existing customers and new visitors.

This new customer service tool is obviously designed to save staff from manually answering a lot of emails and calls.

But ask yourself first:

  • How will you know the chatbot is giving accurate information to people?
  • What can you do to test that the model is reliable, and remains reliable over time? (More on this in a moment.)
  • Are you relying solely on the chatbot?
  • Or will that information be available another way, because not everyone will like chatbots or be familiar with how to use them?

So far, so easy. But:

  • What are the terms and conditions of this AI tool?
  • Will the provider store information that is put in?
  • Will that information then be used by the provider to train their AI systems?
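On that earlier question about reliability: for the technically minded among you, here’s a minimal sketch of what ongoing checking could look like. It’s an illustration only, and it assumes things that aren’t in my guidance: ask_chatbot is a stand-in for whatever call your vendor exposes, and the test questions and required phrases are invented.

```python
# A minimal sketch of ongoing reliability checks for a website chatbot.
# `ask_chatbot` is a placeholder for your vendor's API call; the test
# questions and required phrases below are invented for illustration.

TEST_CASES = [
    # (question, phrases the answer must contain to count as accurate)
    ("What are your opening hours?", ["9am", "5pm"]),
    ("How do I ask for a copy of my personal information?", ["privacy officer"]),
]

def run_reliability_checks(ask_chatbot):
    """Re-ask known questions and flag answers that have drifted."""
    failures = []
    for question, required_phrases in TEST_CASES:
        answer = ask_chatbot(question).lower()
        missing = [p for p in required_phrases if p not in answer]
        if missing:
            failures.append(f"{question!r} is missing {missing}")
    return failures

# Run this on a schedule (say, nightly) and alert a human reviewer on any
# failure, rather than assuming the model stays reliable on its own.
```

The point isn’t the code; it’s the habit. Reliability is something you keep testing, not something you establish once.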

The questions are many and varied, and I haven’t asked them to trip anyone up, but to show the kinds of considerations that need to be made before diving head-first into AI.

The uptake of AI tools presents some specific challenges for privacy because at their heart, privacy protections rely on people and organisations who can understand context and take responsibility for their actions.

This may become harder as AI tools take on more tasks, because they enable new ways to gather and combine personal information.

Those new ways can make it harder to see and understand how personal information is used, and harder to explain it.

Regardless of the tool you’re using, personal information is the important part here.

That’s because the Privacy Act applies whenever you collect, use, or share personal information.

As a rough guide, if you can say who the information is about, it’s personal information.

Clearly that includes information like a name, address, contact details, or photographs of a person.

However, it can also include technical metadata like map coordinates, Internet protocol addresses, or device identifiers related to a person.
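To make that concrete, here’s a rough sketch of how you might flag those kinds of identifiers in a piece of text. The patterns are deliberately simplistic and invented for illustration – a real system would need far more care – and nothing here comes from the Privacy Act itself.

```python
import re

# Rough, illustrative patterns for the kinds of identifiers mentioned
# above. These are simplistic sketches, not a reliable detector; a real
# system would need much more careful treatment.
IDENTIFIER_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "ipv4 address": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "nz phone": re.compile(r"(?<!\d)(?:\+64|0)[\d\s-]{7,}\d"),
}

def identifier_types_found(text):
    """Return the types of identifier spotted in a piece of text."""
    return [name for name, pattern in IDENTIFIER_PATTERNS.items()
            if pattern.search(text)]

print(identifier_types_found("Contact me on 021 123 4567"))
# -> ['nz phone']: on its own, enough to make the record about a person
```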

And here’s the really interesting part, especially when we’re talking about AI… personal information also includes information about a person that is inaccurate or made up, including fake social profiles and deepfake images.

When we’re talking about deepfakes, then privacy can seem to become very complex very fast.

So that’s why a key part of my role as Privacy Commissioner is around education.

In May this year I issued my expectations of agencies using AI, which set out an eight-point system for checking what and how you’re using AI before diving in.

The detail is on our website, but just quickly, my expectations were:

  • Have senior leadership approval, based on a full consideration of the risks and mitigations.
  • Review whether a generative AI tool is necessary and proportionate, given the potential privacy impacts, and consider whether you could take a different approach.
  • Conduct a privacy impact assessment before using these tools.
  • Be transparent, telling people how, when, and why the tool is being used.
  • Engage with Māori about potential risks and impacts to the taonga of their information.
  • Develop procedures about accuracy and access by individuals to their information.
  • Ensure a human reviews AI outputs before you act on them, to reduce the risks of inaccuracy and bias.
  • Ensure that personal information is not retained or disclosed by the AI tool.

My Office’s initial work focused on taking steps to understand privacy risks.

I wanted to make sure agencies knew that I expected a conscious and informed decision from leadership about using these tools, where personal information might be involved.

You can find the initial guidance at privacy.org.nz, and it was well covered by media at the time, so it should be easy to find using Google too.

Last week I issued further guidance and that’s what I’d like to focus on now.

It really goes into a lot more detail around how AI works and how it relates to the 13 Information Privacy Principles (IPPs) in the Privacy Act.

It’s thorough, detailed guidance that offers practical examples, and sets out a range of questions for organisations to consider as they think about their privacy obligations.

You can also find that on our website privacy.org.nz.

In the space of a single year, AI tools have seen phenomenal growth. Let me give you a very short potted history, although “history” is a bit of a stretch since we’re starting in 2022.

This time last year the attention was focused on tools that could generate images.

AI tools would take vast amounts of training data – in this case billions of photos, paintings, sketches, and other images – and then process that to find patterns.

Those patterns then get recorded into the algorithm – essentially, the maths in the AI tool that make it work.

The innovation here was adding a user interface so people could use AI to create their own images.

If you’ve used these tools, you know how they work.

You can provide a prompt and they respond with something.

If you were using Instagram or TikTok late last year, I’m sure you saw your friends (and maybe you did it too) happily turning selfies into anime, pop art, and impressionist-style portraits of themselves.

AI tools are very capable systems.

But the output is always based on the training data. You get out what’s in there.

If there aren’t any images or content about a person in the training data, the system can’t make that thing.

If there aren’t many images of women doctors, the tools may not produce those.

And if there aren’t images of Māori people, they won’t do a good job there either.

Essentially, it’s like those things don’t exist, which you’ll quickly pick up is very problematic and has huge significance for unintended bias.

More on this in a minute but first, back to our history lesson…

In November last year, OpenAI released ChatGPT and anyone who signed up online could use it to generate text.

Putting words in and then reading the words that come out began to feel like communicating.

For weeks on end I felt like I was reading AI articles where the author had used ChatGPT to craft their opening paragraph.

Using ChatGPT felt like you were messaging and being understood by something that uses language the same way we do.

But that feeling can be misleading.

A lot of the concern from credible experts is that we don’t know what’s going on inside these tools. The data goes in, and the patterns come out, but internally there’s a black box.

Even the tech titans who paid for the development of this technology are unsure as to what they have created, and how exactly it works. 

Most of the public-facing AI tools now available have been developed overseas and are based on training data that may not be relevant, reliable, and ethical for use in Aotearoa New Zealand.

While we’re also part of the broader world, we have our own unique mix of cultural perspectives, demographics, and use of languages, including English and te reo Māori.

Let’s think about that in relation to something like using AI tools to screen business documents.

A good example would be screening job applications to find people you want to interview.

The track record of AI tools in this area is not good, so you need to be very confident the system you want to use will be transparent, accurate, and fair, before you ask anyone to rely on it.

You might want to ask:

  • How can I find out about the reliability and accuracy of the AI tool for this job?
  • Is there a risk of bias in the AI tool or the training data?
  • Who can I talk with to ensure people are OK with me using this tool?
  • Can I engage with experts?
  • Can I engage with people and communities who might be affected?

You’ll recall that engagement with Māori was one of the key expectations I outlined in my initial guidance, for just the reasons I’ve described here.

You need to consider Māori perspectives on privacy. And we recommend you’re proactive in how you engage.

Some of the specific concerns my Office has heard about Māori privacy and AI tools include:

  • Concerns about bias from systems developed overseas that do not work accurately for Māori – who are, let’s remember, close to 20 percent of our population.
  • Collection of Māori information without work to build relationships of trust. This can lead to inaccurate representations of Māori taonga that fail to uphold tapu and tikanga.
  • Exclusion from processes and decisions of building and adopting AI tools that affect Māori whānau, hapū, and iwi, including use of these tools by the Crown. 

Under the Privacy Act, when an agency or business holds information about a person, that person can ask for a copy of that information, and for it to be corrected; those two principles are IPP6 and IPP7.

  • Thinking about using AI tools for business – would your business be able to provide information about a person to them if they asked for it?
  • Would you be confident that you could correct personal information?
  • Could you correct AI outputs in a timely way?
  • How would you verify the identity of an individual requesting their information?

On that last point, machine learning algorithms can be trained to replicate an individual’s voice, facial features, and even handwriting, so you need to be doubly sure you’re providing information to the correct individual.

Some uses of AI tools may make it hard to comply with the access and correction principles. 

That’s because building AI tools involves processing training data to build models that do pattern-matching.

The original training data, the pre-trained model, and the outputs may all potentially contain personal information while providing no practical way to access or correct it.

To comply with the law, it’s essential that you develop procedures for how your agency will respond to requests from individuals to access and correct their personal information.

Just as an example, in May 2023 the Office of the Privacy Commissioner of Canada announced that it had launched an investigation into OpenAI in response to a complaint alleging the collection, use, and disclosure of personal information without consent.

To comply with privacy law, you need to be confident that you understand potential privacy risks and that you’re upholding the IPPs.

That’s why we expect organisations to do a privacy impact assessment before using AI tools – it will help you build that business confidence in the tools you’re using.

OPC offers guidance on writing a privacy impact assessment (PIA), which will allow you to consider these questions, and to bring up and address potential risks for privacy, transparency, and trust.

Doing a good privacy impact assessment might also involve talking with people and communities who your work will impact.

We have tools on our website privacy.org.nz to help you develop a PIA.

Once you understand the potential risks, you can:

  • use privacy policies to govern your AI tools, and
  • ensure privacy statements set clear expectations.

I’d also encourage you to speak openly in your workplaces about AI because some of your staff may already be using it without your knowledge.

Share the guidance from our office.

Be open about AI and what it means for your business.

Talk openly about processes and considerations with more junior staff who may be very attracted by the idea of AI making work easier but not yet be aware of the full risk landscape, or your organisation’s expectations when it comes to adopting this tool.

For example, less risk-aware staff may accidentally be disclosing personal or confidential business information as part of AI prompts, with the risk that this information is vulnerable to subsequent leaking or privacy breaches.
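As an illustration of the kind of internal guardrail I mean, here’s a minimal sketch. Everything in it is assumed for the example: the identifier list would come from your own systems, and send stands in for whatever call your chosen AI tool exposes.

```python
import logging

# Sketch of a guardrail: before a prompt leaves the business, check it
# against identifiers you know are personal, and keep an audit log.
# The values below are invented for illustration.
KNOWN_IDENTIFIERS = {"jane.doe@example.com", "021 123 4567"}

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-prompt-audit")

def submit_prompt(prompt, send):
    """Block prompts containing known identifiers; log everything sent."""
    if any(identifier in prompt for identifier in KNOWN_IDENTIFIERS):
        log.warning("Prompt blocked: it contained a known identifier.")
        return None
    log.info("Prompt sent to external AI tool.")
    return send(prompt)
```

A crude check like this won’t catch everything, but it makes the policy visible to staff and leaves a record you can audit.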

You all know that your business is ultimately responsible for what your employees do, and you need to make sure you do everything possible to prevent your employees from breaching someone’s privacy.

Overall, it may help to think in terms of using AI in ways that uphold accountability of the people in your organisation, to your customers, and to the broader community.

Where people have responsible roles, whether as financial managers, teachers, or public servants, any use of AI tools should maintain and complement the responsibilities people have in their roles.

Talking with the people in these roles, and the people they work with, will often be critical for good use of AI.

Let’s talk about how personal information might get into AI tools.

A lot of the most popular AI tools have been trained based on information scraped from all over the Internet.

That’s the easiest way to get the large amounts of data used to train image and text generators, but it creates some challenges for privacy.

For better and worse, this training data is going to reflect the things that are on the Internet.

And on top of that, when you or members of your team use these tools, you are sending information as inputs for the model to respond to.

Those inputs can potentially contain personal information - information about your customers, your staff, and other people.

You need to be confident that you are appropriately tracking and managing the use of this personal information in line with the purpose for which it was collected.

Before sharing personal information to an AI tool, you should be confident that it falls within the scope of why it was collected in the first place.

Basically, you should be asking: would my customers or clients expect that I’d use their personal information in this way, based on what I told them when I collected it?
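One rough way to make that question operational – and this is a sketch under assumptions, not a prescribed method – is to tag each record with the purposes people were told about at collection, and check any proposed new use against those tags. The class, names, and purpose labels below are all invented.

```python
from dataclasses import dataclass, field

# Sketch of purpose tagging: each record carries the purposes people
# were told about at collection, and a guard checks any proposed new use.

@dataclass
class CustomerRecord:
    name: str
    email: str
    collection_purposes: set = field(default_factory=set)

def permitted(record, proposed_use):
    """Only allow uses that fall within the purposes of collection."""
    return proposed_use in record.collection_purposes

record = CustomerRecord(
    name="A. Customer",
    email="customer@example.com",
    collection_purposes={"order_fulfilment", "customer_support"},
)

# Feeding this record to an external AI tool is a new use: check it first.
if not permitted(record, "ai_model_training"):
    print("Stop: this use wasn't signalled when the information was collected.")
```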

As well as being transparent with people, the information privacy principles require a basic level of fairness.

You need to be confident that you are treating people’s information in ways that are lawful, fair and not unreasonably intrusive.

Before using and relying on AI tools, you should do some investigation of how reliable they will be.

It may be helpful to get some expert and independent perspectives from people who can ask good questions about how things go wrong.

Perhaps you have some people like that in your privacy, or legal, or risk, or cybersecurity teams.

If not, you might want to invest in finding external experts.

I also strongly recommend engaging with your community, and particularly with people in the community who are most likely to feel the effects from systems that are built overseas, and who may not be well represented in training data from outside New Zealand.

There may be other people and communities you work with who you should engage with as well; for example, if you are an employer putting employee data into an AI tool, what obligations do you have to your employees?

Have you ensured you are considering your employees’ privacy rights as well?

This is about building trust and avoiding nasty surprises.

And a great way to achieve trust is to build privacy in from the start.

The first step is to start thinking about privacy.

You’re all here listening to me talk about privacy, AI, and business so give yourselves a pat on the back for making a great first step.

The next step is to keep thinking about privacy – on that note, I’m delighted the EMA is focusing on seminars on AI at the moment.

Your team might want to change the direction of a project.

You might want to roll out a new technology across existing information.

To make good decisions about these activities, you’ll need to keep your privacy thinking and documentation up to date.

Make sure you have someone in your business who is responsible for privacy - a “Privacy Officer”.

This is also a statutory requirement under the Privacy Act.

You may have people in your organisation who are early adopters and enthusiasts, and they could inform you about these tools from their own experience.

They may also be able to help with your privacy thinking and identify any unintended consequences of using AI tools.

Is there a possibility of something like that going on inside your organisation without you knowing? I suggest you find out.

The way to avoid nasty surprises is to understand the issues and set clear expectations.

Think about the issues.

Decide your approach. Then tell people. Tell your team. Tell your customers. Transparency is key here. Negotiate with your providers of AI tools and understand their terms.

We’re at an early stage of using AI tools and developing best practices.

But there is a huge amount of excitement.

I have even heard the development of generative AI compared to the discovery of fire.

You might think that’s buying into the hype.

But what if we took the idea seriously for a moment?

What if we had fire but no fireplaces? No fire escapes? No fire alarms, or fire trucks? No 111 service?

People have lived with fire forever.

And we have lots of ways to understand fire, and keep ourselves safe, and yes, use sensible regulations to help us benefit from it.

We don’t give matches to six-year-olds.

We don’t light fires near dry scrub in summer.

A good fire needs the right fuel, in the right place, with the right safety measures.

Good AI tools need the right data, for the right purpose, with the right governance.

And starting with privacy is an excellent step in that direction.

My team is very keen to monitor developments in the use of AI tools and their privacy impacts.

If you’d like to talk with us about this work, please get in touch by email at ai@privacy.org.nz.

We’d be especially keen to talk about use-cases, about how people are using AI tools in New Zealand, and broader issues relating to AI and privacy.

Good practice around the use of AI will underpin all your efforts to build and maintain that all important digital trust.

So, if I could summarise, with a few headline statements:

  • Thinking about privacy is vital if you’re going to use AI tools well.
  • Spend time at the outset to understand how your proposed deployment of AI is using personal information.
  • Nest AI in your overall organisational and business strategy.
  • Safe and responsible use of AI is good for your customers and good for you.
  • It doesn’t take too much imagination to see the potential for a company to quickly damage a hard-earned relationship with customers, through poor use of AI.
  • When it comes to privacy, the Privacy Act is technology neutral and takes a principles-based approach.
  • While the technology is novel, the principles of data protection law remain the same.
  • Any proposed use of AI should involve carrying out a Privacy Impact Assessment first to identify privacy risk and the potential for privacy breaches.

My Office is focused on ensuring we have a smart privacy regulatory framework, with the right incentives.

And it’s all about innovation at the speed of trust.