
Guest post: Protecting privacy by blocking the creepy human factor

Andrew Chen, 9 May 2019 at 11:58


Once upon a time, we didn’t collect data about people because we didn’t have the technical means to do so. Computers weren’t fast enough, sensors weren’t small enough, and storage wasn’t cheap enough. As technology has improved, it has enabled a superabundance of data – more data being collected means more data being stored and transmitted, which in turn means more data being used.

Companies now have an established default mentality of “collect data now, figure out applications later”. We have so much data, and there are many new applications of that data that were previously unimaginable – but along with the good also comes the bad.

Humans cause breaches

In 2014, the head of Uber’s New York office was caught using an internal company tool called “God View” to track the location of a journalist without her permission. The tool was widely available to employees, and a whistle-blower later revealed that employees used it to spy on “high-profile politicians, celebrities, and even personal acquaintances” in real time. A new level of creepiness had been enabled by the proliferation of smartphones and GPS-tracking systems.

Depending on which report you read and which jurisdiction it covers, an estimated 35-55 percent of reported data privacy breaches result from internal human actions rather than external hackers. Sometimes it’s people accidentally leaving files on the train, but other times it is people looking up the details of their ex-spouses or finding targets for crime – and these are just the cases where people get caught. In the case of God View, there is an argument that developing the tool was necessary for monitoring and testing the functionality of the Uber app – but the humans around the system went beyond its intended purpose and took advantage of the data and information available to them.

Changing the architecture

A privacy-affirming architecture offers a technical design solution for the future. As systems become more automated and the accuracy of technologies like machine learning, computer vision, and data analytics improves, we can make it harder for humans to access data. A privacy barrier can be created by using a computer to process the raw data coming from sensors, anonymising the underlying data to protect the privacy of individuals, and then passing only the processed information through to humans. That privacy barrier separates the personal, privileged data from what humans can access. The raw data can then be deleted, never to be manipulated or misused by humans.
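To make the idea concrete, here is a minimal sketch of a privacy barrier in Python. The names (analyse_frame, privacy_barrier, ProcessedInfo) and the hard-coded output are illustrative assumptions rather than any real product’s API: automated processing turns raw sensor data into an anonymised summary, humans only ever see that summary, and the raw data is discarded straight away.

```python
# A minimal sketch of a privacy barrier (assumed names, not a real API):
# raw sensor data is processed by a machine, only anonymised results reach
# humans, and the raw data is discarded immediately.
from dataclasses import dataclass


@dataclass
class ProcessedInfo:
    """Anonymised, aggregate output that is safe to show to humans."""
    description: str
    count: int


def analyse_frame(raw_frame: bytes) -> ProcessedInfo:
    """Stand-in for automated processing such as a computer vision model.
    A real system would run inference here; this sketch returns a
    hard-coded aggregate so it stays self-contained."""
    return ProcessedInfo(description="people currently in store", count=3)


def privacy_barrier(raw_frame: bytes) -> ProcessedInfo:
    """The only output humans (and human-facing tools) can access."""
    info = analyse_frame(raw_frame)
    # Delete the raw, privileged data as soon as it has been processed;
    # nothing identifying is stored or handed to operators.
    del raw_frame
    return info


if __name__ == "__main__":
    frame = b"\x00" * 1024          # pretend this is a raw camera frame
    print(privacy_barrier(frame))   # humans only ever see the summary
```

Because only the processed summary crosses the barrier, a “God View” style tool built on top of it would have nothing identifying left to show.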

Let’s take the example of a camera surveillance system in a retail store, which is there to detect shoplifters (also known as loss prevention). In the past, cameras were watched by human security officers. It’s a relatively boring job when most shoppers aren’t stealing anything, and it’s easy to miss cases when you’re watching 20 screens at once. Current technology is improving some of these processes – footage can easily be segmented by security officers and sent to law enforcement authorities, and known prolific shoplifters can be detected and intercepted. But in most cases, the surveillance camera footage is still being recorded and stored, and humans can still get access to that footage and put it towards less-than-honourable purposes.

Sensing shoplifting

What if the camera itself could be smart enough to detect when shoplifting is happening? Rather than requiring a human to interpret the video footage, computer vision researchers are already working on algorithms that can understand the types of human motion associated with shoplifting. Could the camera become a “shoplifting sensor” with a narrower purpose, generating alerts for security officers, and only storing footage when a shoplifting event has been detected? This would mean that the rest of the time, when nothing is happening, no footage is being stored, and there is no video for a human to see. A bored security officer can’t voyeuristically watch people shop, and the footage can’t be used to embarrass shoppers or track their purchases. People who are just shopping suffer less privacy loss – the phrase “if you’re not doing anything wrong, you have nothing to hide” might become “if you’re not doing anything wrong, we won’t be looking”.
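As a rough sketch of how such a sensor might work in code (the looks_like_shoplifting classifier is a hypothetical stand-in for a real computer vision model, and the buffer length and frame rate are arbitrary assumptions): frames are held briefly in memory, a clip is written out only when the model flags an event, and everything else ages out without ever being stored.

```python
# Sketch of a "shoplifting sensor" camera loop. looks_like_shoplifting() is a
# hypothetical placeholder for a motion-analysis model; get_frame and
# save_clip would be supplied by the surrounding system.
import collections
import time

BUFFER_SECONDS = 10   # assumed length of the short in-memory buffer
FPS = 5               # assumed frame rate


def looks_like_shoplifting(frame: bytes) -> bool:
    """Placeholder classifier; a real system would run computer vision here."""
    return False


def run_camera(get_frame, save_clip, max_frames: int = 100) -> None:
    # Frames live only in this bounded in-memory buffer.
    buffer = collections.deque(maxlen=BUFFER_SECONDS * FPS)
    for _ in range(max_frames):
        frame = get_frame()
        buffer.append(frame)
        if looks_like_shoplifting(frame):
            # Only the footage around a detected event is ever persisted
            # and shown to a security officer.
            save_clip(list(buffer))
            buffer.clear()
        # Otherwise the frame simply ages out of the buffer and is never
        # written to disk, so there is no video for a human to see.
        time.sleep(1 / FPS)
```

The design choice here is that storage, not just access, is gated by the detection model: when nothing is flagged, there is simply no recording to misuse.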

Hold less data

The shoplifting sensor is an example of how technology can be used to help protect our privacy – using algorithms to block the collection of data that isn’t needed for a specific purpose. Holding less data about people mitigates the privacy risks, while still collecting enough information to get the job done.

Of course, this architectural design choice is not without its limitations. It is still up to system owners to decide whether to implement data capture systems in this way, and we need to be confident that the automated processing is doing its job accurately and consistently.

The design choice also doesn’t exist in a vacuum – without regulatory pressure or demands from customers, there may be little incentive for system owners to build things this way. But if systems were designed like this, it could be a step towards winding back the clock, back to a time when there was less data about everyone and everything, and when people had less reason to worry about losing their privacy.

Andrew Chen is a Research Fellow with the Centre For Science in Policy, Diplomacy, and Society at the University of Auckland. You can find out more about privacy-affirming architectures here.

Image credit: Creepy - via Pixabay 
