
Panic in Privacy City
Tim Henwood
26 November 2015 at 15:17

Panic

Some of you will be familiar with Gartner’s ‘hype cycle’: the idea that each promising new technology goes through a similar set of phases before it is widely adopted. It often starts with a hiss and a roar, scales the peak of inflated expectations, drops into the trough of disillusionment, then claws its way up the slope of enlightenment until it finally reaches the plateau of productivity.

Daniel Castro and Alan McQuinn recently argued that you could chart a parallel course for the way new technologies attract privacy speculation.

Scaling the peak

Step one: A new technology is developed. For now, only experts really understand it; it features more in academic papers than in the media, though people with their ear to the ground probably know about it. Inventors, innovators, designers and engineers are getting their hands on it and figuring out how to monetise it. Word starts to get around about its capabilities, and people get excited.

Step two: “Privacy activists” get wind of the new technology and come to ruin everyone’s fun. They spread warnings about worst-case scenarios which get picked up by the mainstream media. In turn, public perceptions start to shift.

These stories generally invoke Arken’s Law - a version of Godwin’s Law which says ‘any discussion is over when present society is compared to George Orwell's 1984’.

The reception of Google Glass - and the rise of the term ‘Glasshole’ - is a prime example of this phase in action. More recently you might have seen stories about Samsung’s ever-vigilant ‘smart’ TVs or Cortana’s data collection in Windows 10.

The authors say behavioural advertising and facial recognition are up near the top of the peak. Wearable technology like Fitbits, along with drones and the broader internet of things, is climbing too.

Climbing back down

Once the technology becomes more widespread, once we start to see real-world applications and realise it might not be so bad, we reach step three. This is when the fear deflates. It is punctuated by ‘micro panics’ over time, but the general direction of anxiety is down.

Castro and McQuinn argue that the fears raised in the previous phase generally turn out to be unfounded, and that the worst-case scenarios generally don’t happen. Society as a whole gets on with adopting the new technology; it becomes more widespread and is seen as reaching maturity.

Privacy activist fears are largely forgotten by now, they say, because the sky didn’t actually fall.

Castro and McQuinn go on to argue that everyone also fails to learn the lesson that the pesky privacy pessimists were wrong and shouldn’t have been listened to. This, in turn, leaves society exposed to being swept up in another privacy panic cycle when the next new technology comes along.

So that’s a thumbnail sketch of the cycle as they see it. But what Castro and McQuinn ignore is the role that this ‘privacy activism’ plays. Nothing happens in a vacuum.

The bigger picture

What’s important to recognise is that the impact of each privacy ‘scare’ plays a part in ensuring that future developments are more privacy-friendly.

You can’t dismiss these panics out of hand as hype. It’s not all imagination run wild; more often than not it is the result of independent researchers toiling away behind the scenes. Look at the Jeep hack: it wasn’t a one-off event. It was a provocative demonstration in response to years of warnings and security research going unheeded.

Privacy warnings also form a vital part of risk assessment. When business and government assess the risks of a project, even a risk that is high-impact but unlikely still gets put in the risk register.

Panics are not simply unfulfilled prophecies, but form part of the overall process of developing privacy laws and norms. Dire warnings play a role in finding an equilibrium that pulls an issue from peak privacy panic to something most people are comfortable with.

It’s not helpful to just say “don’t panic, it’ll be ok”. You have to recognise the value of that reasoned concern and the role it plays. Good privacy analysis means recognising the value in that panic - or even predicting it - and designing solutions that prevent those fears being realised.

The blast craters that ‘privacy panics’ leave behind help shape approaches for the future. They show businesses and innovators where the danger zones are and how to avoid them.

Image: Crowd of people at the door of a bank in Berlin, Germany, at the beginning of World War I. From the George Grantham Bain Collection (US Library of Congress), Creative Commons.
