I was recently lucky enough to attend the Asian Privacy Scholars Network 5th International Conference, hosted by the Business School at the University of Auckland.
The inspiring line-up of privacy thinkers from around the world included the Honourable Michael Kirby, Prof Kiyoshi Murata of Japan’s Meiji University, and Prof Dr Sarah Hosell of the University of Applied Sciences in Cologne. You can find out more about the speakers and topics here. Their presentations will also be published in due course.
Snapchat and sexting
One outstanding privacy commentator was Prof Woodrow Hartzog of Samford University, Alabama. Prof Hartzog is the Starnes Professor of Law at Cumberland School of Law and an Affiliate Scholar at the Center for Internet and Society at Stanford Law School. He spoke about his upcoming book, Privacy's Blueprint: The Battle to Control the Design of New Technologies.
Prof Hartzog began his presentation with the example of Snapchat, a smartphone application whose design is an open invitation to send sensitive information: its picture messages disappear within seconds of the recipient opening them. When Prof Hartzog asked what the purpose of such an app might be, there were delighted calls of "sexting!" from the mostly middle-aged scholarly audience.
Third-party apps appeared soon after the advent of Snapchat, giving users ways to capture the images before they disappeared. Inevitably, this led to the data breach known as ‘The Snappening’. But shouldn’t Snapchat have been prepared for this eventuality?
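The point lands harder if you look at where an ephemeral design's control actually ends. Here is a minimal sketch (all names hypothetical, not Snapchat's actual architecture) of delete-on-read, time-limited messaging: the service can destroy its own copy, but nothing in the design stops a recipient's device, or a third-party client, from keeping what it has already been shown.

```python
# A sketch of "ephemeral by design" messaging: the server deletes a
# picture once it is opened, or when a time-to-live expires. What the
# design cannot enforce: once the image reaches a recipient, a
# screenshot or third-party client can keep it, which is exactly the
# gap 'The Snappening' exploited.

import time
from dataclasses import dataclass


@dataclass
class Snap:
    image: bytes
    expires_at: float  # absolute time after which the server discards it


class EphemeralStore:
    def __init__(self, ttl_seconds: float = 10.0):
        self.ttl = ttl_seconds
        self._snaps: dict[str, Snap] = {}

    def send(self, snap_id: str, image: bytes) -> None:
        self._snaps[snap_id] = Snap(image, time.time() + self.ttl)

    def open(self, snap_id: str) -> bytes | None:
        snap = self._snaps.pop(snap_id, None)  # delete-on-read
        if snap is None or time.time() > snap.expires_at:
            return None  # already viewed or expired
        return snap.image  # the server's control ends here
```

In other words, ephemerality here is a promise the server makes about its own storage, not a property the design can guarantee end to end, so a breach of the kind described above should have been a foreseeable design risk.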
Hacks and data breaches
Recently, there have been many other hacks and data breaches in the news media - Ashley Madison, the Australian Census site, Hacking Team and Yahoo, to name a few - and yet we see agencies applying sticking-plaster solutions, and some governments even acting to criminalise ‘white hat’ (or ethical) hackers who work to expose vulnerabilities safely and alert the relevant agency.
What’s the answer? Prof Hartzog makes three broad points:
Making Privacy by Design meaningful
There are huge gaps in privacy law concerning the design of new technology, and Privacy by Design (PBD) has a long way to go before it reaches the universal acceptance it deserves, according to Hartzog. Furthermore, we need to make sure PBD is a meaningful concept and not just a slogan.
Prof Hartzog says privacy's three basic rules are: give people control over their personal information, don't lie to them, and don't harm them.
But what do these three aspirational points mean in the real world? How can people control what they don't understand? How can you understand what you are consenting to with a single click as you eagerly wait to use your new app? And how realistic is it to go back and check the 50 apps you already have on your phone?
Also, while designers might not deliberately tell lies, what about burying the important stuff in the usual "accept all" step you have to click through before downloading a new app?
And finally, how do we define harm? In New Zealand, we have a definition in our Privacy Act and some guidance from the Human Rights Review Tribunal, particularly following this precedent-setting case, and others like this one. But harm can be difficult to attribute to a single cause when your personal information is leaking from numerous sources.
Prof Hartzog says the big problem is the overwhelming incentive to design technology which maximises the collection, use, and disclosure of personal information. The value of personal information encourages a “collect first, ask questions later mentality” which marginalises the virtue of being transparent.
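To make that contrast concrete, here is a minimal sketch (hypothetical names throughout, not drawn from Prof Hartzog's book) of the two design postures: a collect-first profile that hoovers up fields "just in case", versus a data-minimisation default that takes only what the service needs and makes any extra sharing an explicit opt-in.

```python
# Two design postures for the same sign-up feature.

from dataclasses import dataclass, field


# Collect-first design: grab every field the form offers, decide later.
@dataclass
class CollectFirstProfile:
    email: str
    name: str
    birthdate: str  # not needed to deliver the service
    location: str   # not needed to deliver the service
    contacts: list = field(default_factory=list)  # harvested "just in case"


# Privacy-by-design alternative: collect only what the feature requires,
# and make any extra sharing an explicit, revocable opt-in.
@dataclass
class MinimalProfile:
    email: str                     # required to operate the account
    share_analytics: bool = False  # off unless the user turns it on


def sign_up(email: str, share_analytics: bool = False) -> MinimalProfile:
    """Create an account from the minimum data needed to provide it."""
    return MinimalProfile(email=email, share_analytics=share_analytics)


if __name__ == "__main__":
    user = sign_up("alice@example.com")
    print(user)  # MinimalProfile(email='alice@example.com', share_analytics=False)
```

The design choice is the point: the minimal version has nothing sensitive to leak or disclose later, while the collect-first version bakes the incentive problem into its data model from day one.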
While there are some good examples of privacy-protective design, many new digital products and services are not good enough and erode our privacy rights.
In short, the design of new information technologies is failing us.
Three values for design
The three values of Prof Hartzog’s blueprint for designing for privacy are trust, obscurity and autonomy. These three values are intertwined. Autonomy is furthered as a design value when privacy law nurtures technologies that protect our ability to trust and maintain obscurity. Trust and obscurity are complementary values. Trust protects our information within relationships. Obscurity protects us when there is no one to trust.
He also says designers need to design to standards so their products are not deceptive, abusive or dangerous. Lawmakers and the courts need the right tools to discourage deceptive, abusive or dangerous design. These tools vary in strength from soft to moderate to robust. Robust responses should be used to confront the most serious privacy design problems. Lawmakers should seek balance and fit when choosing the appropriate legal response. Their toolbox should include privacy enhancing technologies, education, investigations and enforcement, fines and penalties and international collaboration. If you have others, we welcome your suggestions.
In conclusion, Woodrow Hartzog is a bit of a privacy hero with some really cool ideas. You can follow him on Twitter at @hartzog. When his book is published in 2017, I will be reading it.
Image credit: Red and white bullseye design by Peter Kratochvil