Privacy is Dead – Let's Hope for Tolerance

Privacy has been on my mind a lot lately.  From conferences like DefCon and Quantified Self, to the bitter arguments with my pro- and anti-Google friends, I have been engaging in a lot of discussion about privacy.  I tend toward transparency myself, and I actually don’t feel that I have much worth hiding.  So I don’t mind if Google’s new privacy policy lets them search not just my mail, but my docs as well.  My adblock is in place, so I feel unmolested.  I am fascinated by my friends who try to maintain strict control over their personal data.  It reminds me of the protagonists who go off the grid in the John Twelve Hawks Traveler series.  But since I assume my friends aren’t being chased by militant Illuminati, I can’t really see the point.

I attended the 2012 Quantified Self conference, and there was much talk of the future of personal data.  I was told by one attendee that the Silicon Valley attitude is that “privacy is dead – get over it.”  Look at how the advertisers howled when Microsoft enabled “do-not-track” by default, not to mention the various privacy fiascos of Google and Facebook, among others.  A struggle to maintain privacy is going on right now, and Silicon Valley may be all too eager to disrupt it.

At his recent Long Now talk, Tim O’Reilly seemed to be resigned to the end of privacy.  He suggested that our ubiquitous personal data should be treated like insider information.  Many people will have access to it, but we should have laws governing the acceptable use of this information.  I am skeptical that this will work.  Misuse of personal data is harder to define and to detect than insider trading.

We had a good discussion of privacy at my Futurist meetup recently.  One new attendee brought up the idea of vendor resource management.  In this model, consumers would subscribe to a service that allowed them to control which advertisements they wanted to see.  I like the idea, but some advertisers will always be predatory and uninterested in the consent of the consumer.  She also pointed out that many of us maintain multiple personas online depending on the context.

4chan founder Christopher Poole argues that this is one way that Twitter does a better job of identity management than Facebook or Google:

It’s not ‘who you share with,’ it’s ‘who you share as.’ Identity is prismatic.  People think of anonymity as dark and chaotic, but human identity doesn’t work like that online or offline. We present ourselves differently in different contexts, and that’s key to our creativity and self-expression.

This actually contrasts strongly with Douglas Rushkoff’s view of online identity:

Our digital experiences are out of body. This biases us towards depersonalized behavior in an environment where one’s identity can be a liability.

But the more anonymously we engage with others, the less we experience the human repercussions of what we say and do. By resisting the temptation to engage from the apparent safety of anonymity, we remain accountable and present – and much more likely to bring our humanity with us into the digital realm.

Then O’Reilly chimes in with the assertion that trying to stay anonymous or to firewall our personas is futile anyway.  The amount of data about us all is exploding, and it’s going to get easier and easier to access.

I think that they are all correct to some degree.  Poole is clearly onto something when he points out the contextual nature of identity.  We all do assume different identities in different contexts, even in real life (like Woody Allen’s Zelig).  Rushkoff is also correct that non-anonymous communication can be more civil, and there should be a space for that online.  However, even the Federalist Papers were written anonymously.  There will always be some things that need to be said that won’t be if the personal cost is too high.  Unfortunately, O’Reilly is also correct.  I believe we can soon kiss all anonymity goodbye.  And that bodes ill for disruptive or nonconformist discourse.

At DefCon this year, former NSA technical director William Binney accused the NSA of gathering information on all Americans.  Governments around the world and throughout history have used data collection to squash dissent, and in this modern era more data is available than ever before.  Narrow AI systems now pull needles of meaning from these unimaginably mountainous haystacks of data.

The danger that ubiquitous government data surveillance poses to individuals ultimately depends on how important individuals really are.  I subscribe to the view that the Arab Spring was more a result of food costs than of individual activists.  There is a risk of irrational Hoover types in government with access to this data who will misuse it.  But rational leaders will do what is effective to stay in power.  Making individual activists or other political troublemakers disappear won’t save the leaders if the real cause of unrest is hunger.  Daniel Suarez makes the argument in Freedom™ that the goal of torture is not to stop any individual but to scare the populace into submission.  But the hungrier people are, the harder they will be to scare.

So my hope is that we will have rational despots at the top who won’t bother spying on activists or the general populace, because that isn’t an effective way to stay in power.  I would hope that they would take action against actual criminals, although with big data we are back to the precog problem.  Should we really tolerate a state that arrests citizens because the security apparatus AI thinks they will commit a crime soon?  I would argue not, but there’s nothing I can do about it, so there is no need to stick me in a hole (Ok, Big Brother?).

A further consequence of the death of privacy might be more insidious than troublemakers getting stuck into holes.  The more data that is collected about us, the more predictable we will be.  This data will probably enable our governments to manipulate our behavior more effectively in ways that go beyond propaganda.  Arguably, some advertisers have been using the data they have on us now to influence our buying behavior.  If that isn’t working, then Google AdWords is going to have a whole lot of explaining to do.  Not that I think all advertisement is predatory and manipulative, but some percentage of it must be.  That would be one explanation for all this consumer debt we are mired in.

I don’t want to paint an utterly bleak view of the loss of privacy.  Kevin Kelly posited that this “quantified century” of data collection is rapidly expanding the recent invention called the “self.”  The very definition of self is becoming richer and more articulated.  In some sense we are trading privacy for personalization.  As we reveal more data about ourselves, we can engage with others in new ways and discover new facets of ourselves.  Maybe we will even see ads for things we really want and can afford.

Kelly also suggested that this ocean of data is forming a vast commonwealth.  Meaning and value will be derived by building relationships between various data streams.  These synthesized data streams will flow back into the commonwealth to enrich it.

Tim O’Reilly wasn’t worried about the loss of privacy, as long as it was accompanied by an increase in tolerance for other people.  Tolerance.  It’s not a bad salve to cushion the blow once privacy passes away.

8 thoughts on “Privacy is Dead – Let's Hope for Tolerance”

  1. Thanks Scott. My favorite part of this post is where you beg Big Brother not to put you in a hole. I know you were kidding, but not completely, right? Anything you say can and will be used against you… forever! Our political context can change. What is safe today can be dangerous tomorrow. Many of us have at least a faint awareness of risk, and this causes self-policing. Even the robots have eyes.

    For a great description of mechanical surveillance and self-policing, see Ryan Calo’s paper on “Robots and Privacy.” (PDF here: http://cyberlaw.stanford.edu/publications/robots-and-privacy) His other papers are great, too.

    We really need to fight hard against the policeman in our heads, and the ideology of “privacy is dead” is the blunt end of his billy club.

    • Gary,

      You bring up an excellent point about self-censorship. I would feel more free to speak my mind if I used an identity other than “Scott Jackisch,” but the impact of my words might be less. I would say that all conceptual frameworks should be checked occasionally to make sure that they are not unduly constraining our actions. But I will heed your advice and keep a close eye on the idea that “privacy is dead.”

      Thanks for taking the time to comment. I will definitely check out Calo’s papers on this.

      Scott

    • Gary, after reading some of Calo’s paper “Robots and Privacy,” I think I understand your comments in a new light. Calo talks about how social robots:

      may reduce the dwindling opportunities for interiority and self-reflection that privacy operates to protect.

      The mechanism by which this happens is that:

      people are hardwired to react to heavily anthropomorphic technologies such as robots as though a person were actually present, including with respect to the sensation of being observed and evaluated.

      This even extends to such trivial “anthropomorphic technologies” as putting a picture of eyes above a donation cup for coffee to increase contributions. And it may well extend to holding the idea in our heads that we are constantly being observed, i.e. “privacy is dead” (otherwise known as paranoia).

      the false suggestion of a person’s presence causes measurable physiological changes, namely, a state of “psychological arousal” that does not occur when one is alone… Privacy provides “a respite from the emotional stimulation of daily life” that the presence of others inevitably engenders

      So in some sense taking too pessimistic a view on this topic could actually harm your psychological life. Ironically, reading Calo’s paper didn’t relieve me of my pessimism. 😉

  2. Merriam-Webster defines privacy as:

    a : the quality or state of being apart from company or observation : seclusion
    b : freedom from unauthorized intrusion

    So by that definition, privacy is not dead. There’s no camera on me now, I doubt that my machine is infected with a keylogger, and nobody knows what I am doing right now besides me. In that sense, privacy is very much alive.

    Now you could argue that privacy is dying. It is already basically dead in some situations (like riding an airplane), and corporations, especially the newer tech ones, need to use people’s personal data to make good business decisions, so they have an incentive to invade your privacy more; and since they basically control politics, they’ll get what they want.

    But I think there will end up being some notion of a basic right to privacy; how it will be defined, I don’t know. I know that Google used to let Facebook access Gmail to let you discover friends on Facebook, but they no longer do so. So Google and Facebook don’t agree on how privacy should be handled, and they aren’t actually sharing information with each other (I think people’s fear of the death of privacy is based on the assumption that all private information will be owned by one monolithic entity). So different corporations will have different ideologies about what privacy people should and shouldn’t have, but I still don’t think privacy will ever completely die. The data extracted by corporations will be analyzed to make decisions, and that analysis process will be kept top secret so the competition doesn’t get hold of it.

    • Robin, I appreciate your position. I would point out that as you wrote that e-mail, Google’s algorithms performed something analogous to observation on your text. Google certainly has the right to observe and share that data with its partners according to its privacy policies. Many newer laptops do have cameras, so there may well have been a camera on you, and it’s not clear to me how you know that no malware is installed on your machine. After attending Black Hat and DefCon this year, I don’t even trust my BIOS anymore.

      But I get your point. Privacy isn’t really dead right now. With the title I was really trying to address the idea that privacy is dead, which might be widely held in Silicon Valley. I don’t think that all data will be jealously guarded to maintain corporate advantage. A lot of data will be collected that has little value to the controllers of that data, such as surveillance camera footage. Also, there is a lot of benefit to be derived from sharing data. Google might not want to play nice with Facebook or Apple right now, but I am sure they have plenty of partners to share your data with. That’s why the privacy policy explicitly states that they can do so.

      Thanks for taking the time to read the article and argue your position.

  3. Why do people expect that precog technology will be accompanied by stagnation in crime-stopping techniques?

    Why won’t the precog police mostly just have something whisper in the pre-criminal’s ear “we’re watching you”?

    • You’re right, Peter. Something like a subtle warning in the ear would probably be pretty effective in combating actual crime once the prediction models are good enough.

  4. Pingback: Black Hat & DEF CON 2013 – Privacy, Security, and AI | The Oakland Futurist
