Privacy has been on my mind a lot lately. From conferences like DefCon and Quantified Self to bitter arguments with my pro- and anti-Google friends, I have been engaging in a lot of discussion about privacy. I tend toward transparency myself, and I don’t actually feel that I have much worth hiding. So I don’t mind if Google’s new privacy policy lets them search not just my mail but my docs as well. My ad blocker is in place, so I feel unmolested. I am fascinated by my friends who try to maintain strict control over their personal data. It reminds me of the protagonists who go off the grid in John Twelve Hawks’s Traveler series. But since I assume my friends aren’t being chased by militant illuminati, I can’t really see the point.
I attended the 2012 Quantified Self conference, where there was much talk of the future of personal data. One attendee told me that the Silicon Valley attitude is that “privacy is dead – get over it.” Look at how the advertisers howled when Microsoft enabled “do-not-track” by default, not to mention the various privacy fiascos of Google and Facebook, among others. A struggle to maintain privacy is going on right now, and it may be that Silicon Valley is all too eager to disrupt it.
At his recent Long Now talk, Tim O’Reilly seemed resigned to the end of privacy. He suggested that our ubiquitous personal data should be treated like insider information: many people will have access to it, but we should have laws governing its acceptable use. I am skeptical that this will work. Misuse of personal data is harder to define and to detect than insider trading.
We had a good discussion of privacy at my Futurist meetup recently. One new attendee brought up the idea of vendor relationship management. In this model, consumers would subscribe to a service that allowed them to control which advertisements they wanted to see. I like the idea, but some advertisers will always be predatory and uninterested in the consumer’s consent. She also pointed out that many of us maintain multiple personas online depending on the context.
4chan founder Christopher Poole argues that this is one way that Twitter does a better job of identity management than Facebook or Google:
It’s not ‘who you share with,’ it’s ‘who you share as.’ Identity is prismatic. People think of anonymity as dark and chaotic, but human identity doesn’t work like that online or offline. We present ourselves differently in different contexts, and that’s key to our creativity and self-expression.
This contrasts strongly with Douglas Rushkoff’s view of online identity:
Our digital experiences are out of body. This biases us towards depersonalized behavior in an environment where one’s identity can be a liability.
But the more anonymously we engage with others, the less we experience the human repercussions of what we say and do. By resisting the temptation to engage from the apparent safety of anonymity, we remain accountable and present – and much more likely to bring our humanity with us into the digital realm.
Then O’Reilly chimes in with the assertion that trying to stay anonymous or to firewall our personas is futile anyway. The amount of data about all of us is exploding, and it is only going to get easier to access.
I think they are all correct to some degree. Poole is clearly onto something when he points out the contextual nature of identity: we all assume different identities in different contexts, even in real life (like Woody Allen’s Zelig). Rushkoff is also correct that non-anonymous communication can be more civil, and there should be a space for that online. However, even the Federalist Papers were written anonymously. There will always be things that need to be said that won’t be said if the personal cost is too high. Unfortunately, O’Reilly is also correct. I believe we can soon kiss all anonymity goodbye, and that bodes ill for disruptive or nonconformist discourse.
At DefCon this year, former NSA technical director William Binney accused the NSA of gathering information on all Americans. Governments around the world and throughout history have used data collection to squash dissent, and in this modern era more data is available than ever before. Narrow AI systems now pull needles of meaning out of these unimaginably mountainous haystacks of data.
The danger that ubiquitous government data surveillance poses to individuals ultimately depends on how important individuals really are. I subscribe to the view that the Arab Spring was more a result of food prices than of individual activists. There is a risk that irrational Hoover types in government with access to this data will misuse it, but rational leaders will do whatever is effective to stay in power. Making individual activists or other political troublemakers disappear won’t save the leaders if the real cause of unrest is hunger. Daniel Suarez argues in Freedom™ that the goal of torture is not to stop any individual but to scare the populace into submission. But the hungrier people are, the harder they will be to scare.
So my hope is that we will have rational despots at the top who won’t bother spying on activists or the general populace, because that isn’t an effective way to stay in power. I would hope they would take action against actual criminals, although with big data we are back to the precog problem: should we really tolerate a state that arrests citizens because the security apparatus’s AI thinks they will commit a crime soon? I would argue not, but there’s nothing I can do about it, so there is no need to stick me in a hole (OK, Big Brother?).
A further consequence of the death of privacy might be more insidious than troublemakers being stuck into holes. The more data that is collected about us, the more predictable we become. That data will probably enable our governments to manipulate our behavior more effectively, in ways that go beyond propaganda. Arguably, some advertisers are already using the data they have on us to influence our buying behavior; if that isn’t working, then Google AdWords has a whole lot of explaining to do. Not that I think all advertising is predatory and manipulative, but some percentage of it must be. That would be one explanation for all the consumer debt we are mired in.
I don’t want to paint an utterly bleak view of the loss of privacy. Kevin Kelly posited that this “quantified century” of data collection is rapidly expanding the recent invention called the “self.” The very definition of self is becoming richer and more articulated. In some sense we are trading privacy for personalization: as we reveal more data about ourselves, we can engage with others in new ways and discover new facets of ourselves. Maybe we will even see ads for things we really want and can afford.
Kelly also suggested that this ocean of data is forming a vast commonwealth. Meaning and value will be derived by building relationships between various data streams. These synthesized data streams will flow back into the commonwealth to enrich it.
Tim O’Reilly wasn’t worried about the loss of privacy, as long as it is accompanied by an increase in tolerance for other people. Tolerance. It’s not a bad way to cushion the blow once privacy passes away.