Singularity Summit 2012 Day 1

I attended the Singularity Summit again this year; I believe this is my third, having previously attended in 2008 in San Jose and 2010 in San Francisco.  I have a tendency to overdo content consumption at these conferences, so this time I made a conscious effort to spend more time meeting and talking with attendees and less time attending the talks.  The Singularity Summit does a good job of posting videos of the talks on its website, so I will catch up on the presentations later.

One speaker I did see was 23andMe founder Linda Avey.  She has a new company called Curious, which will be a personal data discovery platform for us QS types.  She mentioned some very interesting devices that I want to check out.  She talked about a GSR/HRV monitoring patch system that can interact with an ingestible pill sensor, but I didn’t catch the manufacturer.  She also mentioned a patch being developed by Sano Intelligence that monitors interstitial tissue in real time to provide an API to the bloodstream.  Don’t worry, she insists that you will barely notice the micro-needle.  I definitely want that.

Avey also talked about telomere measurement to monitor stress, and about microbiome sequencing.  Gut flora is getting lots of attention lately.  As a subscriber to the embodiment thesis, I am all for digging into the cognitive impact of the gut.  Ah, so much to quantify and so little time.

So I skipped all the other talks and just talked with people.  What did I learn?  Anders Sandberg is cool and had some interesting things to say about surveillance and neuronal stimulation.  I will be peering at his blog the next chance I get.  My pal Bill Jarrold told me to check out “The How of Happiness.”  And so I shall put aside my skepticism about the importance of happiness; Bill is never full of shit.

I sat at a table with Eliezer Yudkowsky and Luke Nosek, which was fun.  Eliezer noted with some disappointment that the Gates Foundation wasn’t contributing anything to GiveWell.org, but he didn’t admit to being surprised by this.  Nosek talked a little about a new Founders Fund AI startup, vicarious.com, with Dileep George of Numenta fame.  He also shared a bit of his VC strategy and suggested that it was important to pick non-crazy founders even if their ideas seemed crazy.  (Hint: you can tell them by the mechanism they use to explain and rationalize their crazy ideas.)

Eliezer took some exception to the premise that ideas are less important to a startup’s success than its founders.  He wondered if VCs had this bias just because they were bad at detecting good ideas.  I will have to side with Nosek and Paul Graham on this one.  A good idea requires execution, a seemingly crazy idea requires actual sanity, and a bad idea might lead you to a better idea if you know how to fail properly.  I came across these quotes on planning, which may be germane, business ideas being plans of sorts:

Those who plan do better than those who do not plan even though they rarely stick to their plan. – Winston Churchill, British Prime Minister

It is not the strongest of the species that survive, not the most intelligent, but the one most responsive to change. – Charles Darwin, scientist

At the same table, I got to hear about David Dalrymple’s project to map all neural pathways of a nematode.  Yep, I think you heard that correctly.  Don’t feel bad if you confused this with the OpenWorm project.  Dalrymple is also in on that one.

I got to meet the marvelous but shy (HA!) Razib Khan and thoroughly enjoyed hanging out with him.  He shared some interesting views about the domestication of humans, such as: it’s making our brains smaller!  I asked if this was a necessary adaptation.  I assume that big-brained primitive man would be less cooperative and less able to survive in this modern time.  Razib conceded the point and mentioned examples of older populations dying in high numbers when introduced into cities, possibly due to immune system differences.  Which raises the question: did our brains shrink to divert energy to our immune system, or even to our gut?  He was more open to the gut hypothesis when pressed to venture a guess.  I hear his blog, Gene Expression, is quite good.

Let’s see, what other notes did I take?  Ah yes: Colin Ho showed us a cool hacked up lifelogger camera.  I want to host a video blog discussing topics of interest with my friends and have each participant wear something like this.  We could edit the video feeds together to show multiple perspectives throughout the conversation.   Technically challenging, but it might be worth it.

Finally, I spent some time talking with Alex Peake, who shared his vision of how to accelerate the singularity.

Privacy is Dead – Let's Hope for Tolerance

Privacy has been on my mind a lot lately.  From conferences like DefCon and Quantified Self, to the bitter arguments with my pro- and anti-Google friends, I have been engaging in a lot of discussion about privacy.  I tend toward transparency myself, and I actually don’t feel that I have much worth hiding.  So I don’t mind if Google’s new privacy policy lets them search not just my mail, but my docs as well.  My adblock is in place, so I feel unmolested.  I am fascinated by my friends who try to maintain strict control over their personal data.  It reminds me of the protagonists who go off the grid in the John Twelve Hawks Traveler series.  But since I assume my friends aren’t being chased by militant illuminati, I can’t really see the point.

I attended the 2012 Quantified Self conference, and there was much talk of the future of personal data.  I was told by one attendee that the Silicon Valley attitude is that “privacy is dead – get over it.”  Look at how the advertisers howled when Microsoft enabled “do-not-track,” not to mention the various privacy fiascos of Google and Facebook, among others.  There is a struggle to maintain privacy going on now, and Silicon Valley may be all too eager to disrupt it.

At his recent Long Now talk, Tim O’Reilly seemed to be resigned to the end of privacy.  He suggested that our ubiquitous personal data should be treated like insider information.  Many people will have access to it, but we should have laws governing the acceptable use of this information.  I am skeptical that this will work.  Misuse of personal data is harder to define and to detect than insider trading.

We had a good discussion of privacy at my Futurist meetup recently.  One new attendee brought up the idea of vendor resource management.  In this model, consumers would subscribe to a service that allowed them to control which advertisements they wanted to see.  I like the idea, but some advertisers will always be predatory and uninterested in the consent of the consumer.  She also pointed out that many of us maintain multiple personas online depending on the context.

4chan founder Christopher Poole argues that this is one way that Twitter does a better job of identity management than Facebook or Google:

It’s not ‘who you share with,’ it’s ‘who you share as.’ Identity is prismatic.  People think of anonymity as dark and chaotic, but human identity doesn’t work like that online or offline. We present ourselves differently in different contexts, and that’s key to our creativity and self-expression.

This actually contrasts strongly with Rushkoff’s view of online identity:

Our digital experiences are out of body. This biases us towards depersonalized behavior in an environment where one’s identity can be a liability.

But the more anonymously we engage with others, the less we experience the human repercussions of what we say and do. By resisting the temptation to engage from the apparent safety of anonymity, we remain accountable and present – and much more likely to bring our humanity with us into the digital realm.

Then O’Reilly chimes in with the assertion that trying to be anonymous or firewall personas is futile anyway.  The amount of data about us all is exploding and it’s going to get easier and easier to access.

I think that they are all correct to some degree.  Poole is clearly onto something when he points out the contextual nature of identity.  We all assume different identities in different contexts, even in real life (like Woody Allen’s Zelig).  Rushkoff is also correct that non-anonymous communication can be more civil, and there should be a space for that online.  However, even the Federalist Papers were written anonymously.  There will always be some things that need to be said that won’t be if the personal cost is too high.  Unfortunately, O’Reilly is also correct.  I believe we can soon kiss all anonymity goodbye.  And that bodes ill for disruptive or nonconformist discourse.

At DefCon this year, former NSA technical director William Binney accused the NSA of gathering information on all Americans.  Governments around the world and throughout history have used data collection to squash dissent, and in this modern era more data is available than ever before.  Narrow AI systems pull needles of meaning from these unimaginably mountainous haystacks of data now.

The danger that ubiquitous government data surveillance poses to individuals depends on how important individuals really are.  I subscribe to the view that the Arab Spring was more a result of food costs than of individual activists.  There is a risk of irrational Hoover types in government with access to this data who will misuse it.  But rational leaders will do what is effective to stay in power.  Making individual activists or other political troublemakers disappear won’t save the leaders if the real cause of unrest is hunger.  Daniel Suarez makes the argument in Freedom™ that the goal of torture is not to stop any individual but to scare the populace into submission.  But the hungrier people are, the harder they will be to scare.

So my hope is that we will have rational despots at the top that won’t bother spying on activists or the general populace because that isn’t an effective way to stay in power.  I would hope that they would take action against actual criminals, although with big data we are back to the precog problem.  Should we really tolerate a state that arrests citizens because the security apparatus AI thinks they will commit a crime soon?  I would argue not, but there’s nothing I can do about it, so there is no need to stick me in a hole (Ok, Big Brother?).

A further consequence of the death of privacy might be more insidious than troublemakers getting stuck into holes.  The more data that is collected about us, the more predictable we will be.  This data will probably enable our governments to manipulate our behavior more effectively in ways that go beyond propaganda.  Arguably, some advertisers have been using the data they have on us now to influence our buying behavior.  If that isn’t working, then Google AdWords is going to have a whole lot of explaining to do.  Not that I think all advertisement is predatory and manipulative, but some percentage of it must be.  That would be one explanation for all this consumer debt we are mired in.

I don’t want to paint an utterly bleak view of the loss of privacy.  Kevin Kelly posited that this “quantified century” of data collection is rapidly expanding the recent invention called the “self.”  The very definition of self is becoming richer and more articulated.  In some sense we are trading privacy for personalization.  As we reveal more data about ourselves, we can engage with others in new ways and discover new facets of ourselves.  Maybe we will even see ads for things we really want and can afford.

Kelly also suggested that this ocean of data is forming a vast commonwealth.  Meaning and value will be derived by building relationships between various data streams.  These synthesized data streams will flow back into the commonwealth to enrich it.

Tim O’Reilly wasn’t worried about the loss of privacy, as long as it was accompanied by an increase in tolerance for other people.  Tolerance.  It’s not a bad antidote to cushion the blow once privacy passes away.

Suarez at the Long Now Again

I went to see Daniel Suarez read from his latest book, Kill Decision, at the Long Now this evening.  I had originally been introduced to his work at a previous Long Now talk he gave years ago in support of his first book, Daemon.  Daemon was about ways in which a bunch of narrow AIs could be cobbled together to form a deadly system.  Kill Decision seems to be focusing on the problems around weaponizing autonomous drones.

Suarez is particularly concerned about allowing algorithms to kill humans.  He believes that these “kill decisions” should be made by humans and that treaties should be created to restrict the use of autonomous drones.  Suarez suggested that there has been a historical trend in warfare requiring the complicity of more and more people over the years.  He compared the relatively few knights required to wage battles in the Middle Ages with the hundreds of thousands of soldiers who must cooperate to conduct modern wars.  He argues that autonomous drones would reverse this trend and allow even a single person to wage a battle without the complicity of any other humans.

There was a very lively discussion, and it was suggested that autonomous drones are not unlike other modern weapons in how separated an attacker can be from the actual killing.  Suarez stuck to his guns and insisted that it’s important that humans, and not algorithms, make the actual decisions.  He acknowledged that humans still make horrible decisions that result in many deaths, but pointed out how much worse it could be if the process were automated.  I suggested that if drone warfare followed the pattern of cyberwar, as Suarez suggested, then we could expect to see hackers contributing to the defense against automated drones.  Alex P. suggested that we should start an open-source anti-drone drone project.  I like that idea.  Technology is often a double-edged sword, but there always seem to be more people willing to use it to help than to hurt (barely).