How unlikely is safe AI? Questioning the doomsday scenarios.

I have always been dubious of the assumption that unfriendly AI is the most likely outcome for our future.  The Singularity Institute refers skeptics like myself to Eliezer Yudkowsky’s paper: Complex Value Systems are Required to Realize Valuable Futures.  I just reread Yudkowsky’s argument and contrasted it with Alexander Kruel’s counterpoint in H+ magazine.  H+ seems to have several articles that take exception to SI’s positions.  The 2012 H+ conference in San Francisco should be interesting.  I wonder how much it will contrast with the Singularity Summit.

One thing that bothers me about Yudkowsky’s argument is that on the one hand he insists that AI will always do exactly what we tell it to do, not what we mean for it to do, yet on the other hand this rigid instruction set is somehow flexible enough to outsmart all of humanity and tile the solar system with smiley faces.  There is something inconsistent in this position.  How can something be so smart that it can figure out nanotechnology but so stupid that it thinks smiley faces are a good outcome?  It’s sort of a grey goo argument.

It seems ridiculous to even try constraining something with superhuman intelligence. Consider this Nutrient Gradient analogy:

  1. Bacteria value nutrient gradients.
  2. Humans evolved from bacteria, achieving an increase in intelligence comparable to the one a superhuman AI might achieve as it evolves.
  3. A superhuman AI might look upon human values the same way we view bacterial interest in nutrient gradients.  The AI would understand why we think human values are important, but it would see a much broader picture of reality.

Of course this sets aside the problem that humans don’t really have many universally shared values.  Only Western values are cool.  All the rest suck.

And this entire premise that an algorithm can maximize for X doesn’t really hold water when applied to a complex reflexive system like a human smile.  I mean, how do you code that?  There is a vigorous amount of hand waving involved there.  I can see detecting a smile, but how do you code for all the stuff needed to create change in the world?  A program that can create molecular smiley faces by spraying bits out to the internet?  Really?  But then I just don’t buy recursively self-improving AI in the first place.
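To be fair, the “maximize for X” framing can at least be made concrete with a toy sketch (everything here is hypothetical: the reward function is a crude stand-in for “detect a smile,” and the optimizer is a naive hill-climber, nothing like a real AI).  What it illustrates is that a proxy metric gets maximized literally, not in the spirit intended:

```python
import random

random.seed(0)

def proxy_reward(world: str) -> int:
    # Hypothetical stand-in for "detect a smile": count ':)' patterns.
    return world.count(":)")

def hill_climb(world: str, steps: int = 2000) -> str:
    # Greedy optimizer: try a one-character mutation, keep it whenever
    # the proxy reward does not drop.
    chars = ":)abc "
    for _ in range(steps):
        w = list(world)
        w[random.randrange(len(w))] = random.choice(chars)
        candidate = "".join(w)
        if proxy_reward(candidate) >= proxy_reward(world):
            world = candidate
    return world

start = "hello world, this is a neutral state   "
end = hill_climb(start)
print(proxy_reward(start), "->", proxy_reward(end))
```

By construction the reward never decreases, so the optimizer drifts toward stuffing its little world with “:)” patterns.  That is the smiley-face worry in miniature, whatever one thinks of its plausibility at molecular scale.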

Not that I am against the Singularity Institute like some people are, far from it.  Givewell.org doesn’t think that SI is a good charity to invest in, but I agree with my friend David S. that GiveWell is poorly equipped to even evaluate existential risk (Karnofsky admits existential risk analysis is only a GiveWell Labs project).  I for one am very happy that the Singularity Institute exists.  I disagree that their action might be more dangerous than their inaction.  I would much rather live in the universe where their benevolent AI God rules than one where the DARPA-funded AI God rules.  Worse yet would be a Chinese AI implementing a galaxy-wide police state.

This friendliness question is in some ways a political question.  How should we be ruled?  I was talking with one of the SI-related people at Foresight a couple of years ago, and they were commenting on how much respect they were developing for the US Constitution.  The balance of powers between the Executive, Legislative, and Judiciary is cool.  It might actually serve as a good blueprint for friendly AI.  Monarchists (and AI singleton subscribers) rightly point out that a good dictator can achieve more good than a democracy can with all its bickering.  But a democracy is more fault tolerant, at least to the degree that it avoids the problem of one bad dictator screwing things up.  Of course Lessig would point out our other problems.  But politics is messy, like all human cultural artifacts.  So again, good luck coding for that.

Americans Would Rather Go Mad Max Than Go Socialist

At the East Bay Futurist meetup today, we discussed a non-Singularity scenario similar to the vision in Lights in the Tunnel.  In this scenario, automation eliminates enough jobs that the economy stops functioning.  The claim that automation causes macroeconomic harm is known as the Luddite Fallacy.  Historically, automation has led to short-term unemployment, but the resulting lowered cost of goods supposedly created more demand, and the displaced workers were able to find jobs in other sectors.

We were discussing this topic around this time last year at the East Bay Futurists.  It may be that the fall brings out these melancholy thoughts.  Maybe the damp and cold produces some malevolent mold or something.  But I am still looking for an economist who can show that automation is continuing to create jobs.  It looks like the world employment-to-population ratio decreased from 62% to 60% between 1991 and 2011.

Blogger Steve Roth offers a couple of reasons why the Luddite Fallacy argument might be running out of steam:

1. The limits of human capabilities.  (Not everyone can get a PhD in Computer Science, and eventually there may be nothing that machines can’t do.)
2. The declining marginal utility of innovation and consumption.  (All the important stuff has been around since the ’60s, and really, how many more mansions do you need?)

Now supposedly there is some sort of argument that says consumption by the super rich can continue to drive the economy.  But I like how Roth dissects that argument using Marginal Propensity to Consume.  Basically poor people spend a greater portion of their income.  Apparently the third Lamborghini is somewhat less satisfying to the rich than having enough food is to the poor.
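Roth’s dissection is really just arithmetic.  With hypothetical marginal propensities to consume (say 95 cents on the dollar for the poor and 30 cents for the rich; the numbers are made up, only the ordering matters), moving income down the distribution raises aggregate spending:

```python
# Toy marginal-propensity-to-consume arithmetic; the rates are
# hypothetical, chosen only to make the mechanism visible.
MPC_POOR = 0.95  # poor households spend most of each marginal dollar
MPC_RICH = 0.30  # rich households save most of each marginal dollar

def consumption(income_poor: float, income_rich: float) -> float:
    # Aggregate spending if each group consumes at its marginal rate.
    return MPC_POOR * income_poor + MPC_RICH * income_rich

base = consumption(10_000, 1_000_000)
# Shift $100,000 of income from the rich group to the poor group.
shifted = consumption(110_000, 900_000)
print(shifted - base)  # each shifted dollar adds 65 cents of spending
```

Run it the other way and the same arithmetic says that concentrating income at the top shrinks spending, which is why “the super rich will carry consumption” runs into trouble.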

Now there is also this idea that we can somehow transition from a work-based economy to an asset-based economy.  Robin Hanson alludes to this during his discussion with Martin Ford (see 21:20 for the asset argument).  Hanson’s point about machines generating more net wealth may be true.  Poverty is decreasing: the share of people living below the poverty line worldwide fell from 52% to 28% between 1981 and 2008.  But how do we transition from adding value through labor to just owning assets?  It’s especially hard for me to understand how this new asset economy works for the poor.  Do they switch from owning goats to owning GoatBots in order to survive?  A lot of people will get left out in the cold in that sort of economy.  Asset management is tricky, and the sheep will soon get fleeced of their assets.

So we need to fundamentally restructure our economy in the face of accelerating automation.  Is it still possible to salvage the work model by finding ways to monetize what people do with their hearts and minds as Lanier suggests?  Or should we just give everyone $25,000 a year to drive consumption as Marshall Brain has suggested?

A lot of people seem to think that some sort of stipend will be required to keep the economy flowing.  However, I am fairly skeptical that this will come about.  Look at how the EU is pushing austerity.  Here in the US, half the population demands freedom FROM health care.  I honestly think that we Americans will choose to go Mad Max before we turn (more) socialist.  But I could be wrong.  The Great Depression brought about a bunch of social programs.  Maybe something like that will happen again.

But Lanier’s argument is interesting: monetize hearts and minds, etc.  As I said before, Vinge thinks that the only thing humans can do which machines won’t be able to do is want things.  How do you monetize that?  And even if the SuperRich did suddenly decide to get all loving and start handing out stipends, what about well-being?  I think of the youth rioting in England in 2011.  Those kids had the dole, but they weren’t happy.

Seligman’s PERMA (Positive emotions, Engagement, Relationships, Meaning, and Accomplishment) model of well-being comes to mind.  We can hand people money, and then what?  Star Fleet won’t be recruiting for a while yet.  Where does accomplishment come from?  Games?  The arts?  But still, this is all premised on a bunch of megalomaniacal sociopaths handing over a bunch of money.  I’m not holding my breath.  I am just saving as much money as I can in the hopes of affording an adequate KillBot(tm) once ThunderDome time comes.

What is Futurism anyway?

Tonight I attended a party to celebrate the recent marriage of a friend.  I found myself being asked over and over again: “So what is Futurism anyway?”  I couldn’t resist responding that it was an art movement in Italy in the early 1900s.  I do actually like a lot of futurist art.  They often tried to depict a sense of motion to capture the frenetic pace of modern life.  I am not too into the violence and fascism though.

But then I had to get serious and come up with a decent answer.  And that is why it’s a good idea to hang out with people outside your scene sometimes.  It forces you to articulate ideas that you often take for granted.  So I would say things like: Futurism is thinking about the future and wondering about what will happen.  Science fiction is futurism.  Futurists consider the idea that technology is accelerating exponentially and ask what the consequences might be.

And a lot of people responded quite positively to this.  People feel these changes around them.  The impact of automation on jobs is becoming more evident.  We talked about the importance of education in these changing times and how budget cuts and skyrocketing college costs are putting kids into indentured servitude.  We talked about how China might come to rule the world.  I trotted out my standard bearish comments regarding China’s corrupt financial system and its lack of transparency and rule of law.

A scientist who recently drank the Kurzweilian kool-aid and had actually visited China was part of this discussion.  He mentioned that systems with different paths to accomplish similar ends were more stable.  I took this to be an endorsement of pluralism, and I complained that China’s police state doesn’t allow for this.  Another guest chimed in that top-down rule can’t work and bottom-up societies have more ideas.  But our newly minted Singularitarian friend countered that the Chinese rulers carefully tweak the different elements of society, allowing more freedom in certain areas and restricting it in others.  I don’t understand how this system can possibly work, but it’s hard to argue with the growth numbers.  (Well, the specific numbers are probably fudged, but there has clearly been lots of growth.)

I talked to another fellow who was into machine learning and who had doubts about the whole Deep Learning project that Norvig was recently crowing about at the Singularity Summit.  His opinion was that Deep Learning has been around for a while and that any recent success of the algorithms might be getting conflated with the benefits conferred by big data.  He said that other algorithms should be tested against this big data to see if they perform almost as well.  He mentioned support vector machines as one alternative, but these seem to require labeled training data, which Deep Learning doesn’t require.  So arguably, Deep Learning is nicer to have when evaluating big unlabeled data sets.  Anyway, when I asked Monica Anderson, she endorsed Deep Learning as being a thing, so I remain impressed for the time being.
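The labeled-data distinction is easy to make concrete with a toy sketch (pure-Python stand-ins here, not a real SVM or deep net): a supervised learner’s fit step simply cannot run without a label for every example, while an unsupervised one summarizes the raw data alone.

```python
# Toy stand-ins for the supervised/unsupervised split.

def fit_nearest_centroid(X, y):
    # Supervised, like an SVM: needs a label for every training point.
    centroids = {}
    for label in set(y):
        points = [x for x, lab in zip(X, y) if lab == label]
        centroids[label] = sum(points) / len(points)
    return centroids

def fit_mean(X):
    # Unsupervised: learns a summary from the raw data alone.
    return sum(X) / len(X)

X = [1.0, 2.0, 8.0, 9.0]
y = [0, 0, 1, 1]
print(fit_nearest_centroid(X, y))  # {0: 1.5, 1: 8.5}
print(fit_mean(X))                 # 5.0
```

The asymmetry is the point: on a big pile of unlabeled data, the first kind of method has nothing to fit until someone pays for labels.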

My Deep Learning skeptic friend was also wary of Quantified Self.  I think his point was that over-quantification was being slowly forced upon people.  This hilarious scenario of ordering a pizza in the big data future immediately came to mind.  But as much as I love the ACLU, I don’t have much faith that they can protect us against big data.  I actually think that being into QS might better prepare people to deal with big data’s oppression.  At least QS’ers become more aware that personal data can tell a story and they are exploring how some of these stories can be self-constructed.  Hopefully this will help us navigate a future where nothing is private.

A recurring theme when thinking about the future is that humans will somehow get left behind as technological progress skyrockets beyond our comprehension.  A lot of humans are already getting left behind, economically and technologically.  Someone who can’t use search is at a massive disadvantage to everyone that can.  I try to be positive sometimes and point out that mobile devices are spreading throughout the developing world or that humans can augment to keep up with change.  But while we may live in an age of declining violence, I can see why some would still complain of sociopathic corporate actors and the policies being promoted that withdraw a helping hand from those in need.

At one point in the evening, toasts were made to the newlyweds and a passage by CS Lewis celebrating love was read.  I looked around as the various couples reacted to the emotional piece and I thought of my own girlfriend.  I thought about how we had been through death and madness.  Yet we managed to stay together, supporting one another, loving each other after all these years.  I thought about how deeply lucky we are to have one another.  I felt great happiness for these newlyweds with the courage to undertake this struggle for love.  I know we futurists can be cold, almost autistic in our dispassionate rationality, but it may well be love and empathy that will serve us best in the coming future where so little is certain.