The Great Stagnation of Innovation

There was much talk at the Singularity Summit this year of the Great Stagnation.  The basic idea is that, contrary to popular belief (among transhumanists), innovation is actually in decline.  Here is an excellent blog post about the Huebner study, which showed a reduction in per capita patents since 1870.  I guess John Smart takes issue with the data sampling, etc.  I have my own doubts that patents are a good metric for innovation, but it’s an intriguing idea.  Sure, you have the internet, but where are the flying cars?  If per capita innovation is going down, maybe Homer was right all along and we are a bunch of degenerates.

Peter Thiel has been talking about this for a while now.  He points to high energy costs as a failure to innovate in the energy space.  He mentions that median real wages are unchanged since the ’70s and that this suppresses innovation.  He sees the space program in shambles.  Thiel, a libertarian, even (sort of) attributes the Apollo program to the higher marginal tax rates of the ’60s.  Well, he concedes that the government had more macroeconomic control but exercised less microeconomic control.  (i.e. the polio vaccine wouldn’t have made it past today’s FDA)

In a debate at Stanford between Thiel and George Gilder, Thiel expands on his idea that innovation in the real world of matter has been outlawed, driving all innovation into the virtual world of bits, such as information technology and finance.  Gilder, on the other hand, takes the view that all fields will become subject to information technology and will soon start to see progress similar to that seen in the world of bits.  Kurzweil commonly makes similar arguments when he says that biology is becoming an IT field.  As an aside, I know some folks in bioinformatics, and the fact is that this field is quite rocky.  Job growth isn’t very impressive.  It’s one thing to crunch the numbers; it’s another thing to deliver tangible results.

So Thiel focuses on the real world and talks about how food production isn’t outpacing population by much.  And he loves to bring up the theory that food costs triggered the Arab Spring.  I’m sympathetic to this; I see him coming from an embodiment angle there.  He also takes some issue with the views of optimistic experts like Gilder, contrasting them with the views of average people.  The percentage of people who think the next generation will be better off than the last has steadily gone down over the past 40 years.  I like that angle too; it reminds me of The Wisdom of Crowds.

But I am always wary of these over-regulation stories.  First, improvements in communication technology must be providing a huge decrease in the pressure to innovate on the transportation side.  On the other hand, I wonder how much easier it is to move goods around.  I know most shipping cost is tied to fuel prices, which supports Thiel’s energy narrative.  But we do see logistics operations like Apple, Amazon, and even Walmart that simply could not exist without IT.  Sure, personal air travel might not be faster today than in the 1960’s, but my MacBook Air arrived at my doorstep from Shenzhen 4 days after I ordered it.

A lot of the huge progress on the physical side might just have been low-hanging fruit, and we may now be in the era of diminishing returns.  Gasoline’s energy density is hard to match.  The information theory folks like Gilder and Kurzweil seem to do some handwaving on the energy story.

Fracking might be a thing, but we have to see how it actually pans out.  I don’t blame people for getting pissed when it turns their tap water flammable.  These energy companies love to skimp on costly safety measures (Valdez, Deepwater Horizon, even pipeline monitoring).  Those Yankees whose drinking water gets hosed by cheap concrete lining in the fracking wells will probably shut it down.  Yankees are feisty like that.

Another problem with the over-regulation theory of innovation decline is that we would expect to see better innovation rates in places with less regulation.  So why don’t we see Texas taking the national lead in innovation?  Europe is pretty heavily regulated, and we still see plenty of patents coming out of there.  So I don’t really disagree with most of Thiel’s observations (on this innovation thing only, not the other crazy shit).  I more question the causal mechanisms.  I look forward to his forthcoming book on this topic, coauthored with Max Levchin and chess great Garry Kasparov.  But I am skeptical about any grand plans to change the tides.

I talked with a bunch of Singularity Institute folks about this at the Less Wrong pre-party and the Summit itself and opinions varied.  Some say the innovation slump isn’t actually a thing.  Some say that it’s a thing but it doesn’t matter.  Some suggested that it might buy more time to  develop friendly AI.

But what about the long, long term?  Say there is no Singularity and that innovation is merely a function of population growth.  If we have population stabilization or even a population crash, will we see innovation follow suit?  In Incandescence by Greg Egan, the survivors of an innovation crash are “mining” wire to make crude tools.  This is a common thread in SciFi.  In A Canticle for Leibowitz, survivors create illuminated manuscripts of circuit boards.

Oh, but those are more technology crashes than innovation crashes…hmm…

Kevin Kelly makes a compelling argument about the nature of technology in What Technology Wants. This is a cool book that deserves much more discussion, but the basic idea is that new technology sort of springs from the existing framework of old technology.  He points out many inventions that were independently arrived at.  In some sense technological change becomes inevitable but also highly constrained.  Innovation is dependent on the underlying framework of enabling technologies.

So how are you really going to change that?

UPDATE 12/27/2012:  A DOE scientist I met a few months ago actually pointed out that energy efficiency does represent real innovation in the energy space in spite of price increases:

For one example: See figure 1.3 of:
http://cowles.econ.yale.edu/P/cp/p09b/p0957.pdf

You will see that the real price of lighting services has dropped by a factor of ~1000 over the last two centuries: the lighting equivalent of Moore’s law.
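Back-of-the-envelope, that headline factor works out to a surprisingly gentle annual rate.  This is just my own arithmetic on the "~1000x over two centuries" claim, not anything from the paper itself:

```python
import math

# Implied annual rate of decline for a ~1000x real price drop over ~200 years
factor, years = 1000, 200
annual = factor ** (1 / years)             # per-year improvement factor
halving = math.log(2) / math.log(annual)   # years for the real price to halve

print(f"~{(annual - 1) * 100:.1f}% cheaper per year")  # ~3.5%
print(f"price halves every ~{halving:.0f} years")      # ~20 years
```

A steady ~3.5% a year is invisible over any given decade, which may be part of why it doesn't register as "innovation" the way Moore's law does.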

Evolution of Social Norms via Network Science and Evolutionary Game Theory

At the end of Pinker’s “Decline of Violence” talk last week, he said that the evolution of social norms was an exciting area of inquiry.  If we accept Pinker’s data but don’t feel satisfied by the causal mechanisms he speculates about (i.e. pacification, etc.), it does seem like a logical next step to dig more fully into social norms.  Some of the researchers that he mentioned were: Nicholas Christakis, Duncan Watts, James Fowler, and Michael Macy.

Now I have to admit that I have a bias toward new ideas that can be easily attached to my existing conceptual framework.  (Arguably we all do and no one could learn anything new without attaching it to existing knowledge but this post isn’t about constructivism.)  It’s especially satisfying when new concepts resonate with remote structures elsewhere in the idea tree.

I read Christakis’s Connected when it first came out, and it strongly influenced my thinking on human behavior.  I do plan on reviewing the content, but it basically explores the idea that human behavior is partially a network phenomenon.  This seems obvious and uninteresting until you drill down into some of the consequences.  The book shows that you have a higher chance of gaining weight if there are overweight people within three degrees of separation in your social network.  Yep, better start keeping track of your friends’ friends’ friends.  Don’t worry, this tool I saw on Melanie Swan’s blog can make it easier to map at least your LinkedIn network.
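For concreteness, "three degrees" is just a breadth-first walk of your friendship graph cut off at depth three.  A minimal sketch (the graph and names here are made up, not from the book):

```python
from collections import deque

def within_three_degrees(graph, start):
    """Return everyone within three hops of `start` in a friendship graph,
    given as an adjacency-list dict (directed here, for brevity).  These are
    the friends-of-friends-of-friends whose habits, per Connected, may still
    rub off on you."""
    seen = {start}
    frontier = deque([(start, 0)])
    reachable = set()
    while frontier:
        node, dist = frontier.popleft()
        if dist == 3:          # stop expanding past the third degree
            continue
        for friend in graph.get(node, []):
            if friend not in seen:
                seen.add(friend)
                reachable.add(friend)
                frontier.append((friend, dist + 1))
    return reachable

# Hypothetical toy chain: you -> alice -> bob -> carol -> dave
toy = {"you": ["alice"], "alice": ["bob"], "bob": ["carol"], "carol": ["dave"]}
print(within_three_degrees(toy, "you"))  # carol (3 hops) is in; dave (4) is out
```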

Now, there was some controversy around the models used in this book.  I didn’t fully examine them and wouldn’t be able to independently evaluate the statistics anyway.  But I guess Harvard has to defend its own, and a bunch of statisticians from the old alma mater jumped to his defense.  I admit that I’m biased and I like the idea.  For the sake of argument, let’s agree that network behavior contagion is a thing.  (If any statistics guru out there can show there exists a layman’s explanation of why we should absolutely reject these findings, please do.)

Wait, sorry, I don’t have an argument yet.  But Christakis is just really cool.  In this video he talks about how he got into social network science and gives the example of caregivers getting sick from exhaustion and that affecting their other family members.  In a sense, he saw a non-biological contagion of illness.  My girlfriend and I experienced this firsthand when her sister died of cancer, so I deeply empathize with folks in that example.

On a brighter note,  Christakis gets into topology and nematode neuron mapping in the second half of the video.  This was the stuff we were talking about at the Singularity Summit with Paul Bohm this year.  See?  Christakis is cool.

But Pinker’s “Decline of Violence” thesis must also be supported by evolutionary population dynamics somehow, right?  So I pinged my awesome CogSci book club friend Ruchira Datta, and she recommended the following books for me to explore:

SuperCooperators

Genetic and Cultural Evolution of Cooperation

A Cooperative Species: Human Reciprocity and Its Evolution

I recall that there was a discussion about evolutionary game theory strategies at one of these meetups and it was suggested that there are population equilibria in which a certain percentage of “enforcer” agents (who punish defectors without regard to self-benefit) serve to protect a cooperative majority of nice, contrite, tit-for-tat agents.  So this is why we need tough conservatives around to protect all the cooperating liberals.
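Here is a toy sketch of that equilibrium intuition.  All the payoff numbers are made up for illustration (they're not from any of the books above); the point is just that once enforcers are common enough, the fines outweigh what a defector saves by not cooperating:

```python
# Cooperators (C), defectors (D), and "enforcer" agents (E) who cooperate
# AND pay a personal cost to punish defectors.  Illustrative numbers only.

b, c = 3.0, 1.0                  # benefit of receiving cooperation / cost of giving it
fine, enforce_cost = 4.0, 1.0    # harm inflicted on defectors / enforcer's cost per defector

def payoffs(x_c, x_d, x_e):
    """Average payoff of each type, given population shares summing to 1."""
    coop_share = x_c + x_e                  # enforcers also cooperate
    pi_c = b * coop_share - c               # cooperators pay c, reap b from cooperators met
    pi_d = b * coop_share - fine * x_e      # defectors free-ride but get punished
    pi_e = pi_c - enforce_cost * x_d        # enforcers additionally pay to punish
    return pi_c, pi_d, pi_e

# With ~30% enforcers, defecting earns less than cooperating:
pi_c, pi_d, pi_e = payoffs(0.6, 0.1, 0.3)
print(pi_c, pi_d, pi_e)  # defectors come out worst
```

Dial the enforcer share down below c/fine (25% with these numbers) and defection starts paying again, which is the knife-edge flavor of the claim.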

I brought this up at the LessWrong meetup tonight and someone objected that this might require group selection or some other troubling theory.  I wonder if it couldn’t be explained more along co-evolutionary lines similar to pollinators and flowering plants.

But anyway, where I’m trying to go with this is that we can take the above scenario and start to examine ways in which the ratios of cooperators and defectors change.  Then we somehow plug that into the whole social network science thing and we will have an awesome blog post or something.  (But I have a bunch more reading to do first.)

Singularity Summit Day 2: Vernor Vinge reminds me why I doubt recursively self-improving AI

I did break down and actually attend a couple of talks at the Singularity Summit this year: Vernor Vinge and Peter Norvig.

Peter Norvig gave a talk that would have satisfied any generic group of AI developers.  Google is making some frightening progress.  This Deep Learning project is the most interesting aspect of his presentation from an AI architecture point of view.  It’s impressive that Google can pair two top-level researchers in the field (Andrew Ng and Geoffrey Hinton) with parallel processing expert Jeff Dean and scale up academic models onto a functional 1000-node cluster.  Boom, you are identifying cats and faces from unlabeled YouTube videos.  It must be sickening to anyone who wants to compete with Google in the AI space.
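The core trick behind learning from unlabeled video is that reconstruction itself is the training signal.  Google's actual system was enormous, sparse, and many-layered; as a minimal stand-in, here is a tiny linear autoencoder on random data, just to show that the error drops with no labels anywhere:

```python
import numpy as np

# Minimal sketch of unsupervised feature learning: compress 8-dim inputs to a
# 4-dim code and train by reconstruction error alone.  Toy data, not YouTube.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))            # 200 unlabeled "frames", 8 features each
W = rng.normal(scale=0.1, size=(8, 4))   # encoder weights (decoder is W.T, tied)

def loss(W):
    R = X @ W @ W.T                      # encode then reconstruct
    return ((R - X) ** 2).mean()

before = loss(W)
for _ in range(500):                     # plain gradient descent on the loss
    E = (X @ W @ W.T - X) * (2 / X.size)
    grad = X.T @ E @ W + E.T @ X @ W
    W -= 0.1 * grad
after = loss(W)

print(before, after)  # reconstruction error drops, with no labels at all
```

A linear autoencoder like this just recovers a PCA subspace; stacking nonlinear versions of the same idea is roughly what made the cat detector possible.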

But he never really mentioned friendliness.  I was hoping he would trot out some more theory behind this big data approach.  He gave a similar talk to Monica Anderson’s AI meetup a couple of years ago.  I was there for that and it was pretty cool to see him present to such a small crowd.

At the Singularity Summit this year, he also talked about Google’s translation service, which basically derives translations by mining many, many parallel documents (the same content written in multiple languages).  I was hoping to ask him what happens when the algorithm starts consuming translations that were actually created by Google Translate.  It’s bound to screw them up if that happens.  But then I realized that Google probably saves every translated document and checks new documents’ checksums against previous translations before using them to build mappings.  That’s hard to picture though.  They manage:  A. Mind. Crushingly.  Large. Amount. Of. Data.
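The dedup step I'm imagining would look something like this.  To be clear, this is purely my speculation about how one *could* keep machine output out of the training set, not anything Google has described:

```python
import hashlib

emitted_hashes = set()  # in reality: a distributed store, not a Python set

def record_translation(text: str) -> None:
    """Remember the checksum of every translation we serve."""
    emitted_hashes.add(hashlib.sha256(text.encode("utf-8")).hexdigest())

def usable_for_training(text: str) -> bool:
    """Reject crawled documents we (probably) generated ourselves."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest() not in emitted_hashes

record_translation("le chat est noir")
print(usable_for_training("le chat est noir"))  # False: our own output
print(usable_for_training("the cat is black"))  # True: safe to learn from
```

Exact checksums only catch verbatim copies, of course; a crawled page that lightly edits machine output would slip through, which is why the feedback-loop worry doesn't fully go away.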

Vernor Vinge outlined some outcomes that he sees for the singularity.  One crazy idea he puts forth is a digital gaia where the world is minutely ornamented with digital sensors coupled to processors and actuators.  One day they all spontaneously “wake up” and all hell breaks loose.  He describes a reality with all the stability and permanence of the financial markets.  I had a vision of my SmartLivingRoom(tm) suddenly reconfiguring itself into a nightmare of yawning jaws and oozing orifices.  But in reality, we might just see wild fluctuations in the functionality of computationally enhanced environments; from smart to dumb to incomprehensible.

Next up: augmented intelligence, a neo-neo-cortex provided by technology.  This is his preferred scenario.  Crowdsourcing is cool, yada yada.  Vinge imagines a UI so seamless that accessing it would feel as natural as using your own cognitive faculties.  I used to like this idea until I started thinking about the security implications.  I don’t want my brain hacked.

He did make one amazingly succinct point about human computer synergy.   Computers can give us instantaneous recall and amazing processing speed, humans can provide that which we are best at: wanting things.

Humans want things.  For me this cuts to the very heart of the AI question.  I always complain that none of these AI geniuses can show us an algorithm to define problems.  (No, CEV doesn’t count.)  Algorithmic problem definition is just another way to say algorithmic desire definition.  Good luck with that one.

All simple human desires seem to arise from biological imperatives.  Maybe artificial life could give you that.   More complex desires are interpersonal and might be impossible to reduce back to metabolic processes.  You may want fame for the status, but the specific type of fame depends on which person or group you are trying to impress.  And that changes throughout your life.

And if we do build Artificial Life, it may well be that it can only function with similar constraints as, uh, non-artificial life.  In fact, Terrence Deacon may well be right and constraints are the key to everything.  Ahh, the warm fuzzies of embodiment are seeping over me now.

But seriously, SingInst, where is this algorithmic desire going to come from?  And once you get that, how the hell are you going to constrain the actions of GodLikeAI again?  I know, I know, Gandhi would never change himself into an anti-Gandhi.  But we may be like bacteria saying that our distant offspring would never neglect the all-encompassing wisdom of nutrient gradients.