Superintelligence Skepticism: A Rebuttal to Omohundro’s “Basic A.I. Drives”

Superintelligence by Nick Bostrom

In the past couple of years, we have seen a number of high-profile figures in the science and tech fields warn us of the dangers of artificial intelligence.  (See comments by Stephen Hawking, Elon Musk, and Bill Gates, all expressing concern that A.I. could pose a danger to humanity.)  In the midst of this worrisome public discussion, Nick Bostrom released a book called Superintelligence, which outlines the argument that A.I. poses a real existential threat to humans.  A simplified version of the argument says that a self-improving A.I. will so rapidly increase in intelligence that it will go “FOOM” and far surpass all human intelligence in the blink of an eye.  This godlike A.I. will then have the power to rearrange all of the matter and energy in the entire solar system and beyond to suit its preferences.  If its preferences are not fully aligned with what humans want, then we’re in trouble.

A lot of people are skeptical about this argument, myself included.  Ben Goertzel has offered the most piercing and even-handed analysis of this argument that I have seen.  He points out that Bostrom’s book is really a restatement of ideas which Eliezer Yudkowsky has been espousing for a long time.  Then Goertzel digs very carefully through the argument and points out that the likelihood of an A.I. destroying humanity is probably lower than Bostrom and Yudkowsky think it is, which I agree with.  He also points out the opportunity costs of NOT pursuing A.I., but I don’t think we actually need to worry about that, given how the A.I. community seems to be blasting full speed ahead and A.I. safety concerns don’t seem to be widely heeded.

Even though I assign a low probability that A.I. will destroy all humans, I don’t rule it out. It would clearly be a very bad outcome and I am glad that people are trying to prevent this. What concerns me is that some of the premises that Bostrom bases his arguments on seem deeply flawed. I actually think that the A.I. safety crowd would be able to make a STRONGER argument if they would shore up some of these faulty premises, so I want to focus on one of them, basic A.I. drives, in this post.

In Superintelligence, Bostrom cites a 2008 paper by Stephen Omohundro called “The Basic A.I. Drives.”  From the abstract:

We identify a number of “drives” that will appear in sufficiently advanced A.I. systems of any design. We call them drives because they are tendencies which will be present unless explicitly counteracted.

Now this already raises warning bells for me, since we have a bunch of A.I. systems with goals, and none of them seem to be exhibiting any of the drives that Omohundro warns about.  Maybe they aren’t “sufficiently” advanced yet?  It also seems odd that Omohundro predicts these drives will be present without having been designed in by the programmers; he doesn’t really offer a mechanism for how they might arise.  I can imagine a version of this argument that says “A.I. with these drives will outcompete A.I. without these drives” or something.  But that still requires that a programmer put the drives in; they don’t just magically emerge.

Biological systems have inherent drives, but I don’t see how any artificial system could spontaneously acquire drives unless it had machinery similar to what gives rise to drives in living things.  And biological systems are constrained by things like the need to survive.  Humans get hungry because they need to eat to survive; that’s a basic drive rooted in biology.  The state of “wanting” something doesn’t just show up unannounced; it’s the result of complex systems, and the only existing examples of wanting we see are in biological systems, not artificial ones.  If someone posits an artificial system that has the drives of a living thing but not the constraints, then I need to see the mechanism that they think could make this happen.

So that’s a huge problem.  What does it even mean to say that A.I. will “have” these drives?  Where do these drives come from?  Big problem.  Huge.

Anyway, let’s dig in a bit further and examine each of these drives.  What we see is that, in each case, Omohundro posits a reasonable-sounding explanation of why each drive would be “wanted” by an A.I.  But even though this is a paper written in an academic style, with citations and everything, it’s not much more than a set of reasonable-sounding explanations.  So I will take a cue from rationalist blogger Ozymandias: I will list each of Omohundro’s drives and then offer my own plausible explanation for why each drive might play out entirely differently.

1. A.I.s will want to self-improve.  Why self-modify when you can make tools?  Tools are a safer way to add functionality than self-modification.  This is the same argument I use against current-generation grinders.  Don’t cut yourself open to embed a thermometer.  Just grab one when you need it and then put it aside.  Also, it’s easy to maintain a utility function if the A.I. just straps on a module, as opposed to messing with its own source code.  Upgrades to tools are easy too.  It’s foolish and risky to self-modify when you can just use tools.

When I first posted this to Facebook, I got into a whole debate with Alexei, who has insight into MIRI’s thinking.  He insisted that the optimization of decision-making processes will lead to overwhelming advantages over time.  I countered that competing agents don’t get unbounded time to work on problems, which is why we see “good enough,” satisficing strategies throughout nature.  But a lot of A.I. safety people won’t allow that there can ever be any competition between A.I.s, because once a single A.I. goes FOOM and becomes godlike, no others can compete with it and it becomes the one to rule them all.  But the period leading up to takeoff would certainly involve competition with other agents, and I also believe that problem-solving intelligence does not exist independently, outside of a group, though I won’t get into that here.

2. A.I.s will want to be rational.  This seems correct in theory.  Shouldn’t we predict that rational agents will outcompete irrational agents?  Yet, when we look at the great competition engine of evolution, we see humans at the top, and we aren’t that rational.  Maybe it’s really, really, really hard for rational agents to exist, because it’s hard to predict the outcomes of actions, and goals also evolve over time.  Not sure about this one; my objection is weak.

3. A.I.s will try to preserve their utility functions.  Utility functions for humans (i.e. human values) have clearly evolved over time and are different in different cultures.  Survival might be the ultimate function of all living things, followed by reproduction.  Yet we see some humans sacrificing themselves for others and also some of us (myself included) don’t reproduce.  So even these seemingly top level goals are not absolute.  It may well be that an agent whose utility function doesn’t evolve will be outcompeted by agents whose goals do evolve.  This seems to be the case empirically.

4. A.I.s will try to prevent counterfeit utility.  I don’t really disagree with this.  Though there may be some benefit from taking in SOME information that wouldn’t be part of the normal search space when only pursuing our goals.  The A.I. equivalent of smoking pot might be a source of inspiration that leads to insights and thus actually rational.  But it could certainly APPEAR to be counterfeit utility.

5. A.I.s will be self-protective.  Hard to disagree with this.  This is a reliable goal.  But, as I mentioned earlier in this post, I have questions about where this goal would come from.  DNA based systems have it.  But it’s built into how we function. It didn’t just arise.  AlphaGo doesn’t resist being turned off for some reason.

6. A.I.s will want to acquire resources and use them efficiently.  Omohundro further says, “All computation and physical action requires the physical resources of space, time, matter, and free energy.  Almost any goal can be better accomplished by having more of these resources.”  I strongly disagree with this.  Rationalists have told me that Gandhi wouldn’t take a pill that would make him a psycho killer, and that they want to build a Gandhi-like A.I.  But if we take that analogy a bit farther, we see that Gandhi didn’t have much use for physical resources.  There are many examples of this.  A person who prefers to sit on the couch all day and play guitar doesn’t require many physical resources either.  They might acquire resources by writing a hit song, but the resources aren’t instrumental to their success.

Guerrilla warfare can defeat much larger armies without amassing more resources.  Another point a futurist would make is that a sufficiently advanced A.I. will have an entirely different view of physics.  Resources like space, time, and matter might not even be relevant, or could be created or repurposed in ways we can’t even imagine.  This is a bit like a bacterium assuming that humans will always need glucose.  We do, of course, but we haven’t taken all of the glucose away from bacteria, far from it.  And we get glucose via mechanisms that a bacterium can’t imagine.

So really, I hope that the A.I. safety community will consider these points and try to base their arguments on stronger premises.  Certainly Omohundro’s 2008 paper is in need of some kind of revision.  If we are just throwing reasonable explanations around, let’s consider a broader range of positions.  Let’s consider the weaknesses of optimizing for one constraint, as opposed to satisficing across many goals.  A satisficing A.I. seems much less likely to go down the FOOM path than an optimizing A.I., and, ironically, it would also be more resilient to failure.  I offer all of this criticism with love, though.  I really do.  Because at the end of the day, I don’t want our entire light cone converted into paper clips either.
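To make the optimizing-versus-satisficing contrast concrete, here is a minimal sketch in Python.  Everything in it (the plans, the scores, the thresholds) is invented for illustration; it only shows the structural difference between the two decision rules.

```python
# Toy comparison of an optimizer (maximize one objective) versus a
# satisficer (accept the first plan that is "good enough" on every goal).
# All plans and numbers here are invented for illustration.
plans = [
    {"name": "A", "paperclips": 10, "safety": 9},
    {"name": "B", "paperclips": 95, "safety": 1},
    {"name": "C", "paperclips": 60, "safety": 7},
]

def optimizer(plans):
    # Maximize paperclip output alone, ignoring every other consideration.
    return max(plans, key=lambda p: p["paperclips"])

def satisficer(plans, min_clips=50, min_safety=5):
    # Accept the first plan that clears the bar on BOTH goals.
    for p in plans:
        if p["paperclips"] >= min_clips and p["safety"] >= min_safety:
            return p
    return None  # no acceptable plan found

print(optimizer(plans)["name"])   # B: extreme on the single optimized axis
print(satisficer(plans)["name"])  # C: merely "good enough" on both axes
```

The optimizer happily picks the extreme plan B; the satisficer settles for C and stops searching, which is also what makes it cheaper to run and less brittle if one of the scores is mismeasured.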

[EDIT 4/10/2016]
I appreciate that Steve came and clarified his position in the comments below. I think that my primary objection now boils down to the fact that the list of basic A.I. drives is basically cost and risk insensitive. If we consider the cost and risk of strategies, then an entirely different (more realistic?) list would emerge, providing a different set of premises.

[EDIT 4/11/2016]
When you think about it, Omohundro is basically positing a list of strategies that would literally help you solve any problem.  This is supposed to be a fully general list of instrumental goals for ANY terminal goal.  This is an extraordinary claim. We should be amazed at such a thing!  We should be able to take each of these goals and use them to solve any problem we might have in our OWN lives right now.  When you think of it this way, you realize that this list is pretty arbitrary and shouldn’t be used as the basis for other, stronger arguments or for calculating likelihoods of various AI outcomes such as FOOM Singletons.

[EDIT 4/12/2016]
I was arguing with Tim Tyler about this on Facebook, and he pointed out that a bunch of people have come up with these extraordinary lists of universal instrumental values.  I pointed out that all of these seemed equally arbitrary and that it is amazing to me that cooperation is never included.  Cooperation is basically a prerequisite for all advanced cognition and yet all these AI philosophers are leaving it off their lists?  What a strange blind spot.  These sorts of fundamental oversights are biasing the entire conversation about AI safety.

We see in nature countless examples of solutions to coordination problems, from biofilms to social animals, and yet so many AI people and rationalists in general spurn evolution as a blind idiot god.  Well, this blind idiot god somehow demanded cooperation, and that’s what it got!  More AI safety research should focus on proven solutions to these cooperation problems.  What’s the game theory of biofilms?  More Axelrod, less T.H. Huxley!
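For a sense of what “more Axelrod” looks like in practice, here is a minimal sketch of his iterated prisoner’s dilemma setup, using the standard payoff matrix; the two strategies and the round count are just illustrative.

```python
# One round of the prisoner's dilemma: (my_move, their_move) -> my payoff.
# These are the standard Axelrod tournament payoffs.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat(my_hist, their_hist):
    # Axelrod's famous tournament winner: cooperate first, then mirror
    # the opponent's previous move.
    return their_hist[-1] if their_hist else "C"

def always_defect(my_hist, their_hist):
    return "D"

def play(strat_a, strat_b, rounds=10):
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a = strat_a(hist_a, hist_b)
        b = strat_b(hist_b, hist_a)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (30, 30): mutual cooperation pays
print(play(tit_for_tat, always_defect))  # (9, 14): defection "wins" the pairing, poorly
```

Two cooperators score 30 each, while the defector beats its partner but earns only 14; in a population playing many pairings, that is exactly how cooperation can win out, biofilms included.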

Health Extension #11: Aging – Death by Damage vs. Death by Design

Sorry for the provocative title; let me start by clarifying that I in no way subscribe to intelligent design.  I am just trying to contrast the viewpoints of the two speakers I saw at Health Extension Salon #11 last week: Cynthia Kenyon and Justin Rebo.  More on that later.

The Health Extension Salon was held at Runway SF this month, and it was outstanding as usual.  I haven’t been getting out enough lately, so it was great to chat with interesting people and hear about amazing science.  Runway SF, as you may know, is an incubator/co-working sort of thing in the Twitter building on Market Street in San Francisco.  I guess it’s by invitation only.  They have an igloo, and I saw some quadcopters lying around and whatnot.  So, you know, it’s pretty cool.

I bumped into Hank Pellissier, who I first met years ago at my East Bay Futurist Meetup, and he told me a bit more about his new book, Brighter Brains.  Hank has compiled a huge list of factors that affect intelligence from environmental factors to inbreeding.  It seems like an interesting survey.

Then I listened in to a conversation with some blindingly smart people, R.J. and J.Y. among others, and wisely kept my mouth shut.  J.Y. suggested that programmed death might be an adaptive trait that increases a species’ evolvability.  More on that later as well.  He also blew my mind by wondering aloud if the lunar cycles of women were a throwback to our ancient ancestors that dwelled in tidal pools.  He pointed out that many illnesses varied in intensity of symptoms based on the time period during a woman’s menstrual cycle, but that the medical profession failed to take this into account when prescribing dosages of medicine.  Thus, many women find themselves overdosed for half the month and underdosed for the other half.  He suggested that there is a vast potential to exploit this to improve women’s health.  I hope some bio-hackers look into this further.

J.Y. also suggested that anaphylaxis (like from a severe nut allergy) might be the result of a sort of epinephrine (adrenaline) regulation problem.  This was an idea his young child apparently suggested upon learning that an epinephrine injection was the only reliable treatment.  Out of the mouths of babes.  I got the impression that J.Y. was brimming with ideas for potential medical breakthroughs.

Before introducing the speakers, the charismatic and charming Dinelle Lucchesi challenged the crowd to call out potential roadblocks standing in the way of progress in anti-aging research.  There was some disagreement about whether the fact that aging is not designated as an illness by the FDA is an issue.  Justin Rebo thought this was unimportant since any effective anti-aging treatment would be sure to combat any number of illnesses.  It was also suggested that aging is difficult to measure with bio-markers.  But my favorite roadblock was that “biology is hard.”  Yep, that sums it up.

Health Extension founder and awesome person, Joe Betts-LaCroix, then took to the stage to reiterate the fact that aging research is underfunded:

  1. Most healthcare money treats age-related diseases.

  2. Aging is the single biggest risk factor for these diseases.

  3. But funding to address the biochemical processes of aging is <0.01% of healthcare spending!

Typical shortsighted narrow-mindedness prevents us from exploring preventative medicine to the degree that we should.  But I was also excited to hear that Health Extension has commissioned a study by students from Moscow’s Skolkovo Management School* to make a quantitative case for more funding in aging research.  I guess Joe will be heading off to Washington with this in hand to beat Congressmen over the head with it or something.  I wish him the best of luck, but he might be better off packing a suitcase full of money.

The first guest presenter of the evening was Justin Rebo, CEO of Open Biotechnology.  In 2009, working with SENS, he built a device to filter out senescent immune cells from the blood.  This mechanism was interesting in that he attached metallic particles to antibodies which selectively bound to defective T cells, and then was able to pull them from the blood using a magnet.  There is something brutal and almost mechanical about this approach.  I like it.  I guess it might help with the ineffectiveness of flu vaccines for the old.  This blood scrubber seems to be something like a dialysis machine in that it filters all the blood of an animal and replaces it.  This work focuses on bioremediation of the blood, which reminds me of the work being done around rejuvenation of old mice using blood from young mice.  Rebo is now working on a new version of this device, which will add positive factors in addition to removing the negative ones.  He sees great promise in getting the blood compounds of older creatures to match the levels found in young animals.

So Rebo’s approach seems well aligned with the SENS model, in that it both treats aging as an accumulation of damage and toxins and seeks to remediate the damage.  This looks to be a sensible short-term solution (Well, except for this whole move the mitochondrial DNA into the nucleus business, that seems crazy.  But what do I know?).  The next speaker of the evening seemed to suggest a deeper cause of aging: it’s programmed by our genes.

Cynthia Kenyon is a distinguished scientist based at UCSF:

In 1993, Kenyon and colleagues’ discovery that a single gene mutation could double the lifespan of C. elegans sparked an intensive study of the molecular biology of aging. These findings have now led to the discovery that an evolutionarily conserved hormone signaling system controls aging in other organisms as well, including mammals.

– from her Biosketch

She gave a presentation similar to her 2011 TED Talk, which is definitely worth watching.  Kenyon’s sparkling wit is a pleasure to experience.  The upshot of her presentation was that the longevity mutation she found in C. elegans (Daf-2) somewhat impaired the worm’s ability to bind insulin and IGF-1, and this caused another gene called Daf-16 (if it was in the nucleus) to trigger all sorts of protective pathways and thus extend life.**  Sugar impairs this process, which is why Kenyon reluctantly admits that she eats a low-glycemic diet.  This was a big topic of interest among the folks who thronged her with questions after her talk.  But Kenyon is a real scientist and cautiously avoided advocating for this diet, since she says it hasn’t been proven to extend life.

As I mulled the two presentations over preparing to write this post, it occurred to me that there was some tension between the two talks.  Rebo and SENS are boldly striding ahead assuming that aging is a process of damage and that we can combat it by repairing damage.  But Kenyon seems to suggest a deeper, perhaps longer-term strategy of activating the body’s built in protective pathways to extend life.  She prefers small molecules for this, since they are easier to test.  Also, this modulation requires some finesse.  You can’t just go knocking genes out entirely.  If you couldn’t bind insulin at all, that would be a problem.

Kenyon’s work also suggests that aging might be a process that is controlled by genetic timers.  “How does the body know when menopause should occur?” she mused.  Perhaps the entire aging process is carefully timed by genetic pathways.  Maybe age-related death is an adaptive trait.  Wait, what?  Yep.  Think back to what J.Y. said earlier.  Death improves evolvability.  You would expect an organism that dies on a timer to evolve more.  Consider an environment that can support 100 organisms.  The more frequently those creatures die (assuming they can still reproduce), the greater the genetic diversity.  Uh, maybe I better stop here and go ask Razib.
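To see whether that intuition even holds in the simplest case, here is a toy simulation of the 100-organism environment; every parameter (capacity, mutation rate, step count) is invented, so treat it as a sketch of the argument rather than real population genetics.

```python
import random

def variants_explored(death_rate, steps=100, capacity=100, mu=0.1, seed=1):
    """Toy model: a fixed-capacity population in which a fraction of
    individuals dies each step and is replaced by copies of random
    survivors; each birth mutates into a brand-new variant with
    probability mu. Returns how many distinct variants ever appeared."""
    rng = random.Random(seed)
    pop = [0] * capacity          # everyone starts as variant 0
    next_id = 1
    explored = {0}
    for _ in range(steps):
        for _ in range(int(capacity * death_rate)):
            child = rng.choice(pop)               # copy a random parent...
            if rng.random() < mu:                 # ...with occasional mutation
                child = next_id
                explored.add(next_id)
                next_id += 1
            pop[rng.randrange(capacity)] = child  # replace a random individual
    return len(explored)

# Faster turnover means more births per step, hence more variants sampled
# in the same amount of time.
print(variants_explored(0.02) < variants_explored(0.2))  # True
```

With a 2% death rate the population samples only a couple dozen variants over the run; at 20% it samples roughly ten times as many.  That is the sense in which a death timer could buy evolvability; whether real selection ever favors it is exactly the part to go ask Razib about.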

For the sake of argument, let’s just say that aging and death are programmed, and that this does improve evolvability.  Well, that suggests that the “repair the damage” guys are missing the boat somewhat.  After all, the body seems to have these protective pathways waiting to be activated.  That’s sort of how calorie restriction might work: it tricks the body into activating protective genetic pathways.  A timed death is fine as long as you get to reproduce, but during times of stress, such as famine, our genes have a special bag of tricks that can help us survive.

But there is a further twist.  Kenyon mentioned that deactivating sensory input extends life in fruit flies.  They can’t sense their food and thus live longer.  I guess it has been shown that insulin rises more if you smell food.  So you calorie restriction people are best off skipping dinner with non-CR friends entirely.  It’s not just the food itself, but the signal of the food, that works its way into your genetic expression somehow.

But now we are getting into hormesis territory.  Someone get Seth Roberts on the phone.  A little bit of toxin triggers the body’s natural defenses.  Kenyon pointed out that mildly inhibited respiration was associated with extended lifespans and wondered if the resulting increase in toxins such as ROS was the cause.  Get my homeopathist on the phone.  So are the small amounts of herbicide on that non-organic food I disdain actually helpful?  Oh brother, now I have to rethink everything.  Maybe the SENS people should too, given that some of the supposedly damaging toxins like amyloid plaques might turn out to be protective mechanisms.  I guess this goes back to my favorite quote of the evening: “Biology is hard.”

Overall, I was impressed by both speakers.  Both the pragmatic Rebo and the deeply insightful Kenyon are striving to extend human health spans.  I don’t want to lose sight of this when I drill down into the details.  At the end of the day, successful anti-aging treatments will reduce suffering and increase health and happiness.  Imagine an 80-year-old as vibrant and healthy as a 20-year-old.  Even if I dropped dead right at 81, I would take that sort of old age in a heartbeat.  It’s a real shame that these aging researchers are so bereft of funding.  If anyone reading this knows any good policy wonks or lobbyists who care about longevity, you should direct them to the next Health Extension Salon so they can get involved.  Hey, I’m doing my part.  I’m getting the word out.

* Skolkovo might be the world’s coolest looking school by the way.

** It’s worth noting that at least some of Kenyon’s insulin/IGF mutants had normal reproduction. Thus there doesn’t seem to be a tradeoff between fertility and longevity.

Foresight 2013 – Day 3, Part 2

This article is a continuation of my commentary on the Foresight 2013 conference.  As I mentioned in my Day 1, Day 2, and Day 3 posts, the Foresight folks have a strict media policy in place.  So while I can’t really blog about the content of the presentations, I will discuss the work these speakers have previously made public.

I would love to say that anyone who thinks they understand quantum mechanics doesn’t understand quantum mechanics, but I really just don’t understand it.  When Harvard’s Alan Aspuru-Guzik gave his Foresight 2013 talk “Simulating Quantum Mechanics with Quantum Devices,” I listened with more enthusiasm than comprehension.  So bear with me.  Aspuru-Guzik likes to use quantum simulation to go after electronic structure calculations, which are some of the most computationally intensive problems in science.  “The calculation time for the energy of atoms and molecules scales exponentially with system size on a classical computer but polynomially using quantum algorithms.”  Aspuru-Guzik points out that theory is ahead of experimentation in this field, but he has found and built some toys to play with.
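The exponential-versus-polynomial claim is easy to illustrate with made-up cost models; the constants below are pure invention, and only the growth shapes matter.

```python
# Invented cost models: the classical approach scales exponentially with
# system size n, while the quantum algorithm scales polynomially but
# carries a huge constant overhead. The crossover is the whole point.
def classical_cost(n):
    return 2 ** n

def quantum_cost(n):
    return 1000 * n ** 3

# Small systems favor the classical machine; large ones favor the quantum one.
print(classical_cost(10) < quantum_cost(10))  # True: 1024 < 1,000,000
print(classical_cost(40) < quantum_cost(40))  # False: 2^40 dwarfs 1000 * 40^3
```

Whatever the real constants turn out to be, an exponential curve eventually crosses any polynomial one, which is why a modest quantum device folding tiny proteins is still interesting.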

So the idea here is to leverage quantum devices to simulate quantum mechanics.  I guess NIST has some device with hundreds of qubits.  But the systems Aspuru-Guzik gets to play with are more modest.  He ran a simplified protein folding problem on an 81-qubit D-Wave system and got 13 correct results out of 10,000 runs.  “The fact that it worked at all was significant.”  The investors must be thrilled.  I have heard that aside from factoring numbers, there aren’t many uses for quantum computing.  But if you can factor numbers, you basically break all encryption.  Of course, when I say “you” I mean the NSA.  But Aspuru-Guzik’s stuff is more benign.  He will be folding proteins and figuring out photosynthesis and stuff.  So he’s cool.

Next, Gerhard Klimeck gave a talk about nanoHUB.  Here’s what they say about themselves:

What is nanoHUB?  nanoHUB is the place for computational nanotechnology research, education, and collaboration.  nanoHUB hosts a rapidly growing collection of Simulation Programs for nanoscale phenomena that run in the cloud and are accessed through your web browser.  In addition there are Online Presentations, Courses, Learning Modules, Podcasts, Animations, Teaching Materials, and more to help you learn about the simulation programs and about nanotechnology.  nanoHUB supports collaboration via Workspaces and User groups.

So there are clearly educational resources for students, but I understand that researchers and industry folks get into the simulation stuff.  Boasting 900 papers with an h-index of 41, nanoHUB is a serious scientific resource.  So why not head on over and simulate a carbon nanotube or something?

Carrying on in the simulation vein, Ron Dror of D.E. Shaw Research talked about their custom supercomputer, Anton.  Anton is a massively parallel, ASIC-based pocket calculator that can figure out how drugs bind to receptors.  Dror has published work on G-protein-coupled receptor modulators in particular, which represent one third of all drugs.  Who knew?  Pretty cool stuff.  And this David E. Shaw fellow is an “intriguing and mysterious” character.  He saunters from his Stanford PhD over to Columbia, toys with parallel supercomputing, yawns, strolls down to Wall Street, dabbles with high-frequency trading, stretches, casually sets aside the resulting $27 billion hedge fund, and sets up a computational biochemistry research group to run molecular dynamics simulations of proteins.  What a slacker.

Topping off the conference was the venerable Caltech theorist William A. Goddard, III.  Your guess is as good as mine as to what he said… and I was in the audience.  There was something about a ReaxFF force field, which lets you model chemical reactions.  He also said he was happy to see theory starting to be able to predict something useful, which I am sure is a huge understatement.  But there was just too much math for me to really get a grasp on his talk.

I was incredibly awed by these sober scientists toiling away at the edge of human knowledge, delving into the very underpinnings of chemistry and biology.  What new wonders will be within our grasp as we come to understand and manipulate complex molecular interactions at the atomic level?  Dare I hope for my beloved utility fog someday?  If so, we will have them to thank.  And, uh, possibly pay royalties to, depending on how the IP plays out.