Why the Back to Nature Movement Failed

[Image: modern caveman on computer]

The paleo diet has been popular for a while now, and it prescribes an interesting “back to nature” way of life. The premise is that humans evolved in an environment devoid of processed foods and high-glycemic carbs, so we should eat a diet that more closely mimics that of our paleolithic ancestors.  It also suggests that everyone should be outside, moving around all of the time.  I’m not going to try to defend the paleo diet per se; some people lose weight on it, whatever.  But it’s an interesting framework for considering what environments we humans are adapted to and how we can apply that to the problems of modern life.

Consider depression. Two of the most effective treatments for depression are exercise and light therapy.  Humans clearly spent at least the last 100,000 years of their evolution largely outdoors, moving around in the sunlight.  Depression is probably best thought of as a disease of modern life, where we live indoors and are largely sedentary.

Another aspect of modern, developed cultures is social isolation.  Humans are social animals, and we arguably evolved in tribes of roughly 150 members, per Dunbar’s number.  (I know that Dunbar’s work has been supplanted by newer research; let’s just use this number as a starting point.)

So let’s take these three aspects of an evolved human lifestyle: 1) Living outdoors in the sun, 2) Moving around continually, and 3) Being surrounded by a community of other humans invested in our survival.  These are all things that many of us struggle with in modern life.  Sure, maybe some people still live in tight-knit, traditional farm communities that fulfill these needs, but, here in the US, economic forces have largely broken the cohesion of these rural places and we see drug abuse epidemics as a consequence.

Transhumanists can rightly argue that our need for sunlight, exercise, and social support are just kludgy legacy code tied to our messy biological bodies.  Maybe we can upgrade humans to be more machine-like with replaceable parts and we can do away with these outdated needs.  That’s a valid argument.  I don’t happen to agree with it, but it’s coherent at least.  For the sake of this discussion, I ask my transhumanist friends to acknowledge that these human 2.0 upgrades don’t seem to be right around the corner, so it probably makes sense to make accommodations for the hardware we humans are running right now.

Hippies tried to solve the problems of modern life in the sixties with their back to nature movement.  Good old Stewart Brand was in the thick of it with his Whole Earth Catalog.  Many long-haired freaks trekked out to the middle of nowhere to build geodesic domes out of logs and get naked in the mud together.  Awesome!

But whatever happened to that movement, anyway?  What went wrong?  Brand himself said at a Long Now talk that the hippies discovered that the cities were where the action was.  I’m fortunate to work with some of these old hippie scientists at one of my clients, and I asked a fellow named Frosty why the back to nature movement never properly took hold.  He laughed and said that when his friends from the city showed up at the rural commune, they blanched at how much work needed to be done.  They didn’t have the skills needed to build structures by hand, grow food, or dig latrines.  And then they would look around and ask, “Where’s the bar?”  They wanted to get drunk and hang out.  Who can blame them?

Twentieth-century communists in Asia attempted their own versions of the back to nature movement.  They took what appears to be a sound hypothesis and effectively implemented it as genocide.  Mao’s Cultural Revolution forced the relocation of city dwellers to the countryside, resulting in disaster.  Pol Pot’s Year Zero was a similarly violent attempt to reset the clock and force modern people to live as our ancestors did, and it was also a terrible failure.  So yes, as Scott Alexander says, we “see the skulls.”  We need to learn the lessons of previous failed attempts before we can rectify the problems of modern life.

We can’t turn back the clock.  We have to start where we are and assume that progress will keep happening whether we like it or not.  Cities are where the power is accumulating.  Cities are more energy-efficient.  Cities are where the action is.  But how can we remake our lifestyles to fit them?  We see the first glimmers of a solution in Silicon Valley’s obsession with social, mobile, and augmented reality.  Perhaps we can find our communities via social network technology.  I certainly feel vastly enriched by my East Bay Futurists Meetup.  I’ve made good friends there, who help me grow and teach me a lot.  Mobile technology has made it easier and easier for people to do real work on the move.  Maybe augmented reality will close the loop and give us the ability to move freely around the city, connect with our communities, and still do modern work, all while getting exercise and sunlight at the same time.  Call it the “Back to the City, But Working Outside, Walking Around” movement?  Ahh, well, not catchy, but you get the picture.  We just need to start redesigning our cities a little bit.  Step One: More parks!

Superintelligence Skepticism: A Rebuttal to Omohundro’s “Basic A.I. Drives”

[Image: Superintelligence by Nick Bostrom (book cover)]

In the past couple of years, we have seen a number of high-profile figures in science and tech warn us of the dangers of artificial intelligence.  (See comments by Stephen Hawking, Elon Musk, and Bill Gates, all expressing concern that A.I. could pose a danger to humanity.)  In the midst of this worrisome public discussion, Nick Bostrom released a book called Superintelligence, which outlines the argument that A.I. poses a real existential threat to humans.  A simplified version of the argument says that a self-improving A.I. will increase in intelligence so rapidly that it will go “FOOM” and far surpass all human intelligence in the blink of an eye.  This godlike A.I. will then have the power to rearrange all of the matter and energy in the solar system and beyond to suit its preferences.  If its preferences are not fully aligned with what humans want, then we’re in trouble.
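
As an aside, the “FOOM” step is easiest to see as a toy growth model.  Here is my own illustrative Python sketch (nothing from Bostrom’s book, and the numbers are made up): whether self-improvement explodes or merely creeps upward depends entirely on whether each gain in intelligence buys a more-than-proportional gain in the ability to improve further.

```python
def grow(intelligence, returns_exponent, step=0.01, steps=2000):
    """Crudely integrate dI/dt = I ** returns_exponent with Euler steps."""
    for _ in range(steps):
        try:
            intelligence += step * intelligence ** returns_exponent
        except OverflowError:
            return float("inf")  # finite-time blowup: the "FOOM" regime
    return intelligence

print(grow(1.0, 0.5))  # diminishing returns: a slow polynomial creep (~121)
print(grow(1.0, 1.0))  # proportional returns: exponential growth (~4.4e8)
print(grow(1.0, 1.5))  # increasing returns: blows up in finite time (inf)
```

The whole debate, in other words, hinges on which returns regime real self-improving systems would actually be in, and that is an empirical question nobody has settled.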

A lot of people are skeptical about this argument, myself included.  Ben Goertzel has offered the most piercing and even-handed analysis of this argument that I have seen.  He points out that Bostrom’s book is really a restatement of ideas that Eliezer Yudkowsky has been espousing for a long time.  Then Goertzel digs very carefully through the argument and concludes that the likelihood of an A.I. destroying humanity is probably lower than Bostrom and Yudkowsky think it is, which I agree with.  He also notes the opportunity costs of NOT pursuing A.I., but I don’t think we actually need to worry about that, given how the A.I. community seems to be blasting full speed ahead while A.I. safety concerns go largely unheeded.

Now, even though I assign a low probability that A.I. will destroy all humans, I don’t rule it out.  It would clearly be a very bad outcome and I am glad that people are trying to prevent this. What concerns me is that some of the premises that Bostrom bases his arguments on seem deeply flawed.  I actually think that the A.I. safety crowd would be able to make a STRONGER argument if they would shore up some of these faulty premises, so I want to focus on one of them, basic A.I. drives, in this post.

In Superintelligence, Bostrom cites a 2008 paper by Stephen Omohundro called “The Basic A.I. Drives.”  From the abstract:

We identify a number of “drives” that will appear in sufficiently advanced A.I. systems of any design. We call them drives because they are tendencies which will be present unless explicitly counteracted.

Now this is already raising warning bells for me, since we already have plenty of A.I. systems with goals, and none of them seem to be exhibiting any of the drives that Omohundro is warning about.  Maybe they aren’t “sufficiently” advanced yet?  It also seems quite odd that Omohundro predicts these drives will be present without having been designed in by the programmers.  He doesn’t really offer a mechanism for how they might arise.  I can imagine a version of this argument that says “A.I.s with these drives will outcompete A.I.s without these drives” or something.  But that still requires a programmer to put the drives in; they don’t just magically emerge.

Biological systems have inherent drives, but I don’t see how any artificial system could spontaneously acquire drives unless it had machinery similar to what gives rise to drives in living things.  And biological systems are constrained by things like the need to survive: humans get hungry, so we have to eat, and that basic drive is rooted in our biology.  The state of “wanting” something doesn’t just show up unannounced; it’s the result of complex systems, and the only existing examples of wanting we see are in biological systems, not artificial ones.  If someone posits an artificial system that has the drives of a living thing but not the constraints, then I need to see the mechanism that they think could make this happen.

So that’s a huge problem.  What does it even mean to say that A.I. will “have” these drives?  Where do these drives come from?  Big problem.  Huge.

Anyway, let’s dig in a bit further and examine each of these drives.  What we see is that, in each case, Omohundro posits a reasonable-sounding explanation of why each drive would be “wanted” by an A.I.  But even though this is a paper written in an academic style, with citations and everything, it’s not much more than a set of reasonable-sounding explanations.  So I will take a cue from the rationalist blogger Ozymandias: I will list each of Omohundro’s drives and then offer my own plausible-sounding explanations for why each drive could turn out entirely differently.

1. A.I.s will want to self-improve.  Why self-modify when you can make tools?  Tools are a safer way to add functionality than self-modification.  This is the same argument I use against current-generation grinders: don’t cut yourself open to embed a thermometer; just grab one when you need it and then put it aside.  Also, it’s easy to maintain a utility function if the A.I. just straps on a module, as opposed to messing with its own source code.  Upgrades to tools are easy too.  It’s foolish and risky to self-modify when you can just use tools.

When I first posted this to Facebook, I got into a whole debate with Alexei, who has insight into MIRI’s thinking.  He insisted that the optimization of decision-making processes will lead to overwhelming advantages over time.  I countered that competing agents don’t get unbounded time to work on problems, which is why we see “good enough,” satisficing strategies throughout nature.  But a lot of A.I. safety people won’t allow that there can ever be any competition between A.I.s, because once a single A.I. goes FOOM and becomes godlike, no others can compete with it and it becomes the one to rule them all.  But the period leading up to takeoff would certainly involve competition with other agents, and I also believe that problem-solving intelligence does not exist independently, outside of a group, but I won’t get into that here.

2. A.I.s will want to be rational.  This seems correct in theory.  Shouldn’t we predict that rational agents will outcompete irrational agents?  Yet when we look at the great competition engine of evolution, we see humans at the top, and we aren’t that rational.  Maybe it’s really, really, really hard for rational agents to exist, because it’s hard to predict the outcomes of actions, and because goals evolve over time.  I’m not sure about this one; my objection is weak.

3. A.I.s will try to preserve their utility functions.  Utility functions for humans (i.e. human values) have clearly evolved over time and are different in different cultures.  Survival might be the ultimate function of all living things, followed by reproduction.  Yet we see some humans sacrificing themselves for others and also some of us (myself included) don’t reproduce.  So even these seemingly top level goals are not absolute.  It may well be that an agent whose utility function doesn’t evolve will be outcompeted by agents whose goals do evolve.  This seems to be the case empirically.

4. A.I.s will try to prevent counterfeit utility.  I don’t really disagree with this.  Though there may be some benefit to taking in SOME information that falls outside the normal search space of pure goal pursuit.  The A.I. equivalent of smoking pot might be a source of inspiration that leads to insights, and thus actually rational.  But it could certainly APPEAR to be counterfeit utility.

5. A.I.s will be self-protective.  Hard to disagree with this; it’s a reliable goal.  But, as I mentioned earlier in this post, I have questions about where this goal would come from.  DNA-based systems have it, but it’s built into how we function; it didn’t just arise.  AlphaGo, for instance, doesn’t resist being turned off.

6. A.I.s will want to acquire resources and use them efficiently.  Omohundro further says, “All computation and physical action requires the physical resources of space, time, matter, and free energy.  Almost any goal can be better accomplished by having more of these resources.”  I strongly disagree with this.  Rationalists have told me that Gandhi wouldn’t take a pill that would make him a psycho killer, and that they want to build a Gandhi-like A.I.  But if we take that analogy a bit farther, we see that Gandhi didn’t have much use for physical resources.  There are many examples like this.  A person who prefers to sit on the couch all day and play guitar doesn’t require more physical resources either.  They might acquire resources by writing a hit song, but those resources aren’t instrumental to their success.

Guerrilla warfare can defeat much larger armies without amassing more resources.  Another point a futurist would make is that a sufficiently advanced A.I. will have an entirely different view of physics.  Resources like space, time, and matter might not even be relevant, or could be created or repurposed in ways we can’t imagine.  This is a bit like a bacterium assuming that humans will always need glucose.  We do, of course, but we haven’t taken all of the glucose away from bacteria, far from it.  And we get glucose via mechanisms that a bacterium can’t imagine.

So really, I hope that the A.I. safety community will consider these points and try to base their arguments on stronger premises.  Certainly Omohundro’s 2008 paper is in need of a revision of some kind.  If we are just throwing reasonable-sounding explanations around, let’s consider a broader range of positions.  Let’s consider the weaknesses of optimizing for one constraint, as opposed to satisficing across a lot of goals.  A satisficing A.I. seems much less likely to go down the FOOM path than an optimizing A.I., and, ironically, it would also be more resilient to failure.  I offer all of this criticism with love, though.  I really do.  Because at the end of the day, I don’t want our entire light cone converted into paper clips either.
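
To make that optimizing-versus-satisficing contrast concrete, here is a minimal toy sketch in Python (my own illustration with made-up numbers, not anything from MIRI or Omohundro): when evaluation time is bounded, because rivals act while you deliberate, an agent that commits to the first “good enough” option walks away with a workable answer, while the exhaustive optimizer can run out of budget holding nothing.

```python
import random

random.seed(0)

def quality(candidate):
    """Stand-in for an expensive evaluation of one candidate plan."""
    return random.random()

def optimize(candidates, budget):
    """Exhaustive search: only commits after evaluating every candidate."""
    best = None
    for i, c in enumerate(candidates):
        if i >= budget:
            return None  # time's up mid-search: no committed answer
        q = quality(c)
        if best is None or q > best:
            best = q
    return best

def satisfice(candidates, budget, good_enough=0.7):
    """Commit to the first candidate that clears a 'good enough' bar."""
    for i, c in enumerate(candidates):
        if i >= budget:
            return None
        q = quality(c)
        if q >= good_enough:
            return q
    return None

plans = range(10_000)  # a search space far larger than the time budget
print(optimize(plans, budget=50))   # None: never finished deliberating
print(satisfice(plans, budget=50))  # ~0.7 or better: acted in time
```

This toy proves nothing about future A.I., of course; it just shows why “good enough now” tends to beat “perfect eventually” whenever the clock is shared with competitors.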

[EDIT 4/10/2016]
I appreciate that Steve came and clarified his position in the comments below.  I think that my primary objection now boils down to the fact that the list of basic A.I. drives is essentially cost- and risk-insensitive.  If we consider the cost and risk of strategies, then an entirely different (more realistic?) list would emerge, providing a different set of premises.

[EDIT 4/11/2016]
When you think about it, Omohundro is basically positing a list of strategies that would literally help you solve any problem.  This is supposed to be a fully general list of instrumental goals for ANY terminal goal.  This is an extraordinary claim.  We should be amazed at such a thing!  We should be able to take each of these goals and use them to solve any problem we might have in our OWN lives right now.  When you think of it this way, you realize that this list is pretty arbitrary and shouldn’t be used as the basis for other, stronger arguments, or for calculating likelihoods of various A.I. outcomes such as FOOM singletons.

[EDIT 4/12/2016]
I was arguing with Tim Tyler about this on Facebook, and he pointed out that a bunch of people have come up with these extraordinary lists of universal instrumental values.  I pointed out that all of these seem equally arbitrary, and that it is amazing to me that cooperation is never included.  Cooperation is basically a prerequisite for all advanced cognition, and yet all these A.I. philosophers are leaving it off their lists?  What a strange blind spot.  These sorts of fundamental oversights are biasing the entire conversation about A.I. safety.

We see in nature countless examples of solutions to coordination problems, from biofilms to social animals, and yet so many A.I. people, and rationalists in general, spurn evolution as a blind idiot god.  Well, this blind idiot god somehow demanded cooperation, and that’s what it got!  More A.I. safety research should focus on proven solutions to these cooperation problems.  What’s the game theory of biofilms?  More Axelrod, less T.H. Huxley!
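
To gesture at what “more Axelrod” might look like in code, here is a bare-bones iterated prisoner’s dilemma in Python, my own toy sketch using the standard textbook payoffs (not taken from any of the papers discussed): two reciprocating cooperators earn triple what two defectors do, which is the basic reason cooperation keeps showing up on evolution’s list even while it is missing from the A.I. philosophers’ lists.

```python
# Standard prisoner's dilemma payoffs: (my_move, their_move) -> my_payoff.
PAYOFFS = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(history):
    """Axelrod's famous reciprocator: open with cooperation, then mirror."""
    return "C" if not history else history[-1][1]

def always_defect(history):
    """The uncooperative baseline."""
    return "D"

def play(strat_a, strat_b, rounds=200):
    """Return total payoffs for both strategies over repeated interaction."""
    hist_a, hist_b = [], []  # each entry: (my_move, their_move)
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strat_a(hist_a), strat_b(hist_b)
        score_a += PAYOFFS[(a, b)]
        score_b += PAYOFFS[(b, a)]
        hist_a.append((a, b))
        hist_b.append((b, a))
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))      # (600, 600): cooperation pays
print(play(always_defect, always_defect))  # (200, 200): mutual defection
print(play(tit_for_tat, always_defect))    # (199, 204): loses head-to-head,
# but in a mixed population the cooperators' surplus dominates, which was
# Axelrod's tournament result.
```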

Health Extension #11: Aging – Death by Damage vs. Death by Design

Sorry for the provocative title; let me start by clarifying that I in no way subscribe to intelligent design.  I am just trying to contrast the viewpoints of the two speakers I saw at Health Extension Salon #11 last week: Cynthia Kenyon and Justin Rebo.  More on that later.

The Health Extension Salon was held at Runway SF this month, and it was outstanding as usual.  I haven’t been getting out enough lately, so it was great to chat with interesting people and hear about amazing science.  Runway SF, as you may know, is an incubator/co-working sort of thing in the Twitter building on Market Street in San Francisco.  I guess it’s by invitation only.  They have an igloo, and I saw some quadcopters lying around and whatnot.  So, you know, it’s pretty cool.

I bumped into Hank Pellissier, who I first met years ago at my East Bay Futurists Meetup, and he told me a bit more about his new book, Brighter Brains.  Hank has compiled a huge list of factors that affect intelligence, ranging from environmental influences to inbreeding.  It seems like an interesting survey.

Then I listened in on a conversation with some blindingly smart people, R.J. and J.Y. among others, and wisely kept my mouth shut.  J.Y. suggested that programmed death might be an adaptive trait that increases a species’ evolvability.  More on that later as well.  He also blew my mind by wondering aloud whether the lunar cycles of women were a throwback to our ancient ancestors that dwelled in tidal pools.  He pointed out that many illnesses vary in symptom intensity depending on the phase of a woman’s menstrual cycle, but that the medical profession fails to take this into account when prescribing dosages of medicine.  Thus, many women find themselves overdosed for half the month and underdosed for the other half.  He suggested that there is vast potential to exploit this to improve women’s health.  I hope some bio-hackers look into this further.

J.Y. also suggested that anaphylaxis (like from a severe nut allergy) might be the result of a sort of epinephrine (adrenaline) regulation problem.  This was an idea his young child apparently came up with upon learning that an epinephrine injection was the only reliable treatment.  Out of the mouths of babes.  I got the impression that J.Y. was brimming with ideas for potential medical breakthroughs.

Before introducing the speakers, the charismatic and charming Dinelle Lucchesi challenged the crowd to call out potential roadblocks standing in the way of progress in anti-aging research.  There was some disagreement about whether it matters that the FDA does not designate aging as an illness.  Justin Rebo thought this was unimportant, since any effective anti-aging treatment would be sure to combat any number of illnesses.  It was also suggested that aging is difficult to measure with biomarkers.  But my favorite roadblock was that “biology is hard.”  Yep, that sums it up.

Health Extension founder and awesome person, Joe Betts-LaCroix, then took to the stage to reiterate the fact that aging research is underfunded:

  1. Most healthcare money treats age-related diseases.

  2. Aging is the single biggest risk factor for these diseases.

  3. But funding to address the biochemical processes of aging is <0.01% of healthcare spending!

Typical shortsighted narrow-mindedness prevents us from exploring preventative medicine to the degree that we should.  But I was also excited to hear that Health Extension has commissioned a study by students from Moscow’s Skolkovo Management School* to make a quantitative case for more funding in aging research.  I guess Joe will be heading off to Washington with this in hand to beat Congressmen over the head with it or something.  I wish him the best of luck, but he might be better off packing a suitcase full of money.

The first guest presenter of the evening was Justin Rebo, CEO of Open Biotechnology.  In 2009, working with SENS, he built a device to filter senescent immune cells out of the blood.  The mechanism was interesting: he attached metallic particles to antibodies that selectively bound to defective T cells, and then pulled the cells from the blood using a magnet.  There is something brutal and almost mechanical about this approach.  I like it.  I guess it might help with the ineffectiveness of flu vaccines in the elderly.  This blood scrubber seems to be something like a dialysis machine, in that it filters all the blood of an animal and returns it.  This work focuses on bioremediation of the blood, which reminds me of the work being done on rejuvenating old mice with blood from young mice.  Rebo is now working on a new version of the device, which will add positive factors in addition to removing negative ones.  He sees great promise in getting the blood compound levels of older creatures to match those found in young animals.

So Rebo’s approach seems well aligned with the SENS model, in that it both treats aging as an accumulation of damage and toxins and seeks to remediate that damage.  This looks to be a sensible short-term strategy (well, except for the whole move-the-mitochondrial-DNA-into-the-nucleus business; that seems crazy.  But what do I know?).  The next speaker of the evening suggested a deeper cause of aging: it’s programmed by our genes.

Cynthia Kenyon is a distinguished scientist based at UCSF:

In 1993, Kenyon and colleagues’ discovery that a single gene mutation could double the lifespan of C. elegans sparked an intensive study of the molecular biology of aging. These findings have now led to the discovery that an evolutionarily conserved hormone signaling system controls aging in other organisms as well, including mammals.

– from her Biosketch

She gave a presentation similar to her 2011 TED Talk, which is definitely worth watching.  Kenyon’s sparkling wit is a pleasure to experience.  The upshot of her presentation was that the longevity mutation she found in C. elegans (Daf-2) somewhat impaired the worm’s ability to bind insulin and IGF-1, which allowed another gene, Daf-16 (when it is in the nucleus), to trigger all sorts of protective pathways and thus extend life.**  Sugar impairs this process, which is why Kenyon reluctantly admits that she eats a low-glycemic diet.  This was a big topic of interest among the folks who thronged her with questions after her talk.  But Kenyon is a real scientist and cautiously avoided advocating for this diet, since she says it hasn’t been proven to extend life.

As I mulled the two presentations over while preparing to write this post, it occurred to me that there was some tension between the two talks.  Rebo and SENS are boldly striding ahead on the assumption that aging is a process of damage and that we can combat it by repairing that damage.  But Kenyon seems to suggest a deeper, perhaps longer-term strategy: activating the body’s built-in protective pathways to extend life.  She prefers small molecules for this, since they are easier to test.  Also, this modulation requires some finesse.  You can’t just go knocking genes out entirely; if you couldn’t bind insulin at all, that would be a problem.

Kenyon’s work also suggests that aging might be a process controlled by genetic timers.  “How does the body know when menopause should occur?” she mused.  Perhaps the entire aging process is carefully timed by genetic pathways.  Maybe age-related death is an adaptive trait.  Wait, what?  Yep.  Think back to what J.Y. said earlier.  Death improves evolvability.  You would expect a species whose organisms die on a timer to evolve faster.  Consider an environment that can support 100 organisms.  The more frequently those creatures die (assuming they can still reproduce), the greater the genetic diversity over time.  Uh, maybe I better stop here and go ask Razib.
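
Actually, before bothering Razib, here is a back-of-the-envelope simulation of that intuition (my own toy sketch in Python, with made-up parameters): hold the habitat fixed at 100 organisms, vary the death rate, and count how many distinct genotypes selection ever gets to audition over the same span of time.

```python
import random

random.seed(42)

def genotypes_tried(death_rate, population_size=100, steps=1000):
    """Count distinct genotypes that ever existed over a fixed time span."""
    population = list(range(population_size))  # genotype IDs
    seen = set(population)
    next_id = population_size
    for _ in range(steps):
        deaths = int(death_rate * population_size)
        # Random survivors keep their slots; every death frees a slot for
        # a newborn, and each newborn carries a novel mutation (a new ID).
        population = random.sample(population, population_size - deaths)
        for _ in range(deaths):
            population.append(next_id)
            seen.add(next_id)
            next_id += 1
    return len(seen)

for rate in (0.01, 0.05, 0.20):
    print(f"death rate {rate:.0%}: {genotypes_tried(rate):,} genotypes tried")
# death rate 1%: 1,100 genotypes tried
# death rate 5%: 5,100 genotypes tried
# death rate 20%: 20,100 genotypes tried
```

Even this cartoon, which has no selection in it at all, shows the mechanism: with the habitat capped, faster turnover is the only way to explore more of genotype space per unit time.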

For the sake of argument, let’s just say that aging and death are programmed, and that this does improve evolvability.  That suggests that the “repair the damage” guys are missing the boat somewhat.  After all, the body seems to have these protective pathways waiting to be activated.  That’s sort of how calorie restriction might work: it tricks the body into activating protective genetic pathways.  A timed death is fine as long as you get to reproduce, but during times of stress, such as famine, our genes have a special bag of tricks that can help us survive.

But there is a further twist.  Kenyon mentioned that deactivating sensory input extends life in fruit flies.  They can’t sense their food and thus live longer.  I guess it has been shown that insulin rises more if you smell food.  So you calorie-restriction people are best off skipping dinner with non-CR friends entirely.  It’s not just the food itself, but the signal of the food, that works its way into your genetic expression somehow.

But now we are getting into hormesis territory.  Someone get Seth Roberts on the phone.  A little bit of toxin triggers the body’s natural defenses.  Kenyon pointed out that mildly inhibited respiration was associated with extended lifespans, and wondered whether the resulting increase in toxins such as ROS was the cause.  Get my homeopath on the phone.  So are the small amounts of herbicide on that non-organic food I disdain actually helpful?  Oh brother, now I have to rethink everything.  Maybe the SENS people should too, given that some supposedly damaging toxins, like amyloid plaques, might turn out to be protective mechanisms.  I guess this goes back to my favorite quote of the evening: “Biology is hard.”

Overall, I was impressed by both speakers.  Both the pragmatic Rebo and the deeply insightful Kenyon are striving to extend human health spans.  I don’t want to lose sight of this when I drill down into the details.  At the end of the day, successful anti-aging treatments will reduce suffering and increase health and happiness.  Imagine an 80-year-old as vibrant and healthy as a 20-year-old.  Even if I dropped dead right at 81, I would take that sort of old age in a heartbeat.  It’s a real shame that these aging researchers are so bereft of funding.  If anyone reading this knows any good policy wonks or lobbyists who care about longevity, you should direct them to the next Health Extension Salon so they can get involved.  Hey, I’m doing my part.  I’m getting the word out.

* Skolkovo might be the world’s coolest-looking school, by the way.

** It’s worth noting that at least some of Kenyon’s insulin/IGF mutants had normal reproduction (http://rstb.royalsocietypublishing.org/content/366/1561/9.full), so there doesn’t seem to be an obligatory tradeoff between fertility and longevity.