Against Hedonism

(The reader might enjoy this post more by picturing a 1940s movie character pounding a table or pointing heavenward while he rants this.)

Some of us are stuck in a world where we have to discover the meaning of life for ourselves.  In some places and times, people could rely on tribal roles to guide their lives and to give them meaning.  As human societies became systemized, these systems guided our lives, but the narratives to provide meaning started to multiply.  Systems don’t care about tribe. Capitalism strips away values blindly, cutting away tribal prejudices as well as virtues. System operators who hold values that reduce profitability are simply removed from the playing field, outcompeted by other players who are willing to discard more and more of their values.

I myself turned to subcultures to provide meaning when I was still a teen.  I read all about the emptiness of a systematized existence from the Beat writers.  I identified with the punk rockers shrieking over the banality of our suburban existence.  I unplugged from the system, dropped out of college, tried to find meaning in art and rebellion.  But realities started kicking in as I grew older and I chose to discard my meaning-rich life for one that provided healthcare.  I became systematized, entered the corporate world. I became the repulsive salaryman.

The question of meaning is really the question of which values we SHOULD hold.  What is valuable enough to devote our energies toward? What stories inspire us enough to get out of bed each morning?  To fulfill our roles in some project greater than ourselves? Now of course, the elites have always struggled with these questions because they are the ones who craft the stories that guide their tribes.  Consider the Greek philosophers obsessed with defining virtue. Perhaps it’s a shame that the elites have failed the common man and left him flailing without guidance. Or perhaps no one can listen to stories when their stomachs are empty and their prospects are dimmed by the systems that rampage across the planet.

The Bay Area intelligentsia that bothers to concern itself with such questions seems to have converged on the notion that we should strive for the most pleasure for the greatest number of people over the greatest periods of time, what we might call hedonic utilitarianism.  Now of course this raises some sticky questions, since pleasure is hard to define and measure, which is the problem that my friends at QRI are focused on. There is also the question of how to aggregate the pleasure of populations. Is a society with a small number of people whose lives are awesome and a vast number whose lives are terrible better or worse than one where everyone is just ok, but no one is terribly happy or sad?
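Just to make the aggregation question concrete, here’s a toy sketch in Python. Every population size and “hedon” score below is invented for illustration; the only point is that total and average aggregation can rank the very same societies differently.

```python
# Toy illustration of the aggregation problem. All numbers are invented.
unequal = [90.0] * 10 + [1.0] * 990   # a few awesome lives, many terrible ones
flat = [50.0] * 1000                  # everyone is just ok

def total_utility(population):
    """Total view: sum everyone's pleasure."""
    return sum(population)

def average_utility(population):
    """Average view: pleasure per capita."""
    return sum(population) / len(population)

for name, pop in [("unequal", unequal), ("flat", flat)]:
    print(f"{name}: total={total_utility(pop):.0f}, average={average_utility(pop):.2f}")

# unequal: total=1890, average=1.89
# flat:    total=50000, average=50.00
# Shrink the "flat" society to 30 people and its total drops to 1500,
# below the unequal society's 1890, while its average stays at 50:
# which society is "better" depends entirely on the aggregation rule.
```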

And if this were just a question of morals, then you could claim whatever values you want. Expressed values are just preferences, similar to preferring vanilla over chocolate, or the axioms that serve as the foundation of a mathematical system.  If I wanted to take a properly sociopathic position, I would argue that our values are simply the rules that allow us to operate in a given subculture. But neurotypicals get nervous when I talk like that, so I tend to avoid it. (Avoid hanging out with neurotypicals, that is.)

The real thing that annoys me is when people start saying things like values are universal or that humans provably value pleasure.  You can say that people OUGHT to value X or Y or whatever; I don’t care (1). But when you try to say that humans DO value pleasure, it gets my hackles up.  And that’s why I get in fights on Facebook.

It’s not that I don’t enjoy pleasure; ask anyone who has seen me drunk.  It’s more this: suppose I could offer a hedonist a box that would let them feel a broad range of pleasures, from orgasm to the joy of discovery to the warm glow of helping humanity thrive for millions of years, while in fact, unknown to them, it was all an illusion and all of humanity might be suffering. I predict the hedonist wouldn’t take the box.

And then they will say something like, well, I’m a utilitarian, so I value all life and all future life, blah blah blah, and I’m back to not listening again.  Or Andres will say something like, yeah, well, what if all of these are true: you have experience, all experiences exist eternally, and you are all beings experiencing all experiences.  And then I pick up the pieces of my brain and go back to the question that really annoys me. Is all human behavior really driven by pleasure?

On the one hand, I’m sympathetic to this position.  I used to say this about altruism: we help others selfishly, because it benefits us.  It either makes us feel good about ourselves or it is rewarded by the group or it impresses potential mates or whatever.  And I can’t even argue that I don’t enjoy some virtuous acts. But the idea that all human behavior is driven by pleasure seeking seems to imply something else as well: that no behavior is instinctual or habitual. I do want to argue that some virtuous acts simply aren’t rewarded with pleasure or even rewarded with reduced suffering. But I will start with the instinctual and habitual cases because those are easier.

It seems obvious that a lot of what we do is simply force of habit.  I get up and brush my teeth every morning, less to avoid the suffering of film on my teeth and more because it’s simply the habit that has been installed through repetition. (1.1)  Can a case be made that we prefer familiar tasks because they are less costly from an energy expenditure perspective? Sure. Are we aware of that as pleasure? Unlikely. (2) Is the familiar always a preferred state?  No, sometimes we seek novelty. Maybe we only seek novelty when we have an excess of energy to process it with?  Not sure on that one.

A side point I would like to make is that certain friends of mine *cough Mike* refer to hedonism as preferring preferred states, which is just so… tautological.  Well yes, we prefer preferred states. But do we do things that we would prefer NOT to do? Sure. All the time. And then comes some argument about avoiding damage to self-image (3) or computing the total future pleasure or some other complex explanation easily trimmed away by Occam’s razor.  Perhaps SOME of us are simply programmed by evolution to be dutiful? Would that be so hard to buy? I can see all manner of ways in which a predictably dutiful agent will be rewarded in environments that require a lot of (sometimes costly) cooperation, as in the toy model below.
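To be clear, this is just an intuition, but it’s easy to sketch. Here’s a toy model in Python, with every payoff and probability invented for illustration, of how an agent that dutifully pays the cost of cooperation can outearn a flake over a lifetime once partners start avoiding the flake:

```python
# Toy model of duty as an evolved strategy. COST, BENEFIT, and the 50%
# "word gets around" probability are all invented for illustration.
import random

COST = 1.0     # what fulfilling a duty costs the agent, each round
BENEFIT = 3.0  # what a partner's fulfilled duty is worth to the agent

def lifetime_payoff(dutiful, rounds=100):
    total = 0.0
    has_partner = True
    for _ in range(rounds):
        if has_partner:
            total += BENEFIT          # partner does their duty for us
            if dutiful:
                total -= COST         # we pay the cost of doing ours
            elif random.random() < 0.5:
                has_partner = False   # flakes eventually get shunned
    return total

random.seed(42)
print("dutiful:", lifetime_payoff(True))   # 200.0: pays the cost every round
print("flaky:  ", lifetime_payoff(False))  # far less: free-rides, then shunned
```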

And I’ve been dutiful.  And being dutiful feels truly awful sometimes.  So awful that in hindsight I really can’t believe that I fulfilled some of those duties.  And I might have said that I couldn’t have looked myself in the mirror afterward if I hadn’t fulfilled my self-perceived duties.  But it’s not like I did the math in my head and added up the number of hours I would have suffered. Because I can tell you, the suffering of my conscience from neglecting some duties would have been tiny compared to the suffering of fulfilling them.  Rationalization is a painkiller for sure.

Or consider those who choose suffering for a cause over the comforts of staying at home.  Are they really just blissing out on how cool they are being as the enemy drops mortars on them?  They do their duty, and it’s a grim and terrible thing sometimes and it’s an impulse that has been corrupted countless times, high and low, by leaders and by dysfunctional partners alike.   BUT, it’s not an impulse properly captured by “hedonism.”

To the degree that humans ARE driven by pleasure seeking, the most likely reason WHY that would be adaptive is that environments aren’t predictable, so behavior can’t always be genetically hard-coded.  Sometimes behavior should be rewarded, other times it should be punished.  But is this true of ALL behavior?  That would suggest that there are simply no invariant aspects of fitness landscapes over time.  I mean, clearly a hedonist would allow that breathing isn’t rewarded per se, so IT can be autonomic, oxygen being a predictable environmental resource over evolutionary timeframes.  But what about parenting?  If a child became too annoying and parents simply abandoned them, the species wouldn’t last very long.

Parenting is difficult to describe in hedonistic terms. Most parents admit to being less happy while their children are in the home.  Caregiving sort of sucks and is not super rewarding. Don’t let anyone fool you. But our species keeps doing it.

We can note that sex is rewarded much more than parenting.  Which suggests that we need to learn which partners to connect with, but we don’t get much choice over which children we care for.  Or more generally: the behaviors that are most rewarded might be the ones that depend most on learning, because they deal with the most variable aspects of our environment.

The problem is that good models of what drives human behavior are being developed and steadily refined. These models are allowing human behavior to be controlled in ways we haven’t seen since, uh, well, since religion and tribal roles dictated our behavior. I guess surveillance capitalism will in fact solve the human need to have our lives guided and give everyone a purpose in life again. I’m not sure it’s so much worse to serve the Church of Zuck than to serve Rome, actually. But if we want to build better futures and help rescue the masses from this post-modern world devoid of meaning, then we need to get to the heart of the question and discard this outdated hedonist model. It’s been stretched beyond the breaking point of credibility.

1 Actually, that’s not true, a lot of stated values annoy me.
1.1 I grant that operant conditioning suggests that habit formation likes a reward in the loop, but I was on a roll, so this concession ends up in the footnotes.
2 Based on Libet’s work, I’ll probably get myself into trouble if I try asserting that any decisions are conscious. Consciousness is probably just a social storytelling skill or maybe a meta level to resolve conflicting urges. Then again how can descriptive hedonists make their claim if behavior isn’t conscious?
3 Actually I might buy some version of the argument that preservation of self-image drives some behavior, but not because of pleasure or avoidance of pain, but because behaviors that violate self-image are illegible to us.

Satisficing is Safer Than Maximizing

Before I begin, let me just say that if you haven’t read Bostrom’s Superintelligence and you haven’t read much about the AI Alignment problem, then you will probably find this post confusing and annoying. If you agree with Bostrom, you will DEFINITELY find my views annoying. This is just the sort of post my ex-girlfriend used to forbid me to write, so in honor of her good sense, I WILL try to state my claims as simply as possible and avoid jargon as much as I can.

[Epistemic Status: less confident in the hardest interpretations of “satisficing is safer,” more confident that maximization strategies are continually smuggled into the debate of AI safety and that acknowledging this will improve communication.]

Let me also say that I THINK AI ALIGNMENT IS AN IMPORTANT TOPIC THAT SHOULD BE STUDIED. My main disagreement with most people studying AI safety is that they seem to be focusing more on AI becoming god-like and destroying all living things forever and less on tool AI becoming a super weapon that China, Russia, and the West direct at each other. Well, that’s not really true, we tend to differ on whether intelligence is fundamentally social and embodied or not and a bunch of other things really, but I do truly love the rationalist community even though we drink different brands of kool-aid.

So ok, I know G is reading this and already writing angry comments criticizing me for all the jargon. So let me just clarify what I mean by a few of these terms. The “AI Alignment” problem is the idea that we might be able to create an Artificial Intelligence that takes actions that are not aligned with human values. Now one may say, well most humans take actions that are not aligned with the values of other humans. The only universal human value that I acknowledge is the will to persist in the environment. But for the sake of argument, let’s say that AI might decide that humans SHOULDN’T persist in the environment. That would sort of suck. Unless the AI just upgraded all of us to super-transhumans with xray vision and stuff. That would be cool I guess.

So then Eliezer, err, Nick Bostrom writes this book Superintelligence outlining how we are all fucked unless we figure out how to make AI safe, and (nearly) all the nerds who thought AI safety might not matter much read it and decided “holy shit, it really matters!” And so I’m stuck arguing this shit every time I get within 10 yards of a rationalist. One thing I noticed is that rationalists tend to be maximizers. They want to optimize the fuck out of everything. Perfectionism is another word for this. Cost insensitivity is another word for it in my book.

So people who tend toward a maximizing strategy always fall in love with this classic thought experiment: the paper clip maximizer. Suppose you create an AI and tell it to make paper clips. Well, what is to stop this AI from converting all matter in the solar system, galaxy, or even the light cone into paperclips? To a lot of people, this just seems stupid. “Well that wouldn’t make sense, why would a superintelligent thing value paperclips?” To which the rationalist smugly replies “the orthogonality thesis,” which states that there is NO correlation between intelligence and values. So you could be stupid and value world peace or a super-genius and value paper clips. And although I AM sympathetic to the layman who wants to believe that intelligence implies benevolence, I’m not entirely convinced of this. I’m sure we have some intelligent psychopaths lying around here somewhere.

But a better response might be: “Wow, unbounded maximizing algorithms could be sort of dangerous, huh? How about just telling the AI to create 100 paper clips? That should work fine, right?” This is called satisficing. Just work till you reach a predefined limit and stop.
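Since the contrast is really about stopping conditions, here’s a minimal sketch in Python (a toy, obviously, not anyone’s actual agent design): the maximizer’s loop only ends when the resources do, while the satisficer carries its own halting condition.

```python
class Resources:
    """Stand-in for 'all matter in the light cone': a finite pool of material."""
    def __init__(self, units):
        self.units = units

    def consume(self):
        """Use one unit of material; returns False once the pool is empty."""
        if self.units <= 0:
            return False
        self.units -= 1
        return True

def maximizer(resources):
    clips = 0
    while resources.consume():  # no internal stopping condition at all
        clips += 1
    return clips

def satisficer(resources, target=100):
    clips = 0
    while clips < target and resources.consume():
        clips += 1
    return clips                # halts at the target, leaves the rest alone

print(maximizer(Resources(10**6)))   # 1000000 -- the entire pool, every time
print(satisficer(Resources(10**6)))  # 100 -- and then it stops
```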

I am quite fond of this concept myself. The first 20% of effort yields 80% of the value in nearly every domain, so the final 80% of effort is required to wring out that final 20% of value. Now in some domains, like design, I can see the value of maximizing. Five mediocre products aren’t as cool as one super product, and this is one reason I think Apple has captured so much profit historically. But even Jobs wasn’t a total maximizer: “Real artists ship.”

But I’m not a designer, I’m an IT guy who dropped out of high school. So I’m biased, and I think satisficing is awesome. I can get 80% of the value out of like five different domains for the same amount of effort that a maximizer invests in achieving total mastery of just one domain. But then Bostrom throws cold water on the satisficing idea in Superintelligence. He basically says that the satisficing AI will eat up all available resources in the universe checking and rechecking its work to ensure that it really created exactly 100 paper clips. Because “the AI, if reasonable, never assigns exactly zero probability to it having failed to achieve its goal.” (Kindle loc. 2960) Which seems very unreasonable really, and if a human spent all their time rechecking their work, we would call it OCD or something.

This idea doesn’t even make sense unless we just assume that Bostrom equates “reasonable” with maximizing confidence. So he is basically saying that maximizing strategies are bad, but satisficing strategies are also bad because there is always a maximizing strategy that could sneak in. As though maximizing strategies were some sort of logical fungus that spreads through computer code of its own accord. Then Bostrom goes on to suggest that maybe a satisficer could be told to quit after a 95% probability of success. And there is some convoluted logic that I can’t follow exactly, but he basically says, well, suppose the satisficing AI comes up with a maximizing strategy on its own that will guarantee 95% probability of success. Boom, universe tiled with paper clips. Uh, how about a rule that checks for maximizing strategies? They get smuggled into books on AI a lot easier than they get spontaneously generated by computer programs.
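And for what it’s worth, the bounded version Bostrom worries about is easy to write down. Here’s a sketch under my own invented assumptions (verification modeled as an imperfect check with a 1% false-negative chance, plus a hard cap on rechecks): both the confidence threshold and the verification budget are explicit bounds, so there’s no open-ended loop left for a maximizing routine to hide in.

```python
# Sketch of a satisficer bounded in BOTH dimensions: it stops producing at
# a target count, and stops verifying at a confidence threshold or a hard
# budget cap. The 1% error rate and the budget are invented assumptions.

def verify(clips, target):
    """One imperfect check that we really made `target` clips."""
    return clips == target  # modeled as occasionally wrong (1% of the time)

def bounded_satisficer(target=100, confidence=0.95, check_budget=10):
    clips = 0
    while clips < target:
        clips += 1                     # make one paper clip

    p_failure = 1.0                    # worst-case prior: assume failure
    for _ in range(check_budget):      # hard cap: no infinite rechecking
        if verify(clips, target):
            p_failure *= 0.01          # each passing check cuts doubt 100x
        if 1.0 - p_failure >= confidence:
            break                      # "good enough" -- the move a maximizer never makes
    return clips, 1.0 - p_failure

print(bounded_satisficer())  # (100, 0.99): one passing check clears the 95% bar
```

The point isn’t that this is a safe AGI design; it’s that “check and recheck forever” has to be written in, and a one-line budget writes it out.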

I sort of feel that maximizers have a mental filter which assumes that maximizing is the default way to accomplish anything in the world. But in fact, we all have to settle in the real world. Maximizing is cost insensitive.  In fact, I might just be saying that cost insensitivity itself is what’s dangerous. Yeah, we could make things perfect if we could suck up all the resources in the light cone, but at what cost? And really, it would be pretty tricky for AI to gobble up resources that quickly too. There are a lot of agents keeping a close eye on resources. But that’s another question.

My main point is that the AI Alignment debate should include more explicit recognition that maximization run amok is dangerous <cough>as in modern capitalism<cough> and that pure satisficing strategies are much safer as long as you don’t tie them to unbounded maximizing routines. Bostrom’s entire argument against the safety of satisficing agents is that they might include insane maximizing routines.  And that is a weak argument.

Ok, now I feel better. That was just one small point, I know, but I feel that Bostrom’s entire thesis is a house of cards built on flimsy premises such as this. See my rebuttal to the idea that human values are fragile or Omohundro’s basic AI drives.  Also, see Ben Goertzel’s very civil rebuttal to Superintelligence.  Even MIRI seems to agree that some version of satisficing should be pursued.

I am no great Bayesian myself, but if anyone cares to show me the error of my ways in the comment section, I will do my best to bite the bullet and update my beliefs.

Why the Back to Nature Movement Failed

The paleo diet has been popular for a while now, and it prescribes a “back to nature” way of eating that’s interesting. The premise is that humans evolved in an environment devoid of processed foods and high-glycemic carbs, so we should eat a diet that more closely mimics that of our paleolithic ancestors. I’m not going to try to defend the paleo diet per se; some people lose weight on it, whatever.  But it’s an interesting framework for considering which environments we as humans are adapted to and how we can apply that to the problems of modern life.

Consider depression. Two of the most effective treatments for depression are exercise and light therapy.  It’s clear that humans evolved over at least 100,000 years largely outdoors, moving around in the sunlight.  Depression is probably best thought of as a disease of modern life, where we’re living indoors and are largely sedentary.

Another aspect of modern, developed cultures is social isolation.  Humans are social animals, and we arguably evolved in tribes of roughly 150 members, according to Dunbar’s number.  (I know that Dunbar has been supplanted by newer research; let’s just use this number as a starting point.)

So let’s take these three aspects of an evolved human lifestyle: 1) Living outdoors in the sun, 2) Moving around continually, and 3) Being surrounded by a community of other humans invested in our survival.  These are all things that many of us struggle with in modern life.  Sure, maybe some people still live in tight-knit, traditional farm communities that fulfill these needs, but, here in the US, economic forces have largely broken the cohesion of these rural places and we see drug abuse epidemics as a consequence.

Transhumanists can rightly argue that our needs for sunlight, exercise, and social support are just kludgy legacy code tied to our messy biological bodies.  Maybe we can upgrade humans to be more machine-like with replaceable parts and do away with these outdated needs.  That’s a valid argument.  I don’t happen to agree with it, but it’s coherent at least.  For the sake of this discussion, I ask my transhumanist friends to acknowledge that these human 2.0 upgrades don’t seem to be right around the corner, so it probably makes sense to make accommodations for the hardware we humans are running right now.

Hippies tried to solve the problems of modern life in the sixties with their back to nature movement.  Good old Stewart Brand was in the thick of it with his Whole Earth Catalog.  Many long-haired freaks trekked out to the middle of nowhere to build geodesic domes out of logs and get naked in the mud together.  Awesome!

But whatever happened to that movement, anyway?  What went wrong?  Brand himself said at a Long Now talk that the hippies discovered that the cities were where the action was.  I’m fortunate to work with some old hippie scientists at one of my clients, and I asked a fellow named Frosty why the back to nature movement didn’t properly take hold.  He laughed and said that when his friends from the city showed up at the rural commune, they blanched at how much work needed to be done.  They didn’t have the skills needed to build structures by hand, grow food, or dig latrines.  And then they would look around and ask, “Where’s the bar?”  They wanted to get drunk and hang out.  Who can blame them?

Twentieth century communists in Asia attempted their own versions of the back to nature movement.  They took what appears to be a sound hypothesis and effectively implemented it as genocide.  Mao’s Cultural Revolution forced the relocation of city dwellers to the countryside, resulting in disaster.  Pol Pot’s Year Zero was another violent attempt to turn back the clock and force modern people to live as our ancestors did, and another terrible failure.  So yes, as Scott Alexander says, we “see the skulls.”  We need to learn the lessons of previous failed attempts before we can rectify the problems with modern life.

We can’t turn back the clock.  We have to start where we are and assume that progress will keep happening whether we like it or not.  Cities are where the power is accumulating.  Cities are more energy efficient.  Cities are where the action is.  But how can we remake our lifestyles to fit them?  We see the first glimmers of a solution in Silicon Valley’s obsession with social, mobile, and augmented reality.  Perhaps we can find our communities via social network technology.  I certainly feel vastly enriched by my East Bay Futurists Meetup.  I’ve made good friends there, who help me grow and teach me a lot.  Mobile technology has made it easier and easier for people to do real work on the move.  Maybe augmented reality will close the loop and give us the ability to move freely around the city, connect with our communities, and still do modern work, all while getting exercise and sunlight at the same time.  Call it the “Back to the City, But Working Outside, Walking Around Movement”?  Ahh, well, not catchy, but you get the picture.  We just need to start redesigning our cities a little bit.  Step One: more parks!