Against Hedonism

(The reader might enjoy this post more by picturing a 1940s movie character pounding a table or pointing heavenward while he rants this.)

Some of us are stuck in a world where we have to discover the meaning of life for ourselves.  In some places and times, people could rely on tribal roles to guide their lives and to give them meaning.  As human societies became systemized, these systems guided our lives, but the narratives that provide meaning started to multiply.  Systems don’t care about tribe. Capitalism strips away values blindly, cutting away tribal prejudices as well as virtues. System operators who hold values that reduce profitability are simply removed from the playing field, outcompeted by other players who are willing to discard more and more of their values.

I myself turned to subcultures to provide meaning when I was still a teen.  I read all about the emptiness of a systematized existence from the Beat writers.  I identified with the punk rockers shrieking over the banality of our suburban existence.  I unplugged from the system, dropped out of college, tried to find meaning in art and rebellion.  But realities started kicking in as I grew older and I chose to discard my meaning-rich life for one that provided healthcare.  I became systemized, entered the corporate world. I became the repulsive salaryman.

The question of meaning is really the question of what values we SHOULD hold.  What is valuable enough to devote our energies toward? What stories inspire us enough to get out of bed each morning?  To fulfill our roles in some project greater than ourselves? Now of course, the elites have always struggled with these questions because they are the ones who craft the stories that guide their tribes.  Consider the Greek philosophers obsessed with defining virtue. Perhaps it’s a shame that the elites have failed the common man and left him flailing without guidance. Or perhaps no one can listen to stories when their stomachs are empty and their prospects are dimmed by the systems that rampage across the planet.

The Bay Area intelligentsia that bothers to concern itself with such questions seems to have converged on the notion that we should strive for the most pleasure for the greatest number of people over the greatest periods of time, what we might call hedonic utilitarianism.  Now of course this raises some sticky questions, since pleasure is hard to define and measure, which is the problem my friends at QRI are focused on. There is also the question of how to value the pleasure of populations. Is a society with a small number of people whose lives are awesome and a vast number whose lives are terrible better or worse than one where everyone is just ok, but no one is terribly happy or sad?
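To see why the population question doesn’t have an obvious answer, here is a toy calculation (all the utility numbers are invented for illustration) showing how two common aggregation rules, total utility and average utility, can rank the same pair of societies in opposite ways:

```python
# Two ways to score the pleasure of a population.
# All numbers are invented for illustration.

def total_utility(scores):
    return sum(scores)

def average_utility(scores):
    return sum(scores) / len(scores)

# Society A: ten people with awesome lives.
society_a = [90] * 10
# Society B: a thousand people whose lives are barely worth living.
society_b = [2] * 1000

print(total_utility(society_a), total_utility(society_b))      # 900 vs 2000
print(average_utility(society_a), average_utility(society_b))  # 90.0 vs 2.0
```

A total utilitarian prefers society B; an average utilitarian prefers society A. Picking an aggregation rule doesn’t settle the question above, it just relocates it.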

And if this were just a question of morals, then you could claim whatever values you want. Expressed values are just preferences, similar to preferring vanilla over chocolate, or the axioms that serve as the foundations of mathematical systems.  If I wanted to take a properly sociopathic position, I would argue that our values are simply the rules that allow us to operate in a given subculture. But neurotypicals get nervous when I talk like that, so I tend to avoid it. (Avoid hanging out with neurotypicals, that is.)

The real thing that annoys me is when people start saying things like values are universal or that humans provably value pleasure.  You can say that people OUGHT to value X or Y or whatever, I don’t care (1). When you try to say that humans DO value pleasure, it gets my hackles up.  And that’s why I get in fights on Facebook.

It’s not that I don’t enjoy pleasure; ask anyone who has seen me drunk.  It’s more that if I offered a hedonist a box that let them feel a broad range of pleasures, from orgasm to the joy of discovery to the warm glow of helping humanity thrive for millions of years, while in fact, unknown to them, it was all an illusion and all of humanity might be suffering, I predict the hedonist wouldn’t like that.

And then they will say something like, well I’m a utilitarian, so I value all life and all future life, blah blah blah, and I’m back to not listening again.  Or Andres will say something like, yeah well, what if all these are true: you have experience, all experiences exist eternally, and you are all beings experiencing all experiences.  And then I pick up the pieces of my brain, and go back to the question that really annoys me. Is all human behavior really driven by pleasure?

On the one hand, I’m sympathetic to this position.  I used to say this about altruism: we help others selfishly, because it benefits us.  It either makes us feel good about ourselves or it is rewarded by the group or it impresses potential mates or whatever.  And I can’t even argue that I don’t enjoy some virtuous acts. But the idea that all human behavior is driven by pleasure seeking seems to imply something else as well: that no behavior is instinctual or habitual. I do want to argue that some virtuous acts simply aren’t rewarded with pleasure or even rewarded with reduced suffering. But I will start with the instinctual and habitual cases because those are easier.

It seems obvious that a lot of what we do is simply force of habit.  I get up and go brush my teeth every morning, less to avoid the suffering of film on my teeth and more because it’s simply the habit that has been installed through repetition.(1.1)  Can a case be made that we prefer familiar tasks because they are less costly from an energy expenditure perspective? Sure. Are we aware of that as pleasure? Unlikely.(2) Is the familiar always a preferred state?  No, sometimes we seek novelty. Maybe we only seek novelty when we have an excess of energy to process it with?  Not sure on that one.

A side point I would like to make is that certain friends of mine *cough Mike* refer to hedonism as preferring preferred states, which is just so… tautological.  Well yes, we prefer preferred states. But do we do things that we would prefer NOT to do? Sure. All the time. And then comes some argument about avoiding damage to self-image (3) or computing the total future pleasure or some other complex explanation easily trimmed away by Occam’s razor.  Perhaps SOME of us are simply programmed by evolution to be dutiful? Would that be so hard to buy? I can see all manner of ways in which a predictably dutiful agent would be rewarded in environments that require a lot of (sometimes costly) cooperation.

And I’ve been dutiful.  And being dutiful feels truly awful sometimes.  So awful that in hindsight I really can’t believe that I fulfilled some of those duties.  And I might have said that I couldn’t have looked myself in the mirror afterward if I hadn’t fulfilled my self-perceived duties.  But it’s not like I did the math in my head and added up the number of hours I would have suffered. Because I can tell you, the suffering of my conscience from neglecting some duties would have been tiny compared to the suffering of fulfilling them.  Rationalization is a painkiller for sure.

Or consider those who choose suffering for a cause over the comforts of staying at home.  Are they really just blissing out on how cool they are being as the enemy drops mortars on them?  They do their duty, and it’s a grim and terrible thing sometimes and it’s an impulse that has been corrupted countless times, high and low, by leaders and by dysfunctional partners alike.   BUT, it’s not an impulse properly captured by “hedonism.”

To the degree that humans ARE driven by pleasure seeking, then the most likely reason WHY that would be adaptive would be that environments aren’t predictable and autonomous behavior can’t always be genetically encoded.  Sometimes behavior should be rewarded, other times it should be punished.  But is this true of ALL behavior?  That would suggest that there are simply no invariant aspects of fitness landscapes over time.  I mean, clearly a hedonist would allow that breathing isn’t rewarded per se, so IT can be autonomic, oxygen being a predictable environmental resource over evolutionary timeframes.  But what about parenting?  If a child became too annoying and parents simply abandoned them, then the species wouldn’t last very long.

Parenting is difficult to describe in hedonistic terms. Most parents admit to being less happy while their children are in the home.  Caregiving sort of sucks and is not super rewarding. Don’t let anyone fool you. But our species keeps doing it.

We can note that sex is rewarded much more than parenting.  Which suggests that we need to learn which partners to connect with, but we don’t get much choice over which children we care for.  Or more generally, the more rewarded behaviors may be the ones that depend on learning, because they relate to the more variable aspects of our interaction with the environment.

The problem is that good models of what drives human behavior are being developed and refined more and more. These models are allowing human behavior to be controlled in ways we haven’t seen since, uh, well since religion and tribal roles dictated our behavior actually.  I guess surveillance capitalism will in fact solve the human need to have their lives guided and give everyone a purpose in life again. I’m not sure if it’s so much worse to serve the Church of Zuck than to serve Rome actually. But if we want to build better futures, help rescue the masses from this post-modern world devoid of meaning, then we need to get to the heart of the question and discard this outdated hedonist model. It’s been stretched beyond the breaking point of credibility.

1 Actually, that’s not true, a lot of stated values annoy me.
1.1 I grant that operant conditioning suggests that habit formation likes a reward in the loop, but I was on a roll, so this concession ends up in the footnotes.
2 Based on Libet’s work, I’ll probably get myself into trouble if I try asserting that any decisions are conscious. Consciousness is probably just a social storytelling skill, or maybe a meta level to resolve conflicting urges. Then again, how can descriptive hedonists make their claim if behavior isn’t conscious?
3 Actually I might buy some version of the argument that preservation of self-image drives some behavior, but not because of pleasure or avoidance of pain, but because behaviors that violate self-image are illegible to us.

Satisficing is Safer Than Maximizing

Before I begin, let me just say that if you haven’t read Bostrom’s Superintelligence and you haven’t read much about the AI Alignment problem, then you will probably find this post confusing and annoying. If you agree with Bostrom, you will DEFINITELY find my views annoying. This is just the sort of post my ex-girlfriend used to forbid me to write, so in honor of her good sense, I WILL try to state my claims as simply as possible and avoid jargon as much as I can.

[Epistemic Status: less confident in the hardest interpretations of “satisficing is safer,” more confident that maximization strategies are continually smuggled into the debate of AI safety and that acknowledging this will improve communication.]

Let me also say that I THINK AI ALIGNMENT IS AN IMPORTANT TOPIC THAT SHOULD BE STUDIED. My main disagreement with most people studying AI safety is that they seem to be focusing more on AI becoming god-like and destroying all living things forever and less on tool AI becoming a super weapon that China, Russia, and the West direct at each other. Well, that’s not really true, we tend to differ on whether intelligence is fundamentally social and embodied or not and a bunch of other things really, but I do truly love the rationalist community even though we drink different brands of kool-aid.

So ok, I know G is reading this and already writing angry comments criticizing me for all the jargon. So let me just clarify what I mean by a few of these terms. The “AI Alignment” problem is the idea that we might be able to create an Artificial Intelligence that takes actions that are not aligned with human values. Now one may say, well most humans take actions that are not aligned with the values of other humans. The only universal human value that I acknowledge is the will to persist in the environment. But for the sake of argument, let’s say that AI might decide that humans SHOULDN’T persist in the environment. That would sort of suck. Unless the AI just upgraded all of us to super-transhumans with xray vision and stuff. That would be cool I guess.

So then Eliezer, err, Nick Bostrom writes this book Superintelligence outlining how we are all fucked unless we figure out how to make AI safe and (nearly) all the nerds who thought AI safety might not matter much read it and decided “holy shit, it really matters!” And so I’m stuck arguing this shit every time I get within 10 yards of a rationalist. One thing I noticed is that rationalists tend to be maximizers. They want to optimize the fuck out of everything. Perfectionism is another word for this. Cost insensitivity is another word for it in my book.

So people who tend toward a maximizing strategy always fall in love with this classic thought experiment: the paper clip maximizer. Suppose you create an AI and tell it to make paper clips. Well what is to stop this AI from converting all matter in the solar system, galaxy, or even light cone into paperclips? To a lot of people, this just seems stupid. “Well that wouldn’t make sense, why would a superintelligent thing value paperclips?” To which the rationalist smugly replies “the orthogonality thesis,” which states that there is no necessary connection between intelligence and values. So you could be stupid and value world peace, or a super-genius and value paper clips. And although I AM sympathetic to the layman who wants to believe that intelligence implies benevolence, I’m not entirely convinced of this. I’m sure we have some intelligent psychopaths lying around here somewhere.

But a better response might be: “Wow, unbounded maximizing algorithms could be sort of dangerous, huh? How about just telling the AI to create 100 paper clips? That should work fine, right?” This is called satisficing. Just work till you reach a predefined limit and stop.
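The difference between the two strategies fits in a few lines. This is a toy model (the resource pool and the `target` bound are invented for illustration), but it shows why a bound on the goal is also a bound on resource consumption:

```python
# Toy contrast: a maximizer consumes the whole resource pool,
# while a satisficer stops at a predefined bound.
# Names and numbers are invented for illustration.

def maximize(resources):
    """Turn every last unit of resources into paperclips."""
    clips = 0
    while resources > 0:
        resources -= 1
        clips += 1
    return clips

def satisfice(resources, target=100):
    """Stop as soon as the target is reached, leaving the rest alone."""
    clips = 0
    while resources > 0 and clips < target:
        resources -= 1
        clips += 1
    return clips

print(maximize(1_000_000))   # 1000000: the entire pool becomes paperclips
print(satisfice(1_000_000))  # 100: bounded by the goal, not by the universe
```

The satisficer’s stopping condition lives in the goal itself, which is exactly the property Bostrom’s rechecking argument (below in the original) tries to undermine.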

I am quite fond of this concept myself. The first 20% of effort yields 80% of the value in nearly every domain, so the final 80% of effort is required to wring out that final 20% of value. Now in some domains, like design, I can see the value of maximizing. Five mediocre products aren’t as cool as one super product, and this is one reason I think Apple has captured so much profit historically. But even Jobs wasn’t a total maximizer: “Real artists ship.”

But, I’m not a designer, I’m an IT guy who dropped out of high school. So I’m biased, and I think satisficing is awesome. I can get 80% of the value out of like five different domains for the same amount of effort that a maximizer invests in achieving total mastery of just one domain. But then Bostrom throws cold water on the satisficing idea in Superintelligence. He basically says that the satisficing AI will eat up all available resources in the universe checking and rechecking its work to ensure that it really created exactly 100 paper clips. Because “the AI, if reasonable, never assigns exactly zero probability to it having failed to achieve its goal.” (Kindle loc. 2960) Which seems very unreasonable really, and if a human spent all their time rechecking their work, we would call it OCD or something.

This idea doesn’t even make sense unless we just assume that Bostrom equates “reasonable” with maximizing confidence. So he is basically saying that maximizing strategies are bad, but satisficing strategies are also bad because there is always a maximizing strategy that could sneak in. As though maximizing strategies were some sort of logical fungus that spread through computer code of their own accord. Then Bostrom goes on to suggest, well, maybe a satisficer could be told to quit after a 95% probability of success. And there is some convoluted logic that I can’t follow exactly, but he basically says, well suppose the satisficing AI comes up with a maximizing strategy on its own that will guarantee 95% probability of success. Boom, Universe tiled with paper clips. Uh, how about a rule that checks for maximizing strategies? They get smuggled into books on AI a lot easier than they get spontaneously generated by computer programs.
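A threshold-based stopping rule of the kind discussed above is easy to state, and it terminates; the runaway rechecking only appears if “reasonable” means pushing confidence all the way to 1, which no finite number of rechecks can achieve. A toy model (the per-check error rate of 0.1 is invented for illustration):

```python
# Toy model of rechecking. Each independent recheck shrinks the residual
# probability of error by a factor of error_per_check, so confidence
# approaches 1 but never reaches it in exact arithmetic.

def checks_needed(threshold, error_per_check=0.1):
    """Recheck until confidence meets the threshold; return the check count."""
    confidence, checks = 0.0, 0
    while confidence < threshold:
        checks += 1
        confidence = 1 - error_per_check ** checks
    return checks

print(checks_needed(0.95))  # 2: after two checks, confidence is 0.99
# checks_needed(1.0) never terminates in exact arithmetic, since
# 1 - 0.1**n is never exactly 1 (floats only "finish" once 0.1**n
# underflows to zero). That is the confidence-maximizer, not the satisficer.
```

A 95% threshold stops after a couple of checks; only the demand for certainty, a maximizing demand, eats the light cone.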

I sort of feel that maximizers have a mental filter which assumes that maximizing is the default way to accomplish anything in the world. But in fact, we all have to settle in the real world. Maximizing is cost insensitive.  In fact, I might just be saying that cost insensitivity itself is what’s dangerous. Yeah, we could make things perfect if we could suck up all the resources in the light cone, but at what cost? And really, it would be pretty tricky for AI to gobble up resources that quickly too. There are a lot of agents keeping a close eye on resources. But that’s another question.

My main point is that the AI Alignment debate should include more explicit recognition that maximization run amok is dangerous <cough>as in modern capitalism<cough> and that pure satisficing strategies are much safer as long as you don’t tie them to unbounded maximizing routines. Bostrom’s entire argument against the safety of satisficing agents is that they might include insane maximizing routines. And that is a weak argument.

Ok, now I feel better. That was just one small point, I know, but I feel that Bostrom’s entire thesis is a house of cards built on flimsy premises such as this. See my rebuttal to the idea that human values are fragile or Omohundro’s basic AI drives.  Also, see Ben Goertzel’s very civil rebuttal to Superintelligence.  Even MIRI seems to agree that some version of satisficing should be pursued.

I am no great Bayesian myself, but if anyone cares to show me the error of my ways in the comment section, I will do my best to bite the bullet and update my beliefs.

If You Use Tools, Then You’re “Transhuman”


I’m reading Nexus, by Ramez Naam. I like the book so far, except for the early chapters, which are a little creepy and repetitive. But one of the themes is that there will be a war between transhumans and unaugmented humans. This idea has gained popularity and it’s starting to annoy me.

When I saw Ramez Naam speak at the H+ conference in 2012, he scoffed at the very term “transhuman.” He made the point that ALL humans augment, and the term “transhuman” seems to imply a mythical, non-augmenting human. His examples of transhumans were people who have pacemakers or who use birth control. Look at professional athletes. Does anyone really think those are unaugmented humans? They’re on steroids, practically mutants. Cyclists use blood doping until their blood is so packed with red blood cells that it sometimes stops flowing through their veins and kills them. Or consider how new prosthetic legs make amputee runners superhuman.

I would go so far as to say that using a sharpened stick is as transhuman as using a brain-computer interface. So transhumanism is a misnomer. Transhumanism is really just extreme tool use. And politics being what they are, the elite will always control the wielders of technology, just as kings controlled the knights of medieval times.

But writers keep setting up conflicts between tool users and non-tool users. Zoltan Istvan has previously called for transhumans to deliberately create conflicts with religious people, who he imagines don’t like tools. Yuval Harari in Homo Deus suggests that transhumans could dominate humans as 19th century Europeans dominated Africans. In a recent Forbes article, Jeff Stibel warns that brain-computer interfaces could destroy humanity and calls for ethicists and philosophers to guide us. And, you know what, they are all correct (aside from the ethicists guiding us idea, that’s utterly laughable bullshit), but they are missing a key point.

Tool using populations destroy non-tool using populations. In the past, farmers used crazy technology to create food on demand and then consolidated resources and crushed the hunter-gatherers around them. Zoltan needn’t call for deliberate conflict. The wielders of the most advanced technology INEVITABLY overwhelm or convert those without it. Transhumanism will be no different. Harari might take a moment to note how the technology rich Global North dominates the poor Global South TODAY. Wealthy Westerners are already transhuman compared to the poorest in the world. We have longer lives and amazing influence.

This competition is essentially human, and I’m not even sure that it’s entirely bad. As I’ve said before, cooperative groups turn out to be more competitive. Compassion is an evolved superweapon. Stibel is deluded. There is no stopping this process. He can hold back his patents for BrainGate all he wants. The physical world will continually yield up its mechanisms of action to the prying minds of restless humans. His discoveries will be reproduced. Cultures that seek to repress technology will be surpassed and dominated by cultures that don’t. That’s just the way the world works. Partly because ethics aren’t universal. Some players have legitimately diverging interests.

But in the broader scheme of things, we aren’t meant to stop and rest. Life has been evolving for billions of years. It will keep evolving. That’s physics. Entropy must be maximized and the negative entropy of more and more complex living things must fulfill the requirement of the physics engine running our universe. What if life had stopped evolving at bacteria? From a human’s perspective, that would have sucked. How can we begrudge the post-humans their place? Hint, we don’t get to. On the plus side, bacteria ARE still around and we need them to survive. On the negative side, Neanderthals only exist as DNA remnants. I hope that the jump to the next level of evolution will be so extreme that, to the next generation, humans are more like bacteria and less like Neanderthals. There’s a strange toast. Cheers!