Italian Futurism

Dynamism of a Car by Luigi Russolo (1913)

When I started organizing futurist meetups in 2009, one of the first things I did was search for images related to futurism so that I could make the meetup event pages more interesting. A lot of the results seemed to be paintings with a bunch of motion lines, which confused me until I discovered the Italian Futurist art movement of the early 20th century. Their fascist political orientation had suppressed interest in them for years, but I liked the art and have used a lot of their imagery to promote my events over the years. When people ask me what futurism is, I often jokingly respond, “An Italian Fascist art movement of the early 20th century,” which tends to generate more confusion than laughter.

Portrait of Il Duce by Gerardo Dottori (1933)

As I dug more into Italian Futurism, I started seeing a surprising number of parallels with modern futurism.  Revisiting the ideas of past futurists can offer insights into how modern thinkers are responding to technological disruption.

The Futurist art movement was founded by a wealthy, internationally educated Italian poet named Filippo Marinetti. He kicked off the movement with the Futurist Manifesto in 1909, in which he made bombastic claims glorifying the speed of motorcars over the static beauty of classical art. One can imagine how Italian artists in particular must have felt crushed by the weight of antiquity all around them – from the Colosseum to the Sistine Chapel. Their vigorous Mediterranean spirits rebelled and seized on the opportunity for change that technology was presenting all around them.

Marinetti in his 4-cylinder Fiat, 1908.

Motorcars and aeroplanes must have seemed just as disruptive in the early 1900s as the internet seemed in the 1990s. The world was shrunk by both, and both disruptions led to changes in human identity. Marinetti called for a “new man” whose psyche was transformed by machinery, accelerated communication, and transportation networks. Contrast that with the transhumanists of today, who see not just the psychological transformation of man but even physical transformation, as we are augmented with computer interfaces or modified to live extended lifespans through longevity science.

Nose Dive on the City by Tullio Crali (1939)

The underlying driver in both cases is technology tearing through the world and reforming society into novel configurations. The anarcho-fascist Marinetti wanted to tear down institutions, but one may argue that he was simply reacting to the fact that technology was already reshaping society. Here, of course, the analogy between the two movements becomes tenuous, because there is no fascism in modern America. (Let’s not talk about Nick Land right now, ok?) Americans today aren’t shell-shocked into a fascist response after seeing traditional ways of life torn to shreds by modern systems.

Brooklyn Bridge by Joseph Stella (1919)

In his own era, prior to WWI, Marinetti openly called for war as “the world’s only hygiene.” And he went and served in war zones on multiple occasions in his life. But even the fiery and bellicose Italian must have been affected by the carnage of that first Great War. In his 1921 Tactilism manifesto, we see the Futurist focused more on fusion and on destroying the barriers that keep people separated.

Armored Train in Action by Gino Severini (1915)

From our own stance, we can see not only the devastation of wars throughout the previous century, but also the soul crushing isolation that modernity has imposed on society and which sowed the seeds of postmodernism.  The meetup is a modern response to this dilemma, bringing affinity groups together to escape the loneliness of their barren apartments.

Depending on how schizophrenic you are, it’s easy to see countless parallels between the two futurist movements. Futurist architects in 1912 designed skyscrapers that were impossible to build with the materials of the time. Today, we have the Ethereum network.

Antonio Sant’Elia, New City: Tenement Building, 1914.

Each era presents an array of tensions between opposing forces. The tension between the dynamism of the Futurists and the stasis of the Cubists left Picasso in the popular imagination but relegated Balla to the art students. Today a battle rages for control of attention, as narrative itself breaks down and micro-slices of consciousness are fought over by corporate algorithms and memetic infections.

Velocity Of An Automobile by Giacomo Balla (1913)

One takeaway from this is that futurists have a role to play in society separate from that of the technologists. Technology inevitably destroys pre-existing patterns of human behavior, and humans are thrown off balance by this. If we prefer not to charge madly into war as Marinetti and his cohort did, then perhaps we can contribute integration services. We might help our fellow humans make sense of and adapt to the current disruptions that we all face, as well as the coming changes.

Against Hedonism

(The reader might enjoy this post more by picturing a 1940s movie character pounding a table or pointing heavenward while he rants this.)

Some of us are stuck in a world where we have to discover the meaning of life for ourselves. In some places and times, people could rely on tribal roles to guide their lives and to give them meaning. As human societies became systemized, these systems guided our lives, but the narratives that provide meaning started to multiply. Systems don’t care about tribe. Capitalism strips away values blindly, cutting away tribal prejudices as well as virtues. System operators who hold values that reduce profitability are simply removed from the playing field, outcompeted by other players who are willing to discard more and more of their values.

I myself turned to subcultures to provide meaning when I was still a teen. I read all about the emptiness of a systematized existence from the Beat writers. I identified with the punk rockers shrieking over the banality of our suburban existence. I unplugged from the system, dropped out of college, tried to find meaning in art and rebellion. But realities started kicking in as I grew older, and I chose to discard my meaning-rich life for one that provided healthcare. I became systemized and entered the corporate world. I became the repulsive salaryman.

The question of meaning is really the question of what values we SHOULD hold. What is valuable enough to devote our energies toward? What stories inspire us enough to get out of bed each morning? To fulfill our roles in some project greater than ourselves? Now of course, the elites have always struggled with these questions, because they are the ones who craft the stories that guide their tribes. Consider the Greek philosophers obsessed with defining virtue. Perhaps it’s a shame that the elites have failed the common man and left him flailing without guidance. Or perhaps no one can listen to stories when their stomachs are empty and their prospects are dimmed by the systems that rampage across the planet.

The Bay Area intelligentsia that bothers to concern itself with such questions seems to have converged on the notion that we should strive for the most pleasure for the greatest number of people over the greatest period of time, what we might call hedonic utilitarianism. Now of course this raises some sticky questions, since pleasure is hard to define and measure, which is the problem that my friends at QRI are focused on. There is also the question of how to value the pleasure of populations. Is a society with a small number of people whose lives are awesome and a vast number whose lives are terrible better or worse than one where everyone is just ok, but no one is terribly happy or sad?
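To make that stickiness concrete, here is a toy sketch in Python. The numbers and the crude pleasure scale are entirely my own inventions for illustration; they have nothing to do with QRI’s actual models or anyone’s real data.

```python
# Toy comparison of two imaginary societies on a made-up pleasure scale.
unequal = [9] * 100 + [-8] * 9_900   # a few awesome lives, a vast number of terrible ones
uniform = [1] * 10_000               # everyone is just ok

for name, society in [("unequal", unequal), ("uniform", uniform)]:
    total = sum(society)
    average = total / len(society)
    print(f"{name}: total={total}, average={average:.2f}")

# Here both total and average utilitarianism prefer the "uniform" society, but
# add enough barely-ok people and the two aggregation rules start to disagree,
# which is exactly the sticky part.
```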

And if this were just a question of morals, then you could claim whatever values you want. Expressed values are just preferences, similar to preferring vanilla over chocolate, or to the axioms that serve as the foundations of mathematical systems. If I wanted to take a properly sociopathic position, I would argue that our values are simply the rules that allow us to operate in a given subculture. But neurotypicals get nervous when I talk like that, so I tend to avoid it. (Avoid hanging out with neurotypicals, that is.)

The real thing that annoys me is when people start saying things like values are universal, or that humans provably value pleasure. You can say that people OUGHT to value X or Y or whatever; I don’t care (1). But when you try to say that humans DO value pleasure, it gets my hackles up. And that’s why I get in fights on Facebook.

It’s not that I don’t enjoy pleasure; ask anyone who has seen me drunk. It’s more that if I could offer a hedonist a box that would let them feel a broad range of pleasures, from orgasm to the joy of discovery to the warm glow of helping humanity thrive for millions of years, while in fact, unknown to them, it was all an illusion and all of humanity might be suffering, I predict the hedonist wouldn’t like that.

And then they will say something like, well, I’m a utilitarian, so I value all life and all future life, blah blah blah, and I’m back to not listening again. Or Andres will say something like, yeah, well, what if all of these are true: you have experience, all experiences exist eternally, and you are all beings experiencing all experiences. And then I pick up the pieces of my brain and go back to the question that really annoys me. Is all human behavior really driven by pleasure?

On the one hand, I’m sympathetic to this position. I used to say this about altruism: we help others selfishly, because it benefits us. It either makes us feel good about ourselves, or it is rewarded by the group, or it impresses potential mates, or whatever. And I can’t even argue that I don’t enjoy some virtuous acts. But the idea that all human behavior is driven by pleasure-seeking seems to imply something else as well: that no behavior is instinctual or habitual. I do want to argue that some virtuous acts simply aren’t rewarded with pleasure, or even with reduced suffering. But I will start with the instinctual and habitual cases because those are easier.

It seems obvious that a lot of what we do is simply force of habit. I get up and go brush my teeth every morning, less to avoid the suffering of film on my teeth and more because it’s simply the habit that has been installed through repetition (1.1). Can a case be made that we prefer familiar tasks because they are less costly from an energy expenditure perspective? Sure. Are we aware of that as pleasure? Unlikely (2). Is the familiar always a preferred state? No, sometimes we seek novelty. Maybe we only seek novelty when we have an excess of energy to process it with? Not sure on that one.

A side point I would like to make is that certain friends of mine *cough Mike* refer to hedonism as preferring preferred states, which is just so… tautological. Well yes, we prefer preferred states. But do we do things that we would prefer NOT to do? Sure. All the time. And then comes some argument about avoiding damage to self-image (3), or computing the total future pleasure, or some other complex explanation easily trimmed away by Occam’s razor. Perhaps SOME of us are simply programmed by evolution to be dutiful? Would that be so hard to buy? I can see all manner of ways in which a predictably dutiful agent would be rewarded in environments that require a lot of (sometimes costly) cooperation.

And I’ve been dutiful.  And being dutiful feels truly awful sometimes.  So awful that in hindsight I really can’t believe that I fulfilled some of those duties.  And I might have said that I couldn’t have looked myself in the mirror afterward if I hadn’t fulfilled my self-perceived duties.  But it’s not like I did the math in my head and added up the number of hours I would have suffered. Because I can tell you, the suffering of my conscience from neglecting some duties would have been tiny compared to the suffering of fulfilling them.  Rationalization is a painkiller for sure.

Or consider those who choose suffering for a cause over the comforts of staying at home. Are they really just blissing out on how cool they are being as the enemy drops mortars on them? They do their duty, and it’s a grim and terrible thing sometimes, and it’s an impulse that has been corrupted countless times, high and low, by leaders and by dysfunctional partners alike. BUT it’s not an impulse properly captured by “hedonism.”

To the degree that humans ARE driven by pleasure seeking, then the most likely reason WHY that would be adaptive would be that environments aren’t predictable and autonomous behavior can’t always be genetically encoded.  Sometimes behavior should be rewarded, other times it should be punished.  But is this true of ALL behavior?  That would suggest that there are simply no invariant aspects of fitness landscapes over time.  I mean, clearly a hedonist would allow that breathing isn’t rewarded per se, so IT can be autonomic, oxygen being a predictable environmental resource over evolutionary timeframes.  But what about parenting?  If a child became too annoying and parents simply abandoned them, then the species wouldn’t last very long.

Parenting is difficult to describe in hedonistic terms. Most parents admit to being less happy while their children are in the home.  Caregiving sort of sucks and is not super rewarding. Don’t let anyone fool you. But our species keeps doing it.

We can note that sex is rewarded much more than parenting. That suggests we need to learn which partners to connect with, but we don’t get much choice over which children we care for. Or, more generally, the more a behavior is rewarded, the more it may depend on learning, because it relates to aspects of our interaction with the environment that are more variable.

The problem is that good models of what drives human behavior are being developed and refined all the time. These models are allowing human behavior to be controlled in ways we haven’t seen since, uh, well, since religion and tribal roles dictated our behavior, actually. I guess surveillance capitalism will in fact solve the human need to have their lives guided and give everyone a purpose in life again. I’m not sure it’s so much worse to serve the Church of Zuck than to serve Rome, actually. But if we want to build better futures and help rescue the masses from this post-modern world devoid of meaning, then we need to get to the heart of the question and discard this outdated hedonist model. It has been stretched beyond the breaking point of credibility.

1 Actually, that’s not true, a lot of stated values annoy me.
1.1 I grant that operant conditioning suggests that habit formation likes a reward in the loop, but I was on a roll, so this concession ends up in the footnotes.
2 Based on Libet’s work, I’ll probably get myself into trouble if I try asserting that any decisions are conscious. Consciousness is probably just a social storytelling skill or maybe a meta level to resolve conflicting urges. Then again how can descriptive hedonists make their claim if behavior isn’t conscious?
3 Actually I might buy some version of the argument that preservation of self-image drives some behavior, but not because of pleasure or avoidance of pain, but because behaviors that violate self-image are illegible to us.

Satisficing is Safer Than Maximizing

Before I begin, let me just say that if you haven’t read Bostrom’s Superintelligence and you haven’t read much about the AI Alignment problem, then you will probably find this post confusing and annoying. If you agree with Bostrom, you will DEFINITELY find my views annoying. This is just the sort of post my ex-girlfriend used to forbid me to write, so in honor of her good sense, I WILL try to state my claims as simply as possible and avoid jargon as much as I can.

[Epistemic Status: less confident in the hardest interpretations of “satisficing is safer,” more confident that maximization strategies are continually smuggled into the debate of AI safety and that acknowledging this will improve communication.]

Let me also say that I THINK AI ALIGNMENT IS AN IMPORTANT TOPIC THAT SHOULD BE STUDIED. My main disagreement with most people studying AI safety is that they seem to focus more on AI becoming god-like and destroying all living things forever, and less on tool AI becoming a superweapon that China, Russia, and the West direct at each other. Well, that’s not really true; we tend to differ on whether intelligence is fundamentally social and embodied or not, and a bunch of other things really, but I do truly love the rationalist community even though we drink different brands of Kool-Aid.

So ok, I know G is reading this and already writing angry comments criticizing me for all the jargon. So let me just clarify what I mean by a few of these terms. The “AI Alignment” problem is the idea that we might create an Artificial Intelligence that takes actions that are not aligned with human values. Now one may say, well, most humans take actions that are not aligned with the values of other humans. The only universal human value that I acknowledge is the will to persist in the environment. But for the sake of argument, let’s say that an AI might decide that humans SHOULDN’T persist in the environment. That would sort of suck. Unless the AI just upgraded all of us to super-transhumans with x-ray vision and stuff. That would be cool, I guess.

So then Eliezer, err, Nick Bostrom writes this book Superintelligence outlining how we are all fucked unless we figure out how to make AI safe, and (nearly) all the nerds who thought AI safety might not matter much read it and decided “holy shit, it really matters!” And so I’m stuck arguing this shit every time I get within 10 yards of a rationalist. One thing I noticed is that rationalists tend to be maximizers. They want to optimize the fuck out of everything. Perfectionism is another word for this. Cost insensitivity is another word for it in my book.

So people who tend toward a maximizing strategy always fall in love with this classic thought experiment: the paperclip maximizer. Suppose you create an AI and tell it to make paper clips. Well, what is to stop this AI from converting all matter in the solar system, galaxy, or even light cone into paperclips? To a lot of people, this just seems stupid. “Well, that wouldn’t make sense; why would a superintelligent thing value paperclips?” To which the rationalist smugly replies “the orthogonality thesis,” which states that intelligence and values are independent: any level of intelligence can be paired with just about any goal. So you could be stupid and value world peace, or a super-genius and value paper clips. And although I AM sympathetic to the layman who wants to believe that intelligence implies benevolence, I’m not entirely convinced of this. I’m sure we have some intelligent psychopaths lying around here somewhere.

But a better response might be: “Wow, unbounded maximizing algorithms could be sort of dangerous, huh? How about just telling the AI to create 100 paper clips? That should work fine, right?” This is called satisficing. Just work until you reach a predefined limit and stop.
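Here is a minimal sketch of the difference, assuming a stand-in make_paperclip() function and a toy resource counter that I made up purely for illustration; it isn’t anyone’s actual AI architecture.

```python
# Toy sketch: an unbounded maximizer vs. a satisficer with a predefined limit.
def make_paperclip(stock):
    stock.append("paperclip")

def maximizer(stock, resources):
    # Unbounded: keeps converting resources into paperclips as long as any remain.
    while resources > 0:
        make_paperclip(stock)
        resources -= 1
    return stock

def satisficer(stock, resources, target=100):
    # Bounded: works until the predefined limit is reached, then stops.
    while len(stock) < target and resources > 0:
        make_paperclip(stock)
        resources -= 1
    return stock

print(len(maximizer([], 10_000)))   # 10000 -- eats everything it is given
print(len(satisficer([], 10_000)))  # 100   -- stops at "good enough"
```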

I am quite fond of this concept myself. The first 20% of effort yields 80% of the value in nearly every domain, so the final 80% of effort is required to wring out that final 20% of value. Now, in some domains like design, I can see the value of maximizing. Five mediocre products aren’t as cool as one super product, and this is one reason I think Apple has captured so much profit historically. But even Jobs wasn’t a total maximizer: “Real artists ship.”
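If you want that 80/20 intuition as arithmetic, here is a toy diminishing-returns curve. The exponent is my own pick and nothing about it is rigorous; it just makes the 20%-of-effort-for-80%-of-value shape concrete.

```python
# Toy diminishing-returns curve: value(effort) = 100 * (effort/100) ** k,
# with k chosen so that 20% of effort yields roughly 80% of the value.
def value(effort_pct, k=0.14):
    return 100 * (effort_pct / 100) ** k

print(round(value(20)))      # ~80: the first 20% of effort gets most of the value
print(round(value(100)))     # 100: the last 80% of effort buys the final ~20%

# Spreading the same 100 units of effort across five domains at 20 each:
print(round(5 * value(20)))  # ~399 "value points" vs. 100 for mastering one domain
```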

But I’m not a designer; I’m an IT guy who dropped out of high school. So I’m biased, and I think satisficing is awesome. I can get 80% of the value out of, like, five different domains for the same amount of effort that a maximizer invests in achieving total mastery of just one domain. But then Bostrom throws cold water on the satisficing idea in Superintelligence. He basically says that the satisficing AI will eat up all available resources in the universe checking and rechecking its work to ensure that it really created exactly 100 paper clips. Because “the AI, if reasonable, never assigns exactly zero probability to it having failed to achieve its goal” (Kindle loc. 2960). Which seems very unreasonable, really, and if a human spent all their time rechecking their work, we would call this OCD or something.

This idea doesn’t even make sense unless we just assume that Bostrom equates “reasonable” with maximizing confidence. So he is basically saying that maximizing strategies are bad, but satisficing strategies are also bad because there is always a maximizing strategy that could sneak in. As though maximizing strategies were some sort of logical fungus that spreads through computer code of its own accord. Then Bostrom goes on to suggest that maybe a satisficer could be told to quit after a 95% probability of success. And there is some convoluted logic that I can’t follow exactly, but he basically says, well, suppose the satisficing AI comes up with a maximizing strategy on its own that will guarantee a 95% probability of success. Boom, universe tiled with paper clips. Uh, how about a rule that checks for maximizing strategies? They get smuggled into books on AI a lot more easily than they get spontaneously generated by computer programs.
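To show why I read Bostrom’s “reasonable” as a smuggled-in confidence maximizer, here is a toy model. The starting doubt of 0.2 and the halving-per-recheck are numbers I made up; nothing here comes from the book.

```python
# Toy model of post-task verification: every recheck halves the residual doubt
# about whether exactly 100 paper clips were made, but never makes it exactly zero.
def verify_once(p_failure):
    return p_failure * 0.5

def thresholded_satisficer(p_failure=0.2, threshold=0.95, budget=1_000_000):
    # Cost-sensitive: stop as soon as confidence clears the 95% bar.
    checks = 0
    while (1 - p_failure) < threshold and checks < budget:
        p_failure = verify_once(p_failure)
        checks += 1
    return checks

def confidence_maximizer(p_failure=0.2, budget=1_000_000):
    # Bostrom-style "reasonable" agent: doubt is never exactly zero, so maximizing
    # confidence keeps burning the verification budget.
    checks = 0
    while p_failure > 0 and checks < budget:
        p_failure = verify_once(p_failure)
        checks += 1
    return checks

print(thresholded_satisficer())  # 2 rechecks and it's done
print(confidence_maximizer())    # on the order of a thousand rechecks, and only because floats underflow to zero
```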

I sort of feel that maximizers have a mental filter which assumes that maximizing is the default way to accomplish anything in the world. But in fact, we all have to settle in the real world. Maximizing is cost insensitive. In fact, I might just be saying that cost insensitivity itself is what’s dangerous. Yeah, we could make things perfect if we could suck up all the resources in the light cone, but at what cost? And really, it would be pretty tricky for an AI to gobble up resources that quickly anyway. There are a lot of agents keeping a close eye on resources. But that’s another question.

My main point is that the AI Alignment debate should include more explicit recognition that maximization run amok is dangerous <cough>as in modern capitalism<cough> and that pure satisficing strategies are much safer, as long as you don’t tie them to unbounded maximizing routines. Bostrom’s entire argument against the safety of satisficing agents is that they might include insane maximizing routines. And that is a weak argument.

Ok, now I feel better. That was just one small point, I know, but I feel that Bostrom’s entire thesis is a house of cards built on flimsy premises such as this. See my rebuttal to the idea that human values are fragile, or to Omohundro’s basic AI drives. Also, see Ben Goertzel’s very civil rebuttal to Superintelligence. Even MIRI seems to agree that some version of satisficing should be pursued.

I am no great Bayesian myself, but if anyone cares to show me the error of my ways in the comment section, I will do my best to bite the bullet and update my beliefs.