Superintelligence Skepticism: A Rebuttal to Omohundro’s “Basic A.I. Drives”

Superintelligence by Nick Bostrom

In the past couple of years, we have seen a number of high profile figures in the science and tech fields warn us of the dangers of artificial intelligence.  (See comments by Stephen Hawking, Elon Musk, and Bill Gates, all expressing concern that A.I. could pose a danger to humanity.)  In the midst of this worrisome public discussion, Nick Bostrom released a book called Superintelligence, which outlines the argument that A.I. poses a real existential threat to humans.  A simplified version of the argument says that a self-improving A.I. will so rapidly increase in intelligence that it will go “FOOM” and far surpass all human intelligence in the blink of an eye.  This godlike A.I. will then have the power to rearrange all of the matter and energy in the entire solar system and beyond to suit its preferences.  If its preferences are not fully aligned with what humans want, then we’re in trouble.

A lot of people are skeptical about this argument, myself included.  Ben Goertzel has offered the most piercing and even-handed analysis of it that I have seen.  He points out that Bostrom’s book is really a restatement of ideas that Eliezer Yudkowsky has been espousing for a long time.  Then Goertzel digs very carefully through the argument and concludes that the likelihood of an A.I. destroying humanity is probably lower than Bostrom and Yudkowsky think it is, and I agree.  He also points out the opportunity costs of NOT pursuing A.I., but I don’t think we actually need to worry about that, given that the A.I. community seems to be blasting ahead at full speed and A.I. safety concerns don’t seem to be widely heeded.

Even though I assign a low probability that A.I. will destroy all humans, I don’t rule it out. It would clearly be a very bad outcome and I am glad that people are working on this problem. What concerns me is that some of the premises that Bostrom bases his arguments on seem deeply flawed. I actually think that the A.I. safety crowd would be able to make a STRONGER argument if they would shore up some of these faulty premises, so I want to focus on one of them, basic A.I. drives, in this post.

In Superintelligence, Bostrom cites a 2008 paper by Stephen Omohundro called “The Basic A.I. Drives.”  From the abstract:

We identify a number of “drives” that will appear in sufficiently advanced A.I. systems of any design. We call them drives because they are tendencies which will be present unless explicitly counteracted.

Now this is already setting off alarm bells for me, since we already have a bunch of A.I. systems with goals, and none of them seem to be exhibiting any of the drives that Omohundro is warning about.  Maybe they aren’t “sufficiently” advanced yet?  It also seems odd that Omohundro predicts these drives will be present without having been designed in by the programmers; he doesn’t really offer a mechanism for how they might arise.  I can imagine a version of this argument that says “A.I. with these drives will outcompete A.I. without these drives” or something, but that still requires a programmer to put the drives in somewhere; they don’t just magically emerge.

Biological systems have inherent drives, but I don’t see how any artificial system could spontaneously acquire drives unless it had machinery similar to what gives rise to drives in living things.  And biological systems are constrained by things like the need to survive.  Humans get hungry, so they have to eat to survive; that’s a basic drive rooted in biology.  The state of “wanting” something doesn’t just show up unannounced; it’s the product of complex systems, and the only existing examples of wanting we see are in biological systems, not artificial ones.  If someone posits an artificial system that has the drives of a living thing, but not the constraints, then I need to see the mechanism that they think could make this happen.

So that’s a huge problem.  What does it even mean to say that A.I. will “have” these drives?  Where do these drives come from?  Big problem.  Huge.

Anyway, let’s dig in a bit further and examine each of these drives.  What we see is that, in each case, Omohundro posits a reasonable-sounding explanation of why each drive would be “wanted” by an A.I.  But even though this is a paper written in an academic style, with citations and everything, it’s not much more than a set of reasonable-sounding explanations.  So I will take a cue from rationalist blogger Ozymandias: I will list each of Omohundro’s drives and then offer my own plausible-sounding explanations for why each drive could turn out entirely differently.

1. A.I.s will want to self-improve. Why self-modify when you can make tools?  Tools are a safer way to add functionality than self-modification.  This is the same argument I use against current-generation grinders (body-modification biohackers).  Don’t cut yourself open to embed a thermometer; just grab one when you need it and then put it aside.  Also, it’s much easier to preserve a utility function if the A.I. simply straps on a module than if it messes with its own source code.  Upgrades to tools are easy too.  It’s foolish and risky to self-modify when you can just use tools.

When I first posted this to Facebook, I got into a whole debate with Alexei, who has insight into MIRI’s thinking.  He insisted that the optimization of decision-making processes will lead to overwhelming advantages over time.  I countered that competing agents don’t get unbounded time to work on problems, which is why we see “good enough,” satisficing strategies throughout nature.  But a lot of A.I. safety people won’t allow that there can ever be any competition between A.I.s, because once a single A.I. goes FOOM and becomes godlike, no others can compete with it and it becomes the one to rule them all.  But the period leading up to takeoff would certainly involve competition with other agents, and I also believe that problem-solving intelligence does not exist independently, outside of a group, but I won’t get into that here.

2. A.I.s will want to be rational.  This seems correct in theory.  Shouldn’t we predict that rational agents will outcompete irrational agents?  Yet, when we look at the great competition engine of evolution, we see humans at the top, and we aren’t that rational.  Maybe it’s really, really, really hard for rational agents to exist, because it’s hard to predict the outcomes of actions and because goals evolve over time.  I’m not sure about this one; my objection is weak.

3. A.I.s will try to preserve their utility functions.  Utility functions for humans (i.e., human values) have clearly evolved over time and differ across cultures.  Survival might be the ultimate function of all living things, followed by reproduction.  Yet we see some humans sacrificing themselves for others, and some of us (myself included) don’t reproduce.  So even these seemingly top-level goals are not absolute.  It may well be that an agent whose utility function doesn’t evolve will be outcompeted by agents whose goals do evolve.  That seems to be the case empirically.

4. A.I.s will try to prevent counterfeit utility.  I don’t really disagree with this, though there may be some benefit to taking in SOME information that wouldn’t be part of the normal search space when only pursuing our goals.  The A.I. equivalent of smoking pot might be a source of inspiration that leads to insights, and thus might actually be rational, even though it could certainly APPEAR to be counterfeit utility.

5. A.I.s will be self-protective.  Hard to disagree with this; it’s a reliable goal.  But, as I mentioned earlier in this post, I have questions about where this goal would come from.  DNA-based systems have it, but it’s built into how we function; it didn’t just arise.  AlphaGo, for some reason, doesn’t resist being turned off.

6. A.I.s will want to acquire resources and use them efficiently.  Omohundro further says, “All computation and physical action requires the physical resources of space, time, matter, and free energy.  Almost any goal can be better accomplished by having more of these resources.”  I strongly disagree with this.  Rationalists have told me that Gandhi wouldn’t take a pill that would make him a psycho killer and that they want to build a Gandhi-like A.I.  But if we take that analogy a bit further, we see that Gandhi didn’t have much use for physical resources.  There are many examples of this.  A person who prefers to sit on the couch all day and play guitar doesn’t require more physical resources either.  They might acquire them by writing a hit song, but those resources aren’t instrumental to their success.

Guerrilla warfare can defeat much larger armies without amassing more resources.  Another point a futurist would make is that a sufficiently advanced A.I. will have an entirely different view of physics.  Resources like space, time, and matter might not even be relevant, or could possibly be created or repurposed in ways we can’t imagine.  This is a bit like a bacterium assuming that humans will always need glucose.  We do, of course, but we haven’t taken all of the glucose away from bacteria; far from it.  And we get glucose via mechanisms that a bacterium can’t imagine.

So really, I hope that the A.I. safety community will consider these points and try to base their arguments on stronger premises.  Certainly Omohundro’s 2008 paper is in need of a revision of some kind.  If we are just throwing reasonable-sounding explanations around, let’s consider a broader range of positions.  Let’s consider the weaknesses of optimizing for one constraint, as opposed to satisficing across a lot of goals.  A satisficing A.I. seems much less likely to go down the FOOM path than an optimizing A.I., and, ironically, it would also be more resilient to failure.  I offer all of this criticism with love, though.  I really do.  Because at the end of the day, I don’t want our entire light cone converted into paper clips either.
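
To make the optimizing-versus-satisficing distinction concrete, here is a toy sketch of my own (nothing in it comes from Bostrom or Omohundro, and all the numbers are made up): an optimizer that examines every option versus a satisficer, in Herbert Simon’s sense, that stops at the first option clearing an aspiration level.

```python
import random

# Toy illustration: an optimizer exhaustively searches for the best option,
# while a satisficer stops at the first "good enough" one.
# All numbers here are invented for illustration.

random.seed(0)
options = [random.random() for _ in range(10_000)]  # utility of each candidate action

def optimize(options):
    """Examine every option and return the best one (maximal search cost)."""
    best, cost = None, 0
    for utility in options:
        cost += 1
        if best is None or utility > best:
            best = utility
    return best, cost

def satisfice(options, aspiration=0.95):
    """Return the first option that clears the aspiration level (bounded search cost)."""
    cost = 0
    for utility in options:
        cost += 1
        if utility >= aspiration:
            return utility, cost
    return max(options), cost  # fall back to the best seen if nothing clears the bar

best, full_cost = optimize(options)
good_enough, small_cost = satisfice(options)
print(f"optimizer:  utility={best:.4f}  options examined={full_cost}")
print(f"satisficer: utility={good_enough:.4f}  options examined={small_cost}")
```

The only point of this toy is that the satisficer’s search cost is bounded, which is exactly what matters when competing agents don’t give you unbounded time to deliberate.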

[EDIT 4/10/2016]
I appreciate that Steve came and clarified his position in the comments below. I think that my primary objection now boils down to the fact that the list of basic A.I. drives is basically cost- and risk-insensitive. If we consider the cost and risk of strategies, then an entirely different (more realistic?) list would emerge, providing a different set of premises.

[EDIT 4/11/2016]
When you think about it, Omohundro is basically positing a list of strategies that would literally help you solve any problem.  This is supposed to be a fully general list of instrumental goals for ANY terminal goal.  This is an extraordinary claim. We should be amazed at such a thing!  We should be able to take each of these goals and use them to solve any problem we might have in our OWN lives right now.  When you think of it this way, you realize that this list is pretty arbitrary and shouldn’t be used as the basis for other, stronger arguments or for calculating likelihoods of various AI outcomes such as FOOM Singletons.

[EDIT 4/12/2016]
I was arguing with Tim Tyler about this on Facebook, and he pointed out that a bunch of people have come up with these extraordinary lists of universal instrumental values.  I pointed out that all of these seemed equally arbitrary and that it is amazing to me that cooperation is never included.  Cooperation is basically a prerequisite for all advanced cognition and yet all these AI philosophers are leaving it off their lists?  What a strange blind spot.  These sorts of fundamental oversights are biasing the entire conversation about AI safety.

We see in nature countless examples of solutions to coordination problems, from biofilms to social animals, and yet so many AI people, and rationalists in general, spurn evolution as a blind idiot god.  Well, this blind idiot god somehow demanded cooperation, and that’s what it got!  More AI safety research should focus on proven solutions to these cooperation problems.  What’s the game theory of biofilms?  More Axelrod, less T.H. Huxley!
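
For a sense of what “more Axelrod” might look like in code, here is a bare-bones sketch of my own of Axelrod’s iterated prisoner’s dilemma, pitting tit-for-tat against a couple of simple strategies with the standard payoff values; the strategy names and round count are just illustrative choices.

```python
# Toy version of Axelrod's iterated prisoner's dilemma.
# Standard payoffs: temptation=5, reward=3, punishment=1, sucker=0.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_history, their_history):
    """Cooperate first, then copy the opponent's previous move."""
    return "C" if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return "D"

def always_cooperate(my_history, their_history):
    return "C"

def play(strategy_a, strategy_b, rounds=200):
    """Play two strategies against each other and return their total scores."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

for opponent in (always_cooperate, always_defect, tit_for_tat):
    print("tit_for_tat vs", opponent.__name__, "->", play(tit_for_tat, opponent))
```

Tit-for-tat cooperates fully with cooperators and refuses to be repeatedly exploited by defectors, which is the flavor of result that made Axelrod’s tournaments famous.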

How I Discovered What’s Wrong with Cultural Appropriation

I was on Facebook when my friend Razib posted a video of a black woman at SF State calling out a white guy with dreadlocks and accusing him of cultural appropriation. Maybe the video is fake, maybe it’s real; it’s hard to say. It seems sort of staged. Of course Razib and his fellow academics got all worked up about it. They are all sort of shell-shocked by these social justice warriors turning academia into a politically correct police state. Never mind that conservatives are the ones to blame for letting the far left gain the upper hand there.

Don’t get me wrong, I came into this thread ready to stick up for cultural appropriation. After all, what would America be if we didn’t appropriate the cultures of other nations? I’m a mutt myself; I don’t even have my own culture. What the hell music would I be allowed to listen to? Polka and beer-hall oom-pah-pah music? (Shudder.) So I was rolling up my sleeves, ready to join in the self-congratulatory derision of the latest social justice fad, but then I noticed another friend of mine in the thread trying to explain why cultural appropriation was actually bad. He’s no social justice warrior (SJW) himself, and he wasn’t making a great case, but people were deriding him and making ad hominem attacks against him, and it made me sort of annoyed.

I go by virtue ethics, and I don’t stand by and let a pal get beaten up. So I had to stop myself and think about cultural appropriation in a new light. Why is it that SJWs brandish this idea of cultural appropriation? So I made an attempt to steelman the position that I had previously derided, and to come up with a model that explains why cultural appropriation is harmful. In doing so, I convinced myself that SJWs are partially correct, and that cultural appropriation is sometimes a bad thing.

So let’s start with some sort of definition of what cultural appropriation is.

Here’s a respectable snippet from Wikipedia:

“Cultural elements, which may have deep meaning to the original culture, can be reduced to ‘exotic’ fashion by those from the dominant culture. When this is done, the imitator, who does not experience that oppression, is able to ‘play,’ temporarily, an ‘exotic’ other, without experiencing any of the daily discriminations faced by other cultures.”

Seems legit. One small solace of black people in America might be that they get to be “cool” in some way and can be afforded status in their unique subculture. How annoyed black rockers must have been when Elvis skyrocketed in popularity above them. How humiliating was blackface vaudeville to the contemporary black artists it was imitating? And now this hipster dreadlocked boy gets to parade around in the modern equivalent of blackface, usurping the cool factor of being an outsider. But at any moment, he might cut his hair, put on a suit, and blend seamlessly into the dominant culture, while this black woman is left with her crappy internship, forever barred from many powerful inner circles due to her race and gender. What a bitter pill that must be to swallow.

There was a time when (white though I am) I was treated as relatively low status for being a nerd with emotional problems. So I went and became a punk rocker and a goth, and I got some local subculture status, and that felt good. I was pretty disgusted by all of the jock-core bands that came out and kind of ruined hardcore. I was annoyed by the suave popular kids who posed as new wavers. So I can understand where these SJWs are coming from.

It actually might help to think of this in terms of status hierarchies. This is a trick I learned from the rationalist community. Some rationalists have trouble understanding social interactions and have decided to model them all as status competitions. This is disturbingly accurate when you think about it. So let’s model cultural appropriation in terms of a status competition, shall we?

Conservatives don’t like to allow that minorities are “oppressed,” but we can probably all still agree that black Americans are generally treated as lower status than whites. So, of course, blacks built their own independent status hierarchies, and, back in the day, the minstrels achieved a certain status, putting on folksy comedic shows. Then whites came along, slapped on blackface, and stole the show, partially by virtue of their high status whiteness, without necessarily capturing the authentic down home humor. Boom. Status hierarchy hijacked.

So then jazz hierarchies emerged, and, oops, here came whites again to hijack the top of the hierarchy (Miles Davis got beaten out by some white guy named Chet Baker for trumpeter of the year?); then Elvis stole rock and roll, and so on. Even dreadlocks probably afford blacks a certain local status, and this is diminished by whites interjecting themselves into these hierarchies.

So yeah, that sucks. Now the conservatives and neo-reactionaries will howl about how bad social justice is and how it represses free speech and the true diversity of ideas and how it’s out of touch with reality and The Gods of the Copybook Headings and whatnot. And some may even cry that black Americans aren’t treated as low status and are ascendant right now. Mike J. pointed out to me that status is actually revealed in each discrete social interaction. And maybe when some blacks get into college via affirmative action, they push out some whites. This all seems preposterous and annoying to me. I hate it when strong people think of themselves as weak. Not to mention the fact that adopting victim narratives robs people of agency, so no one should really do it if they can avoid it.

Look at Kamau Bell’s incident at the Elmwood Cafe. Here we have a high-status black man, a successful comedian who had a national show on FX and attended an Ivy League school. But he dressed down one day, and he was mistaken for a homeless person by a barista who tried to shoo him away from talking to HIS OWN WIFE on the patio of the Elmwood Cafe (in liberal Berkeley, no less). But instead of just making a joke about it, he angrily posted about it on social media, and the girl ended up getting fired.

When I first heard of this, I toyed with the idea that Bell was falling prey to his own victim narrative. He should have just laughed off this low-wage counter prole and told her to relax herself and bring him a coffee while she was at it, no tip to be expected.

But the fact is that, in this world, even a rich, educated, fairly famous black comedian gets treated like a homeless person by a minimum wage earning white cafe lackey. The conservatives can deny it all they want, but blacks are treated as low status. So I am going to hold my tongue and not just tell this guy to buck up and adopt a narrative in which he has power and can afford to act generously towards those below him.

I don’t approve of SJW tribunals sentencing dreadlocked whites to social ostracization. But I also don’t think that’s going to be a problem outside of academia. Out here in the real world, social justice doesn’t really have any power, and minorities and queers are getting crapped on. And it’s not cool for the relatively powerful to swoop in and steal the crumbs of subcultural status that outsiders have tried to amass for themselves. I understand why they get pissed off about it. It’s just not classy. I know we need conservative impulses to keep society from flying off the rails, but we also need social justice and the progressives in order to progress as a civilization. Otherwise, we might still be burning cats or chaining children to factory floors.

Social justice remains the pointy end of the spear driving western cultural progress. We shall not remain worms, but will evolve to something greater.

EDITS: 4/4/2016

First point: It was brought up to me privately that cultural appropriation can muddy the waters and make authentic cultural exchange more difficult.  I need to think about this more, but the Native American headdress makes a good example.  When this headdress is used as a costume, it is stripped of its deeper religious and social meaning.  We’ve missed the point of what each feather and token might actually represent.  It’s become just a pretty hat.  Or what if we had adopted Arabic numerals strictly as decoration, without regard to their use in mathematics?  Would the thinkers of Europe have scoffed at the idea that these scribbles, worn as ornaments by the fashionable, could have a deeper meaning?  I’m not entirely sure, and of course this dreadlocks example doesn’t fall into this category, but it’s something worth considering.

Second point: I actually spent a huge segment of my day arguing about this on Facebook, and I got sort of exhausted by it and by the absolutely uniform rejection of my defense of this SJW. And I wonder: to what end have I done this to myself? What difference does it make in my life, and what contribution am I offering to the greater good?

Personally, I felt very similar to most of the people on this thread just last week. But after taking the time to try to steelman this SJW idea of cultural appropriation, I actually found a way to understand it. For me, this was an excellent exercise in updating beliefs.

What disappoints me is that so many of my intelligent and sensitive friends don’t seem to be trying to steelman this position AT ALL. I see little effort to understand the motives of the SJWs who prattle on about cultural appropriation. I don’t see anyone trying to give this black woman the benefit of the doubt. My god, if anyone doubts that blacks have a hard time in America, they would need to look no further than that very thread or even the other comment threads discussing this topic. Who has made the slightest effort to understand this woman’s pain? Who has looked past her boorish but basically harmless behavior to the underlying causes?

I really wish more people would make an attempt to steelman the positions of their opponents in more cases. It’s hard to do but it would yield much better arguments.

The Robot Lord Scenario

A robot slices a ball of dough and drops the strips into a pot to make noodles at a food stall in Beijing. – photo by AP

I just finished reading Rise of the Robots, by Martin Ford. This is a nonfiction book in which Ford predicts that all jobs will soon be automated away, and that this will lead to an economic crash, since no one will have any money to buy anything.  I’ve written about this idea before, and Ford’s position hasn’t changed much since his previous book, The Lights in the Tunnel.

Economists call the idea that automation makes jobs disappear the “Luddite fallacy,” and they have long dismissed the possibility, because, up until now, whenever automation took jobs away in one area, new jobs were created in another, so there was nothing to worry about.  The Luddites are named after Ned Ludd, whose followers smashed weaving machines in early nineteenth-century England in order to save the jobs of weavers.  But progress rolled on, and weavers apparently found other jobs to do.  Just as automation on the farm put farmhands out of work, new jobs opened up in factories.  This pattern has been repeated over and over since the Industrial Revolution.

So why should we even listen to Ford and his ranting that jobs are actually disappearing, not just changing?  Well, for one, because he does a decent job of documenting actual job stagnation.  I had assumed that we were just sending jobs (such as call center jobs) overseas, i.e., offshoring.  And while this feels painful to us, if it means that even poorer and hungrier people in other countries get more food, then that doesn’t seem like a bad tradeoff.  But while Ford acknowledges that offshoring may account for some of the employment stagnation in the US, he points out that most of our money is spent on services that can’t be offshored.  So he insists that jobs are being taken by machines, not by starving foreigners.

He documents an impressive array of recent machine accomplishments, from making hamburgers to composing emotionally compelling music.  I don’t doubt that this is happening.  There is almost nothing that humans do to earn money that machines won’t be able to do more cheaply at some point.  The key question is WHEN this will happen.  Ford thinks that it could happen somewhat soon, and that we’d better whip out the guaranteed minimum income pretty quickly so we don’t have a massive social collapse.  To overcome libertarian objections to this idea, he even digs up the Austrian economist Friedrich Hayek, who is worshipped by free-market libertarians and who thought that a guaranteed minimum income was a good idea.

Unsurprisingly, he fails to placate the free-market libertarian Robin Hanson, whom rationalists know and love from his OvercomingBias blog.  Hanson wrote a nice takedown of Ford’s book on Reason.com.  Hanson focuses on Ford’s egalitarian streak and is most annoyed that anyone would object to his beloved economic inequality, which he holds near and dear to his heart, as any proper conservative should.  Ford and Hanson have locked horns before, and I do find their sparring entertaining, but I don’t feel that Hanson properly dissects the core of Ford’s argument.

To me, the basic question is this: Can our world economy continue to function in the absence of consumers at the bottom?

In Ford’s view, the economy will stall if there are only rich consumers, because the rich spend a smaller percentage of their income than the poor do.  Economists call this the “marginal propensity to consume,” or something like that.  Yet, somehow, consumer spending has increased even as wages have remained stagnant, and the rich have come to make up a greater percentage of consumer spending.  Ford says that this is because debt has increased.  Hanson replies with the apparent non sequitur that debt hasn’t increased as much as inequality.  Uh, what?  Debt needs to increase enough to cover consumer spending, not to match inequality.  But the fact is that if consumer spending increases, and the percentage that the rich contribute to consumer spending also increases, well, maybe we don’t need poor people to run the economy.
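
To see why the distribution of income would matter at all, here is a back-of-the-envelope sketch with invented marginal propensities to consume; none of these figures come from Ford or Hanson, they are only meant to illustrate the shape of the argument.

```python
# Back-of-the-envelope illustration of the marginal propensity to consume (MPC):
# the same total income produces less consumer spending when more of it goes
# to households that save a larger share.  All figures are invented.

def total_consumption(income_by_group, mpc_by_group):
    """Sum each group's income times the share of it they actually spend."""
    return sum(income_by_group[g] * mpc_by_group[g] for g in income_by_group)

mpc = {"poor": 0.95, "middle": 0.80, "rich": 0.50}  # assumed spending rates

# Scenario A: broadly shared income (arbitrary units; totals 1000 in both cases).
shared = {"poor": 300, "middle": 500, "rich": 200}
# Scenario B: the same total income, concentrated at the top.
concentrated = {"poor": 150, "middle": 350, "rich": 500}

print("shared income:       consumption =", total_consumption(shared, mpc))
print("concentrated income: consumption =", total_consumption(concentrated, mpc))
```

With numbers like these, shifting income toward the group that saves the most lowers total consumption even though total income is unchanged, which is the shape of Ford’s worry.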

I don’t really understand these economics.  But it does sort of seem that the Fed is just printing money and trucking it directly into the bank accounts of the super rich, who aren’t spending much, so that would explain how inflation is held in check.  Then again, deflation from automation would balance all that quantitative easing.  Um, I think I will shut up now.

Anyway, I figured that of course you need a lot of poor consumers because they will cover the space of all possible desires for products better than a few rich consumers, and thus provide a broader base for innovation.  But then again, the poor are just cattle that herd together like idiotic conformists, all consuming the same garbage media like Taylor Swift and wearing the same outfits from the mall.  Whereas the rich value eccentricities?  They probably spend more money on Cristal and superyachts than fine art and health extension.  I don’t know.  Next topic.

If it does play out that the poor are automated out of work, and yet the economy keeps running based on the demands of a tiny, super rich elite, we could end up with what Noah Smith calls the Robot Lord scenario:

“The day that robot armies become more cost-effective than human infantry is the day when people power becomes obsolete.  With robot armies, the few will be able to do whatever they want to the many.  And unlike the tyrannies of Stalin and Mao, robot enforced tyranny will be robust to shifts in popular opinion.  The rabble may think whatever they please, but the Robot Lords will have the guns.  Forever.”

Nice!  Noah is a futurist after my own heart.  Who is going to force the super rich to hand out guaranteed incomes if they can sequester themselves in gated communities protected by autonomous weapon systems?  Sick as this may seem, it’s a remarkably American way for things to play out.  So what would happen to the lumpen masses?  This is grist for a great sci-fi novel.  Ragged, unaugmented humans trying to scrape out a meager existence in the trash heaps of the super rich transhuman aristocrats.  I guess the film Elysium examines this sort of scenario.  I haven’t seen it, but I might check it out in spite of the Hollywood stench that surrounds it.  Bruce Sterling sees this trend of “dematerialization” as more than just a Silicon Valley buzzword and imagines a “favela chic” scenario:

“You have lost everything material, no job or prospects, but you are wired to the gills and really big on Facebook.”

It’s not clear to me how the government fits into this scenario.  Governments do like to stockpile weapons and other real assets.  It is hard to see how they would go away entirely.  Maybe they will be the ones handing out the food bars while we fervently click the “like” buttons to trigger neurotransmitter spikes with our VR headsets on.

Nonetheless, we can imagine that hackers will play some unique role in this fully automated future.  They might be like Merlin, working magic for the future kings of capital.  Or perhaps some will be like Robin Hood, stealing from the rich to feed the poor.  Still others will be like Loki, wreaking havoc and glorying in the chaos, as hackers have always done.  But maybe the aristos will simply be replaced by hackers in the end.  After all, when all you have are robots to protect you, you’d better not be vulnerable to any SQL injection attacks, or you will get owned by super-class-A hackers.  I’d better book my trip to Las Vegas for DefCon this year.  I’ve got a lot of studying up to do if I want to survive the next feudal age.
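
Since I brought up SQL injection, here is a minimal illustration of the kind of bug I mean, using Python’s built-in sqlite3 and a made-up “robots” table; the table, the data, and the attacker string are all hypothetical.

```python
import sqlite3

# Minimal illustration of SQL injection against a made-up "robots" table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE robots (name TEXT, owner TEXT)")
conn.execute("INSERT INTO robots VALUES ('sentry-1', 'aristo')")

user_input = "x' OR '1'='1"  # attacker-controlled string

# Vulnerable: the input is pasted directly into the query string,
# so the injected OR clause matches every row.
vulnerable = conn.execute(
    f"SELECT * FROM robots WHERE owner = '{user_input}'").fetchall()

# Safe: a parameterized query treats the input as data, not as SQL.
safe = conn.execute(
    "SELECT * FROM robots WHERE owner = ?", (user_input,)).fetchall()

print("string-built query returned:", vulnerable)  # leaks the sentry row
print("parameterized query returned:", safe)       # returns nothing
```

The parameterized version treats the attacker’s string as data rather than as part of the query, which is the standard fix.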