Utopia Sucks

One often stumbles upon Utopian visions in the thoughtspace of futurists.  Supposedly H.G. Wells was one such, and Kim Stanley Robinson touted him at Humanity+ last year as having had a massive positive effect on society, up to and including Bretton Woods.  But Pinker takes a dimmer view of Utopians: he suggests that any worldview with a goal of infinite utility lasting forever rationally justifies committing the most horrible atrocities toward that end, and he pulls out Pol Pot and Hitler as his bogeymen.  The fact that we can have a somewhat coherent set of alleged Utopians that includes both Pol Pot and H.G. Wells suggests some problems.  For one thing, terms like Utopia, Utility, or Infinite Fun are poorly defined, and even if we could all agree on a universal good, the best approach to reach it would be difficult to determine.
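
Pinker's worry is really just decision-theory arithmetic.  Here's a minimal Python sketch of it (every probability and cost below is invented for illustration, not anyone's actual numbers): once a plan's payoff is infinite, expected-value reasoning stops discriminating between plans, no matter how monstrous their finite costs.

```python
def expected_value(p_success, payoff, cost):
    """Naive expected value of a plan: chance of success times payoff, minus cost."""
    return p_success * payoff - cost

utopia = float("inf")  # "infinite utility lasting forever"

# A cautious plan and a monstrous one; probabilities and costs are made up.
gentle = expected_value(p_success=0.10, payoff=utopia, cost=1e3)
monstrous = expected_value(p_success=0.11, payoff=utopia, cost=1e9)

print(gentle, monstrous)  # inf inf -- the finite costs vanish entirely
```

Any sliver of extra success probability "justifies" an arbitrarily large finite cost, which is exactly how you end up with Pol Pot and H.G. Wells in the same category.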

Take Kevin Kelly’s criticism of Thinkism, which might suggest that we need something more than intelligence to solve the world’s problems.  Michael Anissimov understandably takes exception to that argument, and Kelly’s argument is clearly flawed in some ways.  (Uh, you can already simulate biology today, Mr. Kelly.)  But progress toward any grand social goal, let alone Utopia, is deeply constrained by messy cultural artifacts like economics, politics, and even (God help us) religion.  We have enough food to feed the world, and we have the technology to get to Mars (or close enough).  So why don’t we do those things?  Clearly not everyone agrees that feeding the world or going to Mars are the right things to do.  So how to choose a Utopia?  One solution is to create a Godlike AI to rule them all, overriding all these conflicting goals by assuming everyone would agree if they were just simulated properly.

This is problematic for a bunch of reasons.  But I fear that math is a poor tool for solving the best-path-to-utopia equation, err, problem.  Too much hand-waving is required.  For example, even if we assume that Infinite Fun will be had by populating the universe with “humans,” how do we assign probabilities to the different approaches to achieving that?  Even if we drink the Thinkism Kool-Aid, one could argue that Augmented Intelligence is more likely than Artificial Intelligence.  I mean, we have a good track record with Augmented Intelligence.  Arguably every application we call AI now is just Augmented Intelligence: humans are running these programs and debugging the code.  Maybe we could just bootstrap our way to rulers of the universe by augmenting a bunch of humans.
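
To make the hand-waving concrete, here's a toy sketch (again, every number is invented) of why the math can't settle the AI-versus-IA question: with an infinite payoff on both sides, expected value can't break the tie, and the "rational" choice collapses to whichever probability you happened to make up.

```python
fun = float("inf")  # the assumed payoff of Infinite Fun, for both paths

paths = {
    "Artificial Intelligence": 0.04,  # guessed chance of success
    "Augmented Intelligence": 0.05,   # guessed chance of success
}

# inf == inf, so expected values are useless for comparing the paths...
print(all(p * fun == float("inf") for p in paths.values()))  # True

# ...and the choice reduces to whichever made-up number is bigger.
print(max(paths, key=paths.get))  # Augmented Intelligence
```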

More likely is that these cultural artifacts like economics, politics, religion, and even taste will bog us down.  Maybe that’s ok.  Maybe static visions of Utopia are basically over-fitting and wouldn’t be adaptive to changing environments.  A caveman would probably have imagined a Utopia of endless summer with fat, lazy herds of meat passing continuously by his cave…  Actually that doesn’t sound bad now that I think of it, but you get my point.
