This is Part 2 of a three-part review of the XFF pre-party. See Part 1 here. See Part 3 here.
This party was held at RallyPad, which is “an incubator and workspace for non-profits and social entrepreneurs.” It was a decent space, I guess. I like that location down on 2nd Street near Market. I’ve had some clients down there. It’s very vibrant during the day with all the office workers milling about, and you can grab organic salads at Harvest & Rowe for lunch. It’s civilized.
After the cyberpunk panel, I went milling about and talked to people. One fellow asked if we had met at a Ribbonfarm event, but I had never heard of it. It turns out to be a blog by a fellow named Venkatesh Rao. I haven’t had a chance to look it over much, but Rao wrote a book called Tempo: Timing, Tactics and Strategy in Narrative-Driven Decision-Making, and the subtitle alone convinces me that he must be cool. But I was having a hard time understanding how you have an event at a blog. I guess that’s a thing bloggers do now. I have my own Futurist Meetup, so I can see how that could work, but my group is more of an informal discussion group. I also bumped into Michael Anissimov, and he told me about a recent blog entry in which he defends “Thinkism” from Kevin Kelly’s critique. I look forward to reading this and pitching in my own opinion from the peanut gallery, but that is for another post.
I went back and checked out the speakers at some point and heard the end of Michael Keenan’s well-presented take on robotic cars. I understand that he got a lot of his info from Brad Templeton, who is helping Google with their car now. I saw Brad speak on this topic and got to hang out with him a bit at Foresight 2010. The basic argument is that human drivers kill way too many people in car accidents (tens of thousands of deaths a year in the US alone), and robotic cars would save lives. Of course a lot of driving jobs will be eliminated by robotic cars, and the police would also cut staff since so many resources are devoted to traffic-related work. Insurance and alcohol companies will push for their adoption while teamsters and motorheads will oppose it. I’m with the bean counters on this one.
For me the highlight of the evening was H+ magazine editor Peter Rothman’s talk, “The Singularity Already Happened.” Rothman started by outlining various opinions on the Singularity held by Cosma Shalizi, Mark Pesce, Henry Adams, and Kevin Kelly. We might draw from these views that either the singularity already happened or the concept is meaningless. Rothman himself makes an argument similar to Kelly’s that Kurzweil’s exponential charts are misleading, but I think they are splitting hairs. Exponential growth means something even if the date 2035 does not.
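To make concrete why I think that’s splitting hairs, here is a little back-of-the-envelope sketch (my own toy numbers, not Rothman’s or Kurzweil’s): under exponential growth, even big disagreements about the growth rate only slide a milestone date around by a decade or two; they don’t change the qualitative shape of the story.

```python
import math

# Toy illustration (my own assumption, not from the talk): suppose some
# capability has to grow 1,000,000x from today before a "singularity-grade"
# threshold is crossed. Compare how the arrival date shifts as the assumed
# annual growth rate changes.
target_factor = 1_000_000

for annual_growth in (0.40, 0.60, 0.80):  # 40%, 60%, 80% growth per year
    years = math.log(target_factor) / math.log(1 + annual_growth)
    print(f"{annual_growth:.0%} growth/yr -> threshold in ~{years:.0f} years")

# Output: ~41, ~29, and ~24 years respectively. The predicted date moves
# around, but the exponential character of the curve does not.
```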
Rothman goes on to make the point that multiple singularities in communication have already occurred. Humans presumably went from pre-linguistic, to spoken, then to written, and finally to active (software) communication. It was arguably impossible for humans prior to each of these changes to predict what would happen afterward, just as it will be impossible for us to predict the next paradigm shift. We suspect that it’s AGI, but we may be like cavemen predicting that the future will simply be a progression of longer and more complex spoken words.
Rothman then suggested that the number of Facebook friends we all have indicates that our intelligence must already be exploding, à la the Dunbar number: it takes more intelligence to handle larger social groups. While this is borne out somewhat by the Flynn effect, many pundits have pointed out that Facebook friends don’t involve the same level of engagement that traditional meatspace friends do. There are a bunch of casual acquaintances in there.
But it’s an interesting idea that this greater social interaction is a sort of singularity. No one predicted social as a killer app way back in 2002. Building on this social singularity idea, Rothman showed a chart plotting the growth of derivative patents, that is, patents which reference other patents. Their dramatic growth suggests either that we are running out of novel ideas and low-hanging fruit or that we are becoming masters of collaboration. The pessimist in me prefers the former; Rothman likes the latter.
Rothman went on to describe a bunch of military tech and his own involvement in air-defense narrow AI. This was really an amazing talk packed with history and data. But the pièce de résistance was the idea that a malicious AGI might already be out in the wild right now. Rothman suggested that we follow strange flows of money, power, and ideas. He cited crazy trades that drove Kraft’s price up, strange flows of wealth to the top 1%, massive increases in energy consumption, and mystery NSA data centers. This stuff was pure gold for a sci-fi writer. I kept wishing that Daniel Suarez was around to take notes. I won’t bother to comment on less radical explanations of all these phenomena. Suffice it to say that I would apply Hanlon’s Razor: never attribute to malice that which is adequately explained by stupidity. Nonetheless, it was a great talk, and I look forward to Peter posting the slides so that I can review his many wonderful sources.
There is more to say about this amazing party and I still haven’t even gotten into the Humanity+ conference proper.
See Part 3 here.