Extreme Futurist Festival Pre-Party with Dorkbot SF, 11-30-2012, Part 1

The Extreme Futurist Festival organizers held a fundraiser tonight in SF along with DorkbotSF at RallyPad to pay for bleachers so that we can see SRL do its thing at the XFF.  This event had some counter-culture undertones that are missing from most futurist events I attend.  The night started out with a panel discussion, supposedly about cyberpunk, between V. Vale, R.U. Sirius, and Rachel Haywire.

I guess Vale started Re/Search magazine.  I remember seeing the Modern Primitives book they put out in the early 90’s.  It introduced us hillbillies in Buffalo, NY to the idea of voluntary genital mutilation, which was helpful.  Vale brought up some of the ideas that often concern futurists, such as the problems content creators face getting paid in the digital realm.  He seemed somewhat saddened by the idea that young people must turn to startups to pay the rent.  This must be what it means to live in the Silicon Valley bubble.  Even here in the Bay Area, only a small number of people are resorting to startups.  Vale suggested that people will need to create stores selling paper and tangible media to make a living… Uh, Lanier he ain’t.  But I sympathize with his pain.  I guess Re/Search couldn’t hang with the digital disintermediation.

Vale pulled out a J.G. Ballard quote: sex * tech = the Future, which really makes no sense to me unless “the Future” = “internet porn.”  Still, I guess I will look Ballard up and skim some of his work.  I did like how Vale compared the Web 2.0 idea of user-contributed content to the DIY, “anyone can do it” ethic of punk rock.  A lot of my cherished 80’s British New Wave bands were inspired by how crappy the Sex Pistols were.  “Hell, if they can do it, why not us?”  Are blogs, tweets, and user comments the natural cousins of punk rock?  I can see a parallel between the punk degradation of music and the bloggers’ degradation of journalism, anyway.

Rachel Haywire is organizing the XFF, and she spoke a bit about her love of transgressive, hyper-intelligent “counter-counter-culture” (sic).  Rachel lamented that the Nazis ruined eugenics for the rest of us.  “Total fail.”  (She must be going for the understatement-of-the-year award.)  Pearce and others brought up the potential for genetic engineering to reduce suffering at the Humanity+ conference, but I worry about the risks.  In closing, Rachel threw down a funny poem about future sexbots using “let me tell you about my TED talk” as a pickup line.

R.U. Sirius edited the cyberpunk magazine Mondo 2000, which was quite dear to me in the late 90’s when I arrived in the Bay Area.  It first introduced me to the idea of smart drugs (but not where to get them, unfortunately).  William Gibson was probably my favorite science fiction writer at that time, and in my mind he really was the founder of cyberpunk.  Cyberpunk is dark and gritty.  It is cynical about the dominance of corporations and explores how tech can be repurposed by hackers and criminals.  “The street finds its own uses for things,” as Gibson says.  But Sirius offered that Gibson himself never considered his novels to be dystopian.  Would today’s world seem dystopian to our agrarian ancestors?

R.U. Sirius was quite funny and talked about cyberpunk as a memeplex that included conceptual, industrial, and performance art along with hacker culture.  “Drug-influenced cyber hippies dancing on the edges of corporate culture.”  He even suggested the wild idea of using genetic engineering as a drug by adding schizophrenic genes to go on an insanity trip.  Nice.  In his mind, Mondo 2000 really imagined the future while Wired focuses on the prosaic and dull reality of today.  But it seems to me that Wired figured out how to work in the online format while Mondo simply did not.  That’s too bad.

My coverage of and commentary on the XFF pre-party continues here.

Deutsch: Where’s the AGI? Goertzel: Give it time. Me: What is inductive?

After seeing Kurzweil speak last week, I went browsing around on KurzweilAI.net.  It really is a cool website of well-curated articles on cutting-edge science and other futurist topics.  I came across an article by the artificial general intelligence researcher Ben Goertzel.  In it, Goertzel defends AGI researchers against an attack by quantum computing guru and Taking Children Seriously advocate David Deutsch.  Coincidentally, a friend of mine sent me an abridged version of Deutsch’s argument just yesterday.

Deutsch’s basic premise is that even though the laws of physics suggest that artificial intelligence must be possible, AI research has made no progress in the past 60 years.  He suggests that the reason for this is that AI (AGI) researchers rely on inductive reasoning, not understanding that Popper utterly destroyed induction back in the 80’s (or something).

Goertzel responds to this by effectively saying “Well, er, no.”  While it would be nice to have a proper philosophy of mind, Goertzel says:

I classify this argument of Deutsch’s right up there with the idea that nobody can paint a beautiful painting without fully solving the philosophical problem of the nature of beauty.

Goertzel’s alternative explanation for the failure of AGI has three main points:

  • Computer hardware is weaker than the human brain.
  • AGI receives little funding.
  • There is an integration bottleneck (the most interesting point, in my mind).

As Goertzel says:

The human brain appears to be an integration of an assemblage of diverse structures and dynamics, built using common components and arranged according to a sensible cognitive architecture. However, its algorithms and structures have been honed by evolution to work closely together — they are very tightly inter-adapted, in somewhat the same way that the different organs of the body are adapted to work together.

Achieving the emergence of these structures within a system formed by integrating a number of different AI algorithms and structures is tricky.

Goertzel doesn’t point this out, but it occurs to me that Deutsch might reject this view of AGI.  If Deutsch viewed AGI as involving the integration of diverse cognitive structures, he might not insist that NO progress has been made toward AGI.  Deutsch does acknowledge that narrow AI has made progress (chess playing, Jeopardy playing, plane flying, etc.).  So he must not think that we will be able to assemble components like a cerebellum simulation or Watson-style search into an AGI.  Otherwise they would represent some sort of advancement.

Deutsch suggests that AGI must have a non-inductive way of generating conjectures about reality:

 …knowledge consists of conjectured explanations — guesses about what really is (or really should be, or might be) out there in all those worlds. Even in the hard sciences, these guesses have no foundations…

Thinking consists of criticising and correcting partially true guesses with the intention of locating and eliminating the errors and misconceptions in them, not generating or justifying extrapolations from sense data.

But it’s not clear to me where human conjectures come from if they aren’t allowed to be derived from observations:

AGI’s thinking must include some process during which it justifies some of its theories as true, or probable, while rejecting others as false or improbable. But an AGI programmer needs to know where the theories come from in the first place. The prevailing misconception is that by assuming that ‘the future will be like the past’, it can ‘derive’ (or ‘extrapolate’ or ‘generalise’) theories from repeated experiences by an alleged process called ‘induction’.

But that’s probably just me not understanding induction.  I must be mistaken when I assume that the whole Archimedes-in-the-bathtub thing was sort of a conjecture derived from observation.  Are Deutsch and Popper really asserting that conjectures have no relation to “sense data”?  I’d better hit the books.  Conjectures about reality derived from observations have been shown to be useful by programs like Eureqa.  And evolutionary algorithms have even come up with patentable conjectures somehow.
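To make the conjecture-from-observation point concrete, here is a toy sketch in the spirit of evolutionary programs like Eureqa (this is my own minimal illustration, not Eureqa’s actual algorithm): “conjectures” are candidate linear laws y = a*x + b, generated by random mutation and kept or discarded based on how badly they fail against the observations.

```python
import random

random.seed(0)

# Observations generated by a hidden law: y = 2x + 1.
observations = [(x, 2 * x + 1) for x in range(10)]

def error(conjecture):
    """How badly a conjecture (a, b) fails against the observations."""
    a, b = conjecture
    return sum((a * x + b - y) ** 2 for x, y in observations)

best = (random.uniform(-5, 5), random.uniform(-5, 5))  # a blind initial guess
for _ in range(5000):
    # Mutation: a new conjecture derived from (but not dictated by) the old one.
    candidate = (best[0] + random.gauss(0, 0.1),
                 best[1] + random.gauss(0, 0.1))
    # Criticism: observations eliminate the worse conjecture.
    if error(candidate) < error(best):
        best = candidate

a, b = best
print(round(a, 2), round(b, 2))  # should converge near (2, 1)
```

The conjectures here are literally random guesses, but the observations do the error elimination, which is why it seems odd to say such conjectures have no relation to sense data.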

I have talked to several local AGI researchers who complain about the difficulty of injecting a certain randomness into the conjectures their programs generate.  So it’s not like AGI folks are in the dark about this stuff.  As Goertzel says, quoting Deutsch:

In the end, Deutsch presents a view of AGI that comes very close to my own, and to the standard view in the AGI community:

An AGI is qualitatively, not quantitatively, different from all other computer programs. Without understanding that the functionality of an AGI is qualitatively different from that of any other kind of computer program, one is working in an entirely different field. If one works towards programs whose “thinking” is constitutionally incapable of violating predetermined constraints, one is trying to engineer away the defining attribute of an intelligent being, of a person: namely, creativity.

Yes. This is not a novel suggestion, it’s what basically everyone in the AGI community thinks; but it’s a point worth emphasizing.

Deutsch seems to take a veiled shot at SIAI when he suggests that programming an AI to behave a certain way would be like enslavement.  So constraining an AI to friendliness would be evil.  Nice.  I would never have thought of that.  I just assumed that constraining a superhuman AI was simply impossible.  Now Deutsch adds that, if possible, it would be immoral, akin to brainwashing.  The real question in his mind is how AGIs or humans with good ideas can defeat those with bad ideas.  Here he implicitly rejects or ignores some of the assumptions that SIAI takes for granted, such as the inevitability of a singleton AI that is basically godlike.  So maybe he isn’t taking a swipe at SIAI, since he hasn’t bothered to read what they think.

But his answer to this seems a bit like the man with a hammer who sees every problem as a nail.  He seems to be trotting out his Taking Children Seriously ideas:

One implication is that we must stop regarding education (of humans or AGIs alike) as instruction — as a means of transmitting existing knowledge unaltered, and causing existing values to be enacted obediently. As Popper wrote (in the context of scientific discovery, but it applies equally to the programming of AGIs and the education of children): ‘there is no such thing as instruction from without … We do not discover new facts or new effects by copying them, or by inferring them inductively from observation, or by any other method of instruction by the environment. We use, rather, the method of trial and the elimination of error.’ That is to say, conjecture and criticism. Learning must be something that newly created intelligences do, and control, for themselves.

Now my problem here is that the enactivists so dear to my heart might agree that perception is self-directed.  They would call learning, and cognition generally, a sort of sensorimotor coupling between agent and environment in which both are changed.  And that bears some resemblance to the stuff Popper and Deutsch are preaching.  However, I’m getting the feeling that this Popperian stuff devalues the environment overmuch.  I can buy that the environment doesn’t provide instruction, but it offers affordances, surely.  And children do seem to copy behaviors they are exposed to.  This might not be discovery per se, but it sure seems like learning.

But I don’t want to reject Deutsch’s ideas.  I want to overcome my reservations because, after all, what the hell do I know?  I know nothing of Popper, really.  This epistemology stuff is hard, especially if I can’t expect to learn it without making conjectures and falsifying them.

UPDATE 11/21/2012:  I went back and reread the article referenced above: Induction versus Popper: substance versus semantics.  It offers an explanation of Popper’s view that I hadn’t considered.  In order for a conjecture to result from being surprised by sense data, one must have had an implicit model that was being violated.  That’s easy to go along with.  There is always a model, even in Norvig’s approach.  So sure, there is no such thing as induction by that definition.  Who cares?  By that measure, no AI researchers I know of are hung up trying to do something that doesn’t exist.  Deutsch would have made a better case if he had used specific examples of AI strategies that were failing due to the fallacy of induction.
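The “surprise requires an implicit model” point can be sketched in a few lines (my own illustration, not from the article): here the naive implicit model is literally “the future will be like the past,” and a large prediction error is what flags the need for a new conjecture.

```python
def surprises(stream, threshold=1.0):
    """Return the indices where the stream violates the implicit model."""
    flagged = []
    prediction = stream[0]  # implicit model: the next value equals the last one
    for t, observed in enumerate(stream[1:], start=1):
        if abs(observed - prediction) > threshold:
            # Sense data alone didn't flag this -- the violated expectation did.
            flagged.append(t)
        prediction = observed
    return flagged

print(surprises([1.0, 1.1, 1.0, 5.0, 5.1, 5.0]))  # → [3]
```

Without the `prediction` variable there is nothing for the data to be surprising relative to, which is exactly the sense in which “there is always a model.”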

UPDATE 12/27/2012: I am confused again about Popper.  Would he say, then, that babies are born with implicit models?  Surely that is nonsense.  And if they aren’t, then how do they learn a model without any examples?

UPDATE 11/27/2012:  In my opinion, the crux of AGI will lie in desire.  As Vinge points out, humans are good at wanting things.  This is actually true of all living things.  That is why I suspect that AGI will actually spring from Artificial Life.  That’s why I am interested in enactive cognition and autopoiesis.

Kurzweil at Commonwealth: How To Create a Mind

I went to see Ray Kurzweil talk at the Commonwealth Club in support of his new book, How to Create a Mind.  As I suspected, Ray couldn’t resist trotting out the exponential-growth-of-computing-price-performance spiel for the Commonwealth audience, who might not have been exposed to this material.  However, instead of being bored by this stuff, which I first read years ago in The Singularity Is Near, I found it to be an interesting refresher.  It is pretty cool how computing price-performance improves in a smooth, super-exponential curve across the entire 20th century, seemingly unaffected by war or economic depression.

Ray explains that this progress is driven by using the current generation of technology to build the next generation of tools.  One new example Ray offered was his father’s work as a composer.  In order for his father to actually hear a full performance of an orchestral composition, he needed to get funding and assemble an orchestra.  If changes needed to be made, the entire arduous process had to be repeated.  In contrast, any college student with a laptop can preview a complete orchestration of their music using modern software.  These same students with laptops created Google, Facebook, and Twitter.  And they are creating the next technological paradigm that will build upon and extend social/mobile/cloud technology.

Overall, Kurzweil was characteristically positive and optimistic.  He believes that technology, and social media in particular, are having a democratizing effect on the world (and he takes for granted that this is a good thing).  He showed an animated graph of GDP and life expectancy in major parts of the world from 1800 to the present, and though the “haves” were clearly divided from the “have-nots,” the rising tide did raise all boats.  Even the worst off today are better off than the most advanced nations of 1800 by these metrics.  No wonder Pinker spoke at the Singularity Summit this year.  It’s a Whig conspiracy.

I also forget sometimes what a sincere and likable presenter Ray is.  He cracked jokes throughout the evening and showed his human side.  At one point he commented that the cutting edge of human intelligence consisted of things like poetry, humor, being sexy, and being loving.  These are the sorts of things that my girlfriend is always calling intelligence (well, she doesn’t consider being sexy intelligent for some reason) and I always have trouble with that.  Intelligence is problem solving or pattern recognition or prediction in my mind.  But the more I think about it, I realize that poetry, humor, and love all require vast amounts of problem solving, pattern recognition, and prediction.  “What must I do to keep her/him from leaving?” “How will the audience react to this line?”  You’ve got to work for love.

Kurzweil does concede that areas outside of information technology have not progressed at the same rate (which Thiel harps on), but he insists that more and more fields will effectively become information technologies.  He cited health and medicine as examples, and brought up the exponential improvements in gene sequencing and biological simulation.  He suggested that drug discovery will transform into drug design as molecular-level simulation becomes more attainable.

The technology of atoms (as opposed to bits) will also begin accelerating as 3D printing evolves.  Kurzweil pointed out that 3D printers can now print 70% of the parts needed to create a 3D printer.  (Assembly is another matter.)  He also mentioned that these printers are able to work with more and more materials, and referred to a metal ring he was wearing that was created using an additive process.

It was almost 40 minutes into his one-hour talk before he touched on the apparent topic of his book, namely artificial intelligence.  He considers the (exponential!) improvements in brain imaging to be the most promising path toward developing AI.  Our brains are an important “source of templates for intelligence.”  One recurring theme of Kurzweil’s is that the brain is possibly simpler than it appears.  He likes to point out that the neocortex is relatively uniform in structure.  He dismisses Paul Allen’s criticism that the brain is too complex to model by pointing out that Allen is assuming that every twig in the forest has significance.  Kurzweil contends that the brain is actually massively redundant, so the actions of individual neurons are of little importance.  He also asks where all of this design information would even come from, given the limited bytes of information in the genome responsible for the brain (25 MB with lossless compression).

To support his argument, he mentioned some recent papers that are illuminating the structure of the brain.  One was the paper The Geometric Structure of the Brain Fiber Pathways: A Continuous Orthogonal Grid (described for us laymen here), the gist of which is that the brain’s connections are arranged like ribbon cables “that cross paths at right angles, like the warp and weft of a fabric.”  He also mentioned another paper that posited pattern-recognizing modules composed of 100-neuron bundles.  I wasn’t able to locate this paper, but this one might be it.  (Damn my hardcopy version of How to Create a Mind; no search capability!)  I am somewhat sympathetic with Ray’s critics here.  He might be too simplistic in his modeling.  But I will read the book at some point and then have a more informed opinion.

Speaking of books, check out the new GoodReads.com group I created.  I will be posting all of the books I reference and some impressions of each one (see widget to the right).  If you don’t know about GoodReads.com, you should check it out.  It’s a great way to share and manage the books in your life.

Ray mentioned that he was not a fan of neural networks.  In his view a single neuron is too weak computationally.  It is these 100-neuron modules serving as pattern recognizers that capture his attention.  He discussed the hierarchical nature of the neocortex, wherein more complex ideas are handled by modules structurally similar to those that handle lower-order ideas; the modules handling complex ideas are simply connected at the top of neocortical hierarchies.  This is similar to Numenta founder Jeff Hawkins’ thesis from On Intelligence.  Though it’s interesting to note that Numenta co-founder Dileep George has jumped ship and struck out in his own direction with his new company, Vicarious.
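The hierarchy idea can be sketched with a toy example (my own illustration, with made-up features; nothing here is from Kurzweil’s or Hawkins’ actual systems): a lower level of pattern recognizers maps raw features to letters, and a higher-level recognizer of the exact same structure maps letter outputs to words.

```python
def make_recognizer(patterns):
    """A module that maps an input tuple to the best-matching stored pattern."""
    def recognize(inputs):
        def overlap(name):
            return sum(a == b for a, b in zip(patterns[name], inputs))
        return max(patterns, key=overlap)
    return recognize

# Lower level: letters as crude 3-feature sketches (hypothetical features).
letters = make_recognizer({"c": (1, 0, 0), "a": (0, 1, 0), "t": (0, 0, 1)})

# Higher level: the same module structure, but its "features" are letter outputs.
words = make_recognizer({"cat": ("c", "a", "t"), "act": ("a", "c", "t")})

glyphs = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
print(words(tuple(letters(g) for g in glyphs)))  # → cat
```

The point of the sketch is that both levels are instances of one recognizer template; stacking them is what gives you “higher-order ideas” from the same machinery.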

Kurzweil expressed admiration for Watson’s Natural Language Processing abilities.  Nonetheless, he appears to be working on his own version of the digital neocortex in competition with IBM, Google, Numenta, Vicarious, and all the other seekers of AGI.  Given his track record with optical character and voice recognition, he probably should be considered a serious contender.   He knows his Hidden Markov Models.
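Since Hidden Markov Models come up, here is the standard textbook forward algorithm in miniature (a generic two-state example with made-up probabilities, nothing specific to Kurzweil’s systems): it computes the total probability of an observation sequence under the model, which is the basic quantity behind HMM-based speech and character recognition.

```python
states = [0, 1]
start = [0.6, 0.4]                # P(state at t=0)
trans = [[0.7, 0.3], [0.4, 0.6]]  # trans[i][j] = P(next state j | state i)
emit  = [[0.9, 0.1], [0.2, 0.8]]  # emit[i][o]  = P(observation o | state i)

def forward(observations):
    """Probability of the observation sequence, via the forward recursion."""
    alpha = [start[s] * emit[s][observations[0]] for s in states]
    for obs in observations[1:]:
        alpha = [sum(alpha[i] * trans[i][j] for i in states) * emit[j][obs]
                 for j in states]
    return sum(alpha)  # total probability, summed over final states

print(forward([0, 0, 1]))  # likelihood of observing 0, 0, then 1
```

Decoding the most likely hidden state sequence (the Viterbi algorithm) replaces the sum in the recursion with a max, but the structure is the same.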

In Ray’s view machines are an extension of ourselves and will even allow us to increase and extend our emotional reach.  He points out that we already offload many of our mental processes to the internet.  He felt cognitively constrained during the SOPA strike.  In this sense his position is similar to that of Alva Noë and other embodied embedded cognition folks:

intelligent behaviour emerges out of the interplay between brain, body and world.

Though the embodiment crowd probably parts ways with Kurzweil on the brain in a vat question.

One final thought I found interesting was Ray’s approach to brainstorming.  He said that prior to sleeping each night, he assigns himself a problem to solve, either from work or from his emotional life.  During sleep, while dreaming, all of his rational faculties are disabled, which is why ridiculous events can occur in dreams without surprising us.  Our right brains are non-judgmental.  All of the censors that inhibit our thoughts are quieted during sleep.  When he awakens in the night, he is aware that his dreams are often related in some way to the problem he assigned himself.  But it is during the interstitial time between sleep and waking that he finally tries to retrieve and salvage the solutions or suggestions provided during his dream state.  He said that his days are often spent writing down the key insights and carrying out the decisions made in dreams.