Deutsch: Where’s the AGI? Goertzel: Give it time. Me: What is inductive?

After seeing Kurzweil speak last week, I went browsing around on KurzweilAI.net. It really is a cool website of well-curated articles on cutting-edge science and other futurist topics. I came across an article by the artificial general intelligence researcher Ben Goertzel, in which he tries to defend AGI researchers from an attack by quantum computing guru and Taking Children Seriously advocate David Deutsch. Coincidentally, a friend of mine sent me an abridged version of Deutsch’s argument just yesterday.

Deutsch’s basic premise is that although the laws of physics imply that artificial intelligence must be possible, AI research has made no progress in the past 60 years. He suggests the reason is that AI (AGI) researchers rely on inductive reasoning and don’t understand that Popper utterly demolished induction back in the 1930s (or something).

Goertzel responds to this by effectively saying “Well, er, no.”  While it would be nice to have a proper philosophy of mind, Goertzel says:

I classify this argument of Deutsch’s right up there with the idea that nobody can paint a beautiful painting without fully solving the philosophical problem of the nature of beauty.

Goertzel’s alternative explanation for the failure of AGI has three main points:

  • Computer hardware is weaker than the human brain
  • AGI research receives little funding
  • There is an integration bottleneck (the most interesting point, in my mind)

As Goertzel says:

The human brain appears to be an integration of an assemblage of diverse structures and dynamics, built using common components and arranged according to a sensible cognitive architecture. However, its algorithms and structures have been honed by evolution to work closely together — they are very tightly inter-adapted, in somewhat the same way that the different organs of the body are adapted to work together.

Achieving the emergence of these structures within a system formed by integrating a number of different AI algorithms and structures is tricky.

Goertzel doesn’t point this out, but it occurs to me that Deutsch might reject this view of AGI. If Deutsch viewed AGI as the integration of diverse cognitive structures, he might not insist that NO progress has been made toward it. Deutsch does acknowledge that narrow AI has made progress (chess playing, Jeopardy playing, plane flying, etc.), so he must not think that we will be able to assemble components like a cerebellum simulation or Watson-style search into an AGI; otherwise they would represent some sort of advancement.

Deutsch suggests that AGI must have a non-inductive way of generating conjectures about reality:

 …knowledge consists of conjectured explanations — guesses about what really is (or really should be, or might be) out there in all those worlds. Even in the hard sciences, these guesses have no foundations…

Thinking consists of criticising and correcting partially true guesses with the intention of locating and eliminating the errors and misconceptions in them, not generating or justifying extrapolations from sense data.

But it’s not clear to me where human conjectures come from if they aren’t allowed to be derived from observations:

AGI’s thinking must include some process during which it justifies some of its theories as true, or probable, while rejecting others as false or improbable. But an AGI programmer needs to know where the theories come from in the first place. The prevailing misconception is that by assuming that ‘the future will be like the past’, it can ‘derive’ (or ‘extrapolate’ or ‘generalise’) theories from repeated experiences by an alleged process called ‘induction’.

But that’s probably just me not understanding induction. I must be mistaken when I assume that the whole Archimedes-in-the-bathtub thing was sort of a conjecture derived from observation. Are Deutsch and Popper really asserting that conjectures have no relation to “sense data”? I’d better hit the books. Conjectures about reality derived from observations have been shown to be useful by programs like Eureqa, and evolutionary algorithms have even come up with patentable conjectures somehow.
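To make that last point concrete for myself, here is a minimal sketch in Python of conjecture-and-criticism over observed data. It is a toy, not how Eureqa actually works (Eureqa evolves expression trees with genetic programming); the candidate formulas, the error function, and all the names are my own invented stand-ins. Candidate conjectures are proposed blindly, and the observations only enter as criticism:

```python
import random

# Toy "observations": (x, y) pairs secretly generated by y = x**2 + 1.
observations = [(x, x**2 + 1) for x in range(-5, 6)]

# A tiny space of candidate conjectures: formulas for y as a function of x.
candidates = {
    "x + 1":    lambda x: x + 1,
    "2 * x":    lambda x: 2 * x,
    "x**2":     lambda x: x**2,
    "x**2 + 1": lambda x: x**2 + 1,
    "x**3 - x": lambda x: x**3 - x,
}

def error(fn):
    """Criticism step: sum of squared prediction errors against the observations."""
    return sum((fn(x) - y) ** 2 for x, y in observations)

# Generation is blind (random proposal); the observations only enter through
# the criticism/error score -- which is roughly where the Popperian dispute lives.
random.seed(0)
best_name, best_err = None, float("inf")
for _ in range(20):
    name = random.choice(list(candidates))
    err = error(candidates[name])
    if err < best_err:
        best_name, best_err = name, err

print(f"surviving conjecture: {best_name} (error = {best_err})")
```

Whether you call the surviving formula “derived from observation” or “a blind guess that survived criticism” seems, to me, mostly a matter of framing.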

I have talked to several local AGI researchers who complain about the difficulty of injecting a certain randomness into the conjectures their programs generate. So it’s not like AGI folks are in the dark about this stuff. As Goertzel says, quoting Deutsch:

In the end, Deutsch presents a view of AGI that comes very close to my own, and to the standard view in the AGI community:

An AGI is qualitatively, not quantitatively, different from all other computer programs. Without understanding that the functionality of an AGI is qualitatively different from that of any other kind of computer program, one is working in an entirely different field. If one works towards programs whose “thinking” is constitutionally incapable of violating predetermined constraints, one is trying to engineer away the defining attribute of an intelligent being, of a person: namely, creativity.

Yes. This is not a novel suggestion, it’s what basically everyone in the AGI community thinks; but it’s a point worth emphasizing.

Deutsch seems to take a veiled shot at SIAI when he suggests that programming an AI to behave a certain way would be like enslavement, so constraining an AI to friendliness would be evil. Nice. I would never have thought of that. I had just assumed that constraining a superhuman AI was simply impossible; now Deutsch adds that even if it were possible, it would be immoral, akin to brainwashing. The real question in his mind is how AGIs or humans with good ideas can defeat those with bad ideas. Here he implicitly rejects or ignores some of the assumptions that SIAI takes for granted, such as the inevitability of a singleton AI that is basically godlike. So maybe he isn’t taking a swipe at SIAI, since he hasn’t bothered to read what they think.

But his answer to this seems a bit like that of the man with a hammer who sees every problem as a nail. He seems to be trotting out his Taking Children Seriously ideas:

One implication is that we must stop regarding education (of humans or AGIs alike) as instruction — as a means of transmitting existing knowledge unaltered, and causing existing values to be enacted obediently. As Popper wrote (in the context of scientific discovery, but it applies equally to the programming of AGIs and the education of children): ‘there is no such thing as instruction from without … We do not discover new facts or new effects by copying them, or by inferring them inductively from observation, or by any other method of instruction by the environment. We use, rather, the method of trial and the elimination of error.’ That is to say, conjecture and criticism. Learning must be something that newly created intelligences do, and control, for themselves.

Now, my problem here is that the enactivists so dear to my heart might agree that perception is self-directed. They would call learning, and cognition generally, a sort of sensorimotor coupling between agent and environment in which both are changed. And that bears some resemblance to the stuff Popper and Deutsch are preaching. However, I’m getting the feeling that this Popperian stuff devalues the environment overmuch. I can buy that the environment doesn’t provide instruction, but surely it offers affordances. And children do seem to copy behaviors they are exposed to. That might not be discovery per se, but it sure seems like learning.

But I don’t want to reject Deutsch’s ideas. I want to overcome my reservations because, after all, what the hell do I know? I know nothing of Popper, really. This epistemology stuff is hard, especially if I can’t expect to learn it without making conjectures and falsifying them.

UPDATE 11/21/2012: I went back and reread the article referenced above, “Induction versus Popper: substance versus semantics.” It offers an explanation of Popper’s view that I hadn’t considered: in order for a conjecture to result from being surprised by sense data, one must have had an implicit model that was being violated. That’s easy to go along with. There is always a model, even in Norvig’s approach. So sure, there is no such thing as induction by that definition. Who cares? By that measure, no AI researchers I know of are hung up trying to do something that doesn’t exist. Deutsch would have made a better case if he had used specific examples of AI strategies that are failing due to the fallacy of induction.
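To pin down for myself what that reading actually claims, here is a minimal sketch in Python. Everything in it (the threshold, the displacement toy, the function names) is my own invention, not anything from Popper, Deutsch, or the article; the only point is that surprise is defined relative to a prior expectation, so a conjecture triggered by sense data still presupposes an implicit model:

```python
# A toy reading of "there is always a model": sense data can only surprise an
# agent relative to an implicit prior expectation, and that surprise is what
# triggers a new conjecture (which is then just another guess to criticize).

SURPRISE_THRESHOLD = 0.5

def implicit_model(volume):
    """The agent's starting expectation: submerging a body displaces no water."""
    return 0.0

def revised_conjecture(volume):
    """A guess proposed after surprise: displaced water equals submerged volume."""
    return 1.0 * volume  # the coefficient is itself just another conjecture

def observe_and_react(model, volume, observed_displacement):
    """Return a new conjecture only if the observation violates the current model."""
    surprise = abs(model(volume) - observed_displacement)
    return revised_conjecture if surprise > SURPRISE_THRESHOLD else model

model = implicit_model
model = observe_and_react(model, volume=2.0, observed_displacement=2.0)  # bathtub moment
print(model.__name__)  # -> revised_conjecture
```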

UPDATE 12/27/2012: I am confused about Popper again. Would he say, then, that babies are born with implicit models? Surely that is nonsense. And if they aren’t, then how do they learn a model without any examples?

UPDATE 11/27/2012: In my opinion, the crux of AGI will lie in desire. As Vinge points out, humans are good at wanting things. This is actually true of all living things. That is why I suspect that AGI will actually spring from Artificial Life, and why I am interested in enactive cognition and autopoiesis.

Drawing on the Right Side of the Brain

My girlfriend Gretchen has been working through Drawing on the Right Side of the Brain by Betty Edwards, and she decided to write a blog post about it. I understand that “left brain” and “right brain” designations are somewhat outdated, but there does seem to be some sort of mapping between the intuitive right brain and Kahneman’s System 1, and between the rational left brain and System 2.

I haven’t read Edwards’ book myself, but one key idea that jumps out is that our preconceived notions of the things we are seeing can prevent us from perceiving them accurately.  This idea has been borne out somewhat by those transcranial magnetic stimulation experiments where subjects are able to draw much better when their left anterior temporal lobe is temporarily disrupted.  Though in those experiments they appear to be drawing from memory instead of from life, so this connection may be tenuous.

But this poses an interesting problem. We seem to need abstractions of reality in order to reason about it, but those abstractions necessarily discard information. The map is not the territory and all that (though reasoning, i.e. rationality, is not the sum of cognition). Get Monica Anderson on the phone! She loves these holism/reductionism quandaries. Of course, the radical enactivists would have us discard the idea of cognition as representation altogether, so they might deny something at the very heart of the map/territory argument: that cognition requires representation. I have to look up the enactivist take on rationality. But they do love the idea that perception is self-directed. It’s probably useful to keep in mind that our own conceptual frameworks are continually constraining our vision, though we may or may not be able to avoid that entirely.

Empathy Quotient and Systemizing Quotient

I recently completed a couple of 23andMe research surveys that measure your Empathy Quotient and Systemizing Quotient. Empathizing–systemizing theory was developed by Simon Baron-Cohen (cousin of Borat actor Sacha Baron Cohen) as a way to understand autism. According to this theory, people with Autism Spectrum Disorders (ASD) have a below-average ability to empathize and an above-average ability to systemize; they are more interested in systems than in people.

Note that E-S theory differentiates between cognitive and affective empathy. People with ASD have trouble determining how others are feeling (cognitive empathy), but they can empathize once they do understand another person’s state of mind (affective empathy). They are contrasted with psychopaths, who know exactly how you are feeling, don’t care, and will use that knowledge to hurt you, manipulate you, or run major corporations.