Deutsch: Where’s the AGI? Goertzel: Give it time. Me: What is inductive?

After seeing Kurzweil speak last week, I went browsing around on KurzweilAI.net.  It really is a cool website of well-curated articles on cutting-edge science and other futurist topics.  I came across an article by artificial general intelligence researcher Ben Goertzel.  In it, Dr. Goertzel tries to defend AGI researchers from an attack by quantum computing guru and Taking Children Seriously advocate David Deutsch.  Coincidentally, a friend of mine sent me an abridged version of Deutsch’s argument just yesterday.

Deutsch’s basic premise is that although the laws of physics suggest that artificial intelligence must be possible, AI research has made no progress in the past 60 years.  He suggests that the reason for this is that AI (AGI) researchers are trying to use inductive reasoning and don’t understand that Popper utterly destroyed induction back in the 1930s (or something).

Goertzel responds to this by effectively saying “Well, er, no.”  While it would be nice to have a proper philosophy of mind, Goertzel says:

I classify this argument of Deutsch’s right up there with the idea that nobody can paint a beautiful painting without fully solving the philosophical problem of the nature of beauty.

Goertzel’s alternative explanation for the failure of AGI has three main points:

  • Computer hardware is weaker than the human brain
  • AGI research receives little funding
  • There is an integration bottleneck (the most interesting point, in my mind)

As Goertzel says:

The human brain appears to be an integration of an assemblage of diverse structures and dynamics, built using common components and arranged according to a sensible cognitive architecture. However, its algorithms and structures have been honed by evolution to work closely together — they are very tightly inter-adapted, in somewhat the same way that the different organs of the body are adapted to work together.

Achieving the emergence of these structures within a system formed by integrating a number of different AI algorithms and structures is tricky.

Goertzel doesn’t point this out, but it occurs to me that Deutsch might reject this view of AGI.  If Deutsch viewed AGI as involving the integration of diverse cognitive structures, he might not insist that NO progress has been made toward AGI.  Deutsch does acknowledge that narrow AI has made progress (chess playing, Jeopardy playing, plane flying, etc.).  So he must not think that we will be able to assemble components like a cerebellum simulation or Watson-style search into an AGI.  Otherwise they would represent some sort of advancement.

Deutsch suggests that AGI must have a non-inductive way of generating conjectures about reality:

 …knowledge consists of conjectured explanations — guesses about what really is (or really should be, or might be) out there in all those worlds. Even in the hard sciences, these guesses have no foundations…

Thinking consists of criticising and correcting partially true guesses with the intention of locating and eliminating the errors and misconceptions in them, not generating or justifying extrapolations from sense data.

But it’s not clear to me where human conjectures come from if they aren’t allowed to be derived from observations:

AGI’s thinking must include some process during which it justifies some of its theories as true, or probable, while rejecting others as false or improbable. But an AGI programmer needs to know where the theories come from in the first place. The prevailing misconception is that by assuming that ‘the future will be like the past’, it can ‘derive’ (or ‘extrapolate’ or ‘generalise’) theories from repeated experiences by an alleged process called ‘induction’.

But that’s probably just me not understanding induction.  I must be mistaken when I assume that the whole Archimedes-in-the-bathtub thing was sort of a conjecture derived from observation.  Are Deutsch and Popper really asserting that conjectures have no relation to “sense data”?  I’d better hit the books.  Conjectures about reality derived from observations have been shown to be useful by programs like Eureqa.  And evolutionary algorithms have even come up with patentable conjectures somehow.
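Just to make the flavor of this concrete, here is a toy sketch of my own (not Eureqa’s actual algorithm, and the data and numbers are made up): generate candidate formulas more or less at random, score them against observations, and keep whichever survive the criticism.

    # A toy illustration (not Eureqa's real algorithm): evolve candidate
    # formulas and keep the ones that best fit a set of observations.
    import random

    # Pretend "observations": samples of an unknown law, here y = 9.8 * x**2.
    observations = [(x, 9.8 * x**2) for x in [0.5, 1.0, 1.5, 2.0, 2.5]]

    # Candidate "conjectures" are tiny expression trees built from x,
    # constants, and a few arithmetic operators.
    def random_expr(depth=0):
        if depth > 2 or random.random() < 0.3:
            return random.choice(["x", round(random.uniform(0, 10), 1)])
        op = random.choice(["+", "*", "-"])
        return (op, random_expr(depth + 1), random_expr(depth + 1))

    def evaluate(expr, x):
        if expr == "x":
            return x
        if isinstance(expr, (int, float)):
            return expr
        op, a, b = expr
        a, b = evaluate(a, x), evaluate(b, x)
        return a + b if op == "+" else a * b if op == "*" else a - b

    def error(expr):
        return sum((evaluate(expr, x) - y) ** 2 for x, y in observations)

    # Generate random conjectures, criticize them against the data, and keep
    # whichever survive best.
    population = [random_expr() for _ in range(500)]
    for generation in range(20):
        population.sort(key=error)
        survivors = population[:50]
        population = survivors + [random_expr() for _ in range(450)]

    print("best conjecture so far:", population[0], "error:", error(population[0]))

A real system would mutate and recombine the survivors rather than just regenerating at random, but the point stands: the conjectures are cheap, and the observations are what kill off the bad ones.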

I have talked to several local AGI researchers who complain about how hard it is to inject a certain randomness into the conjectures their programs generate.  So it’s not like AGI folks are in the dark about this stuff.  As Goertzel says, quoting Deutsch:

In the end, Deutsch presents a view of AGI that comes very close to my own, and to the standard view in the AGI community:

An AGI is qualitatively, not quantitatively, different from all other computer programs. Without understanding that the functionality of an AGI is qualitatively different from that of any other kind of computer program, one is working in an entirely different field. If one works towards programs whose “thinking” is constitutionally incapable of violating predetermined constraints, one is trying to engineer away the defining attribute of an intelligent being, of a person: namely, creativity.

Yes. This is not a novel suggestion, it’s what basically everyone in the AGI community thinks; but it’s a point worth emphasizing.

Deutsch seems to take a veiled shot at SIAI when he suggests that programming an AI to behave a certain way would be like enslavement.  So to constrain an AI to friendliness would be evil.  Nice.  I would never have thought of that.  I just assumed that constraining a superhuman AI was simply impossible.  Now Deutsch adds that, even if possible, it would be immoral, akin to brainwashing.  The real question in his mind is how AGIs or humans with good ideas can defeat those with bad ideas.  Here he implicitly rejects or ignores some of the assumptions that SIAI takes for granted, such as the inevitability of a singleton AI that is basically godlike.  So maybe he isn’t taking a swipe at SIAI, since he hasn’t bothered to read what they think.

But his answer to this seems a bit like that of the man with a hammer who sees every problem as a nail.  He seems to be trotting out his Taking Children Seriously ideas:

One implication is that we must stop regarding education (of humans or AGIs alike) as instruction — as a means of transmitting existing knowledge unaltered, and causing existing values to be enacted obediently. As Popper wrote (in the context of scientific discovery, but it applies equally to the programming of AGIs and the education of children): ‘there is no such thing as instruction from without … We do not discover new facts or new effects by copying them, or by inferring them inductively from observation, or by any other method of instruction by the environment. We use, rather, the method of trial and the elimination of error.’ That is to say, conjecture and criticism. Learning must be something that newly created intelligences do, and control, for themselves.

Now my problem here is that the enactivists so dear to my heart might agree that perception is self-directed.  They would call learning and cognition generally a sort of sensorimotor coupling between agent and environment in which both are changed.  And that bears some resemblance to the stuff Popper and Deutsch are preaching.  However, I’m getting the feeling that this Popperian stuff devalues the environment overmuch.  I can buy that the environment doesn’t provide instruction, but it surely offers affordances.  And children do seem to copy behaviors they are exposed to.  This might not be discovery per se, but it sure seems like learning.

But I don’t want to reject Deutsch’s ideas.  I want to overcome my reservations because after all, what the hell do I know?  I know nothing of Popper, really.  This epistemology stuff is hard, especially  if I can’t expect to learn it without making conjectures and falsifying them.

UPDATE 11/21/2012:  I went back and reread the article referenced above: Induction versus Popper: substance versus semantics.  This offers an explanation of Popper’s view that I hadn’t considered.  In order for a conjecture to result from  being surprised by sense data, one must have had an implicit model that was being violated.  That’s easy to go along with.  There is always a model, even in Norvig’s approach.  So sure, there is no such thing as induction by that definition.  Who cares?  By that measure, no AI researchers I know of are hung up trying to do something that doesn’t exist.  Deutsch would have made a better case if he had used specific examples of AI strategies that were failing due to the fallacy of induction.
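Here is a cartoon of how I read that, with made-up numbers and nothing borrowed from Norvig or anyone else: the agent always carries some model, however dumb, and “surprise” is just a prediction error big enough to provoke a new conjecture.

    # A cartoon of "surprise presupposes an implicit model": the agent always
    # has *some* model, surprise is a large prediction error, and only surprise
    # triggers a revised conjecture.
    def predict(model, x):
        slope, intercept = model
        return slope * x + intercept

    def surprise(model, x, observed):
        return abs(observed - predict(model, x))

    def revise(model, x, observed, rate=0.1):
        # Form a new conjecture by nudging the old model toward the surprising
        # observation (one crude repair strategy among many).
        slope, intercept = model
        err = observed - predict(model, x)
        return (slope + rate * err * x, intercept + rate * err)

    model = (0.0, 0.0)  # the implicit starting model: "expect nothing"
    stream = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 7.8)]  # incoming sense data

    for x, observed in stream:
        if surprise(model, x, observed) > 0.5:  # arbitrary surprise threshold
            model = revise(model, x, observed)

    print("current conjecture (slope, intercept):", model)

On that reading, nothing is ever “induced” from nowhere; the data only ever corrects a model that was already there, even if the starting model was “expect nothing.”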

UPDATE 12/27/2012: I am confused again about Popper. Would he say, then, that babies are born with implicit models?  Surely that is nonsense.  And if they aren’t, then how do they learn a model without any examples?

UPDATE 11/27/2012:  In my opinion, the crux of AGI will lie in desire.  As Vinge points out, humans are good at wanting things.  This is actually true of all living things.  That is why I suspect that AGI will actually spring from Artificial Life.  That’s why I am interested in enactive cognition and autopoiesis.

3 thoughts on “Deutsch: Where’s the AGI? Goertzel: Give it time. Me: What is inductive?”

  1. Pingback: Deutsch: Where’s the AGI? Goertzel: Give it time. Me: Inductive-what-now? « Scott Jackisch's Weblog

  2. Induction has multiple meanings. The narrowest meaning seems to correspond to Laplace’s rule of succession. E.T. Jaynes uses a much broader definition that resembles “the scientific method”.

    Jaynes describes the narrow meaning as useful in some rare conditions where we have no useful prior information.

    Eric Baum’s book What is Thought? contains strong arguments against the idea that all knowledge can come from observation. A mind that starts as a completely blank slate seems infeasible, and there seems to be a need for genetic information or something equivalent that contains priors about what models are useful for understanding the world. Without some priors, it’s unclear how a mind could pick out, from all the possible ways of interpreting sensory data, concepts that correspond to water and bathtubs.

    • Hmm, rule of succession huh? I will need to read up on that. I don’t go for the blank slate idea; I agree that there must be some genetic behaviours, evolutionary heuristics, or something. Also, I would say that embodied agents get to understand the world by interacting with it, and the form of their embodiment constrains the possible space of interpretations (as in Gibson’s affordances).
