Intelligence & Emergence

Presentation for EmergentPhenomena Research Group, Bryn Mawr College, June 18, 2003.

TableOfContents()

What is intelligence?

    intelligence - noun (1) the ability to learn or understand or to deal with new or trying situations; reasoning, or the skilled use of reason (2) the ability to apply knowledge to manipulate one's environment or to think abstractly as measured by objective criteria (as tests) (3) the basic eternal quality of divine Mind (4) mental acuteness ...

      after Merriam-Webster (2003)

Philosophers have traditionally considered intelligence (and the entire mind) to be something that transcends the body and the rest of the real world: understanding, thoughts, comprehension, rationality, truth, logic, ... --- all exist without being tied to any particular physical existence.

The ideal intelligence would be the ultimate rational reasoning system. (We can define rationality as the ability to do the right thing.)

On the other hand, psychologists note that humans are sometimes far from rational. A survey on the important aspects of human intelligence (from 1030 experts on human intelligence):

Description                        Percentage of agreement
Abstract thinking or reasoning     99.3
Problem solving ability            97.7
Capacity to acquire knowledge      96.0
Memory                             80.5
Adaptation to one’s environment    77.2

    The aggregate, or global capacity to act purposefully, think rationally, and deal effectively with the environment. Intelligence is an aspect of the total personality, rather than an isolated entity. -- psychologist  David Wechsler (1896-1981)

So, humans are limited-resource computational devices that aren't always rational, but must act with intentionality and effectiveness. But, the psychologist might argue, those limitations derive from our biological form. When humans aren't rational, that is an interesting problem worth exploring to see "what went wrong". Still, they would agree that, given an IQ test, a rational system would score higher than a less rational one.

Although there may be some difference of opinion on the fine points, most philosophers, psychologists, and artificial intelligence researchers would probably agree with the spirit of the statement that

intelligence = rationality

What is artificial intelligence?

"The exciting new effort to make computers think... machines with minds, in the full and literal sense." (Haugeland, 1985)

"[The automation of] activities that we associate with human thinking, activities such as decision-making, problem solving, learning..." (Bellman, 1978)

"The study of mental faculties through the use of computational models" (Charniak and McDermott, 1985)

"The study of the computations that make it possible to perceive, reason, and act" (Winston, 1992)

"The art of creating machines that perform functions that require intelligence when performed by people" (Kurzweil, 1978)

"The study of how to make computers do things at which, at the moment, people are better" (Rich and Knight, 1991)

"A field of study that seeks to explain and emulate intelligent behavior in terms of computational processes" (Schalkoff, 1990)

"The branch of computer science that is concerned with the automation of intelligent behavior" (Luger and Stubblefield, 1993)

                     Human-oriented                   Rationally-oriented
think-oriented       Systems that think like humans   Systems that think rationally
behavior-oriented    Systems that act like humans     Systems that act rationally

    adapted from Russell & Norvig (1995).

A very different kind of definition came from Alan Turing:

I propose to consider the question "Can machines think?" This should begin with definitions of the meaning of the terms "machine" and "think." -- Computing Machinery and Intelligence (1950)

How Turing's Imitation Game might work:

Q: In the first line of your sonnet which reads 'Shall I compare
thee to a summer's day', would not 'a spring day' do as well or better?

A: It wouldn't scan.

Q: How about 'a winter's day'? That would scan all right.

A: Yes, but nobody wants to be compared to a winter's day.

Q: Would you say Mr. Pickwick reminded you of Christmas?

A: In a way.

Q: Yet Christmas is a winter's day, and I do not think Mr.
Pickwick would mind the comparison.

A: I don't think you're serious. By a winter's day one means a
typical winter's day, rather than a special one like Christmas.

Other examples: ConversationsWithComputers

Most researchers probably think that the Turing Test is a terrible method of judging intelligence. They might say:

  1. It doesn't get at what is behind intelligence (rationality? intentionality?)
  2. It leaves itself open for tricks (word substitutions, etc.)
  3. It focuses on convincing the interrogator, rather than on the thinking of the system
  4. The challenge could be met with a huge lookup table
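
To make criticism 4 concrete: a conversation program could, in principle, be nothing but canned responses keyed on fragments of the input. Here is a minimal Python sketch (the table entries are invented for illustration); it shows how the test measures the interrogator's impression rather than any reasoning inside the system.

    # A toy "conversation" program that is nothing but a lookup table.
    # Nothing inside it reasons; it only matches fragments of the question.
    canned_responses = {
        "would not 'a spring day' do as well": "It wouldn't scan.",
        "how about 'a winter's day'": "Yes, but nobody wants to be compared to a winter's day.",
    }

    def reply(question):
        q = question.lower()
        for fragment, answer in canned_responses.items():
            if fragment in q:
                return answer
        return "I'd rather not say."   # stock evasion for anything unknown

    print(reply("How about 'a winter's day'? That would scan all right."))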

The AI Framework

Solving problems in AI operates in the traditional computer science problem solving paradigm:

  1. examine the task
  2. understand the issues, isolate the concepts
  3. select appropriate algorithms
  4. write the program

Typically, this is a very narrowly defined task (compute the shortest path from point A to point B). The solution is typically narrowly implemented (it might work for a particular kind of robot in a particular kind of room, but not otherwise).
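
To show how narrowly such a task is usually posed, here is a minimal Python sketch of my own (not from any particular robot system): breadth-first search for the shortest path on a small, fully specified grid. The world, the moves, and the goal are all handed to the program by the programmer.

    from collections import deque

    # Shortest path on a tiny grid world: 0 = open floor, 1 = wall.
    grid = [[0, 0, 0, 1],
            [1, 1, 0, 1],
            [0, 0, 0, 0]]

    def shortest_path(start, goal):
        rows, cols = len(grid), len(grid[0])
        frontier = deque([[start]])            # paths to extend, oldest first
        visited = {start}
        while frontier:
            path = frontier.popleft()
            r, c = path[-1]
            if (r, c) == goal:
                return path
            for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
                nr, nc = r + dr, c + dc
                if (0 <= nr < rows and 0 <= nc < cols
                        and grid[nr][nc] == 0 and (nr, nc) not in visited):
                    visited.add((nr, nc))
                    frontier.append(path + [(nr, nc)])
        return None                            # no path exists

    print(shortest_path((0, 0), (2, 3)))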

Also, in designing new algorithms, introspection can play a large role.

Some researchers believe, however, that a generally intelligent system can be defined formally.

Physical Symbol Systems Hypothesis

    A physical symbol system has the necessary and sufficient means for general intelligent action. -- Allen Newell and Herbert Simon

Likewise, any generally intelligent system can be seen as a symbol manipulation system.

Newell defined symbol systems according to their characteristics: first, that they form a universal computational system.

Summary of the AI Framework

Basically, the PSSH is the definition of a Turing Machine. Of course, Turing machines can compute anything that is computable (including things like neural networks, genetic algorithms, simulations of the universe, and so on). But the traditional AI framework defines a particular style of computation:

  1. symbols encode the knowledge
  2. there is nothing below the symbol; symbols do not have content
  3. symbols get their meaning from their interactions with other symbols (emergent? very limited)
  4. a centralized database keeps the facts (symbols that are "true")
  5. specialized engines perform rational reasoning (e.g., deductions) on the facts
  6. formal structures of symbols can represent anything
  7. anything that occurs in the world can be modeled and reasoned about
  8. by following this strict, formal methodology, items can be proved

AI is the creation of a program (often developed through introspection) to search through a well-defined space for a well-defined solution.
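
To make that style of computation concrete, here is a minimal Python sketch of the framework as I am caricaturing it: a centralized database of facts (symbols taken to be "true") and an engine that deduces new facts from old ones. The particular facts and rules are invented for illustration; note that 'bird' and 'can_fly' are nothing but tokens, with no content below them.

    # A central database of facts plus a forward-chaining rule engine.
    facts = {("bird", "tweety"), ("cat", "sylvester")}

    # Each rule: if the first predicate holds of X, conclude the second of X.
    rules = [("bird", "has_feathers"),
             ("bird", "can_fly"),
             ("cat", "has_fur")]

    def forward_chain(facts, rules):
        derived = set(facts)
        changed = True
        while changed:                  # keep deducing until nothing new appears
            changed = False
            for premise, conclusion in rules:
                for predicate, subject in list(derived):
                    if predicate == premise and (conclusion, subject) not in derived:
                        derived.add((conclusion, subject))
                        changed = True
        return derived

    for fact in sorted(forward_chain(facts, rules)):
        print(fact)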

Problems

I didn't like that view of AI at all.

Problem #1: Rationality (doing the right thing) must always be defined ''in some context''. Consider:

  1. You are sitting in a room. The fire alarm goes off. Do you make your way to the nearest exit, or the one way down the hall?
  2. You are sitting in a lecture hall with 251 other people. The fire alarm goes off. Do you make your way to the nearest exit, or the one way down the hall?
  3. Yesterday, the fire alarm went off, but it was a false alarm. Today you are sitting in the same room. The fire alarm goes off. Do you make your way to the nearest exit, or the one way down the hall?
  4. Every day for the last 62 years, the fire alarm has gone off at noon in a particular room. It is noon, and you are sitting in that room. The fire alarm goes off. Do you make your way to the nearest exit, or the one way down the hall?

Or this extreme version:

You decide to go for a walk and a limb falls off of a tree and bumps you on the head.

Was it rational to:

  1. go outside?
  2. walk under trees?
  3. walk under trees after a storm?

Problem #2: Rational (traditional AI) computer systems don't seem to work all that well.

Here I am referring to rational, formal, good old-fashioned AI (GOFAI): traditional, symbolic AI.

  1. They only work on well-defined systems (no ambiguity)
  2. They don't work very well when the world keeps changing (the Frame Problem)
  3. Because they are complete, they can take an exponential amount of time/memory before they can provide an answer (see the sketch after this list)
  4. They are brittle (they break if a single bit is misplaced)
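
The complexity point can be seen in even a toy Python sketch of my own: a complete, exhaustive search over n yes/no choices must, in the worst case, examine all 2^n combinations, so doubling the problem size squares the work.

    from itertools import product

    # A complete search over n Boolean choices examines 2**n combinations
    # in the worst case.
    def exhaustive_search(n, is_solution):
        tried = 0
        for assignment in product([False, True], repeat=n):
            tried += 1
            if is_solution(assignment):
                return assignment, tried
        return None, tried

    # A deliberately unsatisfiable test: no assignment ever counts as a
    # solution, so the search is forced to look at everything.
    for n in (10, 20):
        _, tried = exhaustive_search(n, lambda a: False)
        print(n, "choices ->", tried, "assignments examined")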

Problem #3: All the intelligence is put in by the programmer.

In my first AI class I was very excited to create a program that could "see". I soon learned that computer vision involved me first learning as much as I could about "how to see" and then programming that knowledge into the computer. For example, we were instructed about:

and a host of other things. I could see, and I didn't know all that stuff; why did I have to program it into the computer!? Why do we have to figure out how to solve the problem first? That seems like where all the intelligence is: in the figuring-out part.
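
To give the flavor of what had to be hand-programmed, here is a minimal Python sketch (my own, chosen just for illustration): even finding a simple edge means the programmer spells out the arithmetic that says what an edge is.

    # The programmer, not the program, supplies the "knowledge" of what an
    # edge is: here, a spot where neighboring pixel values differ sharply.
    image = [
        [10, 10, 10, 200, 200],
        [10, 10, 10, 200, 200],
        [10, 10, 10, 200, 200],
    ]

    def horizontal_edges(image, threshold=50):
        edges = []
        for r, row in enumerate(image):
            for c in range(len(row) - 1):
                if abs(row[c + 1] - row[c]) > threshold:
                    edges.append((r, c))
        return edges

    print(horizontal_edges(image))   # the edge between columns 2 and 3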

Problem #4: Introspection is not a valid methodology to use to justify a model.

Concepts like consciousness, intentionality, goals, concepts, grammar, qualia, etc. cannot be justified simply because it feels as if they exist.

Good Things about GOFAI

GOFAI does have its strengths: it is formal and well understood, and within narrowly defined domains it works and its results can even be proved. However, it still comes up lacking when the problems get hard.

A New Kind of Artificial Intelligence?

People have been wrestling with the above problems for the last 20 years or so. Some people believe that traditional AI can be adapted to deal with all of these issues (see for example dangermouse.brynmawr.edu/presentations/maics-2001/img13.htm).

Connectionism

The idea of subsymbolic representation has been thoroughly defined, but it doesn't extend to mechanisms like evolutionary strategies.

Biologically-inspired AI

Biologically-inspired AI captures both neural and genetic metaphors, but not the essence of why and how they are different from traditional AI.

Embodied AI

This has been the term many roboticists have used, but many still use the same old AI techniques.

Behavior-based AI

This harks back to Turing's operational definition of intelligence. This radical view was put forth by Rodney Brooks (whom Anne cited in the forum the other day). He was onto something, but wasn't sure what. His radical idea was to engineer emergence (he wouldn't describe it that way), which ended up failing (he would admit that, I think).

Cognitive Science

Maybe.

New AI

This group acknowledges that we are headed somewhere, but not sure where. And what the hell will they name the next variation? Real New AI?

Hybrid AI

Take the best from both paradigms? Let the symbolic, traditional AI handle the formal aspects and use other methods for the non-formal parts. Useful as a bridge, but in the long run, you end up with the worst of both worlds.

Emergent Intelligence

Which leads me to EI. But first, a word about steam governors.

Steam governor

I first began to suspect that there was something wrong with the Physical Symbol Systems Hypothesis upon hearing about the steam governor. I began to doubt the idea that any system producing the appropriate behavior must be doing so by manipulating symbols.

The steam governor, invented by James Watt in 1788, is pictured on the left. It regulates the speed of a steam engine. If the engine runs too fast, the steam spins the governor faster, which causes the flyballs to rise from centrifugal force, lifting a lever that slows the flow of steam. If the engine slows down too much, the flyballs drop under gravity, causing the lever to open the valve and drive the engine faster.

The picture on the left is a "block diagram" of how the governor "works", defining a version that could be built using a computer: it has inputs, sensors, actuators, and some computation.
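
Here is a minimal Python sketch of that computational version (all of the constants are invented for illustration): the speed is sensed, compared to a set point, and the throttle is nudged in proportion to the error. Watt's device does the same job with nothing but linkage and physics.

    # A computational governor: sense the speed, compare it to a set point,
    # and nudge the throttle in proportion to the current error.
    set_point = 100.0   # desired engine speed
    throttle = 0.5      # fraction of steam admitted (0 to 1)
    speed = 60.0
    gain = 0.002        # how strongly the error moves the throttle

    for step in range(50):
        error = set_point - speed
        throttle = min(1.0, max(0.0, throttle + gain * error))
        # crude engine model: the speed chases the throttle setting
        speed += 0.05 * (200.0 * throttle - speed)
        if step % 10 == 0:
            print("step", step, " speed %.1f" % speed, " throttle %.2f" % throttle)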

Are the two devices equivalent, or fundamentally different?

Definition of EI

I have a somewhat strange definition of intelligence: EmergentIntelligence. Emergence, life, and intelligence are all on a scale: you can have varying degrees of each. Like Turing's, my definition is based on behavior.

But first, let's see the conjectures.

Conjectures

Conjecture #1: The more intelligent an emergent system is, the less possible it will be to "understand" how the system works.

I mean that there won't be a way to abstract from the system what it is doing. Such a complex emergent system wouldn't be able to be broken up into symbols, concepts, or modules. There won't be a description of the solution more abstract than the actual solution itself. (Of course, we could approximately describe it, but such approximations wouldn't be sufficient to build a model that would behave as the original. The devil would be in the details.)

This conjecture is related to the idea of maximal information content. Recall that a message with maximal information content is indistinguishable from randomness. If it were otherwise, a pattern could be detected, which would mean that it could be further compressed and thus could contain more information.
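
A quick Python sketch makes the point, using off-the-shelf compression as a stand-in for "detectable pattern": a patterned message compresses to almost nothing, while random bytes of the same length (standing in for a message of maximal information content) barely compress at all.

    import os, zlib

    # A message with detectable pattern compresses; a pattern-free one does not.
    patterned = b"abcabcabc" * 1000     # 9000 bytes of obvious pattern
    random_ish = os.urandom(9000)       # 9000 bytes with no usable pattern

    print("patterned:", len(zlib.compress(patterned)), "bytes after compression")
    print("random:   ", len(zlib.compress(random_ish)), "bytes after compression")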

We won't be able to understand how these emergent computer programs work.

Likewise, an emergent computer model of the human brain may be equally complex and opaque. Therefore, it may be impossible in principle to "understand" a sufficiently complex system. If this is true, then there may be an inherent impossibility for a system to know itself: if a system is too simple, it doesn't have the ability to understand itself; but as soon as it becomes complex enough to comprehend such complex things, the complexity goes over the edge, beyond the range of "understanding".

Not everyone appreciates this view of mine:

    "It seems to me that there is something fundamentally wrong about the proposal here. As McCloskey has argued, unless we can develop an understanding of how network models (or any kind of model for that matter) go about solving problems, they will not have any useful impact upon cognitive theorizing. Whilst this may not be a problem for those who wish to use networks merely as a technology, it surely must be a concern to those who wish to deploy networks in the furtherment of cognitive science. If we follow [Blank's suggestion] then even successful attempts at modelling will be theoretically sterile, as we will be creating nothing more than 'black boxes.'" -- Istvan Berekey

The end to understanding in science? Theoretically sterile science?

I don't think so, but the focus will shift from understanding how a system works to understanding how the system develops. This may have deep psychological effects in science, however. There may be scientists who would prefer to understand theories that couldn't possibly work rather than build models that work but are incomprehensible. I believe this is what is going on in traditional AI.

Conjecture #2: Sufficiently complex emergent systems require simple (stupid) components.

Here's my intuition on this: suppose you have two brains, one built out of relatively simple components (say, something like neurons) and the other built out of tiny Einsteins. The simple components receive some signals, do a bit of simple math (thresholding), and occasionally send a signal on to other components. The more sophisticated components, the little Alberts, do something very different: they don't really respond to anything unless they "understand" it. Thus, ironically, the emergent brain of Einsteins would fall flat; it just wouldn't work.

I think this would happen because the simple agents must take part in computations they do not understand. This allows signals to propagate at a level higher than that of an individual neuron. The Einsteins, on the other hand, would hog the signals, using them for their own messages.
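
For concreteness, the kind of simple component I have in mind is nothing more than the following Python sketch (a toy threshold unit of my own, not any particular neural model): it sums its weighted inputs, compares the total to a threshold, and occasionally passes a signal on, without any notion of what the signal means.

    # A "simple component": sum the weighted inputs, compare to a threshold,
    # and pass a signal along.  It has no idea what the signals mean.
    def simple_unit(inputs, weights, threshold=1.0):
        total = sum(x * w for x, w in zip(inputs, weights))
        return 1 if total >= threshold else 0

    # Three incoming signals, each weighted, produce one outgoing signal.
    print(simple_unit([1, 0, 1], [0.6, 0.9, 0.5]))   # fires: 0.6 + 0.5 = 1.1
    print(simple_unit([0, 1, 0], [0.6, 0.9, 0.5]))   # silent: 0.9 is below 1.0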

Levels of emergence will continue to appear until a level appears that acts rationally. As soon as a level appears that no longer passes along signals it doesn't understand, there will be no further levels of emergence.

Conjecture #3: There will be a science of emergence in the next decade.

Conjecture #4: Information theory combined with a theory of meaning will form the basis for the science of emergence.

This will be based on the notion of "Web of causality".

Summary

  1. I very much agree with Turing on defining intelligence in terms of behavior
  2. EI and AI are two very different ventures, and have little to do with one another
  3. Emergent systems can be more powerful than centralized, symbolic systems. General computer science will be affected too (including networking and hardware design), as well as management and team-oriented organizations (for example, extreme programming)
  4. The role of understanding will change in science (shift towards development, rather than end process)
  5. Emergent systems require simple components at the lowest level, otherwise you get a limited amount of emergence for what you put in

