


Architecture I


January 28, 2008




1 Outline of Topics
  Coarse-grain mind model of intelligent systems  
  Why do we need to study architecture in A.I.?  
  Architecture and the perception-action loop  
  Multiple levels of processing  
  Subsumption architecture: What it is  
  Subsumption architecture Basics / Examples  
  Subsumption architecture Examples II  
  Subsumption Architecture: Characteristics  
  Where are we headed?  
  Large integrated systems  
  Remote Agent  
  Brain research architecture: Kosslyn & Koenig  
  HONDA Asimo  










Coarse-grain mind model of intelligent systems
  Sensation What comes in through the senses - "raw information" (processed data)
  Perception Includes interpretation and decisions about the sensory data
  Interpretation Interpret the perceptual data in context with knowledge and experience
  Decision Interpretation, decision making and planning are often intermixed. A simplification includes an initial decision between interpretation and planning, the decision to act.
  Planning The (mental and physical) act of deciding future actions
  Action The execution of a decision/plan
  Goals It is generally considered necessary to model all cognitive agents as having goals; without goals comparisons between choices cannot be made and decisions become random.
  Memory Memory is "everywhere" in a mind. Various memories serve various purposes. Nobody knows how the human memory works as a whole.
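The stages above can be sketched as one processing cycle. This is a minimal illustration in Java (the course's implementation language), not an established model or API; all method and variable names are invented for this sketch.

```java
import java.util.ArrayList;
import java.util.List;

public class MindLoop {
    // Goals make choices comparable; without them decisions become random.
    static List<String> goals = List.of("find-power");
    // Memory is "everywhere"; modeled here, crudely, as one shared store.
    static List<String> memory = new ArrayList<>();

    static String sense(String raw)      { return "sensed:" + raw; }      // raw information in
    static String perceive(String s)     { return "percept:" + s; }       // interpret sensory data
    static String interpret(String p)    { memory.add(p); return "meaning:" + p; } // context + knowledge
    static boolean decideToAct(String m) { return !goals.isEmpty(); }     // the decision to act
    static String plan(String m)         { return "plan-for:" + m; }      // decide future actions
    static String act(String plan)       { return "executed:" + plan; }   // execute the decision/plan

    public static String cycle(String raw) {
        String meaning = interpret(perceive(sense(raw)));
        return decideToAct(meaning) ? act(plan(meaning)) : "idle";
    }

    public static void main(String[] args) {
        System.out.println(cycle("light-level=0.7"));
    }
}
```

The strictly sequential chaining here is exactly the "pipeline" simplification criticized later in these notes; it is shown only to make the stage list concrete.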












Why do we need to study architecture in A.I.?
  (sw)architecture = (hw)architecture

Architecture in A.I. serves the same purpose as architecture when building physical structures

Just like in architectures for physical structures, A.I. architectures:

  • Use blueprints for specifying materials, the main components and their interconnections
  • Blueprints do not convey the full detail of implementation
  • Help coordinate the construction of complex (sw) structures
  • Specify the flow of elements (people/information) among units (rooms, buildings/ processing elements)
  A.I. systems get exceedingly complicated A.I. is among the "new" sciences of complexity: its subject is more complex than any studied by science to date
  intelligence = organization

Intelligence is in essence an organizational matter.

An intelligence is defined by components:

  • What the components are
  • How they are structured
  • How they are interconnected
  • How they interact in realtime
  Science Complex systems are the next challenge
  Engineering We are headed towards larger and larger systems
  Holistic intelligence Architecture is now among the main stumbling blocks towards understanding holistic intelligence
  All naturally intelligent systems perceive, think and act Perception-Action is about architecture










4 Architecture & the Perception-Action Loop
The old pipeline model of cognitive processing.
  The need for mental "threads"

To monitor one's own actions one has to sample the world after one has done an Act; sampling of multiple acts becomes a difficult problem in a pipeline model.

To cope with this the mind must be doing "load balancing" among "multiple threads".
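One way to picture these "multiple threads" is to give each act its own monitoring thread that samples the world for that act's outcome. This is only an illustrative Java sketch of the load-balancing idea, not a claim about how minds implement it; the class and method names are invented.

```java
public class ActMonitors {
    // Launch one monitoring "thread" per act; each samples the world for
    // its act's outcome, avoiding the single-pipeline bottleneck.
    public static int monitorActs(int nActs) {
        Thread[] monitors = new Thread[nActs];
        final int[] outcomes = new int[nActs];
        for (int i = 0; i < nActs; i++) {
            final int id = i;
            monitors[i] = new Thread(() -> {
                outcomes[id] = 1; // stand-in for "outcome of act #id observed"
            });
            monitors[i].start();
        }
        int observed = 0;
        for (int i = 0; i < nActs; i++) {
            try { monitors[i].join(); }   // wait for each monitor to finish
            catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            observed += outcomes[i];
        }
        return observed;
    }

    public static void main(String[] args) {
        System.out.println(monitorActs(3) + " act outcomes observed");
    }
}
```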












Multiple Levels of Processing


The new model of layered continuous processing and execution.

The top layer provides a quick response time to time-critical perceptual events (a bus coming at you); the middle layer provides a somewhat more thought-out response set (1-3 second range), while the bottom layer provides a lot of different processing that takes quite a bit more time (e.g. remembering the name of that singer).
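The three response-time layers can be sketched as a dispatcher that routes an event to the fastest layer able to handle it. This is a toy Java illustration of the layering idea only; the layer behaviors and time budgets are placeholders.

```java
public class LayeredControl {
    // Route an event to one of three layers with increasing deliberation time.
    public static String respond(String event, boolean timeCritical, long budgetMs) {
        if (timeCritical)     return "reflex:dodge(" + event + ")";      // top layer: milliseconds
        if (budgetMs <= 3000) return "deliberate:choose(" + event + ")"; // middle layer: 1-3 s
        return "reflect:recall(" + event + ")";                          // bottom layer: much slower
    }

    public static void main(String[] args) {
        System.out.println(respond("bus", true, 0));        // time-critical -> reflex
        System.out.println(respond("question", false, 2000)); // -> deliberate
        System.out.println(respond("singer-name", false, 60000)); // -> reflect
    }
}
```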


Coordination hierarchies: A functional hierarchy organizes the execution of tasks according to their functions. A product hierarchy organizes production in small units, each focused on a particular product. Several types of markets exist; here two idealized versions are shown, without and with brokers. Decentralized markets require more intelligence to be present in the nodes, which can be alleviated by brokers. Brokers, however, present weak points in the system: if you have a system with only 2 brokers mediating between producers and consumers/buyers, failure in these 2 points will render the system useless. Notice that in a basic program written in C++ every single character is such a potential point of failure, which is why bugs are so common in standard software.

The human and animal minds are probably ... a mixture of all of these. At the gross anatomical level the brain is a functional hierarchy, with motor control and perceptual inputs in specific places (vision, for example, is always in the back of your head -- no exceptions, while language is in the left hemisphere in most people).













  Perception-Action Loop A question of realtime

Realtime just means "real fast". Right?
Realtime involves (at least) all of the following:
1. Responsiveness: The system’s (in this case dialog participant’s) ability to stay alert to, and respond to, incoming information.
2. Timeliness: The system’s ability to manage and meet deadlines.
3. Graceful adaptation: The system’s ability to (re)set task priorities in light of changes in resources or workload, and to rearrange tasks and replan when problems arise, e.g. in light of missed deadlines.
4. Relevance: Are the decisions relevant to the situation?

  To achieve realtime
  • Sample the world: Monitor information sources and manage sensory apparatus
  • Load-balance threads: Monitor actions & their outcomes
  • Compare action outcomes to goals
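The realtime properties above can be made concrete with a toy control-loop step: responsiveness (react to the incoming percept), timeliness (check the deadline), and graceful adaptation (replan when the deadline is missed). A Java sketch under invented names, nothing more:

```java
public class RealtimeLoop {
    // One step of a realtime loop: act within the deadline, replan past it.
    public static String step(long nowMs, long deadlineMs, String percept) {
        if (nowMs > deadlineMs) {
            // graceful adaptation: deadline missed -> reset priorities, replan
            return "replan";
        }
        // responsiveness + timeliness: respond to the percept before the deadline
        return "act-on:" + percept;
    }

    public static void main(String[] args) {
        System.out.println(step(10, 100, "obstacle"));  // within deadline
        System.out.println(step(200, 100, "obstacle")); // deadline missed
    }
}
```

A real system would also compare action outcomes to its goals (the third bullet above) before choosing the next act; that bookkeeping is omitted here.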












7 Subsumption architecture: What it is
  What it is A robot control architecture developed at the MIT AI Lab by Rodney Brooks
  why it exists An effort to shift attention from human-level intelligence to simpler organisms, and in the process create the simplest possible architecture that could express intelligent behavior.
  how we will use it You will study the subsumption architecture to a sufficient degree that you can implement it in Java or C++ to control the CADIA Hexapod in your final project.













8 Subsumption Architecture Basics / Examples
  Augmented Finite State Machines (AFSMs) Finite State Machines, augmented with timers
  Modules (FSMs) have internal state

The internal state includes:

  • the clock
  • the inputs (no history)
  • the (current) output (no history)
  • may include "activation level"
  External environment consists of connections ("wires")
  • Input
  • Inhibitor
  • Suppressor
  • Reset
  • Output
  Augmented Finite State Machine (AFSM) with connections

Suppressor: Replaces the input to the module
Inhibitor: Stops the output for a given period
Reset: Initialization puts the module in its original state
  Augmentation The finite state machines are augmented with timers.
The time period is fixed for each inhibitor (I) or reset (R), per module.
  Timers Timers enable modules to behave autonomously based on a (relative) time
  The AFSMs are arranged in "layers" Layers separate functional parts of the architecture from each other
Level 0 example for a robot that cleans away soda cans (see higher levels, below).
Level 0 and 1 combined.
Level 0, 1 and 2 combined.
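A minimal AFSM sketch in Java may help before the final project; it shows one module with the timed inhibitor, suppressor, and reset connections described above. All names are illustrative, and this is far simpler than a full subsumption implementation for the CADIA Hexapod:

```java
import java.util.function.Function;

public class Afsm {
    Function<String, String> behavior;   // the module's input->output mapping
    long inhibitUntil = 0;               // timer: output inhibited until this tick
    String suppressedInput = null;       // replacement input from a higher layer
    String state = "initial";            // internal state (no input/output history)

    Afsm(Function<String, String> f) { behavior = f; }

    void inhibit(long untilTick) { inhibitUntil = untilTick; }  // stop output for a period
    void suppress(String input)  { suppressedInput = input; }   // replace the module's input
    void reset() { inhibitUntil = 0; suppressedInput = null; state = "initial"; }

    String tick(long now, String input) {
        if (now < inhibitUntil) return null;      // inhibitor active: no output
        String in = (suppressedInput != null) ? suppressedInput : input;
        suppressedInput = null;                   // suppression lasts one tick here
        return behavior.apply(in);
    }

    public static void main(String[] args) {
        // Level 0 "avoid" module; a higher wander layer suppresses its input,
        // so it steers toward the wander heading instead of the sensed path.
        Afsm avoid = new Afsm(in -> "steer:" + in);
        avoid.suppress("wander-heading");
        System.out.println(avoid.tick(0, "clear-path")); // uses suppressed input
        avoid.inhibit(5);
        System.out.println(avoid.tick(2, "clear-path")); // null: output inhibited
    }
}
```

The timer-based inhibition is what makes these machines "augmented": the module's behavior depends on relative time, not only on its current input.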













Subsumption Architecture Examples II



Example subsumption architecture with layers.










Subsumption Architecture: Characteristics
  Architectural characteristics
  • Low processing overhead
  • Tight coupling between perception and action
  • Modular structure
  • Low to medium number of modules
  Behavioral characteristics
  • Rapid, reflexive responses
  • Long-term and short-term goals
  • Easy to construct relatively complex systems
  • Systems tend to work well in the real world
  • Not CPU intensive (the architecture, that is - sensory mechanisms can still be quite expensive)
  • Well understood framework
  • No complicated representation of external conditions needs to be put into the creatures

Subsumption architectures tend to be uniform - it is difficult to extend them beyond controlling simple creatures.

Hard to do:

  • simultaneous (parallel) goals and complex goal structures
  • layered planning and action control

... i.e., it is difficult to construct large architectures that have features of human-level intelligence

  Drawbacks can be overcome through use of Constructionist AI

Constructionist AI mantra: No method off-limits

Use well-known mechanisms to solve problems they apply well to

  • Expand existing architectures
  • Develop large architectures from scratch
  • Achieve human-like features
  • Explore integration of very different mechanisms
  • Integrate learning into heterogeneous architectures


  Constructionist AI results in hybrid architectures

"Hybrid architectures"

  • Typically don't buy into a particular school of thought
  • Very often hierarchical like the subsumption architecture
















Where are we headed? (next 10-20 years)

Large integrated systems

Need to build integrated simulations of thinking systems to:

  • Improve autonomous systems (applications)
  • Realize proper models of intelligence (research)
  Distributed systems No single computer is up to the task, at present
  Manual construction
  • Current models of cognition are hand-built
  • Will be largely manually built for a while















12 Large Integrated Systems
  Increased autonomy …

…means we are moving towards systems that:

  • Sense, recognize, classify, plan, decide, act … integrated!
  Industry examples
  • Increased autonomy, flexibility in factories
  • Faster re-programmability, at a higher level
  Research examples
  • Remote Agent
  • Brain research
  • Asimo















13 Remote Agent
  Deep Space One

NASA spacecraft launched on Oct. 24, 1998
Purpose: Test the value of various new technologies

  Deep Space One Remote Agent
  • On-board AI system: “Remote Agent”
  • Remote Agent managed the mission autonomously
Deep Space 1 Remote Agent architecture of main software modules.
  Remote Agent Thought Process Example

Remote Agent knows that:

  • To turn on a power source requires “pushing a button”
  • A “button” is a physical thing that behaves according to the laws of physics - buttons can get stuck

If a physical thing gets stuck you can try to get it loose by “jiggling” it

  • Reasonable approach to solve the problem: Jiggle the switch for the power source
  • Remote Agent solved the problem and turned on the power source
  To think, Remote Agent uses:
  • Detailed model of itself, plus
  • Heuristics, rules about the world
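The stuck-switch reasoning chain above can be caricatured in a few lines. This is emphatically not Remote Agent's actual model-based planner, just an illustration of combining a self-model (buttons switch power sources; buttons are physical and can stick) with a heuristic rule (jiggle stuck physical things); the names are invented.

```java
public class SwitchRecovery {
    // Self-model: the power source is controlled by a physical button.
    // Heuristic: a stuck physical thing may come loose if jiggled.
    public static String recover(boolean switchStuck) {
        if (switchStuck) return "jiggle-switch";
        return "push-switch";
    }

    public static void main(String[] args) {
        System.out.println(recover(true));  // stuck button -> try jiggling it
        System.out.println(recover(false)); // normal case -> just push it
    }
}
```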











Kosslyn & Koenig architecture. Based on decades of brain research.