1. Memory, Representation and Abstraction
Recall Elman's Simple Recurrent Network
Trained on sequences of symbols
Training was simply prediction: given the current symbol, predict the next one in the sequence (see the sketch after this list)
Symbols represented aspects of language, such as letters or words
Found evidence that the network was detecting underlying patterns
prediction error fell within words and spiked at word onsets, indicating the network had learned where word boundaries lay
cluster analysis of hidden-layer activations showed that words used in similar contexts developed similar internal representations
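The setup is small enough to sketch directly. Below is a minimal Elman-style SRN in Python with numpy; the symbol set, layer sizes, and learning rate are illustrative assumptions, not Elman's original configuration.

    # A minimal Elman-style Simple Recurrent Network (illustrative sketch).
    # Symbols are one-hot vectors; the task is to predict the next symbol.
    import numpy as np

    rng = np.random.default_rng(0)

    n_symbols, n_hidden = 4, 8                         # assumed sizes
    W_xh = rng.normal(0, 0.5, (n_symbols, n_hidden))   # input -> hidden
    W_ch = rng.normal(0, 0.5, (n_hidden, n_hidden))    # context -> hidden
    W_hy = rng.normal(0, 0.5, (n_hidden, n_symbols))   # hidden -> output

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def step(x, context):
        """One forward step: combine input and saved context, predict next symbol."""
        hidden = sigmoid(x @ W_xh + context @ W_ch)
        output = sigmoid(hidden @ W_hy)
        return hidden, output

    # Train by prediction: the target at time t is the input at time t+1.
    sequence = [0, 1, 2, 3] * 50          # a toy repeating "sentence"
    eye = np.eye(n_symbols)
    lr = 0.5
    for epoch in range(200):
        context = np.zeros(n_hidden)      # reset context each epoch
        for t in range(len(sequence) - 1):
            x, target = eye[sequence[t]], eye[sequence[t + 1]]
            hidden, output = step(x, context)
            # Standard backprop with the context treated as a fixed extra
            # input, as in Elman's scheme (no backprop through time).
            err = (target - output) * output * (1 - output)
            delta_h = (err @ W_hy.T) * hidden * (1 - hidden)
            W_hy += lr * np.outer(hidden, err)
            W_xh += lr * np.outer(x, delta_h)
            W_ch += lr * np.outer(context, delta_h)
            context = hidden              # copy hidden layer to context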
Could this same simple methodology be used in a non-symbolic world?
Yes, but the real world has some issues:
Sometimes there are long stretches of very similar inputs
Interesting events can be rare
When training dwells for long stretches on one kind of input, the weight updates overwrite what was learned from the rarer events, a situation known as "catastrophic forgetting" (the toy example below makes this concrete)
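A toy demonstration, assuming a two-input, one-unit network and made-up tasks: train on the common task A, then train only on the rare task B, and the error on A climbs back up because the overlapping inputs force B's weight changes through weights A depends on.

    # A sketch of catastrophic forgetting with one sigmoid unit.
    import numpy as np

    rng = np.random.default_rng(1)
    W = rng.normal(0, 0.1, (2, 1))       # one unit, two inputs

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def train(patterns, epochs=3000, lr=1.0):
        global W
        for _ in range(epochs):
            for x, t in patterns:
                y = sigmoid(x @ W)
                W += lr * np.outer(x, (t - y) * y * (1 - y))

    def error(patterns):
        return sum(float(np.sum((t - sigmoid(x @ W)) ** 2)) for x, t in patterns)

    # Task A is the common case; task B is the rare, interesting event.
    # The inputs overlap, so learning B disturbs the weights A relies on.
    task_a = [(np.array([1.0, 0.0]), 0.0)]
    task_b = [(np.array([1.0, 1.0]), 1.0)]

    train(task_a)
    print("error on A after training A:", error(task_a))   # near zero
    train(task_b)                        # a long stretch of only task B
    print("error on A after training B:", error(task_a))   # climbs back up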
1.1. The Human Network Experiments
1.2. What can be done about catastrophic forgetting?
Recall our goals from BringingUpRobot
2. Governor For Neural Networks
Something like a Self-Organizing Map (SOM) that sits between the environment and the network, automatically "balancing" the categories of training data so the network sees rare and common events in more equal proportion.
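A rough sketch of that idea in Python, assuming a flat (1-D) SOM codebook and a one-exemplar-per-category buffer; the class and method names here are invented for illustration.

    # Governor sketch: a SOM categorizes incoming training vectors, and the
    # network is then trained on one exemplar per category instead of on the
    # raw, unbalanced stream of experience.
    import numpy as np

    class Governor:
        def __init__(self, n_categories, vector_size, lr=0.2, seed=0):
            rng = np.random.default_rng(seed)
            self.weights = rng.random((n_categories, vector_size))  # SOM codebook
            self.lr = lr
            self.buffer = {}          # category index -> latest exemplar

        def categorize(self, vector):
            """Find the best-matching SOM unit and nudge it toward the vector."""
            distances = np.linalg.norm(self.weights - vector, axis=1)
            winner = int(np.argmin(distances))
            self.weights[winner] += self.lr * (vector - self.weights[winner])
            return winner

        def incorporate(self, vector):
            """Store the pattern under its category, replacing any old exemplar."""
            self.buffer[self.categorize(vector)] = vector

        def balanced_batch(self):
            """One exemplar per category seen so far: a balanced training set."""
            return list(self.buffer.values())

Each time step, the current training vector goes through incorporate(), and the network trains on balanced_batch() rather than on the raw stream, so a rare category contributes as many presentations as a common one.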
2.1. A Governor for a Feedforward Network
Category vectors are built from [input] + [output]
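Continuing the Governor sketch above, the feedforward version might categorize on the input and target concatenated together (variable names assumed):

    # Feedforward governor: categories come from input + target.
    vector = np.concatenate([input_pattern, target_pattern])
    governor.incorporate(vector)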
2.2. A Governor for a SRN
Category vectors are built from [input] + [context] + [output]
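For the SRN, the saved context activations join the category vector, so the same input arriving in a different temporal context can land in a different category (again continuing the sketch above, with assumed names):

    # SRN governor: categories come from input + context + target.
    vector = np.concatenate([input_pattern, context_activations, target_pattern])
    governor.incorporate(vector)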
2.3. It works!