CAP6938
Neuroevolution and
Artificial Embryogeny
Basic Concepts
Dr. Kenneth Stanley
January 11, 2006
We Care About Evolving Complexity
So Why Neural Networks?
• Historical origin of ideas in evolving complexity
• Representative of a broad class of structures
• Illustrative of general challenges
• Clear beneficiary of high complexity
How Do NNs Work?
[Figure: a neural network maps Inputs to Outputs]

How Do NNs Work? Example
[Figure: a robot control network; outputs (effectors/controls): Forward, Left, Right; inputs (sensors): Front, Left, Right, Back]
What Exactly Happens Inside the Network?
• Network Activation
[Figure: two-layer network with inputs X1, X2, hidden neurons H1, H2, outputs out1, out2, and weights w11, w12, w21, w22]

Neuron j activation:
H_j = \sigma\left( \sum_{i=1}^{n} x_i w_{ij} \right)
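A minimal Python sketch of this activation rule; the sigmoid used as the activation function σ is an assumption, since the slides do not fix a particular choice:

import math

def sigmoid(x):
    # One common choice for the activation function sigma (assumed here)
    return 1.0 / (1.0 + math.exp(-x))

def activate_neuron(inputs, weights):
    # H_j = sigma( sum_i x_i * w_ij )
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    return sigmoid(weighted_sum)

# Example: two inputs feeding one hidden neuron
print(activate_neuron([0.5, 1.0], [0.8, -0.3]))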
Recurrent Connections
• Recurrent connections are backward connections in the network
• They allow feedback
• Recurrence is a type of memory
[Figure: network with inputs X1, X2, hidden neuron H, output out, weights w11, w21, wH-out, and a recurrent connection Wout-H from the output back to H]
Activating Networks of Arbitrary Topology
• Standard method makes no distinction between feedforward and recurrent connections:

H_j(t) = \sigma\left( \sum_{i=1}^{n} x_i(t-1) \, w_{ij} \right)

• The network is then usually activated once per time tick
• The number of activations per tick can be thought of as the speed of thought
• Thinking fast is expensive
[Figure: recurrent network with inputs X1, X2, hidden neuron H, output out, weights w11, w21, wH-out, and recurrent weight Wout-H]
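A minimal sketch of this once-per-tick activation in Python; the dictionary-based representation and the example topology are assumptions for illustration, not the course's implementation:

import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def activate_once(prev_values, incoming, sensor_values):
    # One activation per time tick: every neuron reads the values its
    # source nodes had at the previous tick, so feedforward and recurrent
    # connections are treated identically (delay-line memory).
    new_values = dict(prev_values)
    for node, connections in incoming.items():
        total = sum(prev_values[src] * w for src, w in connections)
        new_values[node] = sigmoid(total)
    new_values.update(sensor_values)  # sensors get fresh input each tick
    return new_values

# Hypothetical network: X1, X2 -> H -> out, plus a recurrent link out -> H
incoming = {
    "H":   [("X1", 0.7), ("X2", -0.4), ("out", 0.5)],
    "out": [("H", 1.2)],
}
state = {"X1": 0.0, "X2": 0.0, "H": 0.0, "out": 0.0}
for tick in range(3):
    state = activate_once(state, incoming, {"X1": 1.0, "X2": 0.0})
    print(tick, round(state["out"], 3))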
Arbitrary Topology Activation Controversy
• The standard method is not necessarily the best
• It allows “delay-line” memory and a very simple activation algorithm with no special case for recurrence
• However, “all-at-once” activation utilizes the entire net in each tick with no extra cost
• This issue is unsettled
The Big Questions
• What is the topology that works?
• What are the weights that work?
[Figure: a candidate network whose topology and weights are all unknown, marked with “?”]
Problem Dimensionality
• Each connection (weight) in the network is a dimension in a search space
[Figure: two example networks, one defining a 21-dimensional search space and one a 3-dimensional search space]
• The space you’re in matters: Optimization is not the only issue!
• Topology defines the space
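A small sketch of the point that topology defines the dimensionality, counting connections in fully connected layered networks; the layer sizes below are assumptions chosen to reproduce the 21- and 3-dimensional examples in the figure:

def num_dimensions(layer_sizes):
    # Each connection (weight) between adjacent fully connected layers
    # is one dimension of the weight search space.
    return sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))

print(num_dimensions([4, 3, 3]))     # 21-dimensional space
print(num_dimensions([1, 1, 1, 1]))  # 3-dimensional space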
High Dimensional Space is Hard to Search
• 3 dimensional – easy
• 100 dimensional – need a good optimization method
• 10,000 dimensional – very hard
• 1,000,000 dimensional – very very hard
• 100,000,000,000,000 dim. – forget it
Bad News
• Most interesting solutions are high-D:
– Robotic Maid
– World Champion Go Player
– Autonomous Automobile
– Human-level AI
– Great Composer
• We need to get into high-D space
A Solution (preview)
• Complexification: Instead of searching directly in the space of the solution, start in a smaller, related space, and build up to the solution
• Complexification is inherent in many examples of social and biological progress
So how do computers optimize
those weights anyway?
• Depends on the type of problem
– Supervised: Learn from input/output examples
– Reinforcement Learning: Sparse feedback
– Self-Organization: No teacher
• In general, the more feedback you get, the easier the learning problem
• Humans learn language without supervision
Significant Weight Optimization
Techniques
• Backpropagation: Change weights based on their contribution to error
• Hebbian learning: Change weights based on firing correlations between connected neurons
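Minimal single-weight sketches of the two update rules in Python; the delta-rule form used for the backpropagation example and the learning rates are simplifying assumptions (full backpropagation also propagates error backward through hidden layers):

def backprop_style_update(weight, input_value, output_error, learning_rate=0.1):
    # Delta-rule caricature of backpropagation: move the weight against
    # its contribution to the output error.
    return weight - learning_rate * output_error * input_value

def hebbian_update(weight, pre_activity, post_activity, learning_rate=0.1):
    # Hebbian rule: strengthen the weight in proportion to how correlated
    # the firing of the two connected neurons is.
    return weight + learning_rate * pre_activity * post_activity

print(backprop_style_update(0.5, input_value=1.0, output_error=0.2))
print(hebbian_update(0.2, pre_activity=0.9, post_activity=0.8))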
Homework:
- Fausett pp. 39-80 (in Chapter 2)
- and Fausett pp. 289-316 (in Chapter 6)
- Online intro chapter on RL
- Optional RL survey