Neural Control of a Mobile Robot
Jonathan Connell
Johuco Ltd.
Box 385, Vernon CT 06066
http://www.johuco.com
The human brain is very complex. There are billions of neurons and trillions of connections between them. Even
though many areas have a uniform structure, there are still several hundred architecturally distinct regions. This
makes simulating a human being very difficult. By contrast, there are a number of insects and marine animals that
have far fewer neurons. Many of these have been studied in detail and scientists have a good idea of how the various
components of their brains are connected together. There is also an extensive body of experimental data detailing
what sort of behaviors are present in each animal and how various groups of neurons interact to perform the
necessary computations. Thus, at this time there is a better basis for building robotic models of simple creatures than
humans. Yet, since man evolved from these simple creatures, the knowledge gained from such an endeavor should
ultimately lead us to a better understanding of our own minds.
Here, we investigate how animal-like control systems might plausibly be implemented using neuron-like elements.
The first half of this article describes the nature and capabilities of elementary reflexive systems. It also explains
how a collection of simple threshold units can perform the necessary processing. The second half of the article
shows how to construct an actual robot, "Muramator", using electronics to model the relevant biology. The robot's
primary goal is to follow around the edges of its world. However, by modifying some of the design parameters, this
creature's overall behavior can be altered in interesting ways.
Behaviors
Before designing our own creature, we need to investigate some of the organizational principles underlying natural
control systems. Let us start by breaking an organism's overall behavior into a collection of separate reflexes. This
allows us to study and develop each sub-behavior in isolation from the others. To achieve some particular goal, we
then put these reflexes back together and carefully coordinate their activities. Such a control system can be modelled
as a set of "if-then" rules where each rule corresponds to a primitive reflex. A simple way to coordinate the animal's
actions is to impose a fixed priority ordering on the rules. This structure is then used to resolve potential conflicts
between behaviors. Experimental robotics research has shown that such systems, if cleverly designed, are powerful
enough to accomplish relatively sophisticated tasks.
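In skeleton form, such a priority-ordered rule system can be sketched in a few lines of Python. This sketch is ours, not part of the original design; the rule names and the "obstacle" sensor field are purely illustrative.

# Fixed-priority "if-then" rule arbitration.  Each rule inspects the
# current sensor readings and either proposes an action or declines (None).
def rule_avoid(sensors):
    return "turn_away" if sensors.get("obstacle") else None

def rule_explore(sensors):
    return "go_forward"                      # always has an opinion

RULES = [rule_avoid, rule_explore]           # highest priority first

def arbitrate(sensors):
    for rule in RULES:                       # the first rule that fires wins,
        action = rule(sensors)               # like a chain of suppressor nodes
        if action is not None:
            return action
    return "stop"                            # no rule fired

print(arbitrate({"obstacle": False}))        # -> go_forward
print(arbitrate({"obstacle": True}))         # -> turn_away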
The functioning of such a rule-based system is
best illustrated by an example. Consider the
coastal snail. This creature spends its life at the
edge of the ocean, eating algae off of rocks.
Thus, the best place for a snail is in a crack
right above the waterline, where there is a rich
concentration of food. In addition, the creature
is in no danger of being dried out by the sun or
being gulped up by some passing bird.
Unfortunately, the snails are occasionally
washed off their rocks by large waves. To
avoid starvation they must have some way of
seeking out the optimal region again.
[Figure: The snail's control system — the CRACK ("no water"), BRIGHT ("upside down"), DARK, and UP modules feed through suppressor (S) nodes to produce the crawl command.]
Ethological studies have revealed that the snail has two primitive drives: to climb upward and to avoid light. We will
refer to these reflexes as UP and DARK. However, neither of these "instincts" is a complete function: there are
some situations for which they do not suggest an action for the snail to take. For instance, if there is no appreciable
intensity difference between directions, the DARK behavior is quiescent and the snail crawls straight upward.
Similarly, when the snail is on a more or less flat surface, UP is inactive and the snail's direction of travel is
determined solely by the illumination gradient. Overall, however, DARK is the stronger behavior. If a very bright
light source is present, the snail will crawl away from it even if this means going downward. As shown in the figure,
we can draw the two reflexes as boxes and indicate the priority between their outputs using a circle with an “S" in it.
In the case of conflicting motion commands, the behavioral module which injects its signal into the side of such
"suppressor" node always wins and gets to control the snail's body.
Surprisingly enough, if one turns the snail upside down, instead of avoiding light, it will now head toward bright
areas. We can imagine that this is due to a third behavior, BRIGHT, which provides the animal with an urge to seek
out light. Since BRIGHT ends up controlling the motion of the animal, it must override the output of DARK. Yet
this new behavior only becomes active, "potentiated", when the animal is inverted. Otherwise the creature acts
solely on the basis of the lower level behaviors. However, there is a further twist to the story. It has been observed
that this light seeking behavior occurs only underwater. If the animal is in air it will invariably seek out dark areas,
even if it is upside down! We can model this by adding a final behavior, called CRACK, to the creature's repertoire.
When the creature is out of the water, this behavior takes precedence over all the other light-sensitive behaviors.
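Written in the same hypothetical rule form, the snail's repertoire might look like the sketch below. The sensor names (on_slope, light_gradient, inverted, underwater) are inventions of ours for the quantities described above; each reflex returns a heading suggestion or None when it is quiescent.

def up(s):          # climb against gravity
    return "head uphill" if s["on_slope"] else None

def dark(s):        # normally, move away from light
    return "head away from light" if s["light_gradient"] else None

def bright(s):      # potentiated only when the snail is upside down
    return "head toward light" if s["inverted"] and s["light_gradient"] else None

def crack(s):       # out of the water, always seek darkness
    return "head away from light" if not s["underwater"] and s["light_gradient"] else None

PRIORITY = [crack, bright, dark, up]    # CRACK overrides BRIGHT overrides DARK overrides UP

def steer(sensors):
    for reflex in PRIORITY:
        choice = reflex(sensors)
        if choice is not None:
            return choice
    return "crawl straight ahead"

# Inverted and underwater: BRIGHT wins and the snail heads toward the light.
print(steer({"on_slope": True, "light_gradient": 0.5,
             "inverted": True, "underwater": True}))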
This collection of four behaviors allows the snail to
find the best foraging area, even if it has to negotiate
major obstacles along the way. Imagine, as shown in
the figure, that the snail starts off on the ocean floor
a short distance off shore. Since the rocks are
slightly darker than the surrounding sand, it crawls
along the bottom towards them. When it reaches an
outcropping it starts climbing the face. If it comes
across a notch in the rock it is first drawn inward by
the darkness. Upon reaching the end, it then starts
climbing the rear wall and eventually reaches the
ceiling. Here, it becomes inverted and thus moves
outward toward light again. Having successfully
overcome this impediment, the snail continues
climbing toward the surface. When it reaches the
edge of the water, it ascends still further until it
comes across a crack. As before, the dark-seeking
behavior takes over and directs the snail into any
crack encountered. However, since it is now above
water, the snail does not turn around when it reaches
the back, but instead stays deep in the crack.
[Figure: The Snail's Journey — the snail's path over the rock and through the water described in the text.]
Neurons
We have seen that it is not necessary to have an explicit plan or representation of the situation in order to accomplish
a task. This greatly reduces the complexity of the nervous system required. As the snail example showed, a set of
ordered reflexes is adequate for simple navigation. Still, there is the question, “What are these reflexes made of?" In
animals the answer is, of course, neurons. So let us take a look at how real neurons work.
Over long distances neurons communicate via electrical impulses. However, when these signals reach the inputs
(called "dendrites") of another neuron, every input pulse causes a small puff of a specific chemical to be released
into the gap (called a "synapse") between neurons. This substance briefly opens a number of ion channels in a
nearby patch of the receiving neuron's cell membrane, and the resulting flow of charge carriers causes this area to
act like a miniature battery. The same thing happens throughout the neuron's tree-like collection of inputs and the
combined charges are funnelled back to the body of the cell (known as the "soma"). Thus, the neuron gradually
becomes more and more electrically charged. When there is enough accumulated potential, the neuron
spontaneously generates its own series of impulses which travel down the output fiber (the "axon"). Eventually this
signal impinges on the inputs of succeeding neurons where a similar series of events then takes place.
[Figure: NEURON MODEL — excitatory and inhibitory inputs (weights w1 through w4) and a constant -1 leak feed a summation; the sum is integrated over a time constant and passed through a threshold to produce an output of 0 or 1.]
Our model of a neuron takes into account all of these phenomena. As shown in this figure, there are a number of
inputs which converge on a single summation node. The combined value is then slowly integrated over time and
passed to a threshold unit which determines whether any output will be produced. Our model also incorporates some
other salient features of real neurons not yet mentioned. For instance, all inputs are not treated the same. Each
connection (the triangles on the left) has a particular "weight" associated with it. These weights multiply the value of
the input signal, which is normally either one or zero, thereby allowing the input to have a stronger influence on the
neuron. This reflects the anatomical fact that certain synapses in animals are physically larger or more transmissive
than others. In addition, we also allow inputs with negative weights (bottom of the diagram). These correspond to
the inhibitory connections often observed between actual neurons. You can think of a neuron as a tub being filled
with water. There are a number of hoses of different sizes feeding into the tub as well as a number of drain spouts. If
more water comes in than goes out, the tub will eventually fill up and slop water over its edge.
In both the real and modelled cases, combining the
effects of all the inputs is a little more complicated
than previously described. While we have
neglected several interesting effects based on the
actual geometry of the input structure, we do not
overlook the fact that real neurons are "leaky". In a
primitive model, the presence of a “one" at some
excitatory input would cause the neuron to charge
up to the threshold and then set its own output to
"one". Now, suppose that this input goes to "zero".
Since the integral of zero is zero, the potential of
the modelled neuron would not change and the
output would remain active. This is not the case for
a real neuron; its charge eventually decays over
time. In terms of the previous analogy, this
corresponds to a hole in the bottom of the tub. To
capture this effect, we have improved our model by
adding an extra "-1" term to the summation. This is
essentially an inhibitory input that is always on.
When none of the other inputs are active, the sum
is now negative so the integrated excitation always
decreases toward zero.
[Figure: The tub analogy — excitatory inputs, an inhibitory input, and the inherent leakage.]
However, we never let the value of the integral go negative. Although not strictly correct biologically, we restrict the
integrated excitation to be between 0 and 1 at all times. This reflects the fact that we can not fill a tub above its rim
or drain it below its bottom. While real neurons also have limits to the voltages that can be achieved, the resting state
does not necessarily correspond to the maximum depolarization. In animals, it is possible, and sometimes
computationally useful, to discharge a cell below this neutral level and thereby make it harder for later inputs to
trigger the neuron.
The final twist on our neural model is the nature of the threshold. We use a device known as a "Schmitt trigger"
instead of a simple comparator. This device has two thresholds, a high one for rising signals and a lower one for
falling signals. In our modelled neuron, the output switches on when the value of the integrated excitation reaches 1.
However, the neuron will then continue to generate a signal until the integral descends to 0 again. This is much like
the thermostat in a typical house. If you set the temperature to 70 degrees, the furnace will not turn on until it gets as
cold as 68 degrees or so. Then, it will proceed to warm the house until the temperature passes the setpoint, often
reaching 72 degrees before shutting off. Including this feature in our model is convenient, but, again, not totally
accurate from a neurological point of view.
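All of these ingredients can be collected into one short simulation routine. The Python class below is our reading of the model just described (weighted 0/1 inputs, a constant -1 leak, an integrator clipped to the range 0 to 1, and a Schmitt-trigger output that switches on at 1 and off at 0); the class and parameter names are our own, and a real design may differ in detail.

class Neuron:
    """Leaky integrate-and-threshold unit with a Schmitt-trigger output."""

    def __init__(self, weights, time_constant):
        self.weights = weights        # one weight per input line
        self.tc = time_constant       # seconds for a sum of +1 to charge fully
        self.potential = 0.0          # integrated excitation, clipped to [0, 1]
        self.output = 0               # 0 or 1

    def step(self, inputs, dt):
        # Weighted sum of the 0/1 inputs plus the built-in -1 leak term.
        total = sum(w * x for w, x in zip(self.weights, inputs)) - 1.0
        # Integrate: a sum of +1 crosses the full 0-to-1 range in tc seconds.
        self.potential += total * dt / self.tc
        self.potential = min(1.0, max(0.0, self.potential))   # the tub has a rim and a bottom
        # Schmitt trigger: turn on at 1, stay on until the potential falls back to 0.
        if self.potential >= 1.0:
            self.output = 1
        elif self.potential <= 0.0:
            self.output = 0
        return self.output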
To see how such simulated neurons are used, consider the
example shown. Here, we depict the body of the neuron as a
circle with a line coming out of it to represent the output. Input
terminals are shaped like the bell of a trumpet and have their
associated weight written next to them. White terminals are
excitatory whereas black ones are inhibitory. Since the structural
details do not matter, we show all the inputs impinging directly
on the cell body.
[Figure: AND-NOT gate — excitatory input A and inhibitory input B, each with weight 2.]
This neuron will only generate an output if input A is active and B is not. First, suppose neither A nor B is active.
The input sum is 0*2 - 0*2 - 1 = -1 (the last term comes from the leaking property of the model). This negative
result causes the accumulated value inside the neuron to decay until the output switches to zero. Now suppose A
comes on. The new sum is 1*2 - 0*2 - 1 = +1. If we set the time constant of the neuron to be very short, the output
will almost immediately switch to one. Finally, imagine that B comes on as well. The computed sum for this case is
1*2 - 1*2 - 1 = -1, which forces the neuron to turn off instead.
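Using the Neuron sketch from the previous section, the AND-NOT gate is simply a unit with weights +2 and -2 and a very short time constant. The little test below is our own usage example.

and_not = Neuron(weights=[2, -2], time_constant=0.01)   # fast neuron

for a, b in [(0, 0), (1, 0), (1, 1)]:
    for _ in range(100):                  # let the fast integrator settle
        out = and_not.step([a, b], dt=0.01)
    print(a, b, "->", out)                # 0 0 -> 0,  1 0 -> 1,  1 1 -> 0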
[Figure: Oscillator — an always-on neuron (*) excites a 3 sec neuron that inhibits itself; the plots show the integrated input rising over 3 seconds (sum = +1), falling back in 1 second (sum = -3), and the resulting output pulse train with a 4 second period.]
A more complicated example is the oscillator shown above. The central part of this configuration is similar to the
AND-NOT gate. Here, the neuron itself supplies the inhibitory input while another neuron provides excitation. The
asterisk inside this auxiliary neuron indicates that it is of a special type which is always on. Assume that the central
neuron's internal potential starts off at zero and that its output is off. The input sum is 1*2 - 0*4 - 1 = +1 in this case.
Thus, the integrated input value starts climbing slowly as shown in the plot at the top right. The value written inside
the neuron tells how long it takes for a sum of one to fully charge the neuron. As can be seen, it takes 3 seconds for
the integrated value to reach the value one. For a higher sum, less time would have been required.
Once the integral has reached the prescribed threshold level, the output of the neuron comes on. This is shown in the
lower right hand plot. However, this changes the overall sum sent to the integrator. It is now 1*2 - 1*4 - 1 = -3, so
the neuron's internal value starts to decay. Yet, because of our special threshold stage, the neuron's output value
remains at one until the integral again reaches 0. This happens one second after the output comes on: a sum of 3
means it takes only 1/3 of the specified time for the internal value to swing across its full range. At this point, the
output turns off and the whole cycle repeats. As can be seen, the result is a pulse train with a period of 4 seconds.
The frequency, as well as the relative width of the on and off portions, can be changed by adjusting the input
weights.
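With the same Neuron sketch, the oscillator amounts to a constant excitatory input of weight 2 and a self-inhibiting feedback of weight 4 on a 3-second unit. The loop below is only an illustration, but it reproduces the timing worked out above.

osc = Neuron(weights=[2, -4], time_constant=3.0)

dt, t, out = 0.01, 0.0, 0
transitions = []
while t < 12.0:
    prev = out
    out = osc.step([1, out], dt)     # input 1: always-on neuron, input 2: self-feedback
    if out != prev:
        transitions.append(round(t, 2))
    t += dt

print(transitions)    # roughly [3.0, 4.0, 7.0, 8.0, 11.0] -- a 4 second period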
The Robot
Let us now design a creature which combines the insights gained in behavior-based control and neural modelling.
The robot presented here is called "Muramator", or "wall lover" in Latin. As its name suggests, this robot follows
along the edges of objects. Like the snail, it does this by breaking the task down into separate parts. The most
primitive of these is the EXPLORE module which constantly urges the robot to go forward. This causes the robot to
move around its environment; unfortunately, it also causes the robot to get stuck easily. To prevent this, we add
another behavior, AVOID, which overrides the output of EXPLORE. AVOID steers the robot away from any
perceived obstacles that might be encountered.
With just these two behaviors the robot is
capable of wandering around its environment for
long periods of time. However, it tends to
bounce around like a drunken pool ball. To make
the robot more responsive to its world we add a
third behavior, SEEK. This module searches for
objects and guides the robot toward them. Like
an ancient mariner, it attempts to keep the
shoreline of its world in sight at all times. A
dynamic balance exists between SEEK and
AVOID. Together, they keep the robot running
roughly parallel to the edges of objects.
[Figure: Modules — obstacle-driven SEEK and AVOID feed through suppressor (S) nodes above EXPLORE to produce the move command.]
These abstract specifications now need to be translated into real rules. To do this we need to know the actual
perceptual and motion capabilities of the robot. For the body of our robot, we have chosen a vehicle which is able to
stop, to go forward, or to turn in place toward the left. No other actions are possible. For sensing, we use a single
infrared proximity detector. This device works by emitting a beam of light and then looking for a bright reflection. The
sensor is able to see objects in an almond-shaped region about 3" wide by 12" long.
[Figure: Typical Path — the robot alternately explores, avoids, and seeks as it runs along a wall.]
The implementation of the first behavior, EXPLORE, is trivial: we just run the robot's motor forward all the time.
The next behavior, AVOID, is also fairly simple. If the obstacle detector senses anything, we run the robot's motor
in reverse thereby causing the creature to turn away from the stimulus. However, the orientation of the sensor is
important in this case. The robot needs some forward vision so that it does not ram objects directly in its path.
However, if the sensor points straight ahead, the robot is likely to side-swipe obstacles lying just off its course. Therefore,
we compromise and aim the sensor about 30 degrees to the right of the robot's midline. This naturally makes the
robot more sensitive to obstacles on the right; a sensible choice since the robot avoids things by turning left.
The last behavior, SEEK, is more difficult to instill in the robot. The robot can only look in one direction, and
anytime it sees something it is programmed to turn until this sensor reading disappears. Thus, we can't directly
determine where the wall is and how to steer the robot toward it. However, if the proximity sensor is active a large
percentage of the time we can assume that the robot is still near obstacles. Open spaces, on the other hand, are
characterized by the absence of any sensor readings. This forms the basis for our seeking strategy. When the robot
has not seen anything for a while, it spins around in an attempt to locate the edge of the world again. Because it
avoids things by turning left, the creature has a good chance of finding things if it turns right instead. Yet, our robot
can only turn to the left; to turn right we must go the long way. We do this by timing the movement to yield a 270
degree rotation. Notice that if the new SEEK behavior is omitted, the robot will not turn back toward the wall as
indicated, but instead zoom off into space.
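Before casting these rules into neurons, it may help to see them as an explicit control loop. The sketch below is our restatement of the three behaviors; the timeout and spin durations are illustrative stand-ins, not measured values.

LOST_TIMEOUT = 3.0       # seconds without a sighting before SEEK fires (illustrative)
SPIN_270_TIME = 2.0      # seconds needed to rotate roughly 270 degrees (illustrative)

def control_step(obstacle_seen, state, now):
    """Return 'forward' or 'turn_left' for the current instant."""
    if obstacle_seen:                          # AVOID: highest priority
        state["last_seen"] = now
        state["spin_until"] = 0.0              # cancel any programmed spin
        return "turn_left"
    if now < state["spin_until"]:              # SEEK: finish the 270 degree spin
        return "turn_left"
    if now - state["last_seen"] > LOST_TIMEOUT:
        state["spin_until"] = now + SPIN_270_TIME
        return "turn_left"
    return "forward"                           # EXPLORE: the default

state = {"last_seen": 0.0, "spin_until": 0.0}
print(control_step(False, state, now=1.0))     # -> forward (wall seen recently)
print(control_step(False, state, now=5.0))     # -> turn_left (SEEK kicks in)
print(control_step(True,  state, now=5.5))     # -> turn_left (AVOID, spin timer reset)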
The total collection of behaviors can be embedded in a network of 11 simulated neurons as shown below. For motor
control the robot's brain has one neuron which drives the robot forward and another which causes it to turn. This
makes the primitive behavior EXPLORE easy to implement: a continuously active neuron drives the "forward" neuron. The
innards of AVOID are also simple: it just activates the "turn" neuron when the robot sees something. This is
particularly easy since the appropriate obstacle detection signal is directly available at the output of the neuron
marked "15ms" (to be described later).
Notice that neither EXPLORE nor AVOID is directly connected to the motor neurons. Instead, each sends its
command via a single intermediate neuron. When designing large networks it is good practice to insert an
"interneuron", such as this, to serve as an interface point where higher level signals can be injected. In this case,
AVOID suppresses the default forward command by inhibiting the interneuron involved in the EXPLORE behavior.
This blocks the original motion directive and allows the more important AVOID module to substitute its own
instructions. Technically, AVOID should really have two interneurons: one to generate the new command and one
to suppress the old command. This arrangement would let us cascade suppressor nodes so that the most important
behavior could suppress all the others with a single connection.
[Figure: Full Network — the eleven-neuron brain: the infrared emitter oscillator (0.6 ms) and detector feed the 15 ms smoothing neuron (input weight 8) whose output is the obstacle signal; this signal drives AVOID and resets (weight 100) the 5 sec SEEK oscillator with adjustable weights kf and kt; suppressor interneurons let these behaviors override the always-on EXPLORE neuron at the turn and forward motor neurons.]
The third behavior, SEEK, is composed of two additional neurons. Neglecting the weight 100 input, we can see that
the remaining structure is identical to the basic oscillator presented earlier. The output of this unit feeds directly into
AVOID's interneuron and thus causes the robot to turn. There is no new interneuron involved in this pathway
because there are no higher level behaviors which might need to suppress SEEK. An important feature of this
module is that the strengths of both the excitatory and inhibitory connections (kf and kt) of the underlying oscillator
can be varied. Adjusting the kt weight makes the output of the oscillator remain high longer and thus causes the
robot to turn through a greater angle. Adjusting kf controls the interval between turns and thus determines the
distance that the robot travels before spinning around to look for the wall again. However, the creature is basically
blind during one of these programmed turns, and may rotate all the way past the wall when seeking it! This is the
reason for the inhibitory connection with a weight of 100. As soon as the robot senses an object nearby, the
oscillator is reset to the off state. This not only causes the robot to stop turning, it also synchronizes the unit so that it
correctly measures the time since the last obstacle sighting.
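In terms of the earlier Neuron sketch, SEEK is the same oscillator with adjustable weights kf and kt plus a third, strongly inhibitory input of weight 100 carrying the obstacle signal. The wiring below is our guess at the intent rather than a transcription of the actual network; kt sets how fast the integrator drains while turning (and hence the turn angle), while kf sets how fast it recharges (and hence the travel distance between turns).

kf, kt = 2, 4                            # adjustable oscillator strengths
seek = Neuron(weights=[kf, -kt, -100], time_constant=5.0)

def seek_step(obstacle, dt=0.01):
    # Inputs: always-on neuron, self-feedback, and the obstacle-detector output.
    # The weight of 100 drains the integrator almost at once whenever an obstacle
    # is in view, resetting and resynchronizing the oscillator.
    return seek.step([1, seek.output, obstacle], dt)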
The remaining four neurons form the proximity detection subsystem. The neuron in the lower left corner of the
diagram is directly connected to the infrared emitter and generates the outgoing signal. This neuron functions as a
simple oscillator and produces a symmetric square wave. It is necessary to modulate the infrared beam to
differentiate it from ambient infrared sources such as sunlight. Once the beam bounces off some target, the two
neurons in the upper left corner of the diagram are responsible for processing the returned signal. The first neuron in
the chain represents the detector. It is "on" when incoming infrared radiation is detected. Since we emit a series of
pulses, the output of this first neuron resembles the top plot shown in the next figure. The next neuron smooths out
this waveform to yield a simple binary output as shown in the bottom plot. It does this by using the integrating
properties of our neural model. When the detector neuron is on, the input to the second neuron is 1*8 - 1 = +7 so the
internal potential of this neuron increases. Between pulses the input sum is simply -1 and thus the potential slowly
decays. This sequence of events is shown in the middle plot. For ease of plotting, we have changed the weight of the
input from 8 to 3. However, this does not change the qualitative results. As can be seen, it takes several pulses to
charge the neuron up to a high enough level to generate an output. This is useful for rejecting noise pulses.
Similarly, the slow decay constant allows the system to "fly-wheel" through signal dropouts: the output of the
neuron will remain on for several cycles after the stimulus vanishes. This same phenomenon can also be used to
artificially increase the size of the avoidance turns the creature makes.
[Figure: Pulse Input, Integral, and Output — with a synapse strength of 3 the integral charges by 2 during each pulse and decays by 1 between pulses, clipping at 1; the output shows a delay time before switching on and a hold time after the pulses stop.]
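The smoothing stage falls straight out of the same Neuron sketch: feed the raw detector output into a single excitatory input and let the integrator do the filtering. The fragment below uses the weight of 3 from the plot; the pulse pattern itself is invented for illustration.

smoother = Neuron(weights=[3], time_constant=1.0)

dt = 0.1
# Raw detector output: a burst of pulses, a short dropout, more pulses, then silence.
pulses = [1, 0] * 12 + [0] * 5 + [1, 0] * 6 + [0] * 15

trace = [smoother.step([p], dt) for p in pulses]
print(trace)
# The smoothed output only turns on after several pulses have accumulated (noise
# rejection) and rides through the short dropout (fly-wheeling), switching off
# only during the long silence at the end.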
Circuitry
The final step is compiling all this into circuitry. The actual electronic schematic for the creature is shown on the
next page. The 11 neurons of our original design are implemented in a number of different ways. Some are modelled
with voltage comparators, some with diode logic, and some with electro-mechanical relays. This part of the article
presents the details of the implementation; if you are not an electronics hacker, feel free to skip to the next section.
Of all the circuitry, the oscillator for the infrared emitter (lower left corner) corresponds most directly to the neural
model. Here, one section (triangle) of a quad analog comparator generates a square wave which is amplified by an
MPS2907 transistor to drive the IR LED. The feedback from the chip's output to its positive input provides the
"hysteresis" needed by our dual threshold neural model, while the resistor and capacitor on the negative input form
the required integrator. The 10K resistor to +12V mimics the action of the always-on neuron.
The high level SEEK behavior is built from a similar oscillator (center of page). The 220K resistors make the
comparator into a Schmitt trigger, while the 1K resistor to +12V serves as the always-on neuron. This oscillator has
an active low output: when pin 1 is at zero volts, the robot should turn. Notice that the oscillator's output is
connected back to the integrating capacitor (C1) through two diodes. These diodes cause the top potentiometer to
control the charging time when the oscillator's output is high, and the lower potentiometer to control the discharge
rate when the output goes low. Thus, we can independently vary the excitatory and inhibitory input strengths
(referred to as kf and kt in the neural diagram) using these potentiometers. The last part of the SEEK circuit is the
diode descending from the obstacle detection LED. Since the diode has negligible resistance, it can instantly
discharge capacitor C2 to synchronize the oscillator. This connection models the weight 100 input in our neural
design.
Obstacle detection forms the basis for the AVOID behavior. The circuitry for this consists of the pair of comparators
in the upper third of the diagram. Starting at the far left, the reflected infrared is received by the TIL414 phototransistor which generates a small voltage change across the 1M resistor connected to it. This signal is AC-coupled
via the 470pf capacitor into a simple threshold unit formed by one section of the LM339 quad comparator. The
capacitors on the voltage divider feeding into the positive input of this comparator help stabilize the reference voltage
against transients caused by switching the IR LED and normal operation of the motor. An inverted version of the
detected signal then enters a low pass filter formed by the 47K resistor and capacitor C2. Since the LM339 has open-collector outputs, the decay of the voltage across C2 is governed solely by the 1M resistor. Finally, the filtered
voltage enters another section of the LM339 which again acts as a simple comparator. The trigger level for this
threshold unit is obtained from the same voltage divider as used for the first stage. The result is a clean digital signal
indicating when an obstacle is present. This, in turn, causes the red LED to light up when the robot sees something.
[Schematic: MURAMATOR CIRCUITRY — the LM339 quad comparator, TIL414 phototransistor, IR and indicator (red and green) LEDs, MPS2907 transistors, DPDT relay, motor, "Power" and "Level" switches, "Forward" and "Turn" potentiometers, and the associated resistors, capacitors (C1, C2), and diodes (D1, 1N914, 1N4001) described in the text.]
The basic EXPLORE behavior is incorporated directly into the relay circuitry. Normally the robot's motor is
connected so it runs forward (the extra diode, D1, is inserted to slow the creature down, if necessary). However,
whenever the relay is energized, the voltage applied to the motor is reversed and the creature turns instead. Thus, the
functions of the EXPLORE neuron, the first suppressor, and the turn and forward neurons are all included in this
piece of circuitry. The driver transistor for the relay can then be considered the equivalent of the second suppressor
node in the neural diagram, with the two diodes in the dotted box modelling the two excitatory connections to this
interneuron. Because both the obstacle detector and central oscillator have active low outputs, these diodes form a
logical OR gate. Thus, if the creature either sees something (red light turns on) or the turn timer kicks in (green light
turns on), the relay will be activated and the creature will turn in place. The diodes around the motor and across the
relay's coil, however, serve no behavioral function; they just clamp inductive spikes to the power supply rails.
Tuning
Once the electronics have been assembled and mounted on the robot, it is necessary to calibrate the creature. Start by
angling the infrared emitter and detector about 30 degrees away from the robot's direction of travel. Carefully align
the two parts so that they point exactly parallel to each other. Then turn the robot on by pushing the "Power" switch
up while leaving the "Level" switch in the deactivated down position. Check to see that the robot runs forward when it
is in an open space. If the red obstacle detection indicator is always on when you put the robot on the floor, tilt the
IR components slightly upwards until the light goes off. Next, hold the robot and move it to within 8" of an obstacle.
Verify that the red LED lights up and that the robot's motor runs in reverse in this situation.
If you now let the creature loose, it should try to avoid things in its path. Muramator works best on wood or smooth
tiled floors - it has a hard time plowing through carpeting. If the creature seems to lurch a lot or has trouble turning,
shift the batteries around to change its balance. At this point, you might design an experimental obstacle course for
the vehicle to see what it can and can not do. For instance, if it encounters an obstacle which juts out, such as a
convex corner, it will swerve away from it but never come back toward it. It has no memory of which direction it
used to be travelling, so it cannot get back on track in this case. You might also want to try changing the direction of
the IR components, or purposely mis-aligning them, to see how these parameters alter the creature's behavior.
Next, switch in the second layer of control by sliding the "Level" switch forward. Start with both potentiometers
midway through their ranges by making the arrows on the dials point toward the edge of the board. If you keep
Muramator in a large open area, the green LED should flash at regular intervals. When this light is on, the robot
should perform some sort of spin. Once the circuitry appears to be working properly, adjust the "Forward" dial until
the robot goes about 8" between turns. Then adjust the "Turn" dial so that the robot makes about 3/4 of a revolution
(270 degrees) every time it turns. Muramator will now not only avoid objects, but turn around if it hasn't seen
anything in a while. Together, these reflexes cause it to follow along walls and to circle any post-like objects it sees.
[Figure: Three paths at a corner — "travel too far", "just right", and "turn too far" settings of the potentiometers.]
Changing the values of the two potentiometers can produce noticeably different results, as shown in the paths above.
You can explore these interactions with your own robot, or you might try writing a computer graphics simulation of
the events. Notice, as shown on the right, that if the turn timer is set for too long an interval, the robot will only regain
the wall after a number of turns. If the obstacle had been a post rather than a corner, the robot might not have found
it again at all. On the other hand, consider the scene on the left in which the travel distance is set too long. Here, the
robot may wander off into the middle of the room when it encounters a corner. The first travel leg after an AVOID
turn is often longer than the succeeding ones. Thus, even though the robot heads back in the right direction, it will be
too far away to see the wall again. This, however, can be a useful feature in a world with more than one robot
because corners then become a natural meeting place for different creatures.
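As suggested above, these interactions can also be explored in a simple computer simulation. The sketch below is entirely our own: a single wall along y = 0, a sensor aimed 30 degrees to the robot's right with a 12 inch range, and the same forward/turn arbitration, with every constant invented for illustration.

import math

def sees_wall(y, heading):
    ray = heading - math.radians(30)          # sensor aimed 30 degrees to the right
    for r in range(1, 13):                    # 12 inch sensing range
        if y + r * math.sin(ray) <= 0:        # does the ray reach the wall at y = 0?
            return True
    return False

x, y, heading = 0.0, 6.0, 0.0                 # start 6" from the wall, heading along it
lost_time, spin_left, dt = 0.0, 0.0, 0.1
path = []

for step in range(600):                       # 60 simulated seconds
    if sees_wall(y, heading):                 # AVOID: turn left in place
        heading += math.radians(90) * dt
        lost_time, spin_left = 0.0, 0.0
    elif spin_left > 0:                       # SEEK: finish the programmed spin
        heading += math.radians(90) * dt
        spin_left -= dt
    elif lost_time > 3.0:                     # nothing seen for a while: spin ~270 degrees
        spin_left = 3.0
        lost_time = 0.0
    else:                                     # EXPLORE: drive forward
        x += 3.0 * dt * math.cos(heading)
        y += 3.0 * dt * math.sin(heading)
        lost_time += dt
    path.append((round(x, 1), round(y, 1)))

print(path[::50])                             # a sample of the trajectory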
Another interesting set-up involving two identical vehicles is shown below. Here, the travel distances are set very
short and the turn angle is set to 180 degrees. If you send the two robots head-to-head, they will veer off from each
other and then turn around for another pass. Robot jousting! It helps if you cover both opponents with commercially
available Scotch-lite retro-reflective tape. This increases the sighting distance for other robots to about 18". To
achieve the maximum range, make sure each robot's infrared beam is pointing level with respect to the floor. This is
just one example of how robots can interact in the world. With a larger number of individuals, there may be even
more interesting patterns.
[Figure: Playing "Chicken" — two robots alternately explore, seek, and avoid as they charge head-to-head.]
We have now progressed from a vague concept to an actual working robot. This transformation was made possible
by several insights. First, we applied the methodology of breaking an activity into component behaviors. Next, we
codified these behaviors as simple situation-action rules. Finally, we cast the rules into circuitry based on a plausible
neural model. These same steps can be applied to other creatures with other behaviors. You might try designing
some yourself, at least on paper. Extending this line of research to ever larger, more complex creatures is a promising path for developing a deeper understanding of how the human mind itself works.
SUPPLEMENTARY READING
Vehicles by Valentino Braitenberg, MIT Press, Cambridge MA, 1986.
- discusses how simple creatures might be constructed based on biological principles
Minimalist Mobile Robotics by Jonathan Connell, Academic Press, Cambridge MA, 1990.
- describes a more sophisticated robot built by the author while at MIT
Mind Children by Hans Moravec, Harvard University Press, Cambridge MA, 1988.
- speculates on the future of robots and their relation to human society
The Study of Instinct by Niko Tinbergen, Oxford University Press, Oxford England, 1951.
- examines the nature of animal behavior and analyzes its components