CS 444 - Week 12 Lecture 1 - 2015-04-14
Control, Balance, Etc.
* world models
* some approaches to controlling a robot involve
the robot having a model of the world, some
have MORE model than others, and some have
little or none...
* some may use different amounts of long-term (persistent)
and/or short term (transitory) memory
long-term - a coordinate-system-based map of a maze,
or a qualitative representation based on
landmarks and their relationships
short-term - the latest sensor reading
* representations can be explicit or implicit --
explicit representations might be symbolic,
discrete, manipulable, etc. --
implicit may be derivable or reconstructable, etc.
* engineering approaches to controlling a robot
a FEW examples...
* simple controllers
* move arm to a particular location,
move forward 20 cm,
rotate 90 degrees,
etc.
* but we know the robot doesn't usually succeed
in EXACTLY doing what we programmed --
* mechanical drift (it doesn't go quite straight)
* environmental factors may intervene
* friction, gravity
* robot parts aren't quite THAT precise
* etc.
* feedback controllers
* try to deal with issues -- small environmental
changes, not-quite-precise movements,
and the like -- by sensing them and
reacting/correcting for them
* not quite as generalizable as, say,
artificial intelligence techniques that might
try to use world knowledge
* (if it had worked, the compass-sensor-based
CompassPilot's goTo might have been able to
use compass sensor readings to TELL if there
were issues in reaching the desired
goal, and then correct for those --
whereas plain DifferentialPilot's goTo might
use just the tachometer in the motor, and
try to go a certain number of rotations --
that's simpler control)
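The compass-based correction idea above can be sketched in plain Java. The subtle part is computing a signed heading error that wraps around 360 degrees, so the robot turns the SHORT way toward the desired heading. This is a hedged sketch, not the leJOS CompassPilot code; the class and method names are mine:

```java
// Sketch of the heading-error computation a compass-based feedback
// controller would need. Hypothetical names -- NOT the leJOS API.
public class HeadingCorrector {
    /**
     * Signed heading error in degrees, wrapped to (-180, 180].
     * Positive means "turn left/counterclockwise by this much";
     * the wrap-around handling keeps the robot from turning 340
     * degrees when 20 degrees the other way would do.
     */
    public static double headingError(double currentDeg, double desiredDeg) {
        double error = desiredDeg - currentDeg;
        while (error > 180) error -= 360;    // e.g., 340 becomes -20
        while (error <= -180) error += 360;  // e.g., -340 becomes 20
        return error;
    }
}
```

A feedback controller could poll the compass, call `headingError`, and steer proportionally to the result; the open-loop tachometer approach never gets this error signal at all.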
* let's talk a BIT more about Feedback controllers
* [this part is from course text - chapter 20 -
pp. 375-377]
* consider the problem of DYNAMIC balance --
a moving robot might need to react to changes
in its center of gravity, or it might fall over
* a CONTROL LOOP is an engineering term for a machine
that:
* reads a value
* reacts according to the value
* repeats
* e.g., a controller might have a target
sensor value that it continuously tries
to maintain, and any difference between
the current value and the target value
is called the ERROR
* ONE approach to a control loop:
* read a sensor
* determine from the reading the direction
of our current error
* attempt to use the motors to compensate
in the other direction
...how do you avoid overcompensating
and behaving erratically?
* could we perhaps try to "learn" (in some
sense) whether it is overshooting the mark,
and use that to control its compensation?
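That simple read-then-compensate approach can be coded as bang-bang (on/off) control: push opposite the direction of the error, with a deadband to cut down on erratic chatter near the target. A sketch in plain Java -- the names and the deadband idea are mine, not from a robot API:

```java
// Minimal bang-bang control decision for one loop iteration.
// Hypothetical interface: the caller reads the sensor and applies
// the returned command to the motors.
public class BangBangLoop {
    /**
     * Returns a motor command: +1 to push toward the target,
     * -1 to push the other way, 0 to hold still.
     * deadband: how much error we tolerate before reacting at all.
     */
    public static int correct(double reading, double target, double deadband) {
        double error = target - reading;
        if (Math.abs(error) <= deadband) {
            return 0;               // close enough: do nothing
        }
        return error > 0 ? 1 : -1;  // compensate in the opposite direction
    }
}
```

Even with a deadband, bang-bang control always pushes at full strength, so it tends to overshoot and oscillate -- which is exactly the motivation for the PID controller below.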
* a PID CONTROLLER is an alternative approach,
which can be fruitfully used in control loops
whose purpose is to balance;
* PID: proportional
integral
derivative
...this is also sometimes called a PREDICTIVE
controller
* PID control tries to more carefully calculate
how much to react based on the error from the
desired value;
(still a feedback control loop)
* it is also trying to be self-adjusting,
so if it overcorrects on one iteration
it will ADJUST so it doesn't correct
as much on the next iteration
* the 3 parts of the PID control algorithm:
* PROPORTIONAL measures the CURRENT error
and determines HOW MUCH it should adjust
to achieve balance
* INTEGRAL accumulates the error over time --
capturing how long the error has gone
uncorrected
* DERIVATIVE is the predictive part of the
algorithm -- it tries to anticipate future
error from the rate of change of the error
over time
* OR:
* the PID control algorithm OBSERVES what
happens after each correction
* when it senses "not upright", it corrects
itself
* on the next iteration, it sees if it
over- or under-corrected
* IF overshot, it ADJUSTS the derivative
part so it uses LESS power next time
goal: to use JUST enough power so it
gets upright to a balanced position
* (a form of reinforcement learning)
* Wikipedia has an excellent article describing
the resulting equation --
see: en.wikipedia.org/wiki/PID_controller
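The three terms combine as output = Kp*error + Ki*(accumulated error) + Kd*(rate of change of error). A minimal sketch of that equation in plain Java -- this is NOT Bagnall's Sejway.java; the class name, field names, and gains are assumptions for illustration:

```java
// Minimal PID controller: one instance per control loop.
public class Pid {
    private final double kp, ki, kd;  // proportional, integral, derivative gains
    private double integral = 0.0;    // P.. accumulated error over time (I term)
    private double prevError = 0.0;   // error from the previous iteration (for D term)

    public Pid(double kp, double ki, double kd) {
        this.kp = kp; this.ki = ki; this.kd = kd;
    }

    /**
     * One control-loop iteration.
     * error: target value minus current sensor reading
     * dt:    seconds since the previous iteration
     * Returns the correction to apply (e.g., motor power).
     */
    public double step(double error, double dt) {
        integral += error * dt;                       // I: how long error has persisted
        double derivative = (error - prevError) / dt; // D: predict from rate of change
        prevError = error;
        return kp * error + ki * integral + kd * derivative;
    }
}
```

The derivative term is what damps the overshoot: if the error is already shrinking fast, `derivative` is negative and the output backs off before the robot blows past upright. Tuning the three gains for a particular robot is the hard part in practice.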
* one implementation of this --
Brian Bagnall's Sejway.java, a version of
which you may be trying out later this week...
* robot control architecture -
a set of principles for designing from well-understood
building blocks...
* we discussed the subsumption architecture for
behavior-based robotics...
* trends in robotic control
* 1960s, 1970s - classical?
* try to provide an "exact" model
* no sensing needed
* 1980s - reactive paradigm
* might go so far as having NO model of the world?!
* requires VERY GOOD sensing
* branching...
* since 1990s - HYBRIDS
* model-based at "higher" levels,
* reactive at "lower" levels
* since mid-1990s - PROBABILISTIC robotics
* more-seamlessly integrate
models and sensing;
* accept that the models
are inaccurate, the sensors
are inaccurate, and try to
correct/deal with that
* classes of robot control architectures include:
* deliberative - look-ahead: think, plan, then act
* extensive representation of the world
* reactive - no look-ahead: react
* have NO representation of the world
* hybrid - think slowly, react quickly
* use a representation, but not extensively
* behavior-based - distribute thinking over
acting
* avoid or distribute the representation
* Braitenberg vehicles might be considered an
example of behavior-based robotics -- I think;
* a class of small, reactive robots