
More Wall-following Robot Tuning

01/26/15

Yesterday and today I have been working through some ‘issues’ with the wall-following robot. The darned thing insists on doing exactly what I tell it to do, rather than what I want it to do, despite the liberal use of the RPM (Read Programmer’s Mind) command throughout the code. This time I had managed to make modifications to the left & right motor speeds in two different locations (instead of encapsulating all changes in one function), and had used value/copy arguments instead of reference arguments in one place where references were absolutely necessary.
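
For anyone who hasn’t been bitten by that second bug: here’s a minimal sketch of the value-vs-reference trap. The function and variable names are invented for illustration; this isn’t the actual robot code.

```cpp
// Hypothetical illustration of the value-vs-reference bug.
// The 'broken' version modifies local copies, so the caller's
// speed variables never actually change.
void AdjustSpeedsBroken(int leftSpeed, int rightSpeed)  // pass by value
{
  leftSpeed  += 25;  // changes a local copy -- lost on return!
  rightSpeed -= 25;
}

void AdjustSpeeds(int& leftSpeed, int& rightSpeed)      // pass by reference
{
  leftSpeed  += 25;  // changes the caller's variables, as intended
  rightSpeed -= 25;
}
```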

Anyway, I got the ‘issues’ squared away, and then decided to make some ‘improvements’ (never a good thing!) by changing the distance acquisition code to use the NewPing library’s ‘ping_median()’ function. The ping_median() function makes a number (5 by default) of pings in quick succession and then (after throwing out any out-of-range values) returns the median reading. I thought this might smooth things out a bit, but what it actually did was slow things down enough that the robot quickly got outside the loop capture range and departed the area. One change that actually did improve things was to add some code in setup() to start the robot moving straight at low speed after acquiring an initial distance estimate (I did use the ping_median() function for this), as this reduced/eliminated the startup divergences.
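
For reference, the startup sequence looks something like the sketch below. The pin numbers and speed value are placeholders, not my actual wiring; ping_median() and convert_cm() are real NewPing calls.

```cpp
#include <NewPing.h>

const int TRIGGER_PIN   = 12;  // placeholder pin assignments
const int ECHO_PIN      = 11;
const int MAX_DIST_CM   = 200;
const int LEFT_PWM_PIN  = 5;   // placeholder motor PWM pins
const int RIGHT_PWM_PIN = 6;
const int SLOW_SPEED    = 75;  // low speed, within the 50-200 wheel range

NewPing frontSensor(TRIGGER_PIN, ECHO_PIN, MAX_DIST_CM);
int prevDistCm;

void setup()
{
  // ping_median(5) fires 5 pings, discards out-of-range returns, and
  // returns the median echo time in microseconds
  unsigned long echoTime = frontSensor.ping_median(5);
  prevDistCm = NewPing::convert_cm(echoTime);

  // start out moving straight at low speed to suppress startup divergence
  analogWrite(LEFT_PWM_PIN,  SLOW_SPEED);
  analogWrite(RIGHT_PWM_PIN, SLOW_SPEED);
}

void loop() { /* wall-following code goes here */ }
```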

After making these changes, I ran some more wall-following tests, with some interesting results. On one test the robot ran off the end of the straight-wall test section (aka ‘Kitchen’) and onto the rug in our living room. On the rug sits a footstool with a round base. Watch the action :-).

So, at this point the robot is doing a fair job of following walls, although it still has a number of deficiencies that have to be addressed before it gets let loose in the wild to roam the hallways.

  • Gaps give it fits. There is a gap of about 10-12 cm between the fridge and the pantry wall in the kitchen, and when the robot gets to this spot, it invariably nose-dives right into it, as the distance measurement jumps sharply right there. I don’t really know if anything can be done about this, as this same behavior is what allows it to follow wall variations in the first place.
  • After reversing the definition of ‘forward’ on the robot (see my earlier post), the ‘front’ ping sensor is actually mounted on the new ‘rear’. I still need to reposition it (or make that end the front again). I’m thinking seriously of repositioning the left and right ping sensors so they are on or near the line between the wheel axles, to suppress/eliminate the mechanical element of the wall-following feedback loop. If I do that, then I can turn the robot around again, and not have to reposition the ‘front’ sensor.
  • It still runs on non-rechargeable AA alkaline batteries (I’ve got rechargeables on the way, but…). Even when I get rechargeables installed, I still have to figure out how to get the robot to charge itself.

Stay tuned! 😉

Tuning the Robot’s Wall-following Feedback Loop

01/25/15 10:00

After getting the robot to follow a wall at all, I have now graduated to the task of tuning the feedback loop for ‘best performance’. Part of the problem is trying to define what ‘best performance’ means in terms of wall-following capability.

In my UpdateWallFollowMotorSpeeds() function, I compare the current distance to the previous one, and adjust the left/right motor speeds up or down to turn away from the wall if the distance is decreasing, or toward it if the distance is increasing. The amount of adjustment applied is a program constant that can be ‘tuned’ for optimum performance, whatever that means. Wheel speeds can vary from 50 to 200 (out of a 0 to 255 range).
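
Here’s a minimal sketch of that logic. The function name and the 25/50/200 values are the real ones; the pin assignments and the wall-on-the-left assumption are mine for illustration.

```cpp
const int MOTOR_SPEED_ADJ = 25;   // the tunable program constant
const int MIN_MOTOR_SPEED = 50;
const int MAX_MOTOR_SPEED = 200;
const int LEFT_PWM_PIN    = 5;    // placeholder pin assignments
const int RIGHT_PWM_PIN   = 6;

int leftSpeed  = 125;
int rightSpeed = 125;

void UpdateWallFollowMotorSpeeds(int prevDistCm, int currDistCm)
{
  if (currDistCm < prevDistCm)       // closing on the wall: turn away
  {
    leftSpeed  += MOTOR_SPEED_ADJ;   // assumes the wall is on the left
    rightSpeed -= MOTOR_SPEED_ADJ;
  }
  else if (currDistCm > prevDistCm)  // drifting away: turn back toward it
  {
    leftSpeed  -= MOTOR_SPEED_ADJ;
    rightSpeed += MOTOR_SPEED_ADJ;
  }
  // equal readings: leave the speeds alone

  leftSpeed  = constrain(leftSpeed,  MIN_MOTOR_SPEED, MAX_MOTOR_SPEED);
  rightSpeed = constrain(rightSpeed, MIN_MOTOR_SPEED, MAX_MOTOR_SPEED);

  analogWrite(LEFT_PWM_PIN,  leftSpeed);
  analogWrite(RIGHT_PWM_PIN, rightSpeed);
}
```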

  • Small wheel-speed adjustment values result in a smooth sinusoidal path with a slowly increasing amplitude – which eventually diverges to the point where the robot either departs from the wall entirely, or runs into it head-on, as shown in the following video, taken with an adjustment value of 5.
  • Larger adjustment values increase the ‘available control authority’, but also increase the correction magnitude to the point where a small error in distance measurement causes the robot to diverge and spin in place.
  • An intermediate value of 25 seems to be a good compromise between smoothness and sufficient control authority to keep the robot going straight (on average, anyway).

Speed Adjustment Value = 5

Speed Adjustment Value = 50

Speed Adjustment Value = 25. In this video you can also see the two green LEDs, which serve as ‘commanding left turn’ and ‘commanding right turn’ debug indicators.

This is OK, but still not quite what I want for my wall-following robot. However, it may well be that I will never be able to completely suppress the oscillations, as this is a fairly complex feedback loop. And, as I know from my 40+ years as an Electrical Engineer, any feedback loop can oscillate when the loop phase shift approaches 180 degrees. This particular arrangement has both electrical and mechanical phase terms that make it even more interesting:

  • The position of the sensors relative to the axis of rotation introduces a large phase-shift term, as small rotation changes cause large distance changes. As we saw earlier, the original phase relationship with the sensors at the rear was positive (a small turn toward the wall causes a large increase in the distance measurement, exactly opposite of the expected result), while the phase relationship in the current configuration with the sensors at the front is negative (a small turn toward the wall causes a large decrease in the distance measurement, the expected result but amplified).
  • There is a time/phase lag between the moment the program makes a decision and the moment the wheel speeds actually change, so by the time the robot starts to compensate for an error term, the error has had time to get larger. This in turn means the robot will continue to correct well past the point where it should, leading to overshoot in the other direction.

The above leads me to believe that I (aka the robot) should be making corrections based on the value of the distance-vs-time curve’s slope, rather than just the sign of the slope (positive or negative). If I use the value of the slope, then I can make bigger corrections for larger slopes, and smaller ones for smaller slopes. This could get a little tricky, as the slope calculation requires some knowledge of time; IOW, each point on the curve is a (dist, time) pair, and both have to be known to calculate the slope between two pairs (dist0, time0) and (dist1, time1). However, I could simply assume that the time between distance measurements is constant, which would mean that the slope is proportional to (dist1 – dist0). So, I think I’ll try a very simple setup, and make the adjustment value = 25 * |d1-d0|.
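
In code, that first cut might look something like this (a minimal sketch under the constant-measurement-interval assumption; the function name is invented):

```cpp
// Minimal sketch of the proportional-correction idea.  Assumes distance
// measurements arrive at a roughly constant rate, so the slope of the
// distance-vs-time curve is proportional to the raw difference d1 - d0.
const int SLOPE_GAIN = 25;  // the base adjustment value from the tests above

int GetSpeedAdjustment(int d0, int d1)
{
  return SLOPE_GAIN * abs(d1 - d0);  // bigger slope -> bigger correction
}
```

The sign of (d1 – d0) would still pick the correction direction, just as before.
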
The ‘Robot Weave’ (aka Wall-E3)

01/23/2015

In our last installment, Wall-E2 was more a wall-bumping robot than a wall-following one, and since then I have been beating my own head against a wall trying to figure out why the darned thing wouldn’t behave the way I expected.

The basic idea of our wall-following robot is to use a side-looking ultrasound distance sensor to maintain a constant distance from a wall. If the distance readings start to increase, the outside motor gets sped up and the inside one slowed down, until the readings start going down again, at which point the procedure is reversed. If the distance readings remain constant, no speed adjustments are made.

Debugging this arrangement is difficult, because it is hard to obtain any real-time data on what the robot is seeing and doing. The Uno doesn’t have much RAM, so storing test data for later analysis isn’t feasible, and besides, I’d wind up spending more time on the analysis software than on the robot itself. I was able to run the robot tethered to my PC in Visual Micro’s serial debug mode, and this (eventually) allowed me to gain some small insight into what was going on. I finally decided that I had too many moving parts (some virtual, some literal) and that I was going to have to drastically simplify the system if I wanted any chance of making progress. So, I removed all the control code except that required to go straight and follow the wall – non-essential stuff like the back-up-and-turn feature was all commented out. Then I added two sets of Red/Green LED pairs to the Uno as slow/fast indicators for the left and right motors. Green meant the motor was slowing down, and red meant it was speeding up. The idea was to allow me to (literally) see if the commands to the motors were consistent with the robot’s position relative to the wall.
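
The LED indicator logic itself is trivial – something like the snippet below, with invented pin numbers (one pair shown; the right motor gets the same treatment):

```cpp
// Hypothetical version of the motor-trend debug LEDs: green = slowing
// down, red = speeding up.  Pin numbers are placeholders; the matching
// pinMode(pin, OUTPUT) calls in setup() are omitted.
const int LEFT_GREEN_PIN = 2;
const int LEFT_RED_PIN   = 3;

void ShowLeftMotorTrend(int oldSpeed, int newSpeed)
{
  digitalWrite(LEFT_GREEN_PIN, newSpeed < oldSpeed ? HIGH : LOW);
  digitalWrite(LEFT_RED_PIN,   newSpeed > oldSpeed ? HIGH : LOW);
}
```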

I was testing each revision by holding the robot in my hand and moving it back and forth toward one wall of my lab area, while watching the distance and motor speed debug readouts on my PC. If the desk testing went well, then I would run the robot untethered along a section of wall in our kitchen. This was very frustrating, because it looked like the robot behaved properly in the desktop tests, but not in the real-world wall testing – how could this be? In test after test, the robot literally spun out of control almost immediately, winding up with one wheel running full speed and the other one stopped.

Eventually, out of desperation, I ran multiple wall tests, each time starting with the robot parallel to my test wall, hoping to see why it always corrected in the wrong direction on the wall, but in the right direction in the desktop tests. I saw that the robot made an initial turn away from the wall – OK so far – but then, instead of turning back toward it, kept turning even more sharply away from it – again and again! I watched several times very carefully, trying to make my mind work like the robot’s simple program: get the distance, compare it to the last one, adjust the motor speeds to compensate. And then it dawned on me – the robot was doing exactly what it was programmed to do, but the geometric relationship between the sensor location (where the distance measurement occurs) and the center of rotation of the robot was screwing up the phase relationships – turning what should have been a negative feedback loop into a positive one – with predictable results. The following figure illustrates the problem.

Wall-E3 Distance Feedback Dynamics

In the figure, the top half illustrates the sensor and drive wheel layout, and shows the direction of travel as initially designed and programmed.  D1, D2, and D3 are successive distance measurements processed by the program.  At Position 2, the robot has determined that it needs to move away from the wall, and so speeds up the left motor and slows down the right motor, leading to Position 3.  However, because the sensor position is well behind the center of rotation (essentially the wheel axle line), the sensor actually gets closer to the wall instead of farther away. The robot responds to this by further increasing the left wheel speed and further decreasing the right wheel speed, which makes the problem even worse, etc.  This continues until the left motor is running full speed and the right motor is stopped, causing the robot to spin in place.

The bottom half of the figure shows my rather elegant (if I do say so myself) solution to this problem – simply reverse the direction of travel, which has the effect of converting the dynamics from positive to negative feedback. Position 1 and Position 2 are the same as in the top half, but in Position 3, it is clear that the distance from the wall starts increasing as soon as the robot starts rotating. The robot responds by undoing its previous adjustments to the drive wheels; if it overshoots, the sensor dynamics bring it back to the centerline.
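
To make the geometry concrete, here’s a back-of-the-envelope model (my own toy sketch, not anything in the robot code):

```cpp
#include <math.h>

// Toy first-order model: how a small yaw toward the wall changes the
// side sensor's reading, depending on where the sensor sits relative
// to the center of rotation (essentially the wheel axle line).
//   distCm   - distance from the axle line to the wall
//   offsetCm - sensor position: positive = ahead of the axle, negative = behind
//   yawRad   - small turn toward the wall, in radians
float SensorReading(float distCm, float offsetCm, float yawRad)
{
  return distCm - offsetCm * sin(yawRad);  // first-order approximation
}

// With the sensor 10 cm BEHIND the axle (offsetCm = -10), a 5-degree turn
// toward the wall makes the reading GROW by ~0.87 cm, so the robot turns
// in even harder: positive feedback.  With the sensor 10 cm AHEAD
// (offsetCm = +10), the same turn shrinks the reading: negative feedback.
```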

Implementing this scheme required very little work. I had to swap the left and right distance sensor and motor control leads on the Uno (easier to do it this way than to change the code), and redefine ‘forward’ to be toward the caster wheel instead of away from it. After making the above changes, I ran the wall-following test again, and lo and behold – it worked (sort of)! The following video clip shows the new-improved Wall-E3 ‘weave’.

Now that I have the robot working to the degree that it doesn’t immediately spin out of control, I can start to look for ways to improve performance, and maybe reduce the amplitude of the ‘weave’ to something reasonable. I have already incorporated the ‘NewPing‘ library into the code (and contributed a few buckazoids to the author for a nice, elegant class library!), so I should be able to use it to speed things up.
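
For anyone following along at home, basic NewPing usage looks like the sketch below (pin choices are placeholders). Among other things, the library avoids the long timeout you get from a bare pulseIn() when no echo comes back, which is where some of the speedup should come from.

```cpp
#include <NewPing.h>

const int TRIGGER_PIN = 12;   // placeholder pin assignments
const int ECHO_PIN    = 11;
const int MAX_DIST_CM = 200;  // readings beyond this come back as 0

NewPing sonar(TRIGGER_PIN, ECHO_PIN, MAX_DIST_CM);

void setup()
{
  Serial.begin(9600);
}

void loop()
{
  delay(50);                          // ~29 ms minimum between pings
  unsigned int cm = sonar.ping_cm();  // 0 means no echo within range
  Serial.println(cm);
}
```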

Stay tuned for further adventures of Wall-E, the wall-following robot!

01/24/2015 – Update to the ‘Robot Weave’ saga: I cleaned up the code a bit and hopefully sped things up a little. The following video was taken in our kitchen a few minutes ago.

Wall-following Robot (aka Wall-E2)

My grandson Danny and I have been working on a wall-following robot project, just for grins. The general idea is to create a semi-autonomous robot as animated prey for our two cats. I’m not sure I really care too much whether or not we ever get something the cats will actually run toward instead of away from; it’s the adventure that counts ;-). There is also the hope that the robot adventure will (or can be made to) intersect/overlap with my 3D printing capabilities/interests.

Because I’m an old broke-down engineer, I have tried to imagine the general requirements for our semi-autonomous prey robot. So far as we have been able to enumerate them to date, they are:

  • Follow walls and not get stuck
  • Find and utilize a recharging station
  • Act like a cat prey animal (i.e. a mouse or similar creature)
  • Generate lots of fun and waste lots of time for the humans (me and my grandson) involved

The first three requirements actually seem pretty complete, although maybe not that easy to realize. However, the fourth requirement should be easy to meet, and in fact I can report that it has already been partially achieved (we have already wasted lots of time and had lots of fun!).

To get started, I did what I always do – research!! In the bad old days I did this by raiding technical libraries for books/magazines/articles relevant to the subject, and then going after everything referenced in the first round of material. It wasn’t unusual for me to go through dozens or even hundreds of citations in a short period, after which I was usually able to create an effective approach to the challenge, whatever it was. These days I start by throwing out a wide search loop on Google, and then following whatever trails seem productive. At this stage I’m not at all picky about what I look at, and not at all shy about discarding materials or leads that aren’t relevant.

In the case of DIY robots, there is a lot of educational material out there, along with lots of DIY parts, development tools, and other goodies. For our first try at this, I acquired the following parts:

  • A Chinese Arduino Uno clone (http://www.ebay.com/itm/US-UNO-R3-ATmega328P-ATmega16U2-2012-Version-Board-Free-USB-Cable-for-Arduino-/251459710321?ssPageName=ADME:X:RRIRTB:US:3160), $8.99 ea.  I purchased 3 and was glad I did, as one arrived DOA (I got a full refund – thanks!), and I almost always manage to kill at least one of everything I try ;-).
  • A DIY robot chassis with a dual-motor L298 controller (http://www.ebay.com/itm/161246592134) for $24.99. This turned out to be way huge for a ‘cat prey animal’, but hey – ya gotta start from somewhere! ;-).
  • 5ea HC-SR04 Ultrasonic distance sensors.  Prices for these varied all over the lot, from almost $10 each to $6.14 for a pack of 5 (guess which one I picked).
  • A Solarbotics L298 Motor Driver kit from the local Micro Center, to replace the one that came with the robot chassis – the one I burned up shortly after it arrived 🙁
  • A set of 5ea Linear Technology LT3081 1.5A programmable voltage regulators, to replace the 2N3055/LM317T-based home-brew lab power supply I managed to burn up while burning up the dual motor driver 🙁.

While waiting for the parts to arrive, I worked on setting up the development environment for Arduino coding. I already had VS2008 on my machine, so I utilized Visual Micro’s integrated IDE add-on for Arduino development, and found Virtronics’ very nice Simulator for Arduino – PRO version (www.virtronics.com.au). Danny and I worked out the basic structure diagram, and we were able to get an early version of the software running on the simulator during a 3-4 day visit to the kid/grandkid abode over Christmas.

First-cut structure chart for the wall-following robot

After getting the parts in and assembling the robot chassis (and burning up/replacing the motor driver), I was able to get the robot to the ‘first baby steps’ stage fairly quickly (where the term ‘quickly’ is used somewhat loosely!).

Robot kit as delivered

Finished robot – top

Finished robot – bottom

And speaking of ‘baby steps’ – here is a short movie of the first really successful run of the wall-follower.  In this case, ‘success’ means that the robot recognized that it was stuck, and backed up/turned to recover from the stuck condition.

There’s lots more work to do before this particular robot has any chance of attracting a cat or two.  It doesn’t run in a straight line for crap, and is much better at bumping into walls than following them.  But hey, it is already over-achieving on technical requirement #4 (generate fun, waste time!) 😉

Stay tuned!