Monthly Archives: April 2015

Robot of the future – LIDAR and 4WD

Posted 04/29/15

In a whole series of posts over this last month, I described the results of my efforts to solve the ‘stealth slipper’ problem, where Wall-E gets stuck on my wife’s fuzzy blue slippers and can’t reliably detect this condition.  I ran a large number of experiments which eventually convinced me that the ultrasonic sensors I have been using for wall-following and ‘stuck’ detection just aren’t up to the task.  Even the use of two forward-looking ping sensors, which I thought was going to be a really cool and elegant solution, didn’t do the job.  There is just too much data corruption due to multipath and ‘friendly fire’ interference between ping sensors to reliably discriminate the ‘stuck on stealth slippers’ condition.

Slipper turned 90 degrees clockwise

So, it’s time to consider other, more radical, alternatives.  First and foremost, it is clear that ultrasonic sensing will not work for ‘stealth slipper’ detection/avoidance.  Other possible sensing modes are:

  • IR Ranging:  This has the advantage of being pretty cheap, but hobbyist IR ranging options like the Sharp IR Range Sensor (shown below) are fairly short range, slow, and have just an analog output with limited accuracy.

    Sharp IR Range Sensor

  • LIDAR:  This technology is fast, can be very long range, and can provide very accurate ranging information.  Unfortunately these sensors tend to be heavier and much more expensive than either the ultrasonic or IR sensor options.  The CentEye/ArduEye laser range finder using the Stonyman vision chip was a really cool, light weight and cheap solution, but it is unfortunately out of production and unavailable :-(.  The best of the lot for now appears to be the LIDAR-LITE sensor or the fully-assembled LIDAR package that is part of the NEATO robot vacuum cleaner.
    PulsedLight LIDAR-Lite unit

    Neato Robotic Vacuum LIDAR module

  • Optical parallax vision processing:  This is really the same as the LIDAR option, but with separate laser, receiver, and parallax computation modules.  This is what the now-unobtainable Stonyman chip/Laser/Arduino solution did, but there are other, less attractive, ways to do the same thing.  One is a combination of a cheap laser diode and the Pixy CMU Cam.  The Pixy module handles a lot of the vision pre-processing necessary for parallax range determination, and the laser diode would provide the bright, distinct spot for it to track.

    Pixy CMU Cam module

After looking through the available options, it occurred to me that something like the LIDAR-Lite might allow me to not only replace the forward-looking sensor on Wall-E, but maybe even the side ones as well.  The LIDAR-Lite is fast enough (20 msec/reading) that I should be able to use it for all three directions (left, right, forward).  In fact, if I mounted it on a servo motor using something like the Adafruit slip ring component shown here, I could implement a cool 360-degree LIDAR.
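As a sanity check on that 360-degree idea: at 20 msec per reading, the number of range samples per revolution (and hence the angular resolution) depends entirely on how fast the slip-ring/servo assembly spins.  Here’s a quick back-of-the-envelope calculation (plain C++, not robot code; the 20 msec figure is the only number taken from the LIDAR-Lite discussion above):

```cpp
#include <cassert>

// Samples per full revolution for a continuously rotating scanner,
// given the sensor's per-reading time and the rotation rate.
int samplesPerRev(double msPerReading, double revPerSec) {
    double msPerRev = 1000.0 / revPerSec;
    return static_cast<int>(msPerRev / msPerReading);
}

// Angular spacing between consecutive samples, in degrees.
double degPerSample(double msPerReading, double revPerSec) {
    return 360.0 / samplesPerRev(msPerReading, revPerSec);
}
```

At one revolution per second, that works out to 50 samples per revolution, or one reading every 7.2 degrees – coarse compared to a ‘real’ scanning LIDAR, but probably plenty for wall-following and obstacle avoidance.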

Adafruit Slip Ring with 6 contacts

It also occurred to me that while I’m in the process of making radical changes to Wall-E’s sensor suite, I might want to consider changing Wall-E’s entire chassis (I think this is the robot equivalent of ‘repairing’ a car by lifting up the radiator cap and driving a new car under it).  The 2-wheel plus castering nose wheel arrangement on the current Wall-E leaves a lot to be desired when navigating around our house.  Too often the castering nose wheel gets stuck at the transition from the kitchen floor to the hall carpet or the area rugs.  In addition, the nose wheel axle/sleeve space tends to collect dirt and cat hair, leading to the castering nose wheel acting more like a castering nose skid than a wheel ;-).  After some more quality time with Google, I came up with a very nice 4-wheel drive chassis – the DFRobot 4WD Arduino Mobile Platform – along with the companion ‘All In One Controller’.

DFRobot 4WD Arduino Mobile Platform

DFRobot Romeo V1 All-in-one Microcontroller (ATMega 328)

Adventures with Wall-E’s EEPROM, Part VI

Posted 04/26/15

In my last post I showed there was a lot of variation in the data from Wall-E’s ping sensors – a lot more than I thought there should be.  It was apparent from this run that my hopes for ‘stuck’ detection using variation (or lack thereof) of distance readings from one or more sensors were futile – it just wasn’t going to work.

At the end of the last post, I postulated that maybe, just maybe, I was causing some of these problems by restricting the front sensor max distance to 250 cm.  It was possible (so I thought) that opening up the max distance to 400 cm might clean up the data and make it usable.  I also hatched a theory that maybe motor or movement-related vibration was screwing up the sensor data somehow, so I ran some tests designed to investigate that possibility as well.

So, I revised Wall-E’s code to bump the front sensor max distance to 400 cm and made a couple of runs in my test hallway (where the evil stealth slippers like to lurk) to test this idea.  The code adjustment had a bit of a ripple effect, because until now I had been storing the distance data as single bytes (so could store a distance reading of 0-255 cm), and storing 2-byte ints was going to take some changes.  Fortunately a recently released update to the EEPROM library provided the put() and get() methods for just this purpose, so I was able to make the changes without a whole lot of trouble.
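For anyone curious what put() and get() buy you: they just copy an object’s bytes to and from EEPROM, which is what makes 2-byte ints as easy to store as single bytes.  Here’s a hedged sketch of the idea in plain C++, using a RAM array to stand in for the Uno’s 1 KB EEPROM (the real code uses the Arduino EEPROM library, not this):

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

uint8_t eeprom[1024]; // stand-in for the Uno's 1 KB EEPROM

// Templated put/get, mimicking the Arduino EEPROM library's interface:
// copy the object's raw bytes to/from the given address.
template <typename T>
void eepromPut(int addr, const T& value) {
    std::memcpy(&eeprom[addr], &value, sizeof(T));
}

template <typename T>
void eepromGet(int addr, T& value) {
    std::memcpy(&value, &eeprom[addr], sizeof(T));
}
```

The catch, of course, is that a single byte tops out at 255 cm, so a 400 cm reading has to go in as a 2-byte int – which also halves the number of records that fit in 1024 bytes.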

Results:

First, I ran a number of tests with the front sensor max distance still set at 255 so I could stay with the single-byte storage system, with and without the motors engaged, and with and without mechanically induced vibration (tapping vigorously on the side of the robot chassis) while moving it toward and away from my bench wall.

Test bench run with motors disabled, without any external tapping

Test bench run with motors enabled, but no external tapping

Test bench run, motors enabled, with external tapping

From these runs, it is clear that having the motors engaged and/or having an external disturbance does not significantly affect the sensor data quality.

Next, I enabled 2-byte EEPROM storage and a 400 cm max distance for the two front sensors. Then I did a bench test to validate that EEPROM storage/retrieval was being done properly, and then ran another field test in the ‘slipper’ hallway.

Field test with all sensors and motors enabled, 400 cm max distance on front sensors

The front and top-front sensor data still looks very crappy until Wall-E gets within about 100 cm of the far wall, where it starts to look much better.  From this it is clear that opening up the front max distance from 255 to 400 cm did absolutely nothing to improve the situation.  Meanwhile, the off-side sensor readings are all over the place.

So, I have eliminated motor noise, mechanical vibration, and inappropriate max distance settings as the cause of the problems evident in the data.  After thinking about this for a while, I came to the conclusion that either there was still some intra-sensor interference, or the hallway itself was exhibiting multipath behavior (or both).  To test both these ideas, I disabled the motors and all but the top-front sensor, and ran a series of 4 tests, culminating in a run in the ‘slipper’ hallway where I moved the robot by hand, approximating the somewhat wobbly path Wall-E normally takes.  The results are shown below.  In the first two tests I moved the robot toward and away from my test wall a number of times, resulting in a sinusoidal plot.  In the two long-range tests, I started approximately 400 cm away from the wall, moved close, and then away again, back to approximately 400 cm.

Test run in my lab and in the ‘slipper’ hall, top front sensor only (400 cm max distance).

The first two tests (‘Bench 1’ and ‘Bench 2’) validated that clean data could be acquired, and the ‘Lab Long Range’ test validated that the ping sensor can indeed be used out to 400 cm (4 meters).  However, when the field test was run, significant variation was noted in the 150-350 cm range, and there doesn’t seem to be any good explanation for this other than multipath.  And, to make matters worse, if one sensor is exhibiting multipath effects, it’s a sure bet that they all are, meaning the possibility (probability?) of multiple first, second, and third-order intra-sensor interference behavior.

After this last series of tests, I’m pretty well convinced that the use of multiple ping sensors for navigation in confined areas with multiple ‘acoustically hard’ walls is not going to work.  I can probably still use them for left/right wall-following, but not for front distance sensing, and certainly not for ‘stuck’ detection.

So, what to do?  Well, back to Google, of course!  I spent some quality time on the web, and came up with some possibilities:

  • The Centeye Stonyman Vision Chip and a laser diode.  This is a very cool setup that would be perfect for my needs.  Very small, very light, very elegant, and (hopefully) very cheap laser range finder – see https://www.youtube.com/watch?v=SYZVOF4ERHQ.  There is only one thing wrong with this solution – it’s no longer available! :-(.
  • The ‘Lidar Lite’ laser range finder component available from Trossen Robotics (http://www.trossenrobotics.com/lidar-lite).  This is a complete, self-contained LIDAR kit, and it isn’t too big/heavy, or too expensive (there might be some argument about that second claim, but what the heck).
  • The Pixy CMUCam, also available from Trossen (http://www.trossenrobotics.com/pixy-cmucam5).  This isn’t quite as self-contained as it needs a separate laser and some additional programming smarts, but it might be a better fit for my application.

So, I ordered the LIDAR-Lite and the Pixy CMUCam products from Trossen, and they will hopefully be here in a week or so.  Maybe then I can make some progress on helping Wall-E defeat his nemesis – the evil stealth slippers!

Stay tuned…

Frank


Adventures with Wall-E’s EEPROM, Part V


Posted 04/22/15

In my last post I analyzed a stuck/un-stuck scenario where Wall-E got stuck on a coat rack leg, and then got himself unstuck a few seconds later.  This post deals with a similar scenario, but with the evil stealth slippers instead of the coat rack, and this time Wall-E didn’t get away :-(.


EEPROM data from Wall-E slipper run.  Note large variations on all four channels.

Last 50 records, showing large amount of variation on all four channels.

Analysis:

  • T = 09: Wall-E hits his nemesis, the evil Stealth Slippers
  • T + 37: Wall-E signals that it has filled the EEPROM.  No ‘stuck’ detection, so no sensor array data.
  • My initial impression of the 4-channel EEPROM record was “Geez, it’s just random garbage!”.  There does not appear to be any real structure to the data, and certainly no stable data from the left and right side sensors.  Moreover, the top front sensor – the one that was supposed to provide nice stable data even in the presence of the stealth slippers – appears to be every bit as unstable as the others – ugh!
  • In order to more closely examine the last few seconds of data, I created a new plot using just the last 50 or so records.  From this it is clear that both the left and right side sensor data is unstable and unusable – both channels show at least one max-distance (200 cm for the side sensors) excursion.  The front and top-front data doesn’t fare much better, with 4-5 major excursions per second.
  • The only bright spot in this panoply of gloom is that the front and top-front sensor data shows a lot of intra-sensor variation, meaning that this might be used to effect a ‘stuck’ declaration.  In the last 50 records, there are 4 records where ABS(front-topfront) > 85 (i.e. > MAX_FRONT_DISTANCE_CM / 3).  Looking more closely at the entire EEPROM record, I see there are 18 such instances – about one instance per 50 records, or about 1 per second.  Unfortunately, at least 6 of these occur in the first third or so of the entire record, meaning they occur before Wall-E gets stuck on the slipper.  So much for that idea :-(.
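The tally above is easy to reproduce.  Here’s a sketch of the counting logic in plain C++ (the ABS(front-topfront) > MAX_FRONT_DISTANCE_CM / 3 criterion is straight from the analysis; the function name is mine, not Wall-E’s actual code):

```cpp
#include <cassert>
#include <cstdlib>

const int MAX_FRONT_DISTANCE_CM = 255;

// Count records where the front and top-front readings disagree by
// more than a third of the max front sensing distance (85 cm here).
int countBigDisagreements(const int* front, const int* topFront, int n) {
    int threshold = MAX_FRONT_DISTANCE_CM / 3;
    int count = 0;
    for (int i = 0; i < n; i++) {
        if (std::abs(front[i] - topFront[i]) > threshold) count++;
    }
    return count;
}
```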

Despite the gloom and doom, this was actually a very good run, in that it provided high-quality data about the ‘stealth slipper detection’ problem.  The data shows that one of my ideas for detection (the intra-front-sensor variation idea) simply won’t work, as that variation is present in all the data, not just when Wall-E is stuck.  At least I don’t have to code the detection scheme up and then have it fail! ;-).

It is just barely possible that I have caused this problem by restricting the max detection distance for the front sensors to 255 cm in an effort to mitigate the multipath data corruption problem.  So, I’m going to make another run (literally) at the slippers, but with the max front distance set out to 400 cm versus the existing 255 cm limit.  However, this will cut the recording capacity in half, as I’ll have to use 2 bytes per record.  I can compensate for this by not storing the left and right sensor data, or by accepting a shorter recording time, or some combination of these.  One idea is to store the left & right sensor data as bytes, and the front sensor data as ints.  This will require modifying the EEPROM readout code to deal with the different entry lengths, but oh well….
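For the record, the mixed byte/int storage idea from that last paragraph would give a 6-byte record instead of 8.  A sketch of the packing scheme in plain C++ (the field names and little-endian layout are my assumptions, not working Wall-E code):

```cpp
#include <cassert>
#include <cstdint>

// One sensor record: side sensors fit in a byte (200 cm max),
// front sensors need two bytes (400 cm max).
struct SensorRecord {
    uint8_t  left;
    uint8_t  right;
    uint16_t front;
    uint16_t topFront;
};

const int RECORD_SIZE = 6; // 1 + 1 + 2 + 2 bytes

// Pack a record into 6 bytes, little-endian for the 2-byte fields.
void packRecord(const SensorRecord& r, uint8_t* buf) {
    buf[0] = r.left;
    buf[1] = r.right;
    buf[2] = r.front & 0xFF;
    buf[3] = r.front >> 8;
    buf[4] = r.topFront & 0xFF;
    buf[5] = r.topFront >> 8;
}

SensorRecord unpackRecord(const uint8_t* buf) {
    SensorRecord r;
    r.left     = buf[0];
    r.right    = buf[1];
    r.front    = buf[2] | (uint16_t(buf[3]) << 8);
    r.topFront = buf[4] | (uint16_t(buf[5]) << 8);
    return r;
}
```

At 6 bytes per record, the 1024-byte EEPROM holds about 170 records, versus 256 with the current all-byte scheme – a shorter recording time, but not halved.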

Stay tuned…

Frank


Adventures with Wall-E’s EEPROM, Part IV

Posted 04/22/15

In my last post, I showed some results from Wall-E’s EEPROM data captures, including a run where Wall-E got stuck on the wife’s evil stealth slippers – and then unexpectedly got ‘unstuck’.  I couldn’t explain Wall-E’s miraculous recovery from the captured EEPROM sensor data, so I was left with two equally unpalatable conclusions: either I didn’t understand Wall-E’s program, or Wall-E was ‘seeing’ something besides what was captured in the EEPROM.

So, I decided to modify Wall-E’s programming to capture additional data when/if Wall-E got stuck – and then unstuck – on future runs.  The mods were described in the last post, but basically the idea was to capture the contents of both the 50-point front and top front sensor data arrays, along with the current values of all four sensors.  To do this I re-purposed the first 100 EEPROM locations to store the sensor array data, figuring that the earliest points would be the least likely to be relevant for post-run analysis.

After making the mods, and testing them on the bench in debug mode, I took Wall-E out for another field trial, hoping he would do the same thing as before – namely getting stuck and then un-stuck on/from the evil stealth slippers.

As it turned out, Wall-E’s next run produced good news and bad news. The good news is that Wall-E did indeed get stuck and then un-stuck, providing some very good decision data.  The bad news was that it got stuck on a coat rack leg (an easier problem for Wall-E) instead of the stealth slippers.  Still, it did provide an excellent field validation of the new data collection scheme, as shown below.

Remaining EEPROM Contents at the point where Wall-E declares ‘Stuck’.

Top and Top Front Sensor Array Contents at the point where Wall-E declares ‘Stuck’

Analysis:

  • T + 13:  Wall-E gets stuck on a coat rack leg.  He got stuck because I hadn’t yet updated the left front bumper to the new non-stick style, but this was a good thing ;-).  Just before Wall-E hits the leg, the left sensor distance reading changes rapidly from about 40 cm to about 20, as the left sensor picks up the coat rack leg on its left.
  • T + 13-15: Wall-E tries to drive around the coat rack leg it is stuck on, causing it to turn about 45 degrees to the left. During this period the right, front, and top-front sensor readings vary wildly, but the left sensor reading stays quite stable (almost certainly reading the distance to the coat rack leg on the left).
  • T + 16:  Wall-E has stopped moving, and consequently the top and top-front sensor readings settle down.  Interestingly, the right distance sensor readings don’t settle down, even though there are no obstacles within the max detection distance (200 cm) on that side – no idea why.
  • T + 20: Wall-E declares the ‘stuck’ condition.  This is almost certainly due to the total deviation of the current contents of the front sensor reading arrays falling below the threshold (5 cm in this case).  From the data, the front sensor array deviation is 4 cm, while the top-front deviation is still high (161) due to a 209 value that hasn’t quite yet fallen off the end.
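For clarity, the ‘total deviation’ figure in that last bullet is just the spread (max minus min) of the readings currently in the 50-point array.  A minimal sketch of the computation in plain C++ (not Wall-E’s actual source):

```cpp
#include <cassert>

// Total deviation of a window of distance readings: max minus min.
// A spread below the threshold (5 cm here) means the distance to the
// obstacle ahead isn't changing, i.e. the robot isn't actually moving.
int totalDeviation(const int* readings, int n) {
    int lo = readings[0], hi = readings[0];
    for (int i = 1; i < n; i++) {
        if (readings[i] < lo) lo = readings[i];
        if (readings[i] > hi) hi = readings[i];
    }
    return hi - lo;
}
```

The top-front deviation of 161 noted above is exactly this max-minus-min spread, driven by the lone 209 reading that hadn’t yet fallen off the end of the array.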

So, the captured data for this run is entirely consistent with the stuck condition and recovery, with the possible exception of the anomalous right sensor readings, which should show a constant 200 cm but don’t, for some unknown reason.

Next up – another try at getting Wall-E stuck on the stealth slippers, with (hopefully) a ‘stuck’ condition’ detection to boot!

Stay tuned…

Frank


Adventures with Wall-E’s EEPROM, Part III


Posted 04/19/15

In my last post I described how I might be able to use the Arduino Uno’s onboard EEPROM to see the world from Wall-E’s point of view, at least for a few seconds at a time.  So, now that I’m back home from the Gatlinburg, Tn duplicate bridge tournament, I decided to try my luck at this.

First, I had to make some additional modifications to Wall-E’s program:

  • Revised the sensor data retrieval routines to substitute MAX_FRONT_DISTANCE_CM for any zero reported from either front sensor, and to substitute MAX_LR_DISTANCE_CM for any zero reported from either the left or right sensor. This takes care of the problem of sensor readings abruptly transitioning from a near-maximum reading to zero, and back again.
  • Revised the EEPROM storage routine to store readings from all four sensors instead of just the two front ones.  Data will be recorded until the EEPROM is full, at which point Wall-E will blink all four taillight LED’s twice.  Wall-E will continue to run, but won’t store any more data.
  • Revised the separate EEPROM readout program to properly read out all four sensor values, instead of just the two front ones.
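The zero-substitution in the first bullet above amounts to a one-line clamp per sensor.  A sketch in plain C++ (the constant names follow the text; the 250/200 cm values reflect the settings used for these runs):

```cpp
#include <cassert>

const int MAX_FRONT_DISTANCE_CM = 250;
const int MAX_LR_DISTANCE_CM = 200;

// The ping sensors report 0 when nothing echoes back within the max
// range; substitute the max distance so the data doesn't jump from a
// near-maximum reading straight to zero and back again.
int cleanFrontReading(int raw) {
    return (raw == 0) ? MAX_FRONT_DISTANCE_CM : raw;
}

int cleanSideReading(int raw) {
    return (raw == 0) ? MAX_LR_DISTANCE_CM : raw;
}
```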

After making the modifications (and fixing the inevitable bugs), I set Wall-E loose on the world with its new EEPROM-storage capabilities.  I videoed the run so I would later be able to correlate the EEPROM data with what Wall-E was actually doing at the time.  The first run went very well, with Wall-E behaving ‘normally’ (whatever ‘normal’ means for a robot!).  I videoed the run until Wall-E blinked its tail-lights to signify it was done recording, and I noted that the run had lasted about 30 seconds (which was too bad, because Wall-E got stuck on a coat rack leg just after running out of storage space! ;-)).

The video and the Excel plot are shown below, followed by my post-run analysis.


EEPROM data from Wall-E’s first instrumented run

Analysis:

  • The very first thing I noticed about the Excel plot is that there is something badly wrong with the first part of the data, up to about point 400; it’s way too constant.  After about 400, it looks like Wall-E collected ‘good’ data, although a lot of it looks pretty frightening!
  • I thought Wall-E started out tracking the wall on the right, but the data doesn’t support that – it appears it was tracking on the left wall from the get-go.
  • From the video it looks like Wall-E’s tracking period is between one and two seconds, and from the plot this corresponds to about 50 points.  This correlates reasonably well with the observation that it takes about 30 seconds to fill the 1024-byte EEPROM.  1024 divided by 30 gives 34 points/sec, so 50 points would give a period of about 50/34 = 1.5 seconds.  This is actually quite good news, because it means that the 50-point array I was using earlier as part of the ‘stuck’ detection routine can probably capture an entire tracking period.  The amplitude of the tracking response appears to be about 8-10 cm in ‘free space’, and about half that when Wall-E’s castering nose wheel was hitting the rug edge after about point 580.
  • From the plot, it appears the front and top-front sensors were reporting obstacles in view even though there weren’t any, at least not for the first part of the initial wall-tracking phase of the run.  This is probably due to the fairly large heading deviations made by Wall-E even while wall tracking, possibly coupled with some multipath effects.  This is actually good news as there should be significant variation in front sensor readings during normal operation, even if there is no dead-ahead obstacle within sensor range.   Also, it is clear that once Wall-E came within about 60-75 cm of the door, it ‘captured’ the attention of both the front and top-front sensors, which then tracked the door very nicely all the way down to the 10 cm avoidance threshold.  The same thing happened with the side wall, although because Wall-E was moving slower, the distance reversal happened a bit earlier.
  • After turning away from the side-wall obstacle, Wall-E tracked the same wall, but in the opposite direction and at a larger distance (about 60 cm vs about 40 on the way in).  It is interesting to note that the reported distances from the front sensors also went up, presumably due to the longer slant-distance to the wall and back during the toward-wall heading excursions (and also probably because Wall-E’s heading excursions weren’t as large due to being slower on the carpet).
  • The ‘off-side’ sensor (right on the way in, left on the way out) showed very large variations – large enough to hit the stops (200 cm max distance) occasionally on the way in, and on almost every heading swing on the way out.  The measured distance from Wall-E’s average position on the way out to the far wall is about 150 cm, so a 45 degree heading variation would create a slant range distance of over 200 cm.  Smaller heading deviations would also likely create a situation where most of the left sensor’s ping energy bounced away from the sensor, creating a ‘nothing in view’ response.
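The slant-range claim in that last bullet is easy to verify with a little trigonometry: the acoustic path to a wall at perpendicular distance d grows as d / cos(θ) when the sensor swings θ degrees off the perpendicular.  A quick check in plain C++:

```cpp
#include <cassert>
#include <cmath>

const double PI = 3.14159265358979323846;

// Slant range to a wall at perpendicular distance 'perpDistCm' when
// the sensor boresight is 'deg' degrees off the perpendicular.
double slantRange(double perpDistCm, double deg) {
    return perpDistCm / std::cos(deg * PI / 180.0);
}
```

At 150 cm and a 45-degree swing, the slant range comes out to about 212 cm – past the 200 cm stop, just as the data shows.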

Posted 04/20/15

Today I made another run with the EEPROM enabled.   This time I was fortunate enough to have Wall-E encounter my wife’s stealth slippers, while still in the EEPROM collection window, so I may have captured that data.  Unfortunately, Wall-E was uncharacteristically smart enough to figure out he was stuck, and so backed away from the dreaded robot-eating slippers!


Sensor plot from Run 2. Wall-E triumphs over the ‘stealth slippers’ at the end!

Analysis:

  • This plot shows the same disconcerting initial section where the data just doesn’t look correct.  However, this time I know where it comes from; Wall-E’s normal running code is configured (now – not for Run 1) to zero out the EEPROM contents before the run starts, and this happens in setup().  When I want to read the information back out again, I need to load an entirely different program, and to do that, I have to connect the USB connector to the Arduino.  Unfortunately, this action also powers up the Arduino and sets Wall-E’s normal program running – and practically the first thing it does is start zeroing out the EEPROM.  Fortunately it only got through the first couple of hundred data points before being halted by the bootloader, but this is the reason for the initial group of zeros in the plot for Run 2 (and the initial group of too-stable data for Run 1).  In the future, I think I’ll incorporate a switchable delay into Wall-E’s programming, maybe using the now-existing pushbutton, so that no data will be zeroed out or overwritten.
  • Between points 336 and 358 the front and top-front sensor readings ‘come off the stop’ of 250 cm as they start ‘seeing’ the doorway at the end of the hall.
  • T + 24: Wall-E hits the wall to the left of the doorway at point 628 after a hard left turn, and immediately backs up and recovers.  The hard left turn occurs when Wall-E’s right sensor picks up the short wall stub to the near right and then, after making an initial correcting left turn, picks up the door itself and ‘wall-follows’ it.  This is clearly shown in the data, as the right sensor (red) readings drop below the left ones at that point.  Wall-E is programmed to use the nearer wall for wall-following.
  • T + 27: Wall-E barely misses the end of the short wall stub.  It has been ‘wall-following’ the door and then the short wall on its left side this entire time.  Interestingly, between the T + 24 hit and this near miss, the top-front sensor readings vary wildly, but the front sensor ones look fairly smooth.  My guess is that the upper sensor was alternately ‘seeing’ the approaching wall stub and the wall well beyond that point, while the lower one was just getting the wall stub.
  • T + 29: At point 684, Wall-E hits the other wall and recovers.
  • T + 33: After recovering from the second wall hit, Wall-E runs into the stealth slippers and gets stuck – yay!!
  • T + 37: Wall-E gets away!!  After 4 seconds, Wall-E manages to figure out that it is ‘stuck’ and backs away from the evil stealth slippers!  The sensor data in this 4-second period is extremely interesting, as this is the first time I’ve been able to see what Wall-E is ‘seeing’ when he gets stuck on the evil stealth slippers.  As previously assumed (but never confirmed until now!), the left sensor readings show very little variation, making the ‘stuck’ condition very easy to distinguish (at least from the human eyeball viewpoint), but the readings from the other three sensors aren’t so easy to interpret.  Contrary to expectations, and my ‘test range’ results, both the front and top-front sensor readings show significant variations (as do the readings from the ‘off-side’ sensor).  The problem is, I have no clue as to why Wall-E got away from the slippers.  The current code uses only the front and top-front sensor readings to determine the stuck condition, and I can’t see anything in the last 50 records that would cause that determination to succeed.  There are two criteria for a ‘stuck’ declaration: either neither the front nor the top-front sensor shows significant variation for the last 50 points, or the difference between the front and top-front sensors exceeds half the max front sensing distance (255 cm for this run) at any time.  There are actually two points within the last 50 that satisfy the second of these criteria (the top-front sensor reading goes to 255 while the front sensor reading stays at 120 and 80 respectively), but these don’t trigger a ‘stuck’ declaration – Wall-E stays glued to the slippers for another 2-3 seconds.  So, either I don’t understand what my own code is actually doing, or I don’t understand what is actually going into the 50-point deviation tracking arrays, or the EEPROM data isn’t what Wall-E is actually seeing, or some combination of all the above (or something else entirely!).
  • Last 50 records before EEPROM recording ended.  Somewhere in here is something that Wall-E used to declare the ‘stuck’ condition.

  • T + 39: EEPROM recording ends.
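To keep my own understanding straight, here is the two-criteria ‘stuck’ test as described above, sketched in plain C++.  This is my reconstruction from the description, not Wall-E’s actual source; the 50-point window and half-of-255-cm disagreement threshold come from the text, while the 5 cm deviation threshold is an assumed value:

```cpp
#include <cassert>
#include <cstdlib>

const int WINDOW = 50;
const int MAX_FRONT_DIST_CM = 255;
const int DEVIATION_THRESHOLD_CM = 5; // assumed value

// Spread (max minus min) of a window of readings.
int spread(const int* a, int n) {
    int lo = a[0], hi = a[0];
    for (int i = 1; i < n; i++) {
        if (a[i] < lo) lo = a[i];
        if (a[i] > hi) hi = a[i];
    }
    return hi - lo;
}

// Criterion 1: neither front sensor shows significant variation over
// the 50-point window.  Criterion 2: the current front and top-front
// readings disagree by more than half the max front sensing distance.
bool isStuck(const int* front, const int* topFront,
             int curFront, int curTopFront) {
    bool noVariation = spread(front, WINDOW) < DEVIATION_THRESHOLD_CM
                    && spread(topFront, WINDOW) < DEVIATION_THRESHOLD_CM;
    bool bigDisagreement =
        std::abs(curFront - curTopFront) > MAX_FRONT_DIST_CM / 2;
    return noVariation || bigDisagreement;
}
```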

Well, I think I’m making some progress toward understanding Wall-E’s point of view (or maybe it’s his ‘point of hear’ instead?), especially with respect to how Wall-E manages obstacles like the evil stealth slippers.  I’m not there yet, although I am encouraged by how stable the left (near wall) sensor data looked while Wall-E was stuck on the slipper.  If that behavior can be confirmed by further observations, then it might provide an easy and definitive way of detecting the ‘stuck’ condition.

So, I have modified Wall-E’s code yet again as follows:

  • Moved the EEPROM initialization code to after the LED light startup sequence, thereby (hopefully) giving the compile/load/bootloader process enough time to stop Wall-E’s main program before it wipes out the EEPROM data from the previous run.
  • Modified my ‘IsStuck()’ stuck detection function so that if it ever does detect a ‘stuck’ condition, it will
    • write the contents of the front and top-front 50-point history arrays to the EEPROM (either after the newly collected sensor data if there is room, or by overwriting the initial set of sensor readings if there isn’t)
    • Light all 4 LEDs
    • Shut down the motors
    • Enter an infinite loop

By having Wall-E shut down and go to sleep, I can ensure that the ‘stuck’ detection is the very last thing Wall-E does, which should mean that the captured array and sensor data is what Wall-E was using to make the detection.

Then I modified the EEPROM read-out program to print out the array contents in single columns for ease of plotting, with a line in between the array contents and the remaining sensor readout values.


Stay tuned….

Frank


Gatlinburg, Tn NABC Regional, Postscript

April 18, 2015

I’m writing this from home on Saturday, after travelling back from Gatlinburg.  I got up this morning, had a last big breakfast at the Pancake Cabin, was on the road about 8:30 am, and was back here by 3:30 pm.  Got another 4-5 hours of prime book-listening time on ‘Lucifer’s Hammer’, and home in plenty of time to catch up on email and write this post.  I also was treated to another ‘alternate reality’ experience as I left the Gatlinburg area.  When I came off the mountain road back into civilization (if you can call that strip ‘civilized’), I saw mile after mile of antique cars lining the road on both sides of this big 4-lane highway, with people sitting in lawn chairs on the sidewalk as if they were watching a parade!  Evidently I had stumbled through some sort of regular antique car show/sales event.  I mean, they sometimes have something like this in the parking lot of our local DQ restaurant, but this went on for miles!

Just to round out my partnership desk experiences, my morning session partner never showed, so I was left with nothing to do until Mary and I played the Friday afternoon and evening 299 pairs.  We did OK in the afternoon (52%, 0.46 MP red), but we really hit the jackpot in the evening pairs/team game, with a 61% for 3.81 MP red.  Mary was ecstatic (I was pretty happy too!), and it was a wonderful way to end my first Gatlinburg experience.

Mary and I managed a 61% game, taking 1st overall and 3.81 red

Gatlinburg Regional Mug/Pencil Holder

All in all, I had a wonderful time, got to meet and play with some tremendously nice folks, and even got a few points in the bargain (no gold though, drat!).  I’ll be back there next year, Lord willin’ and the creek don’t rise!

Frank


Adventures with Wall-E’s EEPROM, Part II

Posted 04/17/15

I haven’t had much time to work on the ‘stuck detection’ problem lately, as I have been playing duplicate bridge every day at the ACBL regional tournament here in Gatlinburg, Tennessee.  However, my pick-up partner for the morning session didn’t show, so I’m using the free time to think about the problem some more.

In my last post I showed that Wall-E gets confused in certain circumstances when what I believe to be multipath effects corrupt the data collected by the two front-facing ultrasonic sonar sensors.  My current ‘stuck detection’ algorithm relies on at least the top sensor getting good clean distance data, even if the lower one is completely or partially blocked (the ‘stealth slipper’ scenario).  When the data from both sensors is corrupted, I’m screwed.  In that post I also showed some data I collected from a ‘stuck’ scenario using Wall-E’s on-board EEPROM, and that data made it crystal clear that both sensors were getting bad data.  So, what to do?  Time to go back to the drawing board and devise a new, improved ‘stuck detection’ algorithm that takes into account the new information.

Excel plot of the front and top-front sensor data while Wall-E was stuck on the base of a cat tree

Here’s what we have:

  • The ‘stealth slipper’ scenario, where the lower sensor is partially or completely blocked, while the upper one is not.  In this case, the upper sensor may report a real distance if there is an obstacle within the set MAX_DISTANCE_CM parameter, or it may report zero if there isn’t.  The lower sensor can sometimes also report zero when it is blocked by the fuzzy slippers, as the slippers absorb or deflect enough of the ultrasonic energy to make the sensor think there’s nothing there.  The ‘stuck’ detection algorithm handles this case in a two-step process.  First, if both sensors report very little variation over time (meaning the distance to the next obstacle isn’t changing), then a ‘stuck’ condition is declared.  Alternatively, if the variation over time of the bottom sensor is different from the variation over time of the top sensor (the bottom sensor is partially blocked by the slipper, and reports widely varying distance readings), then a ‘stuck’ condition is declared.  A potential flaw in this algorithm is that either or both sensors can report a constant zero even if Wall-E is moving normally, but there simply isn’t anything within the set MAX_DISTANCE_CM parameter.  In the current algorithm, this is addressed by setting MAX_DISTANCE_CM to 400 cm for the front sensors, on the theory that in a normal wall-following scenario there is always something within 4 meters (13 feet or so).
  • The normal wall-following scenario, where (hopefully) the front sensors report a steadily declining distance, with enough variation over time to avoid triggering a ‘stuck’ detection.  In this mode, the side sensor reporting the smaller distance will be used for primary left/right guidance.
  • The multipath scenario, where the physical geometry is such that the front sensors report widely varying distances, sometimes the same and sometimes different.  The present ‘stuck detection’ algorithm fails completely here, as the top sensor reports enough variation over time to pass the first test, and there isn’t enough variation difference between the top and bottom sensors to satisfy the second one.  I don’t think there is anything to be done about the problem with the first test, as the presence of variation over time is the only reliable indicator of movement.  However, it seems to me that the second test (differential variation between the two sensors) can be improved.  The differential variation test essentially compares the average variation in the top sensor to the average variation in the bottom one, and this clearly doesn’t work for the multipath case.  But if I compared the two sensor readings on a point-by-point basis over a few seconds, I should be able to detect the multipath case.  Maybe a running count of differences that exceed a set threshold (from the plot, it looks like a 10 cm distance threshold would work)?  Then, if the count/sec exceeds some other threshold, declare a ‘stuck’ condition?

Anyway, I’m starting to think there may be some hope for Wall-E after all; maybe he won’t wind up stuck forever under a cat tree somewhere.  In any case, I think it would be wise to use my new-found EEPROM powers to collect some more ‘real-world’ data before I leap to any conclusions.  I think I might even want to include the side sensors in the reporting, so I can see what is actually happening during a typical wall-following sequence, as well as when Wall-E is actually stuck.  I think I can probably get at least 10-20 seconds of data without running out of EEPROM space, so we’ll see.

Stay tuned,

Frank

 

Gatlinburg, Tn NABC Regional, Part II

Posted 04/17/15

I’m writing this from my hotel room Friday morning, killing time before the afternoon pairs sessions start at 1pm.  This is my last day here at the Gatlinburg tournament, and I must say I’ve gotten my money’s worth (not in points, unfortunately, but in fun and experience).  I was told that this was a great regional to play in, and I certainly agree with that assessment now.  The organization was superb, the facilities great, and the people uniformly pleasant and helpful; doesn’t get any better than that (well, I could have played a lot better, but that’s not their fault;-) ).

Hugh, Judy, Dee and I played in a Knockout (KO) game Monday afternoon, as we had been told this was the best route to gold points.  In the KO format, teams of 4 compete against other teams in a 12-board Swiss Team game, and the losing team is ‘knocked out’ of the competition.  If you win a couple of rounds of this, you are pretty much assured of getting some gold points.   However, the downside is if you lose, you are out of the game entirely.  Well, we basically got our rear ends kicked, so our hopes for gold points were dashed  pretty soundly.  Being the persistent sorts, Hugh and I tried again the next day with a pickup team, and this time we got through one round before being KO’d, and we even tried a 3rd KO game the next day with another pickup pair, and again got KO’d on the first round.  This was pretty disappointing, as we thought we were bringing a pretty decent game to the party.  In retrospect it looks more like we brought knives to a gunfight! ;-).

After that, Hugh and I decided we would try our luck at the partnership desk, and I got hooked up with a ‘young’ (30 pts) player from Marietta, GA, named Mary.  She and I were able to put together a 40% for the first session of a two-session ‘Gold Rush’ pairs game, and a 57% for the second one.  We got some red points for the 57% game, but because we hadn’t done well in the first session, we missed out on the gold. :-(.  We did, however, have a lot of fun, so that was its own sort of ‘gold’.  The next day (Thursday), Mary and I played in another Gold Rush 2-session game, but were unable to get out of the 40s for either session.  After that, Mary suggested we try the stratified 2-session game for Friday, thinking that the non-Gold-Rush-fever types might be a little bit easier for her, and I agreed.

So, Mary and I will play these last two sessions today, and then I’ll be on my way back home again tomorrow morning, in my red Ford F-150 with ‘Lucifer’s Hammer’ on the CD player again.  I won’t be taking any gold back with me, but I will be taking memories of some really great bridge played with some really great people, in a very interesting setting.  Lord willin’ and the creek don’t rise, I’ll be back here again next year! ;-).

Frank

 

Gatlinburg, Tn NABC Regional, Part I

Posted 04/12/15

I’m writing this from my room in the Le Conte View Motor Lodge, right across the street from the Gatlinburg Convention Center, site of the 2015 Gatlinburg NABC Regional Tournament.  I’m down here with Hugh, Trish, Judy and Dee from the Columbus Bridge Center, hunting for gold “in them there hills” ;-).

I didn’t really know what to expect when I started this adventure – I just wanted to come down here and play some bridge, maybe earn a few points and experience a regional tournament.  I drove myself in my trusty Ford F-150 pickup (not your normal bridge vehicle, that’s for sure!).  I put the hotel address into my TomTom GPS and ‘Lucifer’s Hammer’ by Larry Niven into my CD player, and hit the road, expecting to arrive in some sleepy town in southeastern Tennessee.  What I did not expect was the abrupt transition from suburban Knoxville to some sort of combination of alternate reality, fun-house carnival, beach resort, and mountain hideaway!  I first realized something strange was going on when I started passing Hollywood-themed establishments packed side-by-side with each other; King Kong clinging to a skyscraper on this side, the Titanic complete with water rushing by its bow on the other, and a 3-story upside-down courthouse, complete with upside-down lawn and trees!  Then there was another abrupt transition from all the fun-house madness to an idyllic mountain road (albeit a modern 4-lane one) beside an idyllic mountain stream, with no human habitations in sight.  This continued up into the Smoky Mountains right to the city limits of Gatlinburg.  In fact, I had begun to wonder whether I had missed a turn or something, when, with no warning at all, I was deposited back into the fun-house/beach-resort alternate reality.  Ripley’s Museum, a quickie marriage chapel, a ‘Space Needle’ (how did I get from Tennessee to Washington state?), and everything else a person might dream of (in a nightmare about being lost on the boardwalk of a beach resort).  People everywhere, walking along the one main street.  Cars everywhere, driving at 5 mph.  Long lines of motorcycles, also driving 5 mph, most with two riders.  Pickup trucks filled with hillbillies, Confederate flags prominently displayed.  Buildings crammed together cheek-by-jowl as if every square inch of real estate was more precious than gold (and I suspect it is!).  I found my hotel without any problem, because it, like every other hotel/motel in Gatlinburg, fronts on the one main street.  As I pulled into the hotel parking lot, I noticed that the hotel buildings (there are three, I think) are all oriented perpendicular to the street, reinforcing the impression that street frontage is hugely expensive.

Rooftops looking southeast from my hotel room

Looking east from my room.  The convention center can be seen in the background, just across the street from the hotel

Looking northeast from my hotel room, the Gatlinburg 'Space Needle'

Le Conte View Motor Lodge, seen from the steps of the convention center

I got checked into my room and decided to walk around a bit and get myself oriented for tomorrow’s tournament start.  I found the convention center OK (right across the street, hard to miss), and wandered around inside a bit.  I peeked into the main playing area on the first floor, and was in for another shock.  The main room is at least 100 yards long and at least 50 yards wide, completely full from edge to edge with bridge tables.  Down the middle of this huge room was a line of bridge tables, each with a single chair sitting on top of the table; for a while I thought this was maybe a setup mistake, but there were too many of them for that.  Then I realized there was one such table for every column of tables – they must be game section boundaries of some sort.  Later, after taking the photograph below, I realized that it showed only one half of the main playing area – there’s another entire section beyond the far wall!

The main playing hall.  There is another complete playing area behind the far room divider

After this I walked from the convention center to what I think was the southern edge of town – maybe 1/2 mile, no more.  And every foot of the way was crowded with hotels, restaurants, the aforementioned quickie marriage chapel, a real church right next to it, tattoo parlors, tee-shirt/gift shops, and everything else imaginable.  The other way from the hotel was the same – every imaginable themed entertainment/fun-ride establishment, plus lots of themed restaurants (Bubba Gump Shrimp Co., for one), plus a few ‘normal’ franchises like Dunkin’ Donuts, Five Guys, and TGI Fridays thrown in for good measure.

All this incredible variety of tourist-oriented businesses crammed into such a tight area stoked my curiosity, so I spent some time reviewing the town’s history on Wikipedia.  Turns out its location at the entrance to the Great Smoky Mountains National Park made it a natural tourist stop.  From Wikipedia: in 1912, the town consisted of about 6 houses, a Baptist church, and a blacksmith’s shop.  In 1934 (the first year the park was open), 40,000 tourists visited the town, with that number swelling to 500,000 within a year!  In 1992 an entire city block burned to the ground and was subsequently rebuilt (that explains the strange new/old character of the place, I guess).  Now the place is all hotels, motels, restaurants, and arcade-style game places of all descriptions, but no houses at all (or at least I never found any).

All for now – it’s late and I want to get some sleep before the opening day tomorrow.

Frank

 

 

Adventures with Wall-E’s EEPROM

Posted 4/12/15

At the conclusion of my last post (New ‘stuck’ Detection Scheme, Part VI), I had decided that I needed to investigate the use of the EEPROM on board Wall-E’s Arduino Uno as a potential way of recording actual field data, in an effort to find out what is really happening when Wall-E gets stuck and can’t get un-stuck.

OK, so the plan is to add a pushbutton to Wall-E’s hardware to trigger sensor data collection into EEPROM.  Then I can read the data out later using another program.  I happened to have a fairly decent selection of pushbuttons from other projects, so this part wasn’t a problem.  I decided to use an unused analog port (A5), with its internal pullup resistor enabled, so all the pushbutton has to do is pull that line to ground.  Then I could modify Wall-E’s code to write sensor data to EEPROM for as long as the A5 line is LOW.

The following photos show the pushbutton hardware and wiring

Side view showing pushbutton wiring and strain relief

Side view showing pushbutton location on left wheel cover

Top view showing pushbutton connections to the Arduino Uno

Next, I modified Wall-E’s code to write the current front and top-front sensor measurements to EEPROM in an interleaved fashion whenever the A5 line was LOW.

Next, I wrote another small sketch to read the EEPROM values back out and de-interleave them into two columns of sensor measurements, so it would be convenient to use Excel to plot the results.  After testing this on the bench, it was time to let Wall-E loose on the world to ‘go get stuck’.  As if Wall-E could sense there was something wrong, it did its best not to get stuck.  I had almost run out of patience when Wall-E ran into the base of one of our cat trees and got stuck – grinding its wheels but not going anywhere.  I was able to collect several seconds of data from the two front sensors – YES!!

Wall-E stuck on the base of a cat tree

Wall-E stuck on the base of a cat tree

After loading the readout program, extracting the sensor data, and pulling it into Excel, I got the plot shown below.

Excel plot of the front and top-front sensor data while Wall-E was stuck on the base of a cat tree

This was not what I was expecting to see!  I had expected to see stable data, indicating that I had screwed up the algorithm somehow, and that fixing the algorithm would fix the ‘stuck’ detection problem.  Instead, the above plot clearly shows that it is the data that is screwed up, not the algorithm!

Looking more closely at the data, it appears that the stable sections around 85 cm are probably representative of the actual distance from Wall-E’s front sensors to the wall behind the cat tree.  However, the only explanation I can come up with for the large variations in both sensor readings is some sort of multipath effect, caused by parts of the scene that aren’t directly ahead, but are in the view of the ping sensors.  I have some experience with multipath effects from my time as a radar/antenna research scientist at The Ohio State University, and it is a very hard problem to deal with.  Eliminating or suppressing multipath effects is basically impossible with single sensors, or even multiple co-located ones; in order to address multipath, a space-diversity scheme is required, where sensors are spaced far enough apart that if a particular multipath path creates constructive interference at one sensor, it will create destructive interference at the other.  Then the data from both sensors can be averaged to achieve a better, more stable result.  This is infeasible for Wall-E, but maybe, just maybe, that isn’t totally necessary.  Maybe I can look for the pattern of variation between the two front sensors shown above.  Currently I’m looking for large differences in the total deviation between the front and top-front sensors, but this is essentially an averaging process.  Maybe it would work to compare the two sensors on a measurement-by-measurement basis?

Stay Tuned…

Frank