DC motor speed control using optical tachometer

Posted 04/13/15

In my last post I described my plans for upgrading Wall-E (my Wall-following Robot) with a LIDAR package of some type, and my thought that I might be able to use such a package to not only replace the existing front-facing ping sensors, but (with a bit of rotating magic) the side sensors as well.

In order to replace *all* the sensors, the LIDAR package would have to rotate fast enough to produce front, left, and right-side distance readings in a timely enough fashion to actually implement wall-following.  I’m not sure exactly what the requirements for wall-following are, but I think it’s safe to say that at least the measurements to the followed wall must come in at a several-per-second rate, or Wall-E could run into the wall before it figures out it is getting too close.  Measurements to the other side and to the front could be taken at a more relaxed pace if necessary, but the wall being tracked has to be measured often enough to keep up.

In order to rotate a unit such as the LIDAR-Lite from PulsedLight, I would  need a speed-controlled motor of some kind.  I considered  both stepper motors and direct-drive DC motors.  Since I already had two DC motors (the left and right wheel motors on Wall-E) and they came with ‘tachometer sensors’ (plastic disks with slots for optical wheel motion sensing), I thought I’d give this a try.  Earlier in my robot startup phase, I had obtained  some IR LED/Photodiode pairs, so I had at least the basic building blocks for a tachometer system.  I was  already speed-controlling Wall-E’s wheel  motors for steering using PWM from the Arduino Uno, so that part was already in place.  ‘All’ I had to do was couple the input from a tachometer into the already-existing PWM speed control facility and I would have a closed-loop speed-controlled rotating base for my LIDAR system – cool!

OK, so now I have all the parts for a speed-controlled motor system – I just have to assemble them. First up was a way of mounting the IR LED and IR detector so that the slots in the tachometer wheel would alternately make and break the light path between them.  In the past when I had to do something like this, I would carve or glue something out of wood, or bend up some small pieces of aluminum.  However, now I have a very nice 3D printer and the TinkerCad design package, so I could afford to do this a different way.  The confluence of hobby robotics and 3D printing allows so much more design/development freedom that it almost takes my breath away.  Instead of dicking around for a while and winding up with something half-assed that gets used anyway because it is way too much trouble to make another – better – one, a 3D-printer-based ‘rapid iteration’ approach allows a design to be evolved very quickly, with each iteration so cheap as to be literally throw-away.  To illustrate the approach, the image below shows the evolution of my IR LED/IR detector bracket, along with the original 20-slot tachometer wheel that came with the motors and a 10-slot version I printed up as a replacement (the tach signal-to-noise ratio was too low with the 20-slot original).

Evolution of an IR tach sensor bracket, along with the original and a custom-printed tach wheel

The evolution proceeded from left to right in the image.  I started with just a rectangular piece with a horizontal hole to accommodate the IR LED, and a threaded hole in the bottom to affix it to the robot chassis.  Then the design evolved a ‘foot’ to take advantage of a convenient slot in the robot chassis, for physical stability/registration purposes.  Then I added a second side with a slot in it to accommodate the  IR detector, with the tach wheel passing between the two sides.  This basic two-sided design persisted throughout the rest of the evolution, with additional material added on the IR LED side to accommodate the entire length of the IR LED.  Not shown in the photo are some internal evolutionary changes, most notably the width of the slot that allows IR energy from the LED to fall on the detector – it turns out that the detector opening should be about 1/2 the width of a tooth slot for best signal.  Each step in the above evolution cost me about 30 minutes of design time in TinkerCad, and a few pennies worth of filament.  Moreover, once I have the end design, printing more is essentially free.  Is that cool, or what?

Wall-E’s right motor being used as my tachometer test bed.  Note the piece of scotch tape on the wheel, used for manually timing RPM.

Tachometer sensor bracket, showing IR LED and tach wheel

Tachometer sensor bracket, showing slot for the IR detector

Since I was already controlling the speed of Wall-E’s motors with an Arduino Uno (albeit for steering), I simply modified the wall-following program to act as a test driver for the tach feedback system.  The output of the IR detector was connected to an analog input, and the analog readings were captured and imported into an Excel spreadsheet for analysis.
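For reference, the data-capture side of this kind of test can be as simple as the sketch below (a minimal illustration only – the A0 pin assignment and the 10 msec sample interval are my assumptions, not Wall-E’s actual values).  It just reads the IR detector on an analog input and streams the raw 0-1023 values to the serial monitor, from which they can be pasted into Excel:

    // Minimal IR-detector capture sketch (illustrative only; pin and timing are assumed)
    const int IR_DETECTOR_PIN = A0;   // IR photodetector output

    void setup()
    {
      Serial.begin(115200);
    }

    void loop()
    {
      int reading = analogRead(IR_DETECTOR_PIN);   // raw 0-1023 ADC value
      Serial.println(reading);                     // paste the serial output into Excel
      delay(10);                                   // ~100 samples/sec
    }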

The first test showed that I wasn’t getting enough signal swing between the slot and non-slot (plug) states of the tach wheel (less than 100 out of a possible 1024 levels), and this led me to start experimenting with different IR detector apertures.  As shown in the second plot below, constricting the aperture provided a marked improvement in SNR (about 3 times the peak-peak variation).

First test of the tach sensor system. Note the not-impressive variation between wheel slot and plug readings

Paper barrier with a small slot placed in front of detector aperture

The above results led directly to the final round of evolutionary changes to the tach sensor bracket, where the detector aperture was changed from a large circle (same diameter as the IR LED) to a small slit.  In addition, to further improve the SNR, the tach wheel itself was redesigned from 20 slots to 10, with the slots and plugs of equal area, and one slot was removed to create an absolute wheel position ‘index mark’.  After these changes, the tach sensor test was redone, resulting in the following plot.

IR Detector response with a narrow slit aperture and a 10-tooth wheel.

Now the signal varies from 0 to 800, allowing easy and reliable ‘off’/‘on’ state detection, as well as index mark detection.

After incorporating the physical changes noted above, an Arduino program was developed to test whether or not the motor could be accurately speed controlled.  Rather than trying to manually threshold-detect the above waveform, I simply used  Mike Schwager’s very cool EnableInterrupt Library (see  https://github.com/GreyGnome/EnableInterrupt) and set the Tach signal analog input to trigger an interrupt on each signal change.  This resulted in two interrupts per slot position, but this was easily handled in the software.
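Here is a stripped-down version of that idea (a sketch only – the pin assignments, the 1-second control interval, and the simple proportional correction are my assumptions, not Wall-E’s actual code).  The tach line fires an interrupt on every signal change (two per slot, one per edge), and once a second the measured RPM is compared to the target and the PWM duty cycle is nudged accordingly:

    #include <EnableInterrupt.h>     // Mike Schwager's pin-change interrupt library

    const int   TACH_PIN   = A0;     // IR detector output (assumed pin)
    const int   MOTOR_PWM  = 9;      // PWM pin driving the motor (assumed pin)
    const int   SLOTS      = 10;     // 10-slot tach wheel
    const float TARGET_RPM = 60.0;

    volatile unsigned long tachCount = 0;   // incremented twice per slot (CHANGE mode)
    int pwmValue = 100;                     // starting duty cycle (0-255)

    void tachISR() { tachCount++; }

    void setup()
    {
      pinMode(MOTOR_PWM, OUTPUT);
      analogWrite(MOTOR_PWM, pwmValue);
      enableInterrupt(TACH_PIN, tachISR, CHANGE);   // two interrupts per slot (one per edge)
    }

    void loop()
    {
      delay(1000);                       // measure over a 1-second window

      noInterrupts();
      unsigned long counts = tachCount;  // grab and reset the count atomically
      tachCount = 0;
      interrupts();

      float rpm = (counts / 2.0) / SLOTS * 60.0;   // counts/2 = slots seen this second

      // crude proportional correction toward the target speed
      pwmValue += (int)(0.5 * (TARGET_RPM - rpm));
      pwmValue = constrain(pwmValue, 0, 255);
      analogWrite(MOTOR_PWM, pwmValue);
    }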

After getting the program working,  I found that I could control the motor such that, when set to 60 rpm, 20 wheel revolutions (as measured by counting the scotch tape on the wheel) took exactly 20 seconds.

Well, I’m not quite sure where I’m going from here.  Now I have demonstrated that I can control a typical hobbyist/robot motor for use as a LIDAR turret.  However, I’m not entirely convinced that a spinning LIDAR can produce wall distance measurements fast enough for successful wall following, and it will be a major PITA to modify Wall-E sufficiently to find out.  For one thing, I can’t really use one of Wall-E’s drive wheels as the LIDAR turret motor without giving Wall-E an unacceptable ‘limp’  (If I did that, I guess I would have to change his name from ‘Wall-E’ to ‘Quasimodo’ ;-)).  For another, to mount the LIDAR and turret on the current Wall-E chassis would be a major project by itself, as Wall-E’s real estate is already heavily populated with ‘stuff’.

So,  I think I’m going to wait until my new 4WD robot chassis arrives (it was unfortunately delayed for a week or so), and then build it up from scratch as a LIDAR-only platform. I can then use one of the motors from Wall-E as the turret motor for the PulsedLight LIDAR-Lite system.  In the meantime, I think I’ll try and mount the NEATO XV-11 spinning LIDAR on Wall-E, as it doesn’t require any additional motors (it has its own motor built in), and see if I can successfully follow a wall using only LIDAR.

Stay Tuned…

Frank

Robot of the future – LIDAR and 4WD

Posted 04/29/15

In a whole series of posts over this last month, I described the results of my efforts to solve the ‘stealth slipper’ problem, where Wall-E gets stuck on my wife’s fuzzy blue slippers and can’t seem to reliably detect this condition.  I ran a large number of experiments which eventually convinced me that the ultrasonic sensors I have been using for wall-following and ‘stuck’ detection just aren’t up to the task.  Even the use of two forward-looking ping sensors, which I thought was going to be a really cool and elegant solution, didn’t do the job.  There is just too much data corruption from multipath effects and ‘friendly fire’ interference between ping sensors to reliably discriminate the ‘stuck on stealth slippers’ condition.

Slipper turned 90 degrees clockwise

So, it’s time to consider other, more radical, alternatives.  First and foremost, it is clear that ultrasonic sensing will not work for ‘stealth slipper’ detection/avoidance.  Other possible sensing modes are:

  • IR Ranging:  This has the advantage of being pretty cheap, but hobbyist IR ranging options like the Sharp IR Range Sensor (shown below) are fairly short range, slow, and have just an analog output with limited accuracy.

    Sharp IR Range Sensor

  • LIDAR:  This technology is fast, can be very long range, and can provide very accurate ranging information.  Unfortunately these sensors tend to be heavier and much more expensive than either the ultrasonic or IR sensor options.  The CentEye/ArduEye laser range finder using the Stonyman vision chip was a really cool, lightweight, and cheap solution, but it is unfortunately out of production and unavailable :-(.  The best of the lot for now appears to be the LIDAR-Lite sensor or the fully-assembled LIDAR package that is part of the NEATO robot vacuum cleaner.
    PulsedLight LIDAR-Lite unit

    Neato Robotic Vacuum LIDAR module

  • Optical parallax vision processing:  This is really the same as the LIDAR option, but with separate laser, receiver, and parallax computation modules.  This is what the now-unobtainable Stonyman chip/Laser/Arduino solution did, but there are other, less attractive, ways to do the same thing.  One is a combination of a cheap laser diode and the Pixy CMU Cam.  The Pixy module handles a lot of the vision pre-processing necessary for parallax range determination, and the laser diode would provide the bright, distinct spot for it to track.

    Pixy CMU Cam module

After looking through the available options, it occurred to me that something like the LIDAR-Lite might allow me to not only replace the forward-looking sensor on Wall-E, but maybe even the side ones as well.  The LIDAR-Lite is fast enough (20 msec/reading) that I should be able to use it for all three directions (left, right, forward).  In fact, if I mounted it on a servo motor using something like the Adafruit slip ring component shown here, I could implement a cool 360-degree LIDAR.

Adafruit Slip Ring with 6 contacts
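The 20 msec/reading figure is the key number here.  For what it’s worth, a bare-bones read of the LIDAR-Lite over I2C looks something like the sketch below, based on the register sequence in the published LIDAR-Lite documentation as I understand it (I2C address 0x62; write 0x04 to register 0x00 to trigger an acquisition, then read two bytes starting at 0x8f).  Treat this as a sketch to be checked against the current docs, not working turret code:

    #include <Wire.h>

    const int LIDAR_ADDR = 0x62;   // LIDAR-Lite I2C address (per the published docs)

    void setup()
    {
      Serial.begin(115200);
      Wire.begin();
    }

    void loop()
    {
      // Trigger a range acquisition (0x04 -> register 0x00)
      Wire.beginTransmission(LIDAR_ADDR);
      Wire.write(byte(0x00));
      Wire.write(byte(0x04));
      Wire.endTransmission();
      delay(20);                           // ~20 msec per reading

      // Read the 2-byte distance (cm), high byte first, starting at register 0x8f
      Wire.beginTransmission(LIDAR_ADDR);
      Wire.write(byte(0x8f));
      Wire.endTransmission();
      Wire.requestFrom(LIDAR_ADDR, 2);
      if (Wire.available() >= 2)
      {
        int distanceCm = (Wire.read() << 8) | Wire.read();
        Serial.println(distanceCm);
      }
    }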

It also occurred to me that while I’m in the process of making radical changes to Wall-E’s sensor suite, I might want to consider changing Wall-E’s entire chassis (I think this is the robot equivalent of ‘repairing’ a car by lifting up the radiator cap and driving a new car under it).  The 2-wheel plus castering nose wheel arrangement on the current Wall-E leaves a lot to be desired when navigating around our house.  Too often the castering nose wheel gets stuck at the transition from the kitchen floor to the hall carpet or the area rugs.  In addition, the nose wheel axle/sleeve space tends to collect dirt and cat hair, leading to the castering nose wheel acting more like a castering nose skid than a wheel ;-).  After some more quality time with Google, I came up with a very nice 4-wheel drive robot chassis from DFRobot – the 4WD Arduino Mobile Platform – along with the companion ‘All In One Controller’.

DFRobot 4WD Arduino Mobile Platform

DFRobot Romeo V1 All-in-one Microcontroller (ATMega 328)

Adventures with Wall-E’s EEPROM, Part VI

Posted 04/26/15

In my last post I showed there was a lot of variation in the data from Wall-E’s ping sensors – a lot more than I thought there should be.  It was apparent from this run that my hopes for ‘stuck’ detection using variation (or lack thereof) of distance readings from one or more sensors were futile – it just wasn’t going to work.

At the end of the last post, I postulated that maybe, just maybe, I was causing some of these problems by restricting the front sensor max distance to 250 cm.  It was possible (so I thought) that opening up the max distance to 400 cm might clean up the data and make it usable.  I also hatched a theory that maybe motor or movement-related vibration was screwing up the sensor data somehow, so I ran some tests designed to investigate that possibility as well.

So, I revised Wall-E’s code to bump the front sensor max distances to 400 cm and made a couple of runs in my test hallway (where the evil stealth slippers like to lurk) to test this idea.  The code adjustment had a bit of a ripple effect, because up until now I had been storing the distance data as single bytes (so I could store a distance reading of 0-255 cm), and storing 2-byte ints was going to take some changes.  Fortunately, a recently released update to the EEPROM library provides the put() and get() methods for just this purpose, so I was able to make the changes without a whole lot of trouble.
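For anyone unfamiliar with the new methods, the 2-byte storage scheme boils down to something like this (a minimal sketch – the function names and the simple running address are mine, not Wall-E’s actual code).  The only real trick is remembering to advance the EEPROM address by sizeof(int) instead of 1:

    #include <EEPROM.h>

    int eepromAddr = 0;   // next free EEPROM address

    // Store one front-sensor reading (0-400 cm) as a 2-byte int
    void storeDistance(int distCm)
    {
      if (eepromAddr + (int)sizeof(int) <= (int)EEPROM.length())
      {
        EEPROM.put(eepromAddr, distCm);
        eepromAddr += sizeof(int);        // advance by 2 bytes, not 1
      }
    }

    // Read the stored readings back out (e.g. for the Excel plots)
    void dumpDistances()
    {
      for (int addr = 0; addr < eepromAddr; addr += sizeof(int))
      {
        int distCm;
        EEPROM.get(addr, distCm);
        Serial.println(distCm);
      }
    }

    void setup()
    {
      Serial.begin(115200);
      storeDistance(387);   // example 2-byte reading (> 255 cm)
      dumpDistances();
    }

    void loop() { }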

Results:

First, I ran a number of tests with the front sensor max distance still set at 255 so I could stay with the single-byte storage system, with and without the motors engaged, and with and without mechanically induced vibration (tapping vigorously on the side of the robot chassis) while moving it toward and away from my bench wall.

Test bench run with motors disabled, without any external tapping

Test bench run with motors enabled, but no external tapping

Test bench run, motors enabled, with external tapping

From these runs, it is clear that having the motors engaged and/or applying an external disturbance does not significantly affect the sensor data quality.

Next, I enabled 2-byte EEPROM storage and a 400 cm max distance for the two front sensors. Then I did a bench test to validate that EEPROM storage/retrieval was being done properly, and then ran another field test in the ‘slipper’ hallway.

Field test with all sensors and motors enabled, 400 cm max distance on front sensors

The front and top-front sensor data still looks very crappy, until Wall-E gets within about 100 cm of the far wall, where it starts to look much better.  From this it is clear that opening up the front max distance from 255 to 400 cm did absolutely nothing to improve the situation.  Meanwhile, the offside side sensor readings are all over the place.

So, I have eliminated motor noise, mechanical vibration, and inappropriate max distance settings as the cause of the problems evident in the data.  After thinking about this for a while, I came to the conclusion that either there was still some inter-sensor interference, and/or the hallway itself was exhibiting multipath behavior.  To test both these ideas, I disabled the motors and all but the top-front sensor, and ran a series of 4 tests, culminating in a run in the ‘slipper’ hallway where I moved the robot by hand, approximating the somewhat wobbly path Wall-E normally takes.  The results are shown below.  In the first two tests I moved the robot toward and away from my test wall a number of times, resulting in a sinusoidal plot.  In the two long-range tests, I started approximately 400 cm away from the wall, moved close, and then away again, back to approximately 400 cm.

Test run in my lab and in the ‘slipper’ hall, top front sensor only (400 cm max distance).

The first two tests (‘Bench 1’ and ‘Bench 2’) validated that clean data could be acquired, and the ‘Lab Long Range’ test validated that the ping sensor can indeed be used out to 400 cm (4 meters).  However, when the field test was run, significant variation was noted in the 150-350 cm range, and there doesn’t seem to be any good explanation for this other than multipath.  And, to make matters worse, if one sensor is exhibiting multipath effects, it’s a sure bet that they all are, meaning the possibility (probability?) of multiple first, second, and third-order inter-sensor interference effects.

After this last series of tests, I’m pretty well convinced that the use of multiple ping sensors for navigation in confined areas with multiple ‘acoustically hard’ walls is not going to work.  I can probably still use them for left/right wall-following, but not for front distance sensing, and certainly not for ‘stuck’ detection.

So, what to do?  Well, back to Google, of course!  I spent some quality time on the web, and came up with some possibilities:

  • The Centeye Stonyman Vision Chip and a laser diode. This is a  very cool setup that would be perfect for my needs.  Very small, very light, very elegant, and (hopefully) very cheap laser range finder – see  https://www.youtube.com/watch?v=SYZVOF4ERHQ.  There is only one thing wrong about this solution – it’s no longer available! :-(.
  • The ‘Lidar Lite’ laser range finder component available from Trossen Robotics (http://www.trossenrobotics.com/lidar-lite).  This is a complete, self-contained LIDAR kit, and it isn’t  too big/heavy, or  too expensive (there might be some argument about that second claim, but what the heck).
  • The Pixy CMUCam, also available from Trossen (http://www.trossenrobotics.com/pixy-cmucam5).  This isn’t quite as self-contained as it  needs a separate laser and some additional programming smarts, but it might be a better fit for my application.

So, I ordered the LIDAR-lite and the CmuCAM products from Trossen, and they will hopefully be here in a week or so.  Maybe then I can make some progress on helping Wall-E defeat his nemesis – the evil stealth slippers!

Stay tuned…

Frank

Adventures with Wall-E’s EEPROM, Part V

Posted 04/22/15

In my last post I analyzed  a stuck/un-stuck scenario where Wall-E got stuck on a coat rack leg, and then got himself unstuck a few seconds later.  This post deals with a similar scenario, but with the evil stealth slippers instead of the coat rack, and this time Wall-E didn’t get away :-(.

 

 

EEPROM data from Wall-E slipper run. Note large variations on all four channels.

Last 50 records, showing large amount of variation on all four channels.

Analysis:

  • T = 09: Wall-E hits his nemesis, the evil Stealth Slippers
  • T + 37: Wall-E signals that it has filled the EEPROM.  No ‘stuck’ detection, so no sensor array data.
  • My initial impression of the 4-channel EEPROM record was “Geez, it’s just random garbage!”.  There does not appear to be any real structure to the data, and certainly no stable data from the left and right side sensors.  Moreover, the top front sensor – the one that was supposed to provide nice stable data even in the presence of the stealth slippers – appears to be every bit as unstable as the others – ugh!
  • In order to more closely examine the last few seconds of data, I created a new plot using just the last 50 or so records.  From this it is clear that both the left and right side sensor data is unstable and unusable – both channels show at least one max-distance (200 cm for the side sensors) excursion.  The front and top-front data doesn’t fare much better, with 4-5 major excursions per second.
  • The only bright spot in this panoply of gloom is that the front and top-front sensor data shows a lot of intra-sensor variation, meaning that this might be used to effect a ‘stuck’ declaration.  In the last 50 records, there are 4 records where ABS(front-topfront) > 85 (i.e. > MAX_FRONT_DISTANCE_CM / 3).  Looking more closely at the entire EEPROM record, I see there are 18 such instances – about one instance per 50 records or so, or about 1 per second.  Unfortunately, at least 6 of these occur in the first third or so of the entire record, meaning they occur before Wall-E gets stuck on the slipper.  So much for that idea :-(.

Despite the gloom and doom, this was actually a very good run, in that it provided high-quality data about the ‘stealth slipper detection’ problem.  The data shows that one of my ideas for detection (the intra-front-sensor variation idea) simply won’t work, as that variation is present in all the data, not just when Wall-E is stuck.  At least I don’t have to code the detection scheme up and then have it fail! ;-).

It is just barely possible that I have caused this problem by restricting the max detection distance for the front sensors to 250 cm in an effort to mitigate the multipath data corruption problem.  So, I’m going to make another run (literally) at the slippers but with the max front distance set out to 400 cm versus the existing 255 cm limit.  However, this will cut the recording capacity  in half, as I’ll have to use 2 bytes per record.  I can compensate for this by not storing the left and right sensor data, or by accepting a shorter recording time, or some combination of these.  One idea is to store the left & right sensor data as bytes, and the front sensor data as ints.  This will require modifying the EEPROM readout code to deal with the different entry lengths, but oh well….

Stay tuned…

Frank

Adventures with Wall-E’s EEPROM, Part IV

Posted 04/22/15

In my last post, I showed some results from Wall-E’s EEPROM data captures, including a run where Wall-E got stuck on the wife’s evil stealth slippers – and then unexpectedly got ‘unstuck’.  I couldn’t explain Wall-E’s miraculous recovery from the captured EEPROM sensor data, so I was left with two equally unpalatable conclusions; either I didn’t understand Wall-E’s program, or Wall-E was ‘seeing’ something besides what was captured in the EEPROM.

So, I decided to modify Wall-E’s programming to capture additional data when/if Wall-E got stuck – and then unstuck – on future runs.  The mods were described in the last post, but basically the idea was to capture the contents of both the 50-point front and top front sensor data arrays, along with the current values of all four sensors.  To do this I re-purposed  the first 100  EEPROM locations to store the sensor array data, figuring that the earliest points would be the least likely to be relevant for post-run analysis.
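The array-capture code itself is trivial; a rough sketch of the idea is shown below.  The 50-point size and the first-100-bytes layout come from the description above, but the array names and everything else are assumptions, not Wall-E’s actual code:

    #include <EEPROM.h>

    const int ARRAY_SIZE = 50;
    byte aFrontDist[ARRAY_SIZE];      // rolling front-sensor history (cm), name assumed
    byte aTopFrontDist[ARRAY_SIZE];   // rolling top-front-sensor history (cm), name assumed

    // Called once when a 'stuck' condition is declared: re-purpose EEPROM bytes 0-99
    void DumpHistoryArraysToEEPROM()
    {
      for (int i = 0; i < ARRAY_SIZE; i++)
      {
        EEPROM.write(i, aFrontDist[i]);                  // bytes 0-49: front history
        EEPROM.write(ARRAY_SIZE + i, aTopFrontDist[i]);  // bytes 50-99: top-front history
      }
    }

    // empty setup/loop so the sketch compiles standalone
    void setup() { }
    void loop()  { }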

After making the mods, and testing them on the bench in debug mode, I took Wall-E out for another field trial, hoping he would do the same thing as before – namely getting stuck and then un-stuck on/from the evil stealth slippers.

As it turned out, Wall-E’s next run produced good news and bad news. The good news is that Wall-E did indeed get stuck and then un-stuck, providing some very good decision data.  The bad news was that it got stuck on a coat rack leg (an easier problem for Wall-E) instead of the stealth slippers.  Still, it  did provide an excellent field validation of the new data collection scheme, as shown below.

Remaining EEPROM Contents at the point where Wall-E declares ‘Stuck’.

Top and Top Front Sensor Array Contents at the point where Wall-E declares ‘Stuck’

Analysis:

  • T + 13:  Wall-E gets stuck on a coat rack leg.  He got stuck because I  hadn’t yet updated the left front bumper to the new non-stick style, but this was a good thing ;-).  Just before Wall-E hits the leg, the left sensor distance reading changes rapidly from about 40 cm to about 20, as the left sensor picks up the coat rack leg on its left.
  • T + 13-15: Wall-E tries to drive around the coat rack leg it is stuck on, causing it to turn about 45 degrees to the left. During this period the right, front, and top-front sensor readings vary wildly, but the left sensor reading stays quite stable (almost certainly  reading the distance to the coat rack leg on the left).
  • T + 16:  Wall-E has stopped moving, and consequently the top and top-front sensor readings settle down.  Interestingly, the right distance sensor readings don’t settle down, even though there are no obstacles within the max detection distance (200 cm) on that side – no idea why.
  • T + 20: Wall-E declares the ‘stuck’ condition.  This is almost certainly due to  the total deviation of the current contents of  the front sensor reading arrays falling below the threshold (5 cm in this case).  From the data, the front sensor array deviation is 4 cm, while the top-front deviation is still high (161) due to a 209 value that hasn’t quite yet fallen off the end.

So, the captured data for this run is entirely consistent with the stuck condition and recovery, with the possible exception of the anomalous right sensor readings, which should show a constant 200 cm but don’t, for some unknown reason.

Next up – another try at getting Wall-E stuck on the stealth slippers, with (hopefully) a ‘stuck’ condition’ detection to boot!

Stay tuned…

Frank

Adventures with Wall-E’s EEPROM, Part III

Posted 04/19/15

In my last post I described how I might be able to use the Arduino Uno’s onboard EEPROM to see the world from Wall-E’s point of view, at least for a few seconds at a time.  So, now that I’m back home from the Gatlinburg, Tn duplicate bridge tournament, I decided to try my luck at this.

First, I had to make some additional modifications to Wall-E’s program:

  • Revised the sensor data retrieval routines to substitute MAX_FRONT_DISTANCE_CM for any zero reported from either front sensor, and to substitute MAX_LR_DISTANCE_CM for any zero reported from either the left or right sensor.  This takes care of the problem of sensor readings abruptly transitioning from a near-maximum reading to zero, and back again (see the sketch after this list).
  • Revised the EEPROM storage routine to store readings from all four sensors instead of just the two front ones.  Data will be recorded until the EEPROM is full, at which point Wall-E will blink all four taillight LED’s twice.  Wall-E will continue to run, but won’t store any more data.
  • Revised the separate EEPROM readout program to properly read out all four sensor values, instead of just the two front ones.
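Here is a rough sketch of the first two changes (zero substitution plus 4-sensor storage).  The constants, names, and the one-byte-per-sensor record layout are assumptions for illustration, not Wall-E’s actual code:

    #include <EEPROM.h>

    const int MAX_FRONT_DISTANCE_CM = 250;   // assumed values
    const int MAX_LR_DISTANCE_CM    = 200;

    int eepromAddr = 0;

    // A zero from the ping driver means "nothing in range"; map it to the max
    // distance so the plots don't jump from a near-max reading down to zero
    byte FixupFront(int cm) { return (cm == 0) ? MAX_FRONT_DISTANCE_CM : cm; }
    byte FixupSide(int cm)  { return (cm == 0) ? MAX_LR_DISTANCE_CM    : cm; }

    // Store one 4-byte record: left, right, front, top-front (all <= 255 cm)
    void StoreRecord(int leftCm, int rightCm, int frontCm, int topFrontCm)
    {
      if (eepromAddr + 4 <= (int)EEPROM.length())
      {
        EEPROM.write(eepromAddr++, FixupSide(leftCm));
        EEPROM.write(eepromAddr++, FixupSide(rightCm));
        EEPROM.write(eepromAddr++, FixupFront(frontCm));
        EEPROM.write(eepromAddr++, FixupFront(topFrontCm));
      }
      // else: EEPROM is full -- this is where the real program blinks the taillights
    }

    void setup() { }
    void loop()  { }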

After making the modifications (and fixing the inevitable bugs), I set Wall-E loose on the world with its new EEPROM-storage capabilities.  I video’d the run so I would later be able to correlate the EEPROM data with what Wall-E was actually doing at the time.  The first run went very well, with Wall-E behaving ‘normally’ (whatever ‘normal’ means for a robot!).  I videoed the run until Wall-E blinked its tail-lights to signify it was done recording, and I noted that the run had lasted about 30 seconds (which was too bad, because Wall-E got stuck on a coat rack leg just after running out of storage space! ;-).

The video and the Excel plot are shown below, followed by my post-run analysis.

EEPROM data from Wall-E’s first instrumented run

Analysis:

  • The very first thing I noticed about the Excel plot is that there is something badly wrong with the first part  of the data,  up to about point 400; it’s way too constant.    After about 400, it looks like Wall-E collected ‘good’ data, although a lot of it looks pretty frightening!
  • I thought Wall-E started out tracking the wall on the right, but the data doesn’t support that – it appears it was tracking on the left wall from the get-go.
  • From the video it looks like Wall-E’s tracking period is between one and two seconds, and from the plot this corresponds to about 50 points.  This correlates reasonably well with the observation that it takes about 30 seconds to fill the 1024-byte EEPROM.  1024 divided by 30 gives about 34 points/sec, so 50 points would give a period of about 50/34 = 1.5 seconds.  This is actually quite good news, because it means that the 50-point array I was using earlier as part of the ‘stuck’ detection routine can probably capture an entire tracking period.  The amplitude of the tracking response appears to be about 8-10 cm in ‘free space’ and about half that when Wall-E’s castering nose wheel was hitting the rug edge after about point 580.
  • From the plot, it appears the front and top-front sensors were reporting obstacles in view even though there weren’t any, at least not for the first part of the initial wall-tracking phase of the run.  This is probably due to the fairly large heading deviations made by Wall-E even while wall tracking, possibly coupled with some multipath effects.  This is actually good news as  there should be significant variation in front sensor readings during normal operation, even if there is no dead-ahead obstacle within sensor range.   Also, it is clear that once Wall-E  came within about 60-75 cm of the door, it ‘captured’ the attention of both the front and top-front sensors, which then tracked the door very nicely all the way down to the 10 cm avoidance threshold.  The same thing happened with the side wall, although because Wall-E was moving slower, the distance reversal happened a bit earlier.
  • After turning away from the side-wall obstacle, Wall-E tracked the same wall, but in the opposite direction and at a larger distance (about 60 cm vs about 40 on the way in).  It is interesting to note that the reported distances from the front sensors also went up, presumably due to the longer slant-distance to the wall and back during the toward-wall heading excursions (and also probably because Wall-E’s heading excursions weren’t as large due to being slower on the carpet).
  • The ‘off-side’ sensor (right sensor on the way in, and left sensor on the way out) showed very large variations – large enough to hit the stops (200 cm max distance) occasionally on the way in, and on almost every heading swing on the way out.  The measured distance from Wall-E’s average position on the way out to the far wall is about 150 cm, so a 45 degree heading variation would create a slant range distance of over 200 cm.  Smaller heading deviations would also likely create a situation where most of the left sensor’s ping energy bounced away from the sensor, creating a ‘nothing in view’ response.

Posted 04/20/15

Today I made another run with the EEPROM enabled.   This time I was fortunate enough to have Wall-E encounter my wife’s stealth slippers, while still in the EEPROM collection window, so I may have captured that data.  Unfortunately, Wall-E was uncharacteristically smart enough to figure out he was stuck, and so backed away from the dreaded robot-eating slippers!

 

Sensor plot from Run 2. Wall-E triumphs over the ‘stealth slippers’ at the end!

Analysis:

  • This plot shows the same disconcerting initial section where the data just doesn’t look correct.  However, this time I know where it comes from; Wall-E’s normal running code is configured (now – not for Run 1) to zero out the EEPROM contents before the run starts, and this happens in setup().  When I want to read the information back out again, I need to load an entirely different program, and to do that, I have to connect the USB connector to the Arduino.  Unfortunately, this action also powers up the Arduino and sets Wall-E’s normal program running – and practically the first thing it does is start zeroing out the EEPROM.  Fortunately it only got through the first couple of hundred data points before being halted by the bootloader, but this is the reason for the initial group of zeros in the plot for Run 2 (and the initial group of too-stable data for Run 1).  In the future, I think I’ll incorporate a switchable delay into Wall-E’s programming, maybe using the now-existing pushbutton, so that no data will be zeroed out or overwritten.
  • Between points 336 and 358 the front and top-front sensor readings ‘come off the stop’ of 250 cm as they start ‘seeing’ the doorway at the end of the hall.
  • T + 24: Wall-E hits the wall to the left of the doorway at point 628 after a hard left turn, and immediately backs up and recovers.  The hard left turn occurs when Wall-E’s right sensor picks up the short wall stub to the near right and then, after making an initial correcting left turn, picks up the door itself and ‘wall-follows’ it.  This is clearly shown in the data, as the right sensor (red) readings drop below the left ones at that point.  Wall-E is programmed to use the nearer wall for wall-following.
  • T + 27: Wall-E barely misses the end of the short wall stub.  It has been ‘wall-following’ the door and then the short wall on its left side this entire time.  Interestingly, between the T + 24 hit and this near miss, the top-front sensor readings vary wildly, but the front sensor ones look fairly smooth.  My guess is that the upper sensor was alternately ‘seeing’ the wall stub approaching and the wall well beyond that point, while the lower one was just getting the wall stub.
  • T+ 29: At point 684, Wall-E hits the other wall and recovers.
  • T + 33: After recovering from the second wall hit, Wall-E runs into the stealth slippers and gets stuck – yay!!
  • T + 37: Wall-E gets away!!  After 4 seconds, Wall-E manages to figure out that it is ‘stuck’ and backs away from the evil stealth slippers!  The sensor data in this 4-second period is extremely interesting, as this is the first time I’ve been able to see what Wall-E is ‘seeing’ when he gets stuck on the evil stealth slippers.  As previously assumed (but never confirmed until now!) the left sensor readings show very little variation, making the ‘stuck’ condition very easy to distinguish (at least from the human eyeball viewpoint), but the readings from the other three sensors aren’t so easy to interpret.  Contrary to expectations, and my ‘test range’ results, both the front and top-front sensor readings show significant variations (as do the readings from the ‘off-side’ sensor).  The problem is, I have no clue as to why Wall-E got away from the slippers.  The current code uses only the front and top-front sensor readings to determine the stuck condition, and I can’t see anything in the last 50 records that would cause that determination to succeed.  There are two criteria for a ‘stuck’ declaration: either neither the front nor top-front sensor shows significant variation over the last 50 points, or the difference between the front and top-front sensors exceeds half the max front sensing distance (255 cm for this run) at any time.  There are actually two points within the last 50 that satisfy the second of these criteria (the top-front sensor reading goes to 255 but the front sensor reading stays at 120 and 80, respectively), but these don’t trigger a ‘stuck’ declaration – Wall-E stays glued to the slippers for another 2-3 seconds.  So, either I don’t understand what my own code is actually doing, or I don’t understand what is actually going into the 50-point deviation tracking arrays, or the EEPROM data isn’t what Wall-E is actually seeing, or some combination of all the above (or something else entirely!).
    Last 50 records before EEPROM recording ended. Somewhere in here is something that Wall-E used to declare the ‘stuck’ condition.

  • T + 39: EEPROM recording ends.

Well, I think I’m making some progress toward understanding Wall-E’s point of view (or maybe it’s his ‘point of hear’ instead?), especially with respect to how Wall-E manages obstacles like the evil stealth slippers.  I’m not there yet, although I am encouraged by how stable the left (near wall) sensor data looked while Wall-E was stuck on the slipper.  If that behavior can be confirmed by further observations, then it might be an easy and definite way of ‘stuck’ detection.

So, I have modified Wall-E’s code yet again as follows:

  • Moved the EEPROM initialization code to after the LED light startup sequence, thereby (hopefully) giving the compile/load/bootloader process enough time to stop Wall-E’s main program before it wipes out the EEPROM data from the previous run.
  • Modified my ‘IsStuck()’ stuck detection function so that if it ever does detect a ‘stuck’ condition, it will
    • write the contents of the front and top-front 50-point history arrays to the EEPROM (either after the newly collected sensor data if there is room, or by overwriting the initial set of sensor readings if there isn’t)
    • Light all 4 LEDs
    • Shut down the motors
    • Enter an infinite loop

By having Wall-E shut down and go to sleep, I can ensure that the ‘stuck’ detection is the very last thing Wall-E does, which should  mean that the captured array and sensor data is what Wall-E was using to make the detection.

Then I modified the EEPROM read-out program to print out the array contents in single columns  for ease of plotting, with a line in between the array contents and the remaining sensor readout values.

 

 

Stay tuned….

Frank

Adventures with Wall-E’s EEPROM, Part II

Posted 04/17/15

I haven’t had much time to work on the ‘stuck detection’ problem lately, as I have been playing duplicate bridge every day at the ACBL regional tournament here in Gatlinburg, Tennessee.  However, my pick-up partner for the morning session didn’t show, so I’m using the free time to think about the problem some more.

In my last post I showed that Wall-E gets confused in certain circumstances when what I believe to be multipath effects corrupt the data collected by the two front-facing ultrasonic sonar sensors.  My current ‘stuck detection’ algorithm relies on at least the top sensor getting good clean distance data, even if the lower one is completely or partially blocked (the ‘stealth slipper’ scenario).  When the data from both sensors is corrupted, I’m screwed.  In my last post I showed some data I collected from a ‘stuck’ scenario using Wall-E’s on-board EEPROM, and this data made it crystal clear that both sensors were getting bad data.  So, what to do?  Time to go back to the drawing board and devise a new, improved ‘stuck detection’ algorithm that takes into account the new information.

Excel plot of the front and top-front sensor data while Wall-E was stuck on the base of a cat tree

Here’s what we have:

  • The ‘stealth slipper’ scenario, where the lower sensor is partially or completely blocked, while the upper one is not.  In this case, the upper sensor may report a real distance if there is an obstacle within the set MAX_DISTANCE_CM parameter, or it may report zero if there isn’t.  The lower sensor can sometimes also report zero when it is blocked by the fuzzy slippers, as the slippers absorb or deflect enough of the ultrasonic energy to make the sensor think there’s nothing there.  The ‘stuck’ detection algorithm handles this case in a two-step process.  First, if both sensors report very little variation over time (meaning the distance to the next obstacle isn’t changing), then a ‘stuck’ condition is declared.  Or, if the deviation over time of the bottom sensor is different from the variation over time of the top sensor (the bottom sensor is partially blocked by the slipper, and reports widely varying distance readings), then a ‘stuck’ condition is declared.  A potential flaw in this algorithm is that either or both sensors can report a constant zero even if Wall-E is moving normally, but there simply isn’t anything within the set MAX_DISTANCE_CM parameter.  In the current algorithm, this is addressed by setting the MAX_DISTANCE_CM parameter to 400 cm for the front sensors, on the theory that in a normal wall-following scenario, there is always something within 4 meters (13 feet or so).
  • The  normal wall-following scenario, where (hopefully) the front sensors report a steadily declining distance, with enough variation over time to avoid triggering a ‘stuck’ detection.  In this mode, the side sensor reporting the smaller distance will be  used for primary left/right guidance.
  • The multipath scenario, where the physical geometry is such that the front sensors report widely varying distances, sometimes the same, and sometimes different.  The present ‘stuck detection’ algorithm fails completely here, as the top sensor reports enough variation over time to pass the first test, and there isn’t enough variation difference between the top and bottom sensors to satisfy the second one.  I don’t think there is anything to be done about the problem with the first test, as the presence of variation over time is the only reliable indicator of movement.  However, it seems to me that the second test (differential variation between the two sensors) can be improved.  The differential variation test essentially compares the average variation in the top sensor to the average variation in the bottom one, and this clearly doesn’t work for the multipath case.  But, if I compared the two sensor readings on a point-by-point basis over a few seconds, I should be able to detect the multipath case.  Maybe a running count of differences that exceed a set threshold (from the plot, it looks like a 10 cm distance threshold would work)?  Then if the count/sec exceeds some other threshold, declare a ‘stuck’ condition?  (A rough sketch of this idea appears just after this list.)
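To make the point-by-point idea concrete, here is one way it might look in code.  This is purely a sketch – the 10 cm threshold comes from the plot above, but the window size and count threshold are just placeholders that would have to be tuned:

    // Point-by-point front vs. top-front comparison (all thresholds are assumptions)
    const int DIFF_THRESHOLD_CM     = 10;   // per-reading difference that counts as 'large'
    const int COUNT_WINDOW          = 50;   // readings per evaluation window (~1.5 sec)
    const int STUCK_COUNT_THRESHOLD = 10;   // large differences per window that mean 'stuck'

    int diffCount   = 0;   // running count of large differences in this window
    int windowCount = 0;   // readings seen so far in this window

    // Call once per measurement cycle; returns true when the window says 'stuck'
    bool CheckMultipathStuck(int frontCm, int topFrontCm)
    {
      if (abs(frontCm - topFrontCm) > DIFF_THRESHOLD_CM)
      {
        diffCount++;
      }
      if (++windowCount >= COUNT_WINDOW)
      {
        bool stuck = (diffCount >= STUCK_COUNT_THRESHOLD);
        diffCount   = 0;
        windowCount = 0;
        return stuck;
      }
      return false;
    }

    // empty setup/loop so the sketch compiles standalone; in Wall-E the function
    // above would be called from the main sensor loop
    void setup() { }
    void loop()  { }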

Anyway, I’m starting to think there may be some hope for Wall-E after all; maybe he won’t wind up stuck forever under a cat tree somewhere.  In any case, I think it would be wise to use my new-found EEPROM powers to collect some more ‘real-world’ data before I leap too far to a conclusion.  I think I might even want to include the side sensors in the reporting, so I can see what is actually happening during a typical wall-following sequence, as well as when Wall-E is actually stuck.  I think I can probably get at least 10-20 seconds of data without running out of EEPROM space, so we’ll see.

Stay tuned,

Frank

Adventures with Wall-E’s EEPROM

Posted 04/12/15

At the conclusion of my last post (New ‘stuck’ Detection Scheme, Part VI), I had decided that I needed to investigate the use of the EEPROM on board Wall-E’s Arduino Uno as a potential way of recording actual field data, in an effort to find out what is really happening when Wall-E gets stuck and can’t get un-stuck.

OK, so the plan is to add a pushbutton to Wall-E’s hardware to trigger sensor data collection into EEPROM.  Then I can read the data out later using another program.  I happened to have a fairly decent selection of pushbuttons from other projects, so this part wasn’t a problem.  I decided to use an unused analog port (A5), with its pullup resistor enabled, so all the pushbutton has to do is pull that line to ground.  Then I could modify Wall-E’s code to write sensor data to EEPROM for as long as the A5 line is LOW.

The following photos show the pushbutton hardware and wiring

Side view showing pushbutton wiring and strain relief

Side view showing pushbutton location on left wheel cover

Top view showing pushbutton connections to the Arduino Uno

Next, I modified Wall-E’s code to write the current front and top-front sensor measurements to EEPROM in an interleaved fashion whenever the A5 line was LOW.
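Stripped of everything else, the pushbutton-gated logging looks roughly like this (the A5 pin and the interleaved even/odd layout are from the description above; the function name and everything else are assumptions):

    #include <EEPROM.h>

    int eepromAddr = 0;   // next free EEPROM address

    void setup()
    {
      pinMode(A5, INPUT_PULLUP);   // pushbutton pulls A5 to ground when pressed
    }

    // Record one front/top-front pair, but only while the button is held down
    void LogFrontSensors(byte frontCm, byte topFrontCm)
    {
      if (digitalRead(A5) == LOW && eepromAddr + 2 <= (int)EEPROM.length())
      {
        EEPROM.write(eepromAddr++, frontCm);      // even addresses: front sensor
        EEPROM.write(eepromAddr++, topFrontCm);   // odd addresses:  top-front sensor
      }
    }

    void loop()
    {
      // in Wall-E's program the real sensor readings are passed in here
      LogFrontSensors(85, 87);
      delay(50);
    }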

Next I wrote another small sketch to read the EEPROM values back out and de-interleave them into two columns of sensor measurements so it would be convenient to use Excel to plot results.  After testing this on the bench, it was time to let Wall-E loose on the world to ‘go get stuck’.  As if Wall-E could sense there was something wrong, it did its best to not get stuck.  I had almost run out of patience when Wall-E ran into the base of one of our cat trees and got stuck – grinding its wheels but not going anywhere.  I was able to collect several seconds of data from the two front sensors – YES!!

Wall-E stuck on the base of a cat tree

After loading the readout program, extracting the sensor data and sucking it into Excel, the following plot shows what I collected.

Excel plot of the front and top-front sensor data while Wall-E was stuck on the base of a cat tree

This was  not what I was expecting to see!  I had expected to see stable data, indicating that I had screwed up the algorithm somehow, and fixing the algorithm would fix the ‘stuck’ detection problem.   Instead, the above plot clearly shows that it is  the data that is screwed up,  not the algorithm!

Looking more closely at the data, it appears that the stable sections around 85 cm are probably representative of the actual distance from Wall-E’s front sensors to the wall behind the cat tree.  However, the only explanation I can come up with for the large variations in both sensor readings is some sort of multipath effect, caused by parts of the scene that aren’t directly ahead, but  are in the view of the ping sensors.  I have some experience with multipath effects from my time as a radar/antenna research scientist at The Ohio State University, and it is a very hard  problem to deal with.  Eliminating or suppressing multipath effects is basically impossible with single sensors, or even multiple co-located ones; in order to address multipath, a space-diversity scheme for sensors is required, where sensors are spaced far enough apart so that if a particular multipath path creates  constructive interference at one sensor, it will create destructive interference at the other.  Then the data from both sensors can be averaged to achieve a better, more stable result.  This is  infeasible to do for Wall-E, but maybe, just maybe, that isn’t totally necessary.  Maybe I can look for the pattern of variation between the two front sensors shown above. Currently I’m looking for large differences in the  total deviation  between the front and top-front sensors, but this is essentially an averaging process.  Maybe it would work to compare the two sensors on a measurement by measurement basis?

Stay Tuned…

Frank

New ‘stuck’ Detection Scheme, Part VI

Posted 04/10/15

After an exhaustive (and exhausting!) set of ‘indoor range’ tests that (I thought) gave me a very good understanding of the ‘stuck’ detection issue, I made the changes I thought were necessary and sent Wall-E back out into the real world – where he promptly got stuck and  didn’t  recover! He got stuck climbing up onto the lip of a rug – and sat there merrily grinding away for what seemed like forever (but was only for a minute or so) before I took mercy on it.

Clearly the situation ‘in the field’ isn’t quite as simple as my ‘indoor range’ configuration, but the differences are  not obvious.  In an effort to figure this out without running around in circles, I’m trying to change just one thing at a time, as follows:

  • Changed the ‘STUCK_DIST_DEVIATION_THRESHOLD’ from 5 cm to 10 cm.  This helped a little, and didn’t seem to increase the frequency of false positives significantly.
  • Changed the  MAX_DISTANCE_CM from 200 cm to 100 cm, on the theory that in the ‘real world’ there is more clutter beyond 100 cm that can cause significant measurement deviation.  This change caused Wall-E to declare a ‘stuck’ condition almost continually – and I have no idea how  THAT happened!
  • Changed the  MAX_DISTANCE_CM back to 200 to verify that Wall-E’s behavior changed back to what it was before the change.  Check.
  • Changed the  MAX_DISTANCE_CM back to 100 and removed the guard code around the call to  UpdateWallFollowMotorSpeeds() in  MoveAheadTilStuck().  Changing the MAX_DISTANCE_CM back to 100 caused the false ‘stuck’ declarations to resume, and removing the guard code had no effect one way or the other.

So, what’s the deal with changing the MAX_DISTANCE_CM parameter?  It is only used in two places in the code – in the NewPing() constructor for all four sensors, and in the line ‘frontdistval = (frontdistval > 0) ? frontdistval : MAX_DISTANCE_CM + 1;’ in MoveAheadTilStuck().  This line converts a zero reading from the front sensor to MAX_DISTANCE_CM + 1 (101 in this case).  Since I’m no longer using the front sensor reading for the ‘stuck’ determination, I have no clue why this line (or lack of it, for that matter) would make any difference.

The only other potential clue in this whole mess is the way the sensor reading arrays are being handled.  The idea was that when a ‘stuck’ detection occurred, the arrays should be re-initialized in such a way that another ‘stuck’ detection could not occur until after another ARRAY_SIZE measurements have been collected.  The way I chose to do that was to simply place a large positive reading followed by a zero in the top of each of the 4 arrays, guaranteeing (I thought!) that those two adjacent values would prevent a ‘stuck’ detection for at least ARRAY_SIZE measurement cycles.  In order to verify that this ‘poison pill’ feature is actually working, I added the ‘PrintDistInfo()’ function from my PingTest project to Wall-E4 and ran it in debug mode on my bench.  Using this technique, I was able to watch (albeit slowly) the ‘poison pill’ values roll through my distance sensor value arrays.  So, it appears that is working fine, and the ‘stuck’ detection algorithm is working perfectly, too – in that it detects the ‘stuck’ condition as soon as it is able to (all the real distance information is pretty static with Wall-E sitting on the bench with no power to the motors).
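For reference, the ‘poison pill’ and the deviation test it is meant to defeat amount to something like the following.  This is a sketch based on the description only – the array name, the use of max-minus-min as ‘total deviation’, and the assumption that the newest readings sit at the end of the array are all mine:

    const int ARRAY_SIZE = 50;
    const int STUCK_DIST_DEVIATION_THRESHOLD = 10;   // cm

    int aFrontDist[ARRAY_SIZE];   // rolling front-sensor history, newest at the end (assumed)

    // Total deviation = spread between the largest and smallest reading in the array
    int TotalDeviation(const int a[], int n)
    {
      int lo = a[0], hi = a[0];
      for (int i = 1; i < n; i++)
      {
        if (a[i] < lo) lo = a[i];
        if (a[i] > hi) hi = a[i];
      }
      return hi - lo;
    }

    // After a 'stuck' detection, seed the newest two slots with a large value and a
    // zero; the deviation stays huge until that pair rolls off the end of the array,
    // which takes ARRAY_SIZE measurement cycles
    void PoisonArray(int a[])
    {
      a[ARRAY_SIZE - 2] = 400;
      a[ARRAY_SIZE - 1] = 0;
    }

    // Deviation-based half of the 'stuck' test
    bool IsStuckDeviation()
    {
      return TotalDeviation(aFrontDist, ARRAY_SIZE) < STUCK_DIST_DEVIATION_THRESHOLD;
    }

    void setup() { }
    void loop()  { }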

So, the only conclusion I can reach with this information is that the MAX_DISTANCE_CM reduction from 200 to 100 significantly reduced measurement deviation, to the point where Wall-E was declaring ‘stuck’ even when he wasn’t.  This tracks with another observation – Wall-E seemed to declare ‘stuck’ just as the distance from one or the other side sensor increased, like an open door or something like that.  Apparently this causes an ‘out of bounds’ (zero) return with a MAX_DISTANCE_CM of 100 more often than with 200.

So, what to do?  I can just use the differential distance readings between the front and top-front sensors, but while this should work for the slipper case where the front sensor is partially or totally obstructed, it won’t work for the coat rack or rug edge case where both front sensors are unobstructed.  It might  work to use a two dimensional test; if the two front sensors have close to the same readings but those readings don’t vary over time,  OR their readings differ significantly at any time, then declare ‘stuck’.  If I go this way, I’ll need to open up the front sensor max distance to something more than 200 cm (300-500?) so Wall-E won’t declare ‘stuck’ in an open hallway.  Since the side sensors would no longer be used for the determination, I could keep their max distances short – say 100 cm, which would allow me to shorten the post-ping delays for them a bit.

  • Change  MAX_DISTANCE_CM to 400 cm.  Use  MAX_DISTANCE_CM for the two front sensors and  MAX_DISTANCE_CM / 4  for the side sensors.
  • Change the ‘stuck’ detection algorithm to use only the front sensors, as discussed above
  • Remove  the  aRightDist and aLeftDist arrays.
  • Change the inter-ping delays.  It is generally a good idea to wait 20-25 msec between ping sensor triggers to avoid returns from one sensor being interpreted as returns by another sensor.  However, I believe it is OK to have no delay between the left and right ping sensors.  In order for ping energy from the left sensor to be interpreted as a return by the right sensor, that energy has to arrive at the right sensor after the right sensor has been triggered, and before the right sensor’s energy gets back.  If the delay from left to right sensor activation is more than about 25 msec, there’s no way the first criterion (arriving after the right sensor is triggered) can be met, so this is perfectly safe, if a bit wasteful of time.  However, if they are triggered together (no inter-sensor delay), then there is no way the second criterion can be satisfied for any reasonable geometry, as the left sensor’s energy will always have farther to travel, by 2 times the distance from the left sensor to the nearest object.  So, I believe it is safe to trigger the left and right sensors together, then delay 15-25 msec between the L/R pair and either the top-front or front, and then another 15-25 msec between the two front sensors (see the sketch just after this list).
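A sketch of that staggered sequence, using the NewPing library, is shown below.  The pin numbers are placeholders (not Wall-E’s actual wiring), and since the blocking ping_cm() calls can’t literally fire two sensors at the same instant, the left/right pair is simply pinged back-to-back with no added delay:

    #include <NewPing.h>

    const int MAX_DISTANCE_CM = 400;

    // trigger/echo pin numbers below are placeholders
    NewPing leftPing(2, 3, MAX_DISTANCE_CM / 4);    // left sensor,  100 cm max
    NewPing rightPing(4, 5, MAX_DISTANCE_CM / 4);   // right sensor, 100 cm max
    NewPing frontPing(6, 7, MAX_DISTANCE_CM);       // front sensor,     400 cm max
    NewPing topFrontPing(8, 9, MAX_DISTANCE_CM);    // top-front sensor, 400 cm max

    void setup() { Serial.begin(115200); }

    void loop()
    {
      // left and right fire back-to-back with no added delay
      unsigned int leftCm  = leftPing.ping_cm();
      unsigned int rightCm = rightPing.ping_cm();

      delay(25);                                    // guard time before the front sensor
      unsigned int frontCm = frontPing.ping_cm();

      delay(25);                                    // guard time between the two front sensors
      unsigned int topFrontCm = topFrontPing.ping_cm();

      Serial.print(leftCm);   Serial.print('\t');
      Serial.print(rightCm);  Serial.print('\t');
      Serial.print(frontCm);  Serial.print('\t');
      Serial.println(topFrontCm);
    }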

OK, so I made the changes described above, and Wall-E is  still  getting stuck, although less frequently than before.  In fact, there were a couple of times during the last set of field trials where it seemed that Wall-E was actually doing very well.  However:

  • The wall following performance is still mediocre at best, especially compared to where it was before I started adding inter ping sensor delays.
  • Wall-E still gets stuck and won’t declare ‘stuck’ for no apparent reason.  In one case he had his nose pressed firmly up against a solid surface, which should have produced stable readings from both front sensors, but apparently satisfied neither the max deviation nor top-front/front deviation difference criteria.   In another, both front sensors were unobstructed, and the nearest obstacle was only about 75 cm away – should have been a slam-dunk, but wasn’t.

At this point, I think the only way forward is to find a way to record what is actually happening with Wall-E during a period where the ‘stuck’ criteria should be met, but nothing is happening.  My hope is that I can figure out how to use the Arduino’s EEPROM to record data ‘on the fly’.

Stay tuned!

Frank

New ‘stuck’ Detection Scheme, Part V – Stealth Slipper Study

Posted 04/08/15

As I drifted off to sleep last night, it occurred to me that I had not really completed my study of ping sensor responses, as I did not yet fully understand what was happening with the ‘stealth slipper’ (aka the wife’s fuzzy slippers) case.  So, this morning I re-opened the Paynter indoor test range for some additional tests.  As shown below, I placed a slipper in various orientations in front of Wall-E’s dual front ping sensor setup, and took sensor data for each case.

Test 1: Slipper Head-on:

Slipper head-on with robot front

Test 1 results. Note front sensor ping being completely absorbed, causing it to return zeroes

Test 2: Slipper Rotated 90 Degrees CW:

Slipper turned 90 degrees clockwise

Slipper rotated 90 degrees CW. Note lower ping sensor still completely blocked, but upper one is still OK

Test 3: Slipper Rotated 180 Degrees CW:

Slipper turned 180 degrees clockwise

Test 3 results plot

This result is pretty interesting in that it looks like the front sensor gets confused by the open cavity presented by the slipper in this configuration, while the top-front sensor is nice and stable.  This is a very good justification for having both forward-looking sensors!

Test 4: Slipper Rotated 270 Degrees CW:

Slipper turned 270 degrees clockwise

Test 4 results plot

The plot shows that the lower (front) sensor is completely blocked,  (returning zeros), while the upper (top-front) sensor is completely clear, returning a nice, stable reading with a max deviation of just 1 cm.  Again, this plot is a great justification for having two forward-looking sensors.

Test 5: Slipper Rotated 360 Degrees CW:

Slipper rotated 360 degrees clockwise (same configuration as Test 1)

Test 5 results plot

This is the same configuration as Test 1, but with different results :-(.   I suspect the difference is due to the slipper being offset laterally one way or another, just enough to cause the spikes noted.  Another possibility is that there is occasionally just enough echo from the fuzzy slipper to make the sensor think there is something there, but at extreme range.  In any case, the top-forward sensor continues to provide a nice, stable response with minimum deviation.

 

Tests 6 – 10: Slipper Rotated 90 Degrees CW and Translated from Far Right to Far Left:

This series of configurations starts with the slipper in the 90 degree CW rotation position (similar to Test 2) but translated to the right. Then it is moved through three intermediate positions (Tests 7-9) to a position out of view to the left (Test 10).

First of 5 tests with the slipper moving laterally from right to left

Test 6 results plot

Second of 5 lateral displacement tests, with the slipper moving from right to left

Test 7 results plot

The Test 6 position is apparently far enough to the right so that both the front and top-front sensors have a (mostly) clear view to the front, producing stable returns with a maximum deviation of just 1 cm for both.

 

Third of 5 lateral displacement tests, with the slipper moving from right to left

Test 8 results plot

Tests 7 and 8 show the same result – the front sensor is blocked (returning zeros) and the top-front sensor can still see, returning a stable result with a maximum deviation of 1 cm.

 

Fourth of 5 lateral displacement tests, with the slipper moving from right to left

Test 9 results plot

Last of 5 lateral displacement tests, with the slipper moving from right to left. In this test, the slipper is all the way out of the field of view

Test 10 results. Note both front and top-front sensors return nearly identical, stable results.

Tests 9 and 10 are also similar, clearly showing that both the front and top-front sensors can see the wall at around 37/38 cm, with a maximum deviation for both sensors of 1 cm.

 

 

Summary and Conclusions:

This post describes a set of measurements intended to explore the effect of my wife’s ‘stealth slippers’ on Wall-E’s forward sensor performance, in order to implement an effective algorithm for getting Wall-E ‘un-stuck’ when it runs up against a slipper during its travels through the house.  Ten separate tests were performed in a controlled environment, recording both the front and top-front sensor responses to various slipper configurations.

Based on the test results above, I think I can safely make the following conclusions:

  • The top-front sensor can reliably ‘see over’ a slipper, producing a stable response (the actual distance if there is an obstacle, or zero if there is nothing within 200 cm), with a maximum variation of 1-2 cm.
  • When  both sensors report similar distances, then it is almost certain there is no nearby blocking obstacle (aka ‘stealth slipper’).
  • When the front and top-front sensors report wildly different numbers, then it is highly probable that Wall-E has gotten stuck on a slipper (or other low-lying obstacle) and the ‘stuck’ algorithm should be triggered.
  • The SR-04 sensors and the NewPing driver library seem remarkably accurate and stable. All the problems experienced so far with ‘unreliable readings’ have been self-inflicted, mostly by not heeding the time separation requirements.

Frank