Yearly Archives: 2020

Replacing HC-SR04 Ultrasonic Sensors with VL53L0X Arrays

Posted 20 May 2020

In a recent post, I described the issue I had discovered with the HC-SR04 ultrasonic ‘Ping’ distance sensors – namely that they don’t provide reliable distance information for off-perpendicular orientations, making them unusable for determining when Wall-E2, my autonomous wall-following robot, is oriented parallel to the nearest wall.

In the same post, I described my discovery of the STMicroelectronics VL53L0X infrared laser ‘Time-of-Flight’ distance sensor, and the thought that this might be a replacement for the HC-SR04.  After running some basic experiments with one device, I became convinced that I should be able to use an array of three sensors oriented at 30 degree angles to each other to provide an accurate relative orientation to a nearby wall.

So, this post is intended to document my attempt to integrate two 3-sensor arrays into Wall-E2, starting with just the right side.  If the initial results look promising, then I’ll add them to the left side as well.

Physical layout:

The VL53L0X sensors can be found in several different packages.  The one I’m trying to use is the GY530/VL53L0X module available from multiple vendors.  Here’s a shot from eBay

GY530/VL53L0X test setup with Arduino. Note size relative to U.S. Dime coin

The basic idea is to mount three of the above sensors in an array oriented to cover about 60 degrees.  So, I designed and printed a bracket that would fit in the same place as the HC-SR04 ‘ping’ sensor, as shown below:

TinkerCad design for triple sensor mounting bracket

And here is the finished bracket with the center sensor installed (I haven’t received the others yet)

VL53L0X module mounted at the center position on triple-sensor bracket

System Integration:

The VL53L0X chip has a default I2C address of 0x29.  While this address can be changed in software to allow multiple VL53L0X modules to be used, the chips don’t remember the last address used, and so must be reprogrammed at startup each time.  The procedure requires the use of the chip’s XSHUT input, which means that a digital I/O line must be dedicated to each of the six modules planned for this configuration, in addition to the power, ground and I2C SCL/SDA lines.  This doesn’t pose an insurmountable obstacle, as there are still plenty of unused I/O lines on the Arduino Mega 2560 controlling the robot, but since I want to mount the sensor arrays on the robot’s second deck in place of the existing HC-SR04 ‘ping’ sensors, those six additional wires will have to be added through the multiple-pin connector pair I installed some time ago. Again, not a deal-breaker, but a PITA all the same.

So, I started thinking about some alternate ideas for managing the sensor arrays.  I already have a separate micro-controller (a Teensy 3.5) managing the IR sensor array used for homing to charging stations, so I started thinking I could mount a second Teensy 3.2 on the second deck, and cut down on the requirement for intra-deck wiring.  Something like the following:

The Teensy would manage startup I2C address assignments for each of the six VL53L0X sensor modules via a secondary I2C bus, and would communicate distance and steering values to the Arduino Mega 2560 main controller via the system I2C bus.

22 May 2020 Update:

After some fumbling around and some quality time with the great folks on the Teensy forum, I managed to get a 3-sensor array working on a Teensy 3.5’s Wire1 I2C bus (SCL1/SDA1), using the Adafruit VL53L0X library.  The code as it stands capitalizes on the little-known fact that when a Teensy 3.x is the compile target, there are three more ‘TwoWire’ objects available – Wire1, Wire2, and Wire3.  So, I was able to use the following code to instantiate and configure three VL53L0X objects:
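In skeleton form, the setup looks something like the following (this is just a sketch of the approach, not my exact listing; the XSHUT pin numbers and the new I2C addresses are placeholders).  Each sensor is held in reset via its XSHUT line, then released one at a time and given a unique address using the Adafruit library’s begin() overload that accepts a TwoWire pointer:

#include <Wire.h>
#include <Adafruit_VL53L0X.h>

// XSHUT lines and new I2C addresses for the three sensors (placeholders)
const int XSHUT_PINS[3] = {4, 5, 6};
const uint8_t LIDAR_ADDRS[3] = {0x30, 0x31, 0x32}; // moved off the default 0x29

Adafruit_VL53L0X lidars[3]; // left/forward, center, right/rearward

void setup()
{
  Serial.begin(115200);
  Wire1.begin(); // secondary I2C bus (SCL1/SDA1) on the Teensy 3.5

  // Hold all three sensors in reset so they all wake up at the default address
  for (int i = 0; i < 3; i++)
  {
    pinMode(XSHUT_PINS[i], OUTPUT);
    digitalWrite(XSHUT_PINS[i], LOW);
  }
  delay(10);

  // Release one sensor at a time and assign it a unique address on Wire1
  for (int i = 0; i < 3; i++)
  {
    digitalWrite(XSHUT_PINS[i], HIGH);
    delay(10);
    if (!lidars[i].begin(LIDAR_ADDRS[i], false, &Wire1))
    {
      Serial.printf("Failed to initialize VL53L0X #%d\n", i);
    }
  }
}

void loop()
{
  VL53L0X_RangingMeasurementData_t measure;
  for (int i = 0; i < 3; i++)
  {
    lidars[i].rangingTest(&measure, false); // 'false' = no debug printout
    if (measure.RangeStatus != 4)           // 4 = 'out of range'
      Serial.printf("%d: %d mm\t", i, measure.RangeMilliMeter);
    else
      Serial.printf("%d: ----\t", i);
  }
  Serial.println();
  delay(200);
}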

January 18 2022 Note:

The ‘magic’ that creates the ‘Wire1’, ‘Wire2’, and ‘Wire3’ objects occurs in WireKinetis.h/cpp in the Arduino\hardware\teensy\avr\libraries\Wire folder.

Here’s some sample output:

And the physical setup:

Triple VL53L0X LIDAR array. The two outside sensors are angled at 30 degrees. Teensy 3.5 in background

The next big questions are whether or not this 3-sensor array can be used to determine when Wall-E2 is oriented parallel to the nearest wall, and whether or not the steering value proposed in my previous post, namely

SteerVal = (Ml – Mr)/Mc

where the ‘l’, ‘r’ and ‘c’ subscripts denote the left/forward, right/rearward, and center sensor measured distances respectively, will actually work as the input to a PID engine.

23 May Update:

I’ve been improving my understanding of my triple-VL53L0X setup, and I think I’m to the point where I can try out the steering idea.  Here’s the experimental setup:

I have a wood barrier set up with a 40 cm offset from the center of the compass rose. Before getting started, I checked each sensor’s measured distance from the barrier by manually placing each sensor at the center of the compass rose, aligned as closely as possible with the perpendicular to the barrier.  When I did this, the measured offsets were within a few mm of the actual distance as measured by the tape measure.

The plan is to rotate the sensor bracket from -30 degrees to +30 degrees relative to the normal perpendicular orientation, while recording the left, center, and right measured distances.  Then I should be able to determine if the steering idea is going to work.  In this experiment, I manually rotated the array from 0 (i.e. the center sensor aligned with the perpendicular to the barrier) to -30 degrees, and then in 10 degree steps from -30 to 30 degrees.  The rotation was paused for a few seconds at each orientation.  As shown in the plot below, the Steering Value looks very symmetric, and surprisingly linear – yay!

Steering values resulting from manually rotating the triple VL53L0X array from -30 to 30 degrees

However, the above experiment exposed a significant problem with my array design.  The 30 degree angular offset between the center sensor and the outer two sensors is too large; when the center sensor is oriented 30 degrees off perpendicular, the ‘off’ sensor is looking 60 degrees off perpendicular, so its line-of-sight distance to the wall (roughly 40 cm / cos 60° = 80 cm with the 40 cm nominal offset chosen for the test) is very near the practical range limit for the sensors at that grazing angle.  I could reduce the problem by reducing the initial offset, but in the intended application the initial offset might be more than 40 cm, not less.

So, back to TinkerCad for yet another bracket rev.  Isn’t having a 3D printer (or two, in my case) WONDERFUL!  The new version has a 20 deg  relative angle rather than 30, so when the center sensor is at 30 degrees relative to the wall, the outside one will be at 50 deg.  Hopefully this will still allow good steering while not going ‘off the wall’ literally.

Revised sensor carrier with 20-deg offsets vs 30-deg

04 June 2020 Update:

In order to make accurate measurements of the triple-VL53L0X array performance, I needed a way to accurately and consistently rotate the array with respect to a nearby wall, while recording the distance measurements from each array element.  A few years ago I had created a rotary table program to run a stepper motor for this purpose, but I hadn’t documented it very well, so I had to take some time away from this project to rebuild the tool I needed.  The result was a nice, well-documented (this time) rotary scan table controlled by a Teensy 3.2, and a companion measurement program.  The rotary scan table program can be synchronized with the measurement program via control pins, and the current step  # and relative pointing angle can be retrieved from the scan table via I2C.

Here’s the test setup:

Test setup for triple VL53L0X angle sweeps. Board in background extends about .5m left and right of center.

And here’s a short video showing one scan (-20 to +20 deg)

I ran two scans – one from -30 to +30 deg, and another from -20 to +20 deg.  Unfortunately, my test ‘wall’ isn’t quite long enough for the -30/+30 scan, and in addition the left-most sensor started picking up clutter from my PC in the -30 position. In any case, both scans showed a predictable ‘steering’ value transition from positive to negative at about +10 deg.

-30 to +30 deg scan. Note the steering value crosses zero at about +10 deg

-20 to +20 deg scan. Note the steering value crosses zero at about +10 deg

09 June 2020 Update:

After this first series of scans, I discovered that my scan setup was not producing consistent results, and so all the data taken to this point was suspect.  So, back to the drawing board.

Based on a comment made by John Kvam of STMicroelectronics on an earlier experiment, regarding the 27 degree cone coverage of the beam from the LIDAR unit, I thought at least part of the problem was that the beam might be picking up the ‘floor’ (actually my desk surface in these first experiments).  As a way of illuminating (literally) the issue, I created a new stepper motor mount assembly with holes for mounting three laser diodes – one pointed straight ahead, one tilted down at 14 degrees, and one tilted up at 14 degrees. The combination of all three allows me to visualize the extents of the VL53L0X beam coverage on the target surface, as shown in the following photos.

The second photo shows the situation with the ‘wall’ moved away until the beam extent indicator dots are just captured, with the distance measured to be about 220 mm.  The following short video shows how the beam coverage changes as the relative angle between the center sensor and the wall changes.

As shown in the video, the beam extents are fully intercepted by the 6″ wall only during the center portion of the scan, so the off-axis results start to get a little suspect beyond about 50 degrees off bore-sight.  For +/- 30 to 40 degrees, though, the beam extents are fully on the ‘wall’, so those measurements should be OK.  Even so, the actual measurements have a couple of serious problems, as follows:

  • As shown in the above Excel plots, the ‘steering’ value, calculated as (Front-Rear)/Center crosses the zero line well to the right of center, even though the sensors themselves are clearly symmetric with the ‘wall’ at 0 degrees
  • The scans aren’t repeatable; when I placed a pencil mark on the ‘wall’ at the center laser dot, ran a scan, and then looked at the laser dot placement after the ‘return to start position’ movement, the laser dot was well to the left of the pencil mark. After each scan, the start position kept moving left, farther and farther from the original start position.

So, I started over with the rotary table program.  After some more research on the DRV8825, I became convinced that some or all of the problem was due to micro-stepping errors – or more accurately, to the user (me) not correctly adjusting the DRV8825 current-limiting potentiometer for proper micro-stepping operation.  According to Pololu’s description of DRV8825 operation, if the current limit isn’t set properly, then micro-stepping may not operate properly.  To investigate this more thoroughly, I revised my rotary table program to use full steps rather than micro-steps by setting the microstepping parameter to ‘1’.  Then I carefully set the scan parameters so the scan would traverse from -30 to +30 steps (not degrees). When I did this, the scan was completely repeatable, with the ‘return to starting position’ maneuver always returning to the same place, as shown in the following short video.

The above experiment was conducted with the DRV8825 current limit set for about 1A (VREF = 0.5V).  According to information obtained from a Pololu support post on a related subject, I came to believe this current limit should be much lower – around 240-250 mA for proper microstepping operation, so I re-adjusted the current limiting pot for VREF = 0.126V.
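For reference, Pololu’s documentation gives the DRV8825 current limit as approximately twice the VREF voltage (Current Limit ≈ 2 x VREF), so the earlier VREF = 0.5V setting corresponds to a limit of roughly 1A, and VREF = 0.126V to roughly 250 mA.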

After making this adjustment, I redid the full-step experiment and confirmed that I hadn’t screwed anything up with the current limit change – Yay!

Then I changed the micro-stepping parameter to 2, for ‘half-step’ operation, and re-ran the above experiment with the same parameters.  As shown in the following video, the scan performed flawlessly, covering the -54 to +54 degree span using 60 half-steps per scan step instead of the 30 full steps previously, and returning precisely to the starting position – double Yay!

Next I tried a microstepping value of 8, with the same (positive) results.  Then I tried a stepping value of 32, the value I started with originally.  This also worked fine.

So, at this point I’m convinced that microstepping is working fine, with the current limit set to about 240 mA as noted above.  This seems to fully address the second bullet above, but as shown in the plot below (taken with microstepping = 32 and with data shown from -30 to +54 degrees), I still have a problem with the ‘steering’ value not being synchronized with the actual pointing angle of the sensor array.

Next I manually acquired sensor data from -30 to +30 degrees by rotating the de-energized stepper motor shaft by hand and recording the data from all three sensors.  After recording them in Excel and plotting the result, I got the following chart.

This looks pretty good, except for two potentially serious problems:

  1. The ‘steering’ value, defined as (Front Distance – Rear Distance)/Center Distance, is again skewed to the right of center, this time about 8 degrees.  Not a killer, but definitely unwanted.
  2. Rotation beyond 30 degrees left of center is not possible without getting nonsense data from the Front (left-hand) sensor, but I can rotate 60 degrees to the right while still getting good data, and this is with much more ‘wall’ available to the left than to the right, as shown in the following photo

-30 deg relative to center

+30 deg relative to center

-60 deg relative to center

+60 deg relative to center

After finishing this, I received a reply to a question I had asked on the Pololu support forum about micro-stepping and current limiting with the Pololu DRV8825.  The Pololu guy recommended that the current limit be set to the coil current rating (350 mA for the Adafruit NEMA 17 stepper motor I’m using), or 0.175V on VREF.  So, I changed VREF from 0.122V to 0.175V and re-verified proper micro-stepping performance.  Here’s a plot using the new setup, micro-stepping set to 32, -54 to +54 deg sweep, 18 steps of 6 deg each.

and a short video showing the sweep action.

In the video, note that at the end, the red wire compass heading pointer and the laser dots return precisely to their starting points – yay!

So, at this point everything is working nicely, except I still can’t figure out why the steering value zero doesn’t occur when the sensor array is oriented parallel to the ‘wall’.  I’m hoping John, the ST Micro guy, can shed some light (pun intended) on this 😉

11 June 2020 Update.

I think I figured out why the steering value zero didn’t occur when the sensor array was oriented parallel to the wall, and it was, as is usually the case, a failure of the gray matter between my ears ;-).

The problem was the way I was collecting the data with the scanner program that runs the rotary table program.  The scanner program wasn’t properly synchronizing the sensor measurements with the rotary table pointing angle.  Once I corrected this software error, I got the following plot (the test setup for this plot still has the left & right sensors physically reversed).

As can be seen in the above plot, the steering value y-axis crossover now occurs very close to zero degrees, where the sensor array is oriented parallel to the wall.

12 June 2020 Update

After numerous additional steering value vs angle scans, I’m reaching the conclusion that the VL53L0X sensors have the same sort of weakness as the ‘ping’ sensors – they aren’t particularly accurate off-perpendicular.  To verify this, I removed the sensors from my 3-sensor array bracket and very carefully measured the distance from each sensor position to the target ‘wall’ using a tape measure and a right-angle draftsman’s triangle, with the results shown in the following Excel plot.  Measuring the off-perpendicular distances turned out to be surprisingly difficult due to the very small baseline presented by the individual sensor mounting surfaces and the width of the tape measure tape.  I sure wish I had a better way to make these measurements – oh, wait – that’s what I was trying to do with the VL53L0X sensors! ;-).

With just the physical measurements, the steering value crosses the y-axis very close to zero degrees relative to parallel, plus/minus measurement error, as expected.  One would expect even better accuracy when using a sensor that can measure distances with millimeter accuracy, but that doesn’t seem to be the case.  In the following Excel plot, the above tape measure values are compared to an automatic scan using the VL53L0X sensors. As can be seen, there are significant differences between what the sensors report and the actual distance, and these errors aren’t constant with angular orientation (otherwise they could be compensated out).

For example, in the above plot the difference between the solid green (rear sensor distance) and dashed green (rear sensor mounting surface tape measure distance) is about 80 mm at -30 degrees, but only about 30 mm at +30 degrees.  The difference between the measurements for the blue (front) sensor and the tape measure numbers is even more dramatic.

So it is clear that my idea of orienting the sensors at angles to each other is fundamentally flawed, as this arrangement exacerbates the relative angle between the ‘outside’ sensor orientations and the target wall when the robot isn’t oriented parallel to the wall.  For instance, with the robot oriented at 30 degrees to the wall, one of the sensors will have an orientation of at least 50 degrees relative to the wall.

So my next idea is to try a three sensor array again, but this time they will all be oriented at the same angle, as shown below:

The idea here is that this will reduce the maximum relative angle between any sensor and the target wall to just the robot’s orientation.

3-sensor linear array

I attached the three VL53L0X sensors to the linear array mount and ran an automatic scan from -30 to +30 degrees, with much better results than with the angled-off array, as shown in the Excel plot below:

VL53L0X sensors attached to the linear array mount. Note the nomenclature change

Triple sensor linear array automatic scan. The Left/Right curves are very symmetric

So, the linear array performance is much better than the previous ‘angled-off’ arrangement, probably because the off-perpendicular ‘look’ angle of the outside sensors is now never more than the scan angle; at -30 and +30 degrees, the look angle is still only -30 and +30 degrees.  It’s clear that keeping the off-perpendicular angle as low as possible provides significantly greater accuracy.  As an aside, the measured perpendicular distance from the sensor surface to the target ‘wall’ was almost exactly 250 mm, and the scan values at 0 degrees were 275, 238, and 266 mm for the left, center, and right sensors respectively, so the absolute accuracy isn’t great, but I suspect most of that error can be calibrated out.

13 June 2020 Update:

So, now that I have a decently-performing VL53L0X sensor array, the next step is to verify that I can actually use it to orient the robot parallel to the nearest wall, and to track the wall at a specified offset using a PID engine.

My plan is to create a short Arduino/Teensy program that will combine the automated scan program and the motor driver program to drive the stepper motor to maintain the steering value at zero, using a PID engine.  On the actual robot, this algorithm will be used to track the wall at a desired offset.  For my test program, I plan to skew the ‘wall’ with respect to the stepper motor and verify that the array orientation changes to maintain a wall-parallel orientation.

14 June 2020 Update:

I got the tracking program running last night, and was able to demonstrate effective ‘parallel tracking’ with my new triple VL53L0X linear sensor array, as shown in the following video

If anyone is interested in the tracking code, here it is:  It’s pretty rough and has lots of bugs, but it shows the method.
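In outline, the tracking loop reads the three sensors each pass, computes the steering value, and converts the error into a small step correction for the scan-table stepper.  The version below is only a sketch of that idea (the pin numbers and gain are placeholders, and it uses a proportional-only stand-in for the PID engine; the XSHUT/address setup from the earlier triple-sensor sketch is assumed to go in setup()):

#include <Wire.h>
#include <Adafruit_VL53L0X.h>

// 0 = front, 1 = center, 2 = rear; assumed re-addressed on Wire1 in setup()
Adafruit_VL53L0X lidars[3];
const int STEP_PIN = 8, DIR_PIN = 9;   // DRV8825 step/dir (placeholder pins)
const float Kp = 20.0f;                // placeholder proportional gain

uint16_t readMM(int idx)
{
  VL53L0X_RangingMeasurementData_t m;
  lidars[idx].rangingTest(&m, false);
  return m.RangeMilliMeter;
}

void setup()
{
  Wire1.begin();
  pinMode(STEP_PIN, OUTPUT);
  pinMode(DIR_PIN, OUTPUT);
  // ... XSHUT/address assignment for the three sensors goes here ...
}

void loop()
{
  float front = readMM(0), center = readMM(1), rear = readMM(2);
  float steerVal = (front - rear) / center;  // zero when the array is parallel to the wall

  // Proportional-only correction: turn the steering error into a small step count
  int steps = (int)(Kp * steerVal);
  digitalWrite(DIR_PIN, steps >= 0 ? HIGH : LOW);
  for (int i = 0; i < abs(steps); i++)
  {
    digitalWrite(STEP_PIN, HIGH);
    delayMicroseconds(500);
    digitalWrite(STEP_PIN, LOW);
    delayMicroseconds(500);
  }
  delay(100);
}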

At this point, I’m pretty confident I can use the linear array arrangement and a PID engine to track a nearby wall at a specified distance, so the next step will be to replace the ‘ping’ ultrasonic sonar sensor on Wall-E2, my autonomous wall-following robot, with the 3-sensor array, and see if I can successfully integrate it into the ‘real world’ environment.

Stay tuned!

Frank

Teensy NEMA 17 Stepper Motor Rotary Scanner Program

Posted 26 May 2020,

This post describes a small Teensy program developed to drive a NEMA 17 stepper motor to perform angular scans that can be used in conjunction with another program to obtain angle-synchronized performance data.

In a post several years ago I mentioned that I had developed a small Teensy program to drive a NEMA 17 stepper motor to perform angular scans to test Wall-E2’s IR Homing detection performance.  Unfortunately, I didn’t do a very good job of documenting the setup, so when I wanted to use the same capability for my new VL53L0X ‘Time-of-Flight’ sensor project, I was unable to easily put the pieces back together again.  So, I decided to create a post dedicated to the software/hardware setup and usage, so the next time I need this capability it won’t be so hard to access.

The original rotary scanner program worked OK, but there was no way to synchronize the rotary table with the measurement program.  So I decided this time around I was going to add some features to make it more usable:

  • It should allow the measurement program to trigger the start of the scan, and trigger each subsequent rotation to the next position, in a measure-move-measure… sequence.
  • It should provide notification when each scan position has been reached, and when the entire scan is complete
  • It should allow for repeated scans without restarting the program
  • It should be capable of reporting each position step number and calculated angle relative to the set zero position to the measurement program via I2C.
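The last item, reporting the step number and relative angle over I2C, boils down to running the scan-table Teensy as an I2C slave and answering requests from the measurement program.  A bare-bones sketch of that mechanism (the slave address is a placeholder, and the real program obviously has all the stepper motion and handshaking around it) looks like this:

#include <Wire.h>

const uint8_t SCAN_TABLE_I2C_ADDR = 0x10;      // placeholder slave address
const float DEG_PER_MICROSTEP = 1.8f / 32.0f;  // NEMA 17 at 32 microsteps/step

volatile int currentStepNum = 0;       // updated by the scan state machine
volatile long microstepsFromZero = 0;  // ditto

void requestEvent()
{
  // Send the step number (int) followed by the relative angle (float), as raw bytes
  int stepNum = currentStepNum;
  float relDeg = microstepsFromZero * DEG_PER_MICROSTEP;
  Wire.write((byte*)&stepNum, sizeof(stepNum));
  Wire.write((byte*)&relDeg, sizeof(relDeg));
}

void setup()
{
  Wire.begin(SCAN_TABLE_I2C_ADDR);  // join the primary I2C bus as a slave
  Wire.onRequest(requestEvent);
}

void loop()
{
  // ... stepper motion and control-pin handshaking go here ...
}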

Here’s the wiring diagram:

Wiring diagram for Rotary Table program

And the software:

 

And the hardware layout:

I couldn’t find any information about winding polarity for this particular NEMA 17 stepper, so it was sort of a crapshoot as to which way the motor was going to turn for a nominal CW or CCW input to the program.  As it turned out, I had to reverse one set of wires on the L298N to get the stepper to turn in the correct direction.

Companion Measurement System

For the companion measurement system, I used a Teensy 3.5 micro-controller, as I needed to connect to the VL53L0X ‘Time of Flight’ sensors and the Rotary Table program/micro-controller via I2C, and the Teensy 3.5 has provisions for up to three separate I2C busses (Wire, Wire1 & Wire2).  Although it would be possible to put everything on one I2C bus, I wanted to isolate the two ‘sides’ (the rotary table control side, and the sensor control side).  This dual-I2C bus arrangement is also the way I plan to integrate the VL53L0X sensor arrays into Wall-E2, my autonomous wall-following robot, so this will give me a chance to work out the bugs on a smaller scale.

Here’s the wiring diagram:

Wiring diagram for the companion measurement system, with triple VL53L0X ToF sensors

And the software:

and the hardware setup:

Rotary table Teensy 3.2 in foreground with L298N motor controller. Teensy 3.5 measurement controller in middle, with plugboard, NEMA 17 stepper motor and triple VL53L0X array in background

Here’s a short video showing a typical measurement scan:

And here’s a typical output:

In the above output, the ‘Step#’ and ‘RelDeg’ values were obtained from the rotary scanner program via the primary I2C bus, while the ‘Front’, ‘Center’, and ‘Rear’ distance values were obtained from the three VL53L0X ToF sensors via the secondary I2C bus. The ‘Steer’ value was calculated locally by the measurement controller.
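On the measurement side, fetching the ‘Step#’ and ‘RelDeg’ values is just a Wire.requestFrom() call matching the scan-table sketch above; something like the following helper (again, the address is a placeholder and the actual program differs in detail):

#include <Wire.h>

const uint8_t SCAN_TABLE_I2C_ADDR = 0x10;  // must match the scan-table sketch

// Pull the current step number and relative angle from the scan table
bool GetScanTablePosition(int& stepNum, float& relDeg)
{
  uint8_t numBytes = sizeof(stepNum) + sizeof(relDeg);
  if (Wire.requestFrom(SCAN_TABLE_I2C_ADDR, numBytes) != numBytes)
  {
    return false;  // scan table didn't answer with the expected byte count
  }
  Wire.readBytes((char*)&stepNum, sizeof(stepNum));
  Wire.readBytes((char*)&relDeg, sizeof(relDeg));
  return true;
}

The three distance values come from the VL53L0X objects on Wire1 exactly as in the earlier sketches, and the ‘Steer’ column is just (Front - Rear)/Center computed from those three numbers.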

Upgrade to Micro-step capable motor driver:

After getting everything to work properly, I was a bit puzzled why I wasn’t getting the correct relative angle values back from the rotary table subsystem.  After a while I figured out that the reason was that the rotary table program calculates an integer number of steps based on the total angle change divided by the number of scan steps, and, in general, the result won’t be exact. When moving from one scan step to the next the motor moves an integer number of steps, which in general will not be the desired angle change.  For a 60 degree scan arc with 6 steps, the desired angle/scan step is 10 deg, and at 1.8 deg/step  this would result in 10/1.8 = 5.5555… motor steps/scan step.  This value gets truncated to 5 motor steps/scan step, which results in each scan step being 1.8 deg/step * 5 steps  = 9 deg/scan step.  So, instead of the scan steps occurring at 0, 10, 20, 30, 40, 50, 60 deg, they are at 0, 9, 18, 27, 36, 45, and 54 deg, and the last 6 deg of scan is never covered.

The answer to this problem is to use a motor with more than 200 steps/rev, or to use a driver that can generate micro-steps, effectively increasing the angular resolution of the scan.  After some Googling, and some rooting around in my parts box, I came up with the Pololu DRV8825 Stepper Motor Driver part.  This driver supports up to 32 micro-steps/step and is considerably smaller than the L298N – such a deal!

Pololu Micro-stepping Driver

At 32 microsteps/step, a full motor rev would take 200*32 = 6400 microsteps, or 0.05625 deg/microstep.  Now the above calculation would result in 10 deg / 0.05625 deg/microstep = 177.777 –> 177 microsteps per scan step.  This would result in step angles of 0, 177*0.05625 = 9.956, 19.913, 29.869, 39.825, 49.781, and 59.738 degrees.  So now we lose only the last 0.26 deg of scan – much nicer!

So, I put together a quick test, using an Arduino Uno, a spare NEMA 17 stepper motor, a Pololu DRV8825 from my parts box, and the Pololu Stepper library.  Here’s the program:
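The listing isn’t reproduced here, but the DRV8825 really only needs a DIR level and a train of STEP pulses, with the M0/M1/M2 pins selecting the microstep mode.  A bare-bones stand-in for the demo (pin numbers are placeholders) looks like this:

// Spin the motor one rev CW, then one rev CCW, at 1/32 microstepping
const int DIR_PIN = 2, STEP_PIN = 3;
const int M0_PIN = 4, M1_PIN = 5, M2_PIN = 6;
const int STEPS_PER_REV = 200;   // NEMA 17, 1.8 deg/step
const int MICROSTEPS = 32;       // M0 = M1 = M2 = HIGH selects 1/32 step on the DRV8825

void setup()
{
  pinMode(DIR_PIN, OUTPUT);
  pinMode(STEP_PIN, OUTPUT);
  pinMode(M0_PIN, OUTPUT);
  pinMode(M1_PIN, OUTPUT);
  pinMode(M2_PIN, OUTPUT);
  digitalWrite(M0_PIN, HIGH);
  digitalWrite(M1_PIN, HIGH);
  digitalWrite(M2_PIN, HIGH);
}

void oneRev()
{
  for (long i = 0; i < (long)STEPS_PER_REV * MICROSTEPS; i++)
  {
    digitalWrite(STEP_PIN, HIGH);
    delayMicroseconds(100);
    digitalWrite(STEP_PIN, LOW);
    delayMicroseconds(100);
  }
}

void loop()
{
  digitalWrite(DIR_PIN, HIGH);  // one full revolution CW
  oneRev();
  delay(500);
  digitalWrite(DIR_PIN, LOW);   // and one CCW
  oneRev();
  delay(500);
}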

And the hardware test setup:

Pololu DRV8825 Microstep demo setup.

And a short video showing the stepper motor action.  Note that close attention to the end of the red pointer wire shows the difference between ‘full-step’ and ‘micro-stepped’ behavior; the micro-stepped rotations are much smoother.  Also, if you look very carefully, you can see the compass rose platform ‘counter-rotating’ in full-step mode, as the torque is high enough to rotate the stepper motor in the opposite direction to the shaft.

So now the idea is to incorporate micro-stepping capability into the Teensy NEMA 17 Rotary Table program, so that angle scans can be performed much more accurately than before.

02 June 2020 Update:

My original Pololu Microstep Demo program used an Arduino Uno to control the Pololu DRV8825 motor driver, but my Teensy Rotary Table setup obviously uses a Teensy and an L298N.  To change the Rotary Table setup to utilize the DRV8825, I’ll need to make some adjustments to pin configurations.

The first step in the process was to change out the Arduino Uno for a Teensy (a Teensy 3.5 in this case) to verify that there were no problems with the Pololu microstep demo program running on a Teensy – done.

The next step was to change out the L298N driver on the rotary table setup with the Pololu DRV8825 driver, and modify the rotary table program to use the new driver with microstepping.

Here’s the new combined hardware layout, with both the rotary table and measurement sub-systems shown

Both measurement and rotary table sub-systems shown. Rotary table Teensy 3.2 at left foreground, followed by Teensy 3.5 measurement controller, the connection plugboard, and the triple VL53L0X sensors in the left background. The new DRV8825 is mounted on the center plugboard, and the NEMA 17 stepper motor with compass rose test platter at the right

Here’s the updated software using the Pololu DRV8825

And the updated hardware schematic:

Here’s the output from a typical 180-degree scan

The triple VL53L0X scanner program:

The DRV8825 Rotary Table program:

For anyone interested in using this program, it is available on my GitHub site at https://github.com/paynterf/Teensy-DRV8825-Rotary-Scan-Table

 

Off-perpendicular measurement problems with HC-SR04 Ping Measurements

Posted 14 May 2020,

For the last couple of months I have been working on updating my four-wheel autonomous wall-following robot.  Unfortunately, I have been unable to really nail down an effective algorithm for capturing and then holding a specified offset distance from the nearest wall.  If the robot starts with its body oriented parallel to the wall, it can successfully capture and then hold the desired distance.  Unfortunately, it has turned out to be more difficult than I thought to consistently obtain the required parallel orientation starting position.

So, I came up with an idea that was sure to work; instead of making a sweeping turn looking for an inflection point in the distance reported by the HC-SR04 ‘ping’ sensor, I would instead make a series of N-degree turns.  I wouldn’t know the absolute orientation of the robot with respect to the wall, but I would know the total angle subtended by the robot, and (I thought) it should be much easier to determine the inflection point from the resulting Angle/Distance table.

Unfortunately, when I tested this idea, it failed because the distances reported by the ping sensor didn’t vary significantly, even though I could plainly see that the actual distance between the robot’s ping sensor and where the sensor was pointing was changing significantly – what the heck?  The following short video clip and Excel plot show the situation.

If you were to believe the Excel plot, the inflection point denoting parallel orientation would actually be at about 63 degrees off the perpendicular – clearly not right.

To further investigate the issue, I ran some simple manual ping vs tape measure and LIDAR vs tape measure tests.  The following photo shows the setup, and the Excel plots show the results.

Setup for LIDAR vs tape vs angle measurements. The ‘Ping’ vs tape measurements were set up the same way

As the above plots show, the ping sensor measurements are basically useless for anything more than a few degrees off perpendicular; as the ‘ping diff’ plot in the upper chart shows, the inflection point could be anywhere.  In contrast, the manual tape measurements show a distinct curve, and the point-point differential changes sign at very close to 0 deg off perpendicular.

The lower plot shows the same measurements but using LIDAR rather than the ultrasonic ping sensors. As the plot shows, the LIDAR measurements would actually be reasonably accurate; the inflection point for both the LIDAR measurement (the ‘LIDAR diff’ line above) and the tape measurement (‘Tape Diff’ line above) is at approximately 0 degrees off perpendicular.  Unfortunately, the Pulsed Light ‘LIDAR-lite’ units are much more expensive than the ubiquitous HC-SR04 ‘ping’ sensor.

After noodling around on the web for a while, I found some references to a GY-530/VL53L0X ‘Time of Flight’ (ToF) sensor that looks like it might do the job.  From the Adafruit description of this neat device:

  • 3 to 120 cm in ‘default’ mode
  • As far as 1.5 to 2 meters on a nice white reflective surface in ‘long range’ mode
  • 3-5V compatible
  • Control via I2C (default addr = 0x29, but can be configured during setup)
  • Less than 40mA current draw

The big question, of course, is whether or not this device will be any better at off-perpendicular measurements than the ‘ping’ sensors.  I have some on order so hopefully I’ll be able to answer this question shortly.

18 May 2020 Update:

I got my GY-530/VL53L0X sensor in yesterday and I’ve now had a chance to play with it a bit.  It’s super small and pretty responsive.  Here’s a photo of the setup with an Arduino UNO.

GY-530/VL53L0X ToF sensor test setup. Note the sensor could fit on a U.S. dime with room to spare

I got it to play and produce basic distance data, but haven’t done much else with it yet.  However, while noodling around looking at data sheets, I ran across this demo video by an ST engineer, and it really started me thinking about different ways to use this device as one of an array of sensors; maybe this would make the ‘RotateToParallelOrientation()’ function easier – or even unnecessary!

19 May Update:

I was able to make some reasonably precise measurements today with the GY530/VL53L0X Time-of-Flight sensor. Here’s the setup:

Testing setup for GY530/VL53L0X ToF Sensor. Note wood board test surface in background

And here’s a plot of the measured distance vs off-perpendicular angle, along with actual tape measure value for accuracy comparisons

While the measuring tape values and sensor distance values track very well over the -50 to +40 deg range, they aren’t the same.  The sensor measurements are consistently lower, by 10 mm or so.  That’s not a big deal, and there may be some calibration techniques available to zero out any constant error term.

I also noticed that the sensor values returned were occasionally dead wrong, reporting a value of 20 mm when the real value was more like 300-350 mm.  Again, this might be addressable by averaging, but…

And lastly, as shown by the above plot, above about 40 degrees off-perpendicular, the sensor measurements become unreliable, probably due to low SNR return signal.

The good news is that the off-perpendicular plot does seem to behave fairly well in the -30 to +30 degree range, and has the expected quadratic bowl shape, so it should be usable for finding the parallel point, and maybe even for providing a steering value to a PID engine for offset hold operations once the proper offset has been captured.

To explore the ‘steering value’ idea, I simulated a 3-sensor setup by picking values out of the above plots 3 at a time, assuming for instance that the -40, -10, and +20 degree values were taken at the same time by 3 different sensors.  The following plot shows the results of the following calculation:

(Ml – Mr)/Mc

where the ‘l’, ‘r’ and ‘c’ subscripts refer to the left, right, and center sensors in a 3-sensor array angled at -30, 0 and +30 degrees respectively.  Here’s the plot:

simulation of a 3-sensor array created by picking values from single-sensor sweep

This plot is quite exciting, as it shows a clear linear relationship between off-perpendicular orientation and steering values that could be used as the input to a PID engine.

Stay tuned,

Frank

 

 

I2C between an Arduino Mega and a Teensy 3.x

posted 18 March 2020

This post describes my efforts to troubleshoot an I2C communications problem between an Arduino Mega control board and a Teensy 3.2 IR beam demodulation module.

Background:

My Wall-E2 autonomous wall-following robot homes in on a modulated IR beacon to connect to a charging station. The IR beacon modulation is decoded by a dedicated Teensy 3.2 and provides left/right and combined steering values to an Arduino Mega main controller over an I2C link.

During my recent work to update the robot after some enhancements, I discovered that the main controller was no longer receiving steering information from the Teensy, even though both seemed to be operating properly.  Initial efforts to troubleshoot the problem did not bear fruit, so I was forced to back up and start over from scratch.

Troubleshooting:

Initially I thought the problem was a loose connection, as the system was working before.  However, I am now pretty sure that I have eliminated all the obvious culprits, so I am left with the non-obvious ones.

To start with, I resurrected an old example project to demonstrate I2C master/slave operation between two Teensy modules.

Teensy 3.2 Slave (left) and Teensy 3.5 Master (right)

Here’s the Teensy ‘Master’ code:

And the Teensy ‘Slave’ code:

With this configuration I got the following output:

Master:

Slave:

So this seems to be working OK.

Then I added in a third Teensy module (T3.5) running my newly developed I2C-Sniffer code, so that I could ‘sniff’ the I2C traffic between the two Teensys.  This will give me a ‘known-good’ baseline for when I move back to the non-working Arduino Mega Master and Teensy Slave condition that is causing me problems.

Teensy 3.2 Slave (left), Teensy 3.5 I2C Sniffer (middle), Teensy 3.5 Master (right)

Here’s the Teensy ‘Sniffer’ code:

With this setup with the master sending data to the slave, I got the following outputs:

Master

Slave:

I2C Sniffer:

In the other direction, with the slave sending data to the master, I got the following:

Slave:

Master:

I2C Sniffer:

Where the HEX sequence 4B D8 A9 AD converted to an IEEE float = 2.83984e+07

So, the Teensy-to-Teensy I2C connection is clearly working in both directions, and the Teensy I2C Sniffer is successfully capturing and decoding the I2C traffic between the two modules – cool!

So, the next step is to replace the ‘Master’ Teensy with an Arduino Mega and repeat the process.  Hopefully this will allow me to figure out why the slave-to-master data transfer isn’t working.

21 March 2020 Update:

Based on the above results, I formed a hypothesis that the problem with sending I2C data from a Teensy slave to an Arduino Mega master might be due to the Mega not being able to properly interpret 0-3.3V transitions from the Teensy.  So, I decided to construct a low-to-high level converter to make sure that SDA data from the Teensy was presented to the Mega as 0-5V transitions rather than the Teensy’s ‘raw’ 0-3.3V ones.  To do this I used the circuit shown below, except with a 1K instead of 10K resistor for R2 to clean up the transitions at 100KHz.

Bidirectional 3.3-5V level shifter using 2N7000 MosFet

Here’s a scope photo of the 3.3V input to and the 5V output from the 2N7000-based level shifter. The top trace is the Teensy 0-3.3V output, and the bottom trace is the 0-5V level-shifted version.  Both traces are 1V/cm, and the horizontal time scale is 20 uSec/cm.

Teensy slave, Mega master: Top: 0-3.3V input. Bottom: 0-5V output. Both traces are 1V/cm vertical, 20 uSec/cm horizontally

To verify that the above circuit worked properly, I used it in the SDA line (the SCL line doesn’t require any level shifting as it is from the 5V Mega to the 3.3V Teensy) with a Teensy slave and a Teensy master.  This is OK, as Teensy 3.x’s can handle 5V inputs, and it worked properly in both directions.  Here’s a shot of the SDA line

Teensy slave, Teensy master: Top: 0-3.3V input. Bottom: 0-5V output. Both traces are 1V/cm vertical, 20 uSec/cm horizontally

However, after replacing the Teensy master with the Mega one, I had the same problem as before – I can successfully transfer data from Mega master to Teensy slave, but not in the other direction. There is still a problem somewhere, and I no longer think it’s due to level-shifting problems.  From the above scope photos, it is clear that only a few bytes are transmitted by the Teensy slave each time it services the OnRequest() interrupt, while that same function transmits MUCH more data when servicing the same request from the Teensy 3.5 master.

Stay tuned!

23 March 2020 Update:

OK, back to the basics:  I downloaded the code for a very basic Arduino-Arduino I2C tutorial, and verified that it worked OK.  I also took a scope shot of SDA/SCL activity during the two-way data transfers, as shown below:

Basic Arduino-Arduino tutorial layout

I2C line activity for the ‘Basic Arduino – Arduino I2C Tutorial’

So now that I have a working baseline for the Arduino-Arduino I2C case, I plan to incrementally modify it toward duplicating the non-working Arduino-Arduino I2C case until it breaks, and then I’ll know that the last incremental modification is the culprit. At the moment, I’m leaning toward the use of the I2CDev & SBWIRE libraries as they are the only significant difference between the working and non-working setups.  We’ll see….

24 March 2020 Update:

I created a new Arduino project called ‘I2C_Master_Tut_Mod1’ initially identical to ‘I2C_Master_Tutorial’ and verified that it worked properly with the unmodified ‘I2C_Slave_Tutorial’ project, and then started modifying it.

  • Added #include <PrintEx.h> //allows printf-style printout syntax
    StreamEx mySerial = Serial; //added 03/18/18 for printf-style printing, and changed all occurrences of ‘Serial’ to ‘mySerial’ and modified print statements to ‘printf’ format. All worked fine.
  • Added #include <I2C_Anything.h> and changed loop() to write a float value to the slave.  However, this caused a compile error.  When I started tracking it down I realized that I had previously modified ‘I2C_Anything.h’ to #include SBWIRE.h rather than the original #include Wire.h.  So, this could be the culprit, assuming that there is something about SBWIRE that causes Arduino I2C slaves to misbehave.
  • Copied I2C_Anything.h/cpp from the Libraries folder to my new ‘I2C_Master_Tut_Mod1’ folder, and un-modified it to reference back Wire.h vs SBWIRE.h.

25 March 2020 Update:

As it turns out, I was never able to get the ‘I2C_Master_Tut_Mod1’ project working with #include <I2C_Anything.h>, no matter what I did, including copying the #include <I2C_Anything.h> to the local folder and including it as a resource in the VS2019 project, build cleans, removing the _vm folder created by Visual Micro, etc.  To make it even stranger, I created a new Arduino project in VS2019 called ‘I2C_Master_Tut_Mod2’, copied the ‘I2C_Anything.h’ file to the local folder, and it compiles without error!  So now I have two almost completely identical Arduino projects, one of which compiles fine and the other of which blows a bunch of errors about twi.c.  I’m currently working with Tim Leek at VisualMicro to sort out what I did wrong.

27 March 2020 Update:

Ok, I now have the Arduino Mega master to Arduino Uno slave setup working, with I2C_Anything.  I can transfer a float value in either direction with no problem. The master and slave code sets and the corresponding outputs are shown below.

Master Sketch:

Slave Sketch:

Master  & Slave Output:

So now I have a working Arduino-Arduino I2C master/slave setup, using I2C_Anything.h to transfer float values back and forth, and a working Teensy-Teensy I2C master/slave setup, using I2C_Anything.h to transfer float values back and forth. 
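For reference, the core of the float transfer with I2C_Anything boils down to the two snippets below (a sketch only; the slave address and the float values are placeholders).  Nick Gammon’s I2C_writeAnything()/I2C_readAnything() templates just shuttle the raw bytes of whatever variable you hand them through the Wire buffer:

// ----- Master (Mega 2560) -----
#include <Wire.h>
#include <I2C_Anything.h>

const byte SLAVE_ADDR = 0x20;  // placeholder slave address

void setup()
{
  Serial.begin(115200);
  Wire.begin();
}

void loop()
{
  float outVal = 3.14159;           // send a float to the slave
  Wire.beginTransmission(SLAVE_ADDR);
  I2C_writeAnything(outVal);
  Wire.endTransmission();

  float inVal = 0;                  // then ask the slave for a float back
  Wire.requestFrom(SLAVE_ADDR, (byte)sizeof(inVal));
  I2C_readAnything(inVal);
  Serial.println(inVal, 4);
  delay(1000);
}

// ----- Slave (Uno) -----
#include <Wire.h>
#include <I2C_Anything.h>

volatile float lastReceived = 0;

void receiveEvent(int numBytes)
{
  float f;
  if (numBytes >= (int)sizeof(f))
  {
    I2C_readAnything(f);    // pull the master's float out of the receive buffer
    lastReceived = f;
  }
}

void requestEvent()
{
  float reply = 2.71828;    // placeholder value to send back
  I2C_writeAnything(reply);
}

void setup()
{
  Wire.begin(0x20);                 // join the bus as a slave
  Wire.onReceive(receiveEvent);
  Wire.onRequest(requestEvent);
}

void loop() {}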

The next step was to replace the Arduino Uno I2C slave with the Teensy 3.2 I2C slave and see if I can get that combination working.  This turned out to be surprisingly easy – basically plug-and-pray (oops, I meant ‘play’).

Now that I have a simple Arduino Mega Master – Teensy 3.2 Slave setup working, I can start to explore why my 4WD robot setup doesn’t work.  Some possibilities:

  • It could be that the SBWire version of the Wire library has some hidden bugs in the slave-related code.  The SBWire library eliminates the well-known and well-hated ‘hang’ bug in the Arduino wire library.
  • It could be that some other library I am using, like the RTC library or the Adafruit FRAM library is interfering with the Arduino-Teensy interrupts needed for master-slave communication.
  • It could be that the IR demod program running on the slave Teensy is somehow interfering with master-slave communications (this is very unlikely as this same program had been working flawlessly for several months).
  • Something else…

To start with, I copied the current ‘I2C_Master_Tut2’ VS2019 project to a new one – ‘I2C_Master_Tut3’ before making any modifications.  I plan to keep a ‘breadcrumb trail’ of incrementally modified projects so that I can go backwards in case I get lost at some point (and, in my fairly long 50+ years as a practicing engineer, I have learned that ‘getting lost’ is part and parcel of any non-trivial troubleshooting project).

Once I verified that ‘Tut3’ properly communicated with the Teensy slave, I started making modifications.

  • Replaced #include <Wire.h> with #include “SBWire.h”.  This failed in compile with a linker error, but succeeded after I did a ‘Build -> Clean’.  After uploading the new ‘Tut3’ version to the Mega, I found that master-slave communications still worked properly.  This is actually a welcome development, as that now eliminates SBWire as the culprit for lost Mega-Teensy I2C capability on the robot.
  • Replaced #include “SBWire.h” with #include “MPU6050_6Axis_MotionApps_V6_12.h” and #include “I2Cdev.h” (which includes SBWire.h).  These two libraries are required for interfacing to the MPU6050 IMU module.

29 March 2020 Update:

So I created a new copy of the I2C master, ‘I2C_Master_Tut4’ and started adding things from the original non-working robot program.

  • Added all the remaining #include’s.  Still works fine with the Teensy 3.2 slave
  • Added all the #defines. Still works fine
  • Added all the IRHOMING parameters.  OK
  • Added all ENUMs, Battery constants, distance measurement support constants & array declarations,  motor parameters, wheel direction constants and variables, and heading based turn parameters.  No problem
  • Added pin assignments.  No problem.
  • Added all the other pre-setup stuff.  No problem
  • Added ‘ Serial1.begin(9600); //03/04/16 bugfix’. This isn’t actually used anymore in the robot project, but it is there, so it could be the culprit.  Nope –  master/slave comms still OK.
  • Added RTC initialization and  support function.  Still no problem
  • Added FRAM initialization.  No problem.
  • Added MPU6050 initialization.  No problem
  • Added PID distance array and incremental variance initialization.  No problem
  • Added I/O pin initialization.  No problem
  • Added the rest of Setup(), and all the support functions required for the POST check to run.  Compiles and runs fine.  Of course, no peripherals are attached, so not much happens.

At this point I have all the pre-setup and setup code incorporated into the master/slave example, and everything is still happily plugging away.  So obviously the culprit hasn’t yet been identified.  However, before going any farther, I think I’ll drop ‘Tut4’ into the safe-deposit box and create a ‘Tut5’ to continue on into the loop() function.

OK, so I have a ‘Tut5’ project now, with everything up to loop() incorporated from the ‘FourWD_WallE2_V2’ robot project.  Now the question is, ‘What next?’.  I need to figure out what is causing I2C comms between the Mega master and the Teensy 3.2 slave to fail, so I need a way to faithfully replicate/simulate/emulate the actual robot code for this function.

The current robot algorithm, as it pertains to the Teensy IR Demod module:

  • Each time through the loop, the current operating mode is determined.
    • If IsIRBeamAvail() returns TRUE, the IR HOMING mode is activated
      • IsIRBeamAvail() gets three float values from the Teensy 3.2, and returns TRUE if the total of the first two values (Fin1 & Fin2) is above a set threshold.  It is this function that is failing to communicate with the Teensy.  More specifically, it is this function that is not successfully acquiring the three float values, and subsequently always returns FALSE.

So, it may be that all I have to do is to call IsIRBeamAvail() by itself, and modify the slave code to send back three floats as expected.  If this works, then I’ll have to start suspecting the robot’s Teensy or the I2C wiring, or something else entirely.
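A bare-bones version of such a function might look like the following (the slave address, threshold, and parameter list are placeholders; the real IsIRBeamAvail() differs in detail).  On the slave side, the emulation just queues up three floats with I2C_writeAnything() inside its onRequest() handler:

#include <Wire.h>
#include <I2C_Anything.h>

const byte IR_DEMOD_SLAVE_ADDR = 0x08;   // placeholder slave address
const float IR_BEAM_THRESHOLD = 100.0f;  // placeholder detection threshold

// Get Fin1, Fin2, and the steering value from the Teensy IR demod module;
// return TRUE if the combined signal level says the IR beam is present
bool IsIRBeamAvail(float& Fin1, float& Fin2, float& steerVal)
{
  const byte numBytes = 3 * sizeof(float);
  if (Wire.requestFrom(IR_DEMOD_SLAVE_ADDR, numBytes) != numBytes)
  {
    return false;   // slave didn't answer with the expected 12 bytes
  }
  I2C_readAnything(Fin1);
  I2C_readAnything(Fin2);
  I2C_readAnything(steerVal);
  return (Fin1 + Fin2) > IR_BEAM_THRESHOLD;
}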

2 April 2020 Update:

After some additional fumbling around with I2C_Anything and the PrintEx library printf() formats, I now have a working ‘IsIRBeamAvail()’ function in ‘Tut5’, as follows:

Which, when connected to my basic  Teensy 3.2 I2C slave program produces the following (correct) output:

At this point we have a working robot code emulation that communicates successfully with a basic Teensy 3.2 slave.  So, the available culprits have been reduced significantly to

  • Something about the Teensy IR demod code
  • The I2C Wiring
  • A hardware problem at either the robot’s Mega controller or the robot’s Teensy 3.2 controller
  • Something else.

For the next step, I modified the Teensy IR Demod code to emulate the IR detector response and loaded this code into the Teensy 3.2 slave connected to the Mega master.  After some initial mis-steps, it started working nicely, with the following (correct) output from the Mega master:

So this eliminates the ‘Something about the Teensy IR demod code’ possibility from the above list.

Next, I replaced the direct SCL/SDA jumpers with the daisy-chain wiring from the robot, and miracle of miracles, I2C comms failed – YAY!!.  Now hopefully I can figure out (and fix) the problem.

  • First, I un-replaced the daisy-chain wiring to confirm that I2C comms were still working, and they were.
  • Unplugged the daisy-chain from the FRAM, 6050IMU, and RTC modules – now working OK
  • Plugged back in to the FRAM unit (now have FRAM and Teensy 3.2 in chain) – still OK
  • Plugged back in to the 6050 IMU (now have IMU, FRAM, and Teensy 3.2 in chain) – failed
  • Plugged back in to the RTC (now have RTC, FRAM, and Teensy 3.2 in chain) – failed

3 April 2020 Update:

The next step in the saga was to load the ‘Tut5’ code onto the robot’s Mega 2560 controller, with the slave code still running on the original (off-robot) Teensy 3.2.  Setting things up in this fashion allowed the sensors to receive power from the robot’s Mega controller as in normal operation.

In the photo above, the Teensy 3.2 slave is shown to the left of the robot.  The I2C ‘daisy-chain’ cable starts at the Mega controller (just to the right of the front left wheel), and goes through three ‘hops’ to the Teensy. With this setup, the I2C comms code still ran fine, with direct I2C jumper wires or with the daisy-chain wiring setup (with no connections to the other I2C sensors) as shown above.

The next step was to connect the robot’s Mega controller to the robot’s Teensy 3.2 running the unmodified IR Demodulation code, with the I2C daisy-chain cable, but without anything else connected.  This actually worked!  So now I have a working robot system again, which means something about the connections with the other sensors must be killing the I2C link to the Teensy.  Here’s a scope photo of the SDA line showing the data activity. The vertical scale is 1V/cm, showing the HIGH value is about 3.8V, so the Mega must be comfortable with that value as a HIGH logic level.

In the above photo, notice the business card for Probe Master, Inc (www.probemaster.com).  These folks were kind enough to replace my ten year old scope probe with a new upgraded version at no cost – thank you Probe Master!

At this point I have the ‘Tut5’ program running on the Mega, and the unmodified IR Demod code running on the robot’s Teensy 3.2, and the Mega appears to be acquiring valid demod data from the Teensy.  To further validate the data, I fired up my charging station with its square-wave modulated IR beam, and was pleased to see that the acquired data from the Teensy varied appropriately as I manually rotated the robot, as shown in the following Excel chart.

So, after all this work, the whole thing came down to a bad I2C ‘daisy-chain’ cable. This particular cable has been on the robot for a while, and was the soldering graduation exercise of my grandson Danny when he was here a couple of summers ago.  It wasn’t the best job I’ve ever seen, but it was pretty good for a 15-year-old ;-).  In any case, I took the opportunity to build a new cable with smaller diameter wire so things would fit a little bit better into the Pololu pins, sockets, and 2-pin header sleeves as shown in the following photo.

New I2C ‘daisy-chain’ cable.  Run starts from the Mega connector at bottom left, then to the RTC, IMU and FRAM modules in that order, then last to the Teensy 3.2 IR Demod module.

05 April 2020 Epilogue:

Well, there was one last ‘gotcha’ in all this.  When I loaded the original ‘FourWD-WallE2_V2.ino’ program back onto the Mega, it still refused to acquire valid data from the Teensy IR Demod module.  So, I compared the ‘Tut5’ code to the ‘_V2’ code, and noticed three significant differences:

  • In the ‘_V2’ code, the ‘Fin1/2’ variables had at some time been changed from ‘long’ to ‘int’. While this sounds reasonable, it isn’t – a Teensy ‘int’ is 4 bytes, the same size as a Mega ‘long’, while a Mega ‘int’ is only 2 bytes, so the received bytes no longer match the variables they are being unpacked into.
  • In the ‘_V2’ code, the  Wire.requestFrom() call asks for 32 bytes, but the Teensy only sends 12.
  • In the ‘_V2’ code there is a loop that waits for either a timeout or receiving an entire 32-byte buffer from the Teensy.  Apparently the Teensy (and/or the Mega) is slow enough so that Wire.available() never reports a non-zero buffer size, so the loop times out.

Replacing the line

with the line

did the trick, but I have no idea why.

Replacing the debug printout lines

with the line

Produced lines like this one from the ‘_V2’ program

Which appear to be the correct values for the given robot orientation with respect to the charging station.

With nothing else changed except removing power from the charging station, the output became

showing definitively that the ‘_V2’ program correctly identified and decoded the square-wave modulated IR beam.

So, after a two-week off-road trip to I2C comms hell and back, I think I finally have that particular issue put to bed.  It wasn’t exactly what I wanted to do, but it was at least interesting, and more important – consumed a lot of coronavirus quarantine time ;-).

Stay tuned!

Frank

Arduino SPI Data Exchange Between Two Arduinos in a Master/Slave Configuration

posted 13 March 2020,

While updating my four wheel autonomous wall-following robot (aka Wall-E2), I ran into a roadblock when I couldn’t get my Teensy 3.2 IR demodulator/tracker module to communicate with the Mega 2560 main microcontroller over I2C.  This worked fine when I last tested it, but now it seems to have taken a vacation.  It’s been a while, and I have added some functionality since I originally installed and tested IR homing to the charging station, but it still should all work, right?

In any case, after trying (and failing) to get the I2C bus connection working, I thought I might try an alternate solution and just use SPI between the main controller (Mega 2560) and the IR homing controller (Teensy 3.2).  This would have the advantage of making the Teensy independent of the I2C bus, and also give me the chance to play with a part of the Arduino ecosystem that I haven’t used before.

As usual, I started this process with a lot of Googling, and quickly ran across Australian Nick Gammon’s “SPI – Serial Peripheral Interface – for Arduino” post.  This post was way more than I ever wanted to know about SPI, but it sure is complete!

Then I moved on to trying to get SPI working for myself.  As noted above, my intended application is to transfer steering values from my IR detector/demodulator/homing module to the main Mega 2560 microcontroller. The Mega uses the steering data to adjust wheel speeds to home in on a charging station, so Wall-E2 can continue to roam our house autonomously.

It turns out that SPI between two Arduinos isn’t entirely straightforward, at least not at first.  I went through a bunch of iterations, and even more passes through the available documentation.  In the end though, I think I wrassled it into a reasonable facsimile of a working solution, as shown below.  The program repeatedly transfers a fixed string (“Hello World!”) from the UNO (master) to the Mega 2560 (slave), and then transfers the string version of a float value (3.159) from the slave to the master.
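In skeleton form (a sketch only, along the lines of Nick Gammon’s write-up; the timing delays and buffer handling here are placeholders rather than my actual listings), the exchange works like this: the UNO master clocks the string out one byte at a time and then clocks in the slave’s reply, while the Mega slave does all of its work inside the SPI interrupt:

// ----- UNO master sketch -----
#include <SPI.h>

void setup()
{
  Serial.begin(115200);
  digitalWrite(SS, HIGH);               // make sure the slave isn't selected yet
  SPI.begin();
  SPI.setClockDivider(SPI_CLOCK_DIV8);  // slow the clock down a bit for jumper wires
}

void loop()
{
  char reply[16];
  byte idx = 0;

  digitalWrite(SS, LOW);                // select the slave
  for (const char* p = "Hello World!\n"; *p; p++)
  {
    SPI.transfer(*p);
    delayMicroseconds(20);              // give the slave ISR time to keep up
  }
  // clock out dummy bytes to collect the slave's reply string
  while (idx < sizeof(reply) - 1)
  {
    char c = SPI.transfer(0);
    delayMicroseconds(20);
    if (c == '\n' || c == 0) break;
    reply[idx++] = c;
  }
  reply[idx] = 0;
  digitalWrite(SS, HIGH);               // deselect the slave

  Serial.print("From slave: ");
  Serial.println(reply);                // prints the string version of the float
  delay(1000);
}

// ----- Mega 2560 slave sketch -----
#include <SPI.h>

volatile char buf[] = "3.159\n";        // string version of the float to send back
volatile byte pos = 0;

void setup()
{
  pinMode(MISO, OUTPUT);                // slave drives MISO
  SPCR |= _BV(SPE);                     // SPI enabled, slave mode (MSTR = 0)
  SPCR |= _BV(SPIE);                    // interrupt on every completed transfer
}

ISR(SPI_STC_vect)                       // fires for each byte clocked in by the master
{
  byte c = SPDR;                        // byte just received from the master
  if (c == '\n') pos = 0;               // master finished its string; restart the reply
  SPDR = buf[pos];                      // preload the next reply byte
  if (buf[pos] != 0) pos++;
}

void loop() {}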

The Circuit:

The circuit and layout are just about as basic as it gets.  The UNO (master) connections are the default pinouts:

UNO (master) connected to Mega 2560 (slave) using default SPI pins on both ends

 

The Master:

The Slave:

The above configuration produced the following output:

Master:

 

Slave:

Now that I’ve had my fun and games with SPI, I’m still not at all sure I want to use it to connect the Mega to the Teensy IR Demodulator/Homing module.  While it is almost certain to work (eventually), it has some drawbacks:

  • It requires 4 additional wires
  • It requires that I add an ISR and other code to the Teensy module, and companion code to the Mega
  • More things to go wrong.

So, I think I’ll try again to get the original I2C based code working, as it worked before, so it should work again.  If I can’t get it working after another good try, I’ll go with door #2 (SPI)

Stay tuned!

Frank

 

Updating the Four Wheel Robot

Posted 29 February 2020,

Happy Leap Day!

For the last several months I have been using my older 2 wheel robot to investigate improved wall following techniques using relative heading from the onboard MPU6050 IMU module.  As you may recall (and if you can’t, look at this summarizing post) I had a heck of a time achieving reliable operation with the MPU6050 module mounted on the two wheel robot.  In the process of figuring all that out, and in collaboration with Homer Creutz, we also developed a nifty polling algorithm for obtaining heading information from the MPU6050, a method that has now been incorporated into Jeff Rowberg’s fantastic I2CDev ecosystem.

After getting the MPU6050 (and the metal-geared motors on the 2 wheel robot) to behave, I was also able to significantly enhance wall-following performance (at least for the 2 wheel robot).  Now it can start from any orientation relative to a nearby wall, figure out an approximate parallel heading, and then acquire and then track a specified offset distance from the wall – pretty cool, huh?

So, now it is time to integrate all this new stuff back into the 4-wheel robot, and see if it will translate to better autonomous wall-tracking, charging station acquisition, obstacle avoidance, and doing the laundry (well, maybe not the last one).  The major changes are:

  • Update the project with the newest MPU6050 libraries:
  • Revise the original 4 wheel code to use polling vs interrupt for heading values
  • Installation of the ‘FindParallelHeading()’ function and all its support routines
  • Integration of the parallel heading determination step into current wall tracking routines
  • Verification of improved functionality
  • Verification that the new work hasn’t degraded any existing functionality
  • Incorporating heading and heading-based turn capabilities into obstacle avoidance

To implement all the above, while attempting to insulate myself from the possibility of a major screwup, I created a brand-new Arduino project called ‘FourWD_WallE2_V2’ and started integrating the original code from ‘FourWD_WallE2_V1’ and ‘TwoWheelRobot_V2’.

Update the project with the newest MPU6050 libraries:

The original FourWD_WallE2_V1 project used the older MPU6050_6Axis_MotionApps20.h library, but the two wheel robot uses the newer MPU6050_6Axis_MotionApps_V6_12.h one.  In addition, Homer Creutz had updated the new library even further since its incorporation into the two wheel robot.  The first step in updating the 4 wheel robot was to re-synchronize the library on my PC with the newer version on GitHub.  This was accomplished very easily – yay!  The next step was to copy most of the #includes and program constants from the original 4 wheel project into the new one, and then get the resulting skeleton program to compile.  This took a few tries and the addition of several files into the project folder as ‘local’ resource and header files, but it got done.  At the conclusion of this step, the project had empty setup() and loop() functions and no auxiliary/support functions, but it did compile – yay!

Revise the original 4 wheel code to use polling vs interrupt for heading values

The original project uses a flag modified by an ISR (Interrupt Service Routine) to mediate heading value acquisition.  The two wheel robot uses a polling-based routine to do the same thing.  However, the algorithm used by the two wheel robot isn’t exactly the same as the one provided by Homer Creutz as part of the new MPU6050_6Axis_MotionApps_V6_12.h library.  In addition, the two wheel robot uses a different naming convention for the current heading value retrieved from the IMU. The 4-wheel robot uses ‘global_yawval’, and the 2-wheel one uses ‘IMUHdgValDeg’.  The 4-wheel robot uses ‘bool GetIMUHeadingDeg()’ to retrieve heading values, but the 2-wheel robot uses ‘bool UpdateIMUHdgValDeg()’ to better indicate its function.  So, all instances of ‘global_yawval’ will need to be changed to ‘IMUHdgValDeg’, and references to ‘GetIMUHeadingDeg()’ will have to instead reference ‘UpdateIMUHdgValDeg()’.
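
For reference, the polling-based version ends up looking roughly like the sketch below; GetCurrentFIFOPacket() comes from the 2-wheel project (its exact signature may differ here), and the MPU6050 object and DMP packet size are globals that live elsewhere in the project.

#include "MPU6050_6Axis_MotionApps_V6_12.h"

extern MPU6050 mpu;
extern uint16_t packetSize;
bool GetCurrentFIFOPacket(uint8_t* buf, uint16_t len);

float IMUHdgValDeg = 0;          // replaces the old 'global_yawval'

bool UpdateIMUHdgValDeg()
{
  uint8_t fifoBuffer[64];        // holds one DMP packet

  // drain any stale packets and grab the newest one, so heading values can be
  // fetched whenever they're needed - no interrupt or ISR flag required
  if (!GetCurrentFIFOPacket(fifoBuffer, packetSize))
  {
    return false;                // no valid packet available this pass
  }

  Quaternion q;
  VectorFloat gravity;
  float ypr[3];

  mpu.dmpGetQuaternion(&q, fifoBuffer);
  mpu.dmpGetGravity(&gravity, &q);
  mpu.dmpGetYawPitchRoll(ypr, &q, &gravity);

  IMUHdgValDeg = ypr[0] * 180.0 / M_PI;   // yaw, in degrees
  return true;
}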

I started this step by copying the entire ‘setup()’ and ‘loop()’ function contents from the old 4-wheel robot project to the new one, and then working through the laborious process of getting everything to compile with the new variable and function names.  First I just started with ‘setup()’, and kept copying over the required support functions until I’d gotten everything.  For each support function I checked the corresponding function in the 2 wheel project to make sure I wasn’t missing an update or enhancement.  BTW, the combination of Microsoft’s Visual Studio and Visual Micro’s wonderful Arduino extension made this much easier, as the non-compiling code is highlighted in red in the margins of the VS edit window, reducing the need for multiple compiles.  The affected functions were:

  • GetDayDateTimeStringFromDateTime(now, buffer): not in 2 wheel project
  • GetLeft/RightMedianDistCm():  Eliminated – these were never really used.  Replaced where necessary with GetAvgLeft/RightDistCm() from the two wheel project.
  • GetFrontDistCm(): Not used in 2 wheel project
  • dmpDataReady(): ISR for MPU6050 interrupts. Replaced with polling strategy
  • StopLeft/RightMotors(): Not used in 2 wheel robot – copied unchanged
  • SetLeft/RightMotorDir(): Not used in 2 wheel robot – copied unchanged
  • RunBothMotors(), RunBothMotorsMsec(): Not used in 2 wheel robot – copied unchanged
  • IsCharging(): Not used in 2 wheel robot – copied unchanged.

At this point, the entire setup() function compiles without error, and the setup() code runs properly. The next step is to add in the loop() functionality and then modify as necessary to replace interrupt-based heading acquisition with polling-based, replace ‘global_yawval’ with ‘IMUHdgValDeg’, and to replace ‘GetIMUHeadingDeg()’ with ‘UpdateIMUHdgValDeg()’

Notes:

  1. Revise UpdateWallFollowMotorspeeds as necessary to incorporate heading-based offset tracking
  2. Revise/Replace RollingTurn() & GetIMUHeadingDeg() as necessary – done
  3. Global replace of global_yawval with IMUHdgValDeg showed 25 replacements
  4. Replaced ‘GetIMUHeadingDeg()’ with ‘UpdateIMUHdgValDeg()’ from 2 wheel robot project
  5. Added GetCurrentFIFOPacket() from 2 wheel robot project
  6. Replaced ‘if(devStatus == 0)’ block with the one from 2 wheel project
  7. Had to comment out PrintWallFollowTelemetry(frontvar),  FillPacketFromCurrentState(CFRAMStatePacket* pkt), and DisplayHumanReadablePacket(CFRAMStatePacket* pkt) to get everything to compile.

At the conclusion of all the above, the _V2 project now compiles completely.

1 March 2020 Update:

After getting the entire program to compile, I decided to try some simple tests of heading-based turn capability, so I modified setup() to have the robot do some simple S-turns, and then a backup-and-turn procedure.  As the accompanying video shows, this seemed to work quite well.  This is very encouraging, as it demonstrates polling-based rather than interrupt-based MPU6050 heading value acquisition and verifies that the latest MPU6050 libraries work properly.

The next step was to incorporate the ‘command mode’ facility from the two wheel robot. This facility allows a user within range of the Wixel RF link to take over the robot and issue movement commands, like a crude RC controller.  After making these changes, I was able to take control of the robot and manually command some simple maneuvers as shown in the following video.

As shown above, the left 180 degree turn as currently implemented for the 4-wheel robot takes forever!  I’ll need to work on that.

04 March 2020 Update:

I lowered the value of  the OFFSIDE_MOTOR_SPEED constant while leaving the DRIVESIDE_MOTOR_SPEED constant unchanged to make turns more aggressive, as shown in the following set of three video clips.  In the first two, the OFFSIDE_MOTOR_SPEED is 0, while in the last one, it is 25.  I think I’ll leave it set to 25 for the foreseeable future.

 

Stay tuned,

Frank

06 April 2020 Update:

After a two-week trip to I2C hell and back, I’m ready to continue the project to update my autonomous wall-following robot with new heading-based turn and tracking capability. The off-road trip was caused by (I now believe) the combination of a couple of software bugs and an intermittent I2C ‘daisy-chain’ cable connecting the Arduino Mega controller to four I2C peripherals. See this post for the gory details.

Installation of the ‘FindParallelHeading()’ function and all its support routines:

In the TwoWheelRobot project, the ‘FindParallelHdg()’ function is used to orient the robot parallel to the nearest wall in preparation to approaching and then tracking a specified offset distance.  The algorithm first determines the parallel heading by turning the robot and monitoring the distance to the near wall.  Once the parallel heading is determined, the robot turns toward or away from the wall as necessary to capture and then track the desired offset distance.

Here’s a short video and telemetry from a representative run in my ‘indoor range’.

So, now to port this capability to the FourWD_WallE_V2 project:

08 April 2020 Update:

The current operating algorithm  for WallE2 is pretty simple.  Every 200 mSec or so it assesses the current situation and decides on a new operating mode.  This in turn allows the main code in loop() to decide what to do.

Here’s the code for GetOpMode()

In terms of the project to port heading-based specified-offset wall tracking to WallE2, the only pertinent result from GetOpMode() is the default MODE_WALLFOLLOW result.

In the main loop(), a switch(CurrentOpMode) decides what actions, if any, need to occur.  Here’s the relevant section of the code.  In the MODE_WALLFOLLOW case, the first thing that happens is an update of the TRACKING CASE via

‘TrackingCase = (WallTrackingCases)GetTrackingDir()’

which returns one of several tracking cases: TRACKING_RIGHT, TRACKING_LEFT, or TRACKING_NEITHER.  A switch(TrackingCase) handles each case separately.

The two-wheel robot code uses a very simple left/right distance check to determine which wall to track.  In the four-wheel code, the ‘GetTrackingDir()’ function uses a LR_PING_AVG_WINDOW_SIZE-point running average for left & right distances and returns TRACKING_LEFT, TRACKING_RIGHT, or TRACKING_NEITHER enum value.

09 April 2020 Update:

It looks like the two-wheel code actually checks the L/R distances twice; once in GetParallelHdg() and again in the main loop() code.  Once it determines which wall to track, then it uses the same code for both tracking directions, with a ‘turndirection’ boolean to control which way the robot actually turns (CW or CCW) to effect capture and tracking.

The four-wheel code uses two LR_PING_DIST_ARRAY_SIZE buffers – aRightDist & aLeftDist – to hold ‘ping’ measurements. These arrays are updated every MIN_PING_INTERVAL_MSEC with the latest left/right distances, pushing older measurements down in the stack.  The ‘GetTrackingDir()’ function computes the average of the LR_PING_AVG_WINDOW_SIZE latest measurements for tracking direction (left/right/neither) determination.

There are also two utility functions ‘GetAvgLeftDistCm()’ and  ‘GetAvgRightDistCm()’ that are used in several places, but they don’t do a running average of the last LR_PING_AVG_WINDOW_SIZE measurements; instead they do an average of the first LR_PING_AVG_WINDOW_SIZE ones!  Fortunately for the program right now the LR_PING_AVG_WINDOW_SIZE and LR_PING_DIST_ARRAY_SIZE values are the same — 3.

So, I think part of the port needs to involve normalizing the distance measurement situation.  I think the proper way to do this is to revise the ‘GetAvgLeftDistCm()’ & ‘GetAvgRightDistCm()’ functions to compute the running average as is currently done in GetTrackingDir(), and then call those functions there.  This considerably simplifies GetTrackingDir() and increases its cohesion (in the software engineering sense) as it no longer contains any computations not directly related to its purpose. DONE in the FourWD_WallE2_V3 project.
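
In sketch form, the normalized version looks something like this; the array and constant names come from the paragraphs above, while the array ordering (newest measurement at the end) and the MAX_TRACKING_DIST_CM value are assumptions.

const int LR_PING_DIST_ARRAY_SIZE = 3;
const int LR_PING_AVG_WINDOW_SIZE = 3;
const int MAX_TRACKING_DIST_CM    = 100;   // 'close enough to track' limit (assumed)

int aLeftDist[LR_PING_DIST_ARRAY_SIZE];    // updated every MIN_PING_INTERVAL_MSEC
int aRightDist[LR_PING_DIST_ARRAY_SIZE];

int GetAvgLeftDistCm();                    // identical to the function below, but for aLeftDist[]

int GetAvgRightDistCm()
{
  long sum = 0;
  // average only the most recent LR_PING_AVG_WINDOW_SIZE measurements
  for (int i = LR_PING_DIST_ARRAY_SIZE - LR_PING_AVG_WINDOW_SIZE; i < LR_PING_DIST_ARRAY_SIZE; i++)
  {
    sum += aRightDist[i];
  }
  return (int)(sum / LR_PING_AVG_WINDOW_SIZE);
}

enum WallTrackingCases { TRACKING_NEITHER, TRACKING_LEFT, TRACKING_RIGHT };

WallTrackingCases GetTrackingDir()
{
  // with the averaging pushed down into the helper functions, all that's left
  // here is the comparison itself
  int leftCm  = GetAvgLeftDistCm();
  int rightCm = GetAvgRightDistCm();

  if (leftCm > MAX_TRACKING_DIST_CM && rightCm > MAX_TRACKING_DIST_CM)
  {
    return TRACKING_NEITHER;
  }
  return (leftCm <= rightCm) ? TRACKING_LEFT : TRACKING_RIGHT;
}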

The ‘GetTrackingDir()’ function is called in only one place – at the top of the ‘case MODE_WALLFOLLOW:’ block of the ‘ switch (CurrentOpMode)’ switch statement.  The ‘TrackingCase’ enum value returned by ‘GetTrackingDir()’ is then used  within the MODE_WALLFOLLOW:’ block in a new ‘switch (TrackingCase)’ switch statement to determine the appropriate action to be taken.  Here’s the relevant code section:

Looking at just the ‘TRACKING_LEFT’ section:

There are three potential actions available in this section; a ‘back up and turn’ obstacle avoidance maneuver, a ‘step-turn to the right’ “upcoming obstacle” avoidance maneuver, and a “continue wall tracking” motor speed adjustment action.

It’s the “continue wall tracking” action that is of interest for porting the new tracking algorithm.  At this point, if this is the first time for the TRACKING_LEFT mode, the robot needs to execute the FindParallelHdg() routine, then capture the offset distance, and then start tracking.  If the previous mode was TRACKING_LEFT, then just continue tracking.

A potential problem with the port idea is that the ‘FindParallelHdg()’ and offset capture routines are ‘blocking’ functions, so if something else happens (like the robot runs into an obstacle), it might not ever recover.  In the current four wheel code, this is handled by checking for obstacle clearance each time through the MIN_PING_INTERVAL_MSEC interval check.  Maybe I can incorporate this idea into the ‘capture’ and ‘maintenance’ phases of the angle-based wall tracking algorithm.  Maybe something like the following state diagram?

Possible state diagram for the TRK_RIGHT case

11 April 2020 Update:

I’m concentrating on the TRACKING_RIGHT sub-case in MODE_WALLFOLLOW, because my ‘local’ (in my office) test range is optimized for tracking the right-hand wall, and I figure I should work out the bugs on one side first.

  • Ported the ‘FindParallelHdg()’ code from the two wheel to the four wheel project, and in the process I changed the name to ‘RotateToParallelOrientation()’ to more accurately describe what the function does.
  • In porting over the actual code that decides what ‘cut’ to use to capture and maintain the desired offset, I realized this should be its own function so it can be called from both the left- and right-hand tracking algorithms.  Then I discovered it already was a function in the two wheel program – but wasn’t being used that way for some unknown reason.  Ported the ‘MakeTrackingTurn()’ function to FourWD_WallE_V3

26 April 2020 Update – Charging Station:

While trying (and failing so far) to work out the wall-following ‘capture/maintain’ algorithms for the four wheel robot, the battery voltage got down to the point where the ‘GetOpMode()’ function was starting to return DEAD_BATTERY. So, I decided this was a good time to complete the required software & hardware modifications to the charging station to work with the new 90 mm x 10 mm wheels I recently added to the robot. To accommodate the larger  diameter wheels I raised the entire charging station electronics platform up by some 14  mm. To accommodate the much narrower wheel width, I had to completely redo the wheel guard geometry, which also required re-aligning the charging station approach guides.

When I was all done with the required physical mods, I discovered that although the robot would still home to the charging station, it wouldn’t shut off its motors when it finally connected to the charging probe. I could see from telemetry that the probe plug had successfully engaged the probe jack’s integrated switch, but the motors continued to run.   However, if I lifted the robot slightly off its wheels from the back while keeping it plugged in, the motors stopped immediately, and the charging operation proceeded normally.  It appeared for all the world like the plug was only partially engaged into the jack. So, I tried the same trick with the robot on its stand (this raises the robot up slightly so the motors can turn without the robot running off on me) and a second probe plug tied to the power supply but physically separate from the charging station, and this worked fine; as soon as the plug was pushed into the jack, the motors turned off and life was good.

So, back to the charging station; I thought maybe the plug wasn’t making full contact due to misalignment and after critically examining the geometry I made some adjustments.  However, this did not solve the problem, even when the plug was perfectly aligned with the jack.  But, with the plug alignment cone attached to the robot it is hard to see whether or not the plug is fully inserted into the jack, so I still thought that maybe I just needed to have the robot plug in with a bit more authority. To this end I modified the software to monitor the LIDAR distance measurement as the robot approached the charging station, and have the robot speed up to max wheel speed when it got within 20 cm.  I also printed up a ‘target panel’ for the charging station so the LIDAR would have a consistent target to work with.  This worked great, but still didn’t solve the problem; the robot clearly sped up at the end of the approach maneuver, but also still literally “spun its wheels” after hitting the charging station stops.  Lifting the back slightly still caused the motors to stop and charging to proceed normally.  However, I was now convinced this phenomenon wasn’t due to plug/jack misalignment, and I had already confirmed that all electrical connections were correct. So, having eliminated the hardware, I concluded the software must be at fault.

So, now I was forced to dissect the software controlling the transition from wall-following to IR homing to battery charge monitoring.  In order for the robot to transition from the IR Homing mode to the charge monitoring mode, the following conditions had to exist:

  • The physical charging jack switch must be OPEN, causing the voltage read by the associated MEGA pin to go HIGH.
  • 12V must be present at the charging jack +V pin
  • The difference between the Total current (measured by the INA169 ‘high-side’ current sensor located between the charging jack and the battery charger) and the Run current (measured by the INA169 between the power switch and the rest of the robot) must be positive and greater than 0.5A.

Of the above conditions, I was able to directly measure the first two in both the ‘working’ (with the robot on its stand and using the auxiliary charging probe) and ‘non-working’ (when the robot engaged the charging plug automatically) conditions and verified that they were both met in both cases.  That left the third condition – the I_Total – I_Run condition.

The reason for the I_Total – I_Run condition is to properly manage charge cutoff at the 80% battery capacity point.  The robot has a resting (idle) current of about 0.2A, so a 0.5A difference would mean that the battery charge current has decreased to 0.3A, which is slightly above the recommended 90% capacity level (see this post for a more detailed discussion).  So, this condition is included in the ‘GetOpMode()’ algorithm for determining when the battery is charging, and when the battery is fully charged.  In normal operation, I_Total = 0 (nothing connected to the charger) and I_Run = Robot running current, so I_Total – I_Run < 0 and the IsCharging() function returns FALSE.  When the robot is connected to a charger, I_Total is usually around 1.5A initially, and I_Total – I_Run > 0.5, causing IsCharging() to return TRUE and the robot to enter the CHARGING mode, which disables the motors.  What I didn’t realize though is that the larger diameter wheels and better motors cause I_Run to be a lot higher than I had anticipated, which means that when the robot plugs into the charging station, the I_Run value goes over 2A with all four motors stalled. This in turn means that the I_Total – I_Run > 0.5 condition is never met, IsCharging() continues to return FALSE, and the motors never turn off – OOPS!
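
To make the failure mode concrete, here is an illustrative version of the charge-detection condition; the two current-sensor helper functions are stand-ins for however the INA169 outputs actually get read.

float GetTotalCurrentAmps();   // INA169 between the charging jack and the charger
float GetRunCurrentAmps();     // INA169 between the power switch and the robot

bool IsCharging()
{
  float ITotal = GetTotalCurrentAmps();
  float IRun   = GetRunCurrentAmps();

  // With the old wheels this worked: on connection ITotal jumps to ~1.5A while
  // IRun idles near 0.2A, so the difference easily clears 0.5A.  With the new
  // wheels/motors, IRun exceeds 2A while all four motors stall against the
  // charging-station stops, so the difference never reaches 0.5A and this
  // function keeps returning FALSE - hence the spinning wheels.
  return (ITotal - IRun) > 0.5;
}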

So, how to fix this problem?  It appears that I don’t want to use the I_Total – I_Run > 0.5 condition as part of the IR Homing –> CHARGING mode transition, but I do want to use it as part of the CHARGING –> CHARGE_DISCONNECT mode transition. This should work, but this exercise got me thinking about how the charge operation relates to the rest of the robot’s behavior.

The basic idea is for the robot to autonomously seek out and connect to one of a number of charging stations whenever it is ‘hungry’, defined by a battery voltage less than some set threshold, and to avoid those same stations when it isn’t.  When the robot is ‘hungry’ and a charging station signal is detected, the robot should home in on the station and connect to a charging probe, stop its motors, wait until the battery is 80% or so charged, and then back out of the station and go on its merry way.

As currently programmed, the robot operates in one of several modes as shown below.

The program calls a function called ‘GetOpMode()’ every 400 msec to determine the proper mode based on sensor input and (to some degree) past history.  The GetOpMode() function is shown below:

The return value from GetOpMode() is used in a SWITCH block to determine the appropriate actions to be taken, as shown below, with the MODE_WALLFOLLOW case reduced to one line for clarity:
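
In skeleton form it looks like this; the mode names follow the ones used in this post, while BackOutAndTurn90Deg() and DoWallFollow() are stand-ins for the corresponding code sections.

enum OpMode { MODE_CHARGING, MODE_IRHOMING, DEAD_BATTERY, MODE_WALLFOLLOW };
OpMode GetOpMode();
void StopLeftMotors();
void StopRightMotors();
void MonitorChargeUntilDone();
bool IRHomeToChgStn();
void BackOutAndTurn90Deg();      // stand-in for the back-away / turn-90-deg sequence
void DoWallFollow();             // stand-in for the (long) MODE_WALLFOLLOW case

OpMode CurrentOpMode;

void loop()
{
  CurrentOpMode = GetOpMode();   // (in the real program this is throttled to every ~400 mSec)

  switch (CurrentOpMode)
  {
    case MODE_CHARGING:
      StopLeftMotors();          // connected to the charging probe
      StopRightMotors();
      MonitorChargeUntilDone();  // blocking: returns at end-of-charge or timeout
      BackOutAndTurn90Deg();     // then disconnect and resume wall-following
      break;

    case MODE_IRHOMING:
      IRHomeToChgStn();          // homes if 'hungry', turns away if not; return value unused
      break;

    case DEAD_BATTERY:
      StopLeftMotors();          // too low to do anything but wait for help
      StopRightMotors();
      break;

    case MODE_WALLFOLLOW:
    default:
      DoWallFollow();            // the real case is much longer, as discussed below
      break;
  }
}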

In the normal sequence of events, the MODE_IRHOMING case will be executed first. If a charging station signal is detected, the robot will call IRHomeToChgStn() whether or not it is ‘hungry’.  However, if it is ‘not hungry’ IRHomeToChgStn() will return FALSE with the robot oriented 90 degrees to the charging station, which should cause the program to re-enter WALL_FOLLOW mode.  If the robot is ‘hungry’ and successfully connects to the charging probe, IRHomeToChgStn() returns TRUE.  In either case, the program exits the SWITCH block and goes back to the top of loop() to start all over again.  The return value of IRHomeToChgStn() is never actually used.

The next time GetOpMode() runs, if the robot is indeed connected to the charging station, GetOpMode() returns MODE_CHARGING.  When this section of the SWITCH statement is executed, the motors are stopped and MonitorChargeUntilDone() is called. This function is a blocking call, and doesn’t exit until the robot is fully charged or the timeout value has been reached.  When MonitorChargeUntilDone() returns, the robot backs away from the charging station, turns 90 deg, and returns to wall-following.

When looking back through the above paragraphs, it becomes clear that managing the charging process is broken into two separate but interdependent parts; GetOpMode() recognizes the conditions for entering charging and returns with MODE_CHARGING.  The MODE_CHARGING section of the SWITCH block in loop() actually executes the steps required to begin charging (like stopping the motors, turning off the red laser) and the steps required to disconnect at end-of-charge.

It also becomes clear that once the MODE_CHARGING section of the SWITCH statement is entered, it stays there until MonitorChargeUntilDone() returns at end-of-charge, and the robot disconnects from the charging station.  I think this means the GetOpMode() code can be significantly simplified and made much more readable.  Here’s the new version of GetOpMode():

 

29 April 2020 Update – Tracking (Again):

While screwing around with the charging station code, I managed to charge Wall-E2’s battery to the point where it refuses to connect to the charging station – the ‘Not Hungry’ condition. Rather than just let it run down by running the motors on its stand, I decided to continue working on the wall-following code improvements ported over from the two wheel model.

The last time I worked on the tracking code was back on 11 April when I ported the ‘RotateToParallelOrientation()’ and ‘MakeTrackingTurn()’ functions from the two wheel robot.

For reference, here’s where I ended up with the TRACKING_RIGHT case from last time:

The significant changes in the code from where I started are:

  • Disabled ‘Far obstacle’ detection while in ‘capture’ mode to avoid obstacle avoidance step-turns in the middle of trying to capture the desired offset distance
  • When a ‘far obstacle’ IS detected, the StepTurn() function is now called immediately, rather than just starting the turn and letting it play out over multiple passes through the loop. This is now possible due to having accurate relative heading information, and is a huge improvement over the old timed-turn method.
  • Tried, and then abandoned the idea of using the front LIDAR measurement to directly acquire the desired offset distance by turning 45 deg from parallel, driving until the front distance was sqrt(2) (1.414) times the desired distance, and then turning back to parallel.  This failed miserably because the robot’s turning radius is WAY too large to make this work.
  • ported the offset capture/maintenance code directly from the two wheel robot.

So, now the capture/maintenance code in the four wheel robot looks like this

I ran a couple of trials in my ‘local’ (office) test range and they looked promising, so I tried a test in my ‘field’ (hallway outside my office) range, with very good results.  Starting from about 75 cm out from the right wall and oriented slightly toward the wall, the robot tracked into the 30 cm desired offset and then maintained that offset to the end of the hallway.  Interestingly, it also attempted a ‘step-turn’ obstacle avoidance maneuver at the end, as shown in the following video:

Here is the relevant telemetry output from this run.

From the above telemetry and video it is clear the robot was successful in capturing and maintaining the desired wall offset.  Salient points are (leading numbers denote mSec from start):

  • parallel heading found at 30.984 deg – very close, and very efficient
  • Left/Right Avg Dist = 104/72. So the robot started out with a 40 cm error
  • 5522: In RollingTurn(CW, FWD,10.00). The capture process started with two quick 10-deg CW turns toward the right wall.
  • 5778:  After about 6 times through the loop, the ‘cut’ is increased another 10 deg toward the right wall.
  • 6057: Cut reduced by 10 deg with distance to wall = 64 cm
  • 6693: Cut increased by 10 deg with distance = 57 cm
  • 7017: Cut reduced by 10 deg with distance to wall = 55 cm
  • 7536: Cut increased by 10 deg with distance = 47 cm
  • 7867: Cut reduced by 10 deg with distance to wall = 46 cm
  • 8304: Cut reduced by 10 deg with distance to wall = 42 cm
  • 8727:  Cut reduced by 10 deg with distance to wall = 34 cm

So, it only took about 3 seconds after the initial ‘find parallel’ maneuver to close from 72 to 34 cm and to remove the entire initial ‘cut’ of 20 degrees.  Pretty successful, I would say!

30 April 2020 Update:

After the above successful test, I tried a wall-tracking run with the robot initially oriented slightly away from the wall, and this was a dismal failure.  Looking through the code, I became convinced that the algorithm as currently implemented will never work for the ‘away from wall case’, so now I’m busily rewriting the whole thing so it will work for both cases.

After a couple false starts, I think I now have a working ‘turn to parallel’ algorithm that works for both the ‘toward wall’ and ‘away from wall’ starting orientations.  Here’s a couple of video clips showing the operation. For purposes of this demonstration, I added short (1 sec) pauses between each step in the ‘find parallel’ operation so they can be easily identified in the video.  In actual operation each step should flow smoothly into the next one.

Here’s the code for the newly re-written ‘RotateToParallelOrientation()’:
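
In outline, it goes like this (a sketch of the logic rather than the verbatim listing; helper names follow the ones used elsewhere in this post, TurnToHeading() is a stand-in, and the example is written for the right-wall case).

enum TurnDir { CW, CCW };
enum MoveDir { FWD, REV };
bool RollingTurn(TurnDir turn, MoveDir move, float degrees);
bool TurnToHeading(float hdgDeg);                   // stand-in: heading-based turn to an absolute heading
int  GetAvgRightDistCm();
bool FindInflectionPoint(TurnDir dir, float& parallelHdgDeg);   // see the next listing

bool RotateToParallelOrientation(bool trackingRight)
{
  float parallelHdgDeg;
  TurnDir sweepDir = trackingRight ? CW : CCW;

  // Step 1: probe with a small turn and watch the side 'ping' distance.  If it
  // increases right away, the robot started out at or past the parallel
  // orientation, so the sweep has to run the other way.
  int startDistCm = GetAvgRightDistCm();
  RollingTurn(sweepDir, FWD, 10.0);
  if (GetAvgRightDistCm() > startDistCm)
  {
    sweepDir = (sweepDir == CW) ? CCW : CW;
  }

  // Step 2: sweep through the distance minimum; FindInflectionPoint() records
  // the heading at which the minimum occurred.
  if (!FindInflectionPoint(sweepDir, parallelHdgDeg))
  {
    return false;
  }

  // Step 3: turn back to the heading captured at the minimum - by definition,
  // that heading is parallel to the near wall.
  return TurnToHeading(parallelHdgDeg);
}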

In the above code, the actual inflection point detection routine was abstracted into its own function ‘FindInflectionPoint()’ as shown below:
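
Roughly like this; the detection-window value of 20 comes from the earlier ‘find parallel’ post, the penalty value is an assumption, and StartSlowTurn() is a stand-in for however the project commands the roughly 60 deg/sec sweep.

// (reuses the TurnDir enum from the previous listing)
void StartSlowTurn(TurnDir dir);     // stand-in: start a slow heading sweep
void StopLeftMotors();               // named in the support-function list earlier in this post
void StopRightMotors();
bool UpdateIMUHdgValDeg();
extern float IMUHdgValDeg;

bool FindInflectionPoint(TurnDir dir, float& parallelHdgDeg)
{
  const int HITS_REQUIRED = 20;      // inflection detection window
  const int HIT_PENALTY   = 5;       // decrement for a trend-breaking reading (assumed)
  const int MAX_PASSES    = 200;     // give-up limit so the sweep can't run forever

  StartSlowTurn(dir);

  int hitCount  = 0;
  int minDistCm = GetAvgRightDistCm();
  UpdateIMUHdgValDeg();
  parallelHdgDeg = IMUHdgValDeg;

  for (int i = 0; i < MAX_PASSES && hitCount < HITS_REQUIRED; i++)   // NOTE: blocking
  {
    delay(30);                       // roughly one ping measurement interval
    UpdateIMUHdgValDeg();
    int distCm = GetAvgRightDistCm();

    if (distCm < minDistCm)
    {
      // still closing on the minimum: remember it and the heading there, and
      // penalize (rather than zero) the hit counter so one aberrant reading
      // doesn't throw away an otherwise perfect arc of measurements
      minDistCm = distCm;
      parallelHdgDeg = IMUHdgValDeg;
      hitCount = (hitCount > HIT_PENALTY) ? (hitCount - HIT_PENALTY) : 0;
    }
    else if (distCm > minDistCm)
    {
      hitCount++;                    // past the minimum - count it
    }
    // equal readings (common at this slow turn rate) neither add nor subtract
  }

  StopLeftMotors();                  // inflection found (or we gave up); stop the sweep
  StopRightMotors();
  return (hitCount >= HITS_REQUIRED);
}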

At this point I think I have wall tracking completely solved for the right-hand case.  Now I have to port the algorithm to the left-hand wall and ‘neither wall’ tracking cases and make sure it all works.  But for now I’m pretty happy.

1 May 2020 Update: Obstacle Avoidance

Happy May Day!  In Ohio USA we have now been in ‘lockdown’ for two entire months – all of March and all of April.  According to our governor, we may be able to go out and play at least a little bit in the coming month – yay!

At the end of the last two videos of successful ‘RotateToParallelOrientation()’ runs, the robot gets close to the end of the hallway and attempts a left/right step-turn maneuver, ending with its nose against the far wall.  The maneuver occurs way too close to the obstacle to do any good as an ‘in-line obstacle’ avoidance maneuver; at this distance, a 90 degree turn to follow the new wall would be a better deal.

When I looked into the telemetry log to determine what happened, I discovered the late step-turn was a consequence of an earlier decision to significantly shorten the ‘far obstacle’ detection distance, in order to avoid step-turns in the middle of the robot’s attempt to capture and then maintain the desired wall offset distance.  Unfortunately the detection distance wasn’t re-instated to its former value once the robot had captured the desired wall offset, so it didn’t detect the upcoming wall as an obstacle until too late.

And this line of thought brings up another issue; what defines the transition from the ‘approach’ state to the ‘distance hold’ state, anyway?  In previous work I had sketched out a proposed state diagram for the wall-following mode, as shown below:

Possible state diagram for the TRK_RIGHT case

And I now realize it is both incomplete and poorly labelled.  I now believe the ‘Capture’ state should be renamed to ‘Approach’, and the ‘Maintain’ state should be renamed ‘Offset Hold’.  Moreover, there should be a third ’90-deg left turn’ state that is entered if the step-turn maneuver fails to bypass the ‘far obstacle’.  Maybe something like:

02 May 2020 Update: Back to Charging Station Code

After a week of working with wall-following code, I managed to discharge Wall-E2’s battery to the point where it’s looking to be fed again, so I’m back on the charging station support code.  When we last left this portion of the saga, I had discovered the problem with Wall-E2 not shutting off its motors after connecting to the charging station, and was in the process of making the changes when Wall-E2 turned up its nose at the charging station and I had to go do other stuff for a while.

Now, back on the Charging Station case, I found that I had to make some significant changes to GetOpMode; I had to move the DEAD_BATTERY condition check to after the IR Homing and Charger connected mode checks; otherwise, the robot can’t home to a charging station with a low battery because the DEAD_BATTERY mode will be detected first – oops!  The new version of GetOpMode() is shown below:
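
In skeleton form, with the ordering being the point; IsConnectedToCharger() stands in for the jack-switch and +12V checks listed in the 26 April update, and the battery threshold value is an assumption.

enum OpMode { MODE_CHARGING, MODE_IRHOMING, DEAD_BATTERY, MODE_WALLFOLLOW };
bool  IsConnectedToCharger();          // stand-in: jack switch open + 12V present at the jack
bool  IsIRBeamAvail();                 // stand-in: charging-station beacon detected?
float GetBattVolts();
const float DEAD_BATT_THRESH_V = 6.0;  // assumed threshold

OpMode GetOpMode()
{
  // charger-connected check first: once on charge, stay in MODE_CHARGING
  if (IsConnectedToCharger()) return MODE_CHARGING;

  // IR homing next, so a 'hungry' robot can still get to a charging station
  if (IsIRBeamAvail()) return MODE_IRHOMING;

  // only with no charger connected and no beacon in view does a low battery
  // voltage drop the robot into DEAD_BATTERY
  if (GetBattVolts() < DEAD_BATT_THRESH_V) return DEAD_BATTERY;

  return MODE_WALLFOLLOW;              // default
}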

After these (and some other minor improvements) I was able to get Wall-E2 to successfully home to a charging station, stop its motors (yay!) and commence charging.  As the following videos show, I was able to do this on my desktop ‘local’ range, and also ‘in the field’ (aka my atrium hallway).

After the robot connects to the charging station, the rear LED strip changes function to become a ‘fuel gauge’, with ‘empty’ on the right and ‘full’ on the left.  In the videos above, note that about 6 seconds after the robot connects, the LED strip changes to show ‘almost empty’.

Stay tuned,

Frank


Wall tracking: finding the heading parallel to the nearest wall

Posted 14 February 2020,

Happy (American) Valentines Day! In my last post, I described my plan to use Wall-E2’s new relative heading super power to find the relative heading parallel to the nearest wall.  I ended that post with “…and not all that hard to program, either”.  Well, this turned out to be a bit of an exaggeration as things weren’t quite as easy as I first thought; the interaction of the physics of the robot and the time scales associated with ping measurements complicated things a bit.

Background:

For some time now I have been working on ways to enhance Wall-E2’s autonomous wall-tracking ability.  Wall-E2 can track walls fairly well, but lacks the ability to track a wall at a specified stand-off distance.  Currently, tracking occurs at whatever distance Wall-E2 first detects the nearest wall. While this isn’t terrible, I wanted to do better.

Unfortunately, the way in which the measured ‘ping’ distance to the nearest wall interacts with the relative orientation of the robot with respect to that wall makes it almost impossible to determine the actual offset distance, and therefore how to determine what to do to maintain a constant offset distance.  As shown in the following diagram, when the robot makes a turn, the measured distance to the wall will change just due to the orientation change, without the robot’s actual offset distance changing at all.

Without having some idea of the angle theta in the above diagram, making a judgement of where the robot is relative to the target offset distance is difficult, if not impossible.   This situation was the impetus for adding the MPU6050 Inertial Measurement Unit (IMU) to Wall-E2’s list of super powers. The general idea was that knowledge of relative headings would allow Wall-E2 to make accurate heading-controlled turns without relying solely on timing.  After a lot of work to eliminate RFI/EMI problems associated with the Pololu metal-geared motors on Wall-E2, I’m happy to say that the MPU6050 is now quite stable, and making turns of just a few degrees is quite possible.

However, acquiring and then maintaining a particular offset distance from the nearest wall is still not straightforward.  Back in early December last year I demonstrated the ability to acquire and then maintain a constant offset distance, but only if the robot started out reasonably parallel to the wall.  If the robot was oriented toward or away from the wall by more than a few degrees, it would not work.  So I needed to find a way to first orient the robot parallel to the nearest wall at any distance, so that my current acquisition & tracking algorithm would work successfully.

The basic idea behind finding the parallel heading is that when the robot is turned through a forward arc and the measured ‘ping’ distance decreases and then starts increasing, the robot’s relative heading at this inflection point is the desired parallel heading.   If the distance instead starts increasing, then the robot started out either parallel to or facing away from the wall.  In either case, reversing the turn back toward the wall will cause the measured distance to decrease to a minimum and then increase again.  As in the first case the heading at the point at which the measured distance starts to increase is the desired parallel heading.

Although the basic idea as described above is very straightforward, as usual there are some ‘gotchas’ in the actual implementation:

In order to minimize heading overshoot due to the robot’s mass and angular momentum, the parallel heading search turns must be performed at lower-than-normal speeds.  After some experimentation I settled on a turn rate of about 60 deg/sec.  With the robot starting with an angle-in orientation of about 30 deg, this means that it takes about 1 second to sweep through to an angle-out orientation of about 30 deg.  With the Arduino UNO setup I’m using for the tests, I was getting distance measurements about every 30-50 mSec, so about 20 to 33 measurements/sec, or roughly one measurement every 2-3 degrees of rotation.

The lower turn rate significantly reduces the rate at which the ‘ping’ distance changes per unit time, making it much harder to detect the distance inflection point.  In effect, the lower turn rate flattens the distance-change-per-measurement slope, making inflection point detection more difficult.  With only 20-30 measurements per sweep and only a few cm of change from max on one side to max on the other, there are a lot of identical measurements returned.

My initial cut at addressing the above issue was to space the ping measurements further apart in time, thereby increasing the distance change per measurement. After trying this (using an ‘elapsedMillisec’ variable) I realized that an equivalent method would be to simply increase the size of the inflection detection window (the number of times the ping measurement must be on the ‘other side’ of the inflection point in order to qualify as a valid inflection). After some experimentation, I arrived at a value of 20.

For some reason, it was much easier to find a good parallel heading value if the robot started out pointed toward the near wall. If it started out pointed away from the wall, the robot often stopped well short of or well after the actual parallel heading.  Eventually I developed a 4-turn process for this case to really nail down the parallel heading.  Here are some short videos demonstrating the algorithm.

Now that I can reliably determine the relative heading that orients the robot parallel to the nearest wall, I should be able to marry this capability with my already-developed algorithm for acquiring and maintaining a specific offset distance.

20 February 2020 Update:

So I combined the ‘find parallel heading’ feature with my already-existing angle-based tracking algorithm, and this worked fairly well.  Here’s a short video demonstrating the technique:

In the above video, the blue painter’s tape strips are marked every 10 cm, with a double-width mark at 30 cm (the desired offset distance). As the video shows, the robot first determines an approximate parallel heading, and from there starts the normal angle-based tracking algorithm.

Next, I tried an ‘enhancement’ to the above by having the robot move toward the wall on a 30-45 deg ‘cut’ from the parallel heading, and then turning back to parallel at the desired offset distance.  As the following video shows, this didn’t turn out so well.  If the robot doesn’t start out exactly parallel, then the ‘cut’ is either too steep or too shallow, resulting in a too-early or too-late turn back to the parallel heading.

So it looks like the ‘find parallel then start tracking’ approach works pretty well, but the ‘find parallel then drive to offset on a cut then back to parallel’ approach hasn’t been very successful.

27 February 2020 Update

After thinking about the difficulties I was encountering with my ‘FindParallel’ algorithm, I realized that the robot was often overshooting the parallel orientation because small aberrations in ‘ping’ distance measurements caused the ‘hit counter’ to reset to zero in the middle of an otherwise perfect arc of distance values.  The ‘hit counter’ is incremented each time the newest ‘ping’ distance measurement trends along the same line, and is reset to zero whenever the newest ping measurement breaks the trend. When the hit counter exceeds a preset level, the parallel condition is considered to be detected.  I thought I might be able to improve performance by making the algorithm a little more tolerant of such aberrations.  So, rather than resetting the ‘hit counter’ to zero on a trend break, I changed the algorithm to decrement it by a set amount.  This markedly improved performance, as shown in the following videos.
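
In code terms, the change amounts to something like this (names assumed):

void UpdateHitCount(bool trendBroken, int& hitCount)
{
  const int HIT_PENALTY = 5;     // decrement amount (assumed value)

  if (trendBroken)
  {
    // old behavior: hitCount = 0;  (one aberrant ping reading wiped out all progress)
    hitCount = (hitCount > HIT_PENALTY) ? (hitCount - HIT_PENALTY) : 0;
  }
  else
  {
    hitCount++;                  // measurement continues the trend toward/past the minimum
  }
}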

There are four sections in the above video.  In the first clip, the robot starts out pointed away from the wall and outside the desired 30 cm offset. The ‘FindParallel’ algorithm executes, and then approaches and then tracks the wall at the desired 30 cm offset.  The next three clips show the same situation, except starting outside the 30 cm offset and pointed toward the wall, and then from inside the 30 cm offset, pointed away from and toward the wall.  In each case, the robot successfully acquires a reasonably parallel heading and then acquires and tracks the 30 cm offset distance.

Stay tuned!

Frank

 

Back to the future with Wall-E2. Wall-following Part VIII

Posted 25 January 2020

About 6 weeks ago I posted that I had finally killed the “intermittent MPU6050 failure” dragon, by belatedly following Pololu’s recommendations for installing bypass capacitors on their metal-geared motors.  Unfortunately it turned out that my celebration was cut short by more annoying intermittent MPU6050 failures, so I was once again forced back to the drawing board.

This time I decided that the only way to figure out what was going on was to actively examine the I2C traffic in real time, to determine who exactly was doing what to whom.  So, over the course of the six week period between my last declaration of victory to this one, I created a Teensy based I2C bus ‘sniffer’ and used it to figure out what was going on.  I was able to determine that the ‘master’ micro-controller continued to operate normally through a failure, but the MPU6050 didn’t. I was also able to determine that just resetting the IMU would not allow the system to recover, but resetting the micro-controller often did.   Moreover, I was able to definitively show that the problem was caused by ‘contact bounce’ on one or more of the four 6″ male-male jumper wires connecting the micro-controller to the IMU.  Eliminating these jumpers also (I hope) eliminated the last piece of the “I2C Intermittent failure” puzzle.

Looking back over the entire I2C failure saga, I now realize that this was the classic case of multiple failure modes complicating the troubleshooting effort.  The RFI/EMI problem caused by the Pololu metal-geared motors completely overshadowed the issue of non-secure jumper connections. Then, after finally coming to my senses and installing the recommended bypass capacitors on the motors, the ‘contact bounce’ problem was unmasked.  I do love interesting problems, but this one went past ‘interesting’ and was well into ‘agonizing’ by the time I got it solved ;-).

After getting everything set up, I ran some wall tracking tests in my entry hall ‘test range’ with pretty good results, as shown in the short video clip below.

Stay tuned,

Frank

30 January 2020 Update:

Still having trouble with the initial approach to a wall from outside the target distance. The robot still has a tendency to dive into the wall, unable to cope with the problem of the measured distance increasing instead of decreasing when the robot turns into the wall.  This inverse relationship makes it almost impossible to use a simple ‘turn toward the wall and wait for the distance to count down’ technique.

After thinking about this for while, I realized that this would all be so much simpler if I cheated and started with the robot placed parallel to the target wall.  Then the robot could simply turn 45 deg toward the wall and proceed until the measured wall distance was appropriate (Dtgt / 0.707), and then turn parallel again.  Then I realized that I could easily determine the parallel condition by turning the robot toward and/or away from the wall while continuously measuring the distance; when the measurement goes through a minimum, then the robot is parallel to the wall.  Simple in concept, and not all that hard to program, either.

 

IMU Motor Noise Troubleshooting, Part III

Posted 19 January 2020

In Part II of this saga, I described my continuing efforts to track down and fix the problem of intermittent failures associated with the MPU6050 IMU on my robot.  The MPU6050 IMU is required for the ability to make precise heading-based turns, which is in turn required to track walls at a designated stand-off distance.

This post summarizes the work to date and suggests new avenues of investigation for fully addressing the motor noise issue.

Summary of work to date:

  •  July 2019: First started working with heading-based turns, and first noticed the motor noise problem.  Basically the problem presented itself as frequent, abrupt, and wildly divergent heading readings when the motors were running, but perfectly stable readings when the motors are not running.  See this post for the details.
  • October 2019: Successfully demonstrated polling-based (vs interrupt-based) MPU6050 IMU management. This development meant that I could acquire yaw (heading) values on an as-needed basis rather than at a 20 or 200Hz rate, throwing away 99% of the results.  This was demonstrated in this post.
  • November 2019: Made another run at solving the motor noise problem using a home-brew optical isolator and  a 2-stage power filter.  After a LOT of work, I wound up discovering that most (but not all!) of the problem could be addressed with proper RF bypassing at the terminals of the metal-geared Pololu motors I was using.  See this post for the details.
  • Early December 2019:  Demonstrated heading-based wall offset tracking using my 2-motor robot, with RF bypassing installed on both Pololu metal gear motors.  No IMU failures were noticed during these runs.  See this post for details.
  • Early December 2019:  Reprised some of the motor driver testing performed back in May of 2019 (see this post), and again noticed MPU6050 IMU communication failures when the motors were running, but none when the motors weren’t running. This test was performed on the 2-motor robot using the Pololu motors with the RF bypassing in place. So clearly the bypassing was not, in and of itself, sufficient to solve the problem; something else had to be going on.  See this post for the details.
  • Late December 2019 to mid-January 2020:  I decided I needed a tool to monitor the I2C bus traffic between the robot’s controller and the MPU6050 IMU – an I2C ‘sniffer’.  After some research, I found that the cheapest commercial sniffer cost about $330, and DIY sniffers were few and far between. I did, however, find a Teensy-based sniffer program by Kito, so I had a starting place.  After three major development stages, I had a Teensy 3.2 program that would reliably monitor I2C communications between an Arduino (Mega or Uno) master and a MPU6050 slave, using the polling approach developed earlier.  See this post, this post, and this post for the development details.

Current Effort:

With the above history in mind, I applied my new I2C sniffer tool to the Motor Noise Problem.  As usual, I started this using the simplest possible setup; an Arduino Mega acting as the I2C master running my polling based ‘MPU6050_MotorNoiseTest1’ program, and a Teensy 3.2 and a MPU6050 IMU module both mounted on a small plugboard, as shown below.

Arduino Mega I2C master, with Teensy I2C sniffer and MPU6050 module on a separate plugboard

I played around with this setup for a while, and captured at least one IMU communications failure with the sniffer active. The failure occurred when I was moving the plugboard around a bit to verify that the MPU6050 IMU heading values changed appropriately.  At some point I noticed the I2C monitor output had changed its character significantly, so I quickly stopped the sniffer program and opened the log file (see attached file below).

From the log I can see that things proceeded normally until 6012443 mSec (1.67 hours) and then changed to report that nothing was being received from register 0x72 (the FIFO count register). This continued until 6022224 mSec (9.8 seconds later) where it returned to what looks like normal operation.

So, my preliminary guess at what happened is the connection from the Mega to the Teensy/MPU6050 got dropped momentarily, and it took the Teensy a while to find another START sequence in the I2C data stream from the Mega, as the ‘2048’ number in “6017280: processed = 2048 elements in 3 mSec” means that the capture buffer overflowed before a START sequence was detected.  “At 6022240: processed = 1224 elements in 2 mSec” means that a normal Mega ‘burst’ was captured and operation returned to normal.

Since the Teensy I2C monitor is on the MPU6050 end of the male-male jumpers, it begins to look like the Mega was still doing fine, but the jumper connection burped on one end or the other.  More testing to follow.


 

Next, I moved the plugboard containing the Teensy I2C Sniffer and the MPU6050 module to my 2-motor robot, and used the existing Arduino Uno on the robot as the master, as shown below.

I loaded my MotorNoiseTest1 program on the Uno, and allowed it to run both motors at a steady rate, while monitoring the I2C traffic with the Teensy, and also monitoring the heading values being printed out by the Uno.  I started the program just before 1PM, and it was still running fine with no IMU errors at 10pm, more than 9 hours later!  The I2C sniffer log shows regular communication with the MPU6050, and the calculated yaw value based on the packet bytes received by the sniffer program matches the yaw value calculated in the Uno program. This is clear verification that the sniffer program will run ‘forever’, and that at least in this case, the two motor robot will also run ‘forever’ with no  yaw errors.

Based on my earlier experience with the captured I2C communications failure, I’m more inclined now to believe that motor vibration or other mechanical perturbation is causing a momentary I2C bus or power/ground lead disconnect.  More tests to follow:

21 January 2020 Update:

After the 10-hour run described above, I tried to induce some failures by fiddling with the I2C and power/ground jumper wires, and found that I could easily and reliably cause a failure by ‘flicking’ the wires with my finger or a pen.  After each failure, the built-in recovery routine of clearing the FIFO and resetting the DMP failed to restore communications.  However, manually resetting the UNO did allow the system to recover.

From the above, I believe it’s safe to say that the current male-male jumper connections between the UNO and the Teensy/IMU are unreliable, and are hopefully the only remaining failure mode.  I haven’t quite figured out how to replace the connections with something more reliable, but I’m working on it.  I moved the IMU module from the plugboard and plugged its I2C pins directly into the I2C sockets on the UNO.  Then I replaced the power & ground leads with a permanent twisted pair connection to the Wixel shield, as shown in the following photo.

MPU6050 plugged directly into Uno board, with pwr/gnd jumpers replaced with permanent twisted pair

Then I fired up the system and ran it for a while but was unable  to make it fail.  This is encouraging news to say the least.

Stay tuned,

Frank

Teensy I2C Sniffer for MPU6050 Part II

Posted 13 January 2020,

In my last post on this subject, I described my efforts to build an I2C bus sniffer using a Teensy 3.2 micro-controller.  This post describes my efforts to move from a fixed array containing a 928-byte snapshot of an I2C bus conversation between an Arduino Mega 2560 and a MPU6050 IMU to a live, repeated-burst setup.

As the source for I2C traffic for the MPU6050 IMU I am using my MPU6050_MotorNoiseTest1 Arduino project with no motors or sensors connected.  All the code does is ask the MPU6050 for a yaw value every 200 mSec (the value of NAV_UPDATE_INTERVAL_MSEC), as shown below:
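
The heart of it is just a timed polling loop, something like the sketch below.  The heading function and variable names here follow the older GetIMUHeadingDeg()/global_yawval convention mentioned elsewhere in this archive; the actual test program has more going on.

const unsigned long NAV_UPDATE_INTERVAL_MSEC = 200;
unsigned long lastNavUpdateMsec = 0;

extern float global_yawval;      // latest yaw value, updated by the call below
bool GetIMUHeadingDeg();         // polling-based DMP packet read + yaw calculation

void loop()
{
  if (millis() - lastNavUpdateMsec >= NAV_UPDATE_INTERVAL_MSEC)
  {
    lastNavUpdateMsec = millis();

    // this single call is what generates the short I2C 'burst' the sniffer
    // sees every 200 mSec
    if (GetIMUHeadingDeg())
    {
      Serial.print(millis() / 60000.0, 2);   // elapsed time, in minutes
      Serial.print("\tYaw = ");
      Serial.println(global_yawval, 2);
    }
  }
}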

The Teensy code to monitor the I2C bus traffic is shown below.  When I first started working with this project, I copied Kito’s I2C sniffer code, which used Teensy’s Timer1 interval timer set to produce interrupts every 1 uSec, and an ISR to capture the data.  This turned out to be hard to deal with, as I couldn’t add instrumentation code to the ISR without overrunning the 1 uSec interrupt period, leading to confusing results.  So, for this part of the project I disabled the Timer1 interrupt, and called the ISR directly from the loop() function.  As others have pointed out, the Arduino loop() function does a lot of housekeeping in the background, so for top performance it is best to never let loop() execute, by placing another infinite loop inside loop() or inside setup().  This is what I did with the code designed to investigate whether or not the Teensy could keep up with an I2C bus running at 100Kbs.

The ‘capture_data()’ function (no longer used as an ISR) captures SCL & SDA states with a single port operation as shown
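
On a Teensy 3.2 that boils down to a single GPIO port read; the register and the SCL/SDA bit positions below are assumptions that depend on which pins the two lines are wired to.

inline uint8_t ReadI2CLinePair()
{
  // one port read captures SCL and SDA at the same instant; masking off the
  // two bits of interest yields the 0xC / 0x8 / 0x4 / 0x0 values referred to below
  return GPIOD_PDIR & 0x0C;
}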

and then everything from a START pair (0xC followed by 0x4) to a STOP pair (0x4 followed by 0xC) inclusive is captured in the raw_data array.

Any I2C Sniffer project like this one assumes that I2C activity occurs in short bursts with fairly long pauses in between.  This is certainly the case with my robot project, as yaw data is only acquired every 200 mSec.  However, there is still the problem of determining when an I2C ‘burst’ has finished so the sniffer program can decode and print the results from the last burst.  In my investigation, it became clear that at the end of the burst both the SCL and SDA lines go HIGH and stay that way until the next START condition (a 0xC followed by a 0x4).  So then the question becomes “how many 0xC/0xC pairs do I have to wait before deciding that the last burst is over?”

In order to answer this question I decided to use my trusty Tektronix 2236 O’Scope and Teensy’s ‘digitalReadFast’ and ‘digitalWriteFast’ functions to implement a hardware-based timing capability using Teensy pins 0, 1, and 2 (MONITOR_OUT1, 2 & 3 respectively).  Among other things, this allowed me to definitively determine that an ‘idle’ (0xC/0xC) count of 1000 was too small, but an idle count of 2500 was plenty, without consuming too much of the available processing time.  It also turned out that ‘idle’ counts all the way up to 30,000 work too, but leave less time for processing.

O’Scope shot showing I2C traffic on the bottom trace, and the point at which 2500 0xC/0xC (Idle) pairs is reached on the top trace (the high-to-low transition)

As can be seen in the above photo, the I2C ‘sentence’ lasts about 15 mSec, and the ‘idle’ condition is detected about 5 mSec later for a total of about 20 mSec out of the nominal 200 mSec cycle time for my robot application. This leaves about 190 mSec for I2C sentence processing and display.

18 January 2020 Update:

Success!!  I now have a working Teensy 3.2 I2C Sniffer program that can continuously monitor the I2C traffic between my Arduino Mega test program acting as a I2C master and a MPU6050 IMU I2C slave.   The Teensy code is available on my GitHub account here.

A major challenge in creating the sniffer program was the requirement to sample the I2C SCL & SDA lines quickly enough to accurately detect the line transitions denoting all the different I2C signals.  With the I2C bus running at 100Kbs, SCL (clock) transitions occur every 5 uSec. Good sampling requires at least 2 and preferably more samples per SCL state.  As noted above, I started off by copying the ISR routine from Kito’s I2C sniffer, but discovered I needed to add some logic to zero in on the desired I2C bus states (IDLE, START, DATA & STOP), and the additional code made the ISR take more than the desired 1 uSec window.  After posting about this problem to Paul Stoffregen’s Teensy forum, I got some good pointers for speedup, including a post that mentioned the Teensy FASTRUN macro that runs functions from RAM rather than FLASH. As it turned out, adding this macro to the program allowed me to reduce the ISR cycle time from about 1.4 uSec to about 0.89 uSec – yay!  The final ISR routine is shown below:
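
Its general shape is sketched below; the buffer handling is simplified and the port/bit choices are the same assumptions as before.

#define MONITOR_OUT1 0                    // scope timing pin (Teensy pins 0, 1, 2 per the text)

const int CAPTURE_BUF_SIZE = 2048;
uint16_t raw_data[CAPTURE_BUF_SIZE];
volatile int writeIdx = 0;

FASTRUN void capture_data()               // FASTRUN places the function in RAM instead of flash
{
  digitalWriteFast(MONITOR_OUT1, HIGH);   // mark entry so the scope can measure the cycle time

  uint8_t linePair = GPIOD_PDIR & 0x0C;   // single port read grabs SCL & SDA (assumed bits)

  // the real routine also watches for the START (0xC -> 0x4) and STOP
  // (0x4 -> 0xC) patterns so that only complete bursts land in raw_data
  if (writeIdx < CAPTURE_BUF_SIZE)
  {
    raw_data[writeIdx++] = linePair;
  }

  digitalWriteFast(MONITOR_OUT1, LOW);    // mark exit; entry-to-exit width measured ~0.89 uSec
}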

Note the use of digitalWriteFast() calls to output timing pulses on Teensy hardware pins so I could use my trusty Tek 2236 100 MHz O’scope to verify proper timing.

Once I got the ISR running properly, I focused on getting the data parsing algorithm integrated into the program.  I had previously shown that I could correctly parse simulated I2C traffic, so the current challenge was to integrate the algorithm in a way that allowed continuous capture-decode-print cycles at a rate that could keep up with the desired 5 measurements/sec rate.  So, I instrumented the sniffer program to display the decoded IMU traffic, along with the calculated yaw value and the time required to perform the decode.

Here’s a short section of the printout from the test program, showing the time (in minutes), the yaw (relative heading) value retrieved from the IMU, and left/right ping distances (unused in this application).

And here is the corresponding output from the I2C sniffer program

In the above output, each section shows the individual transmit & receive ‘sentences’ to/from the IMU, and the 28-byte packet received from the IMU containing, among other things, the values required to calculate a yaw (relative heading) value.  As can be seen, the yaw value calculated from the received bytes closely matches the yaw values retrieved using the test program.  In addition, the last line of each section of the readout shows the time tag for the start of the decode process, and the total time taken to decode all the bytes in that particular burst.  From the data, it is clear that only 1-2 mSec is required to decode and display a full burst.

The complete I2C Sniffer program is available on my GitHub site here.  The complete test program that obtains a yaw value from the IMU every 200 mSec is shown below:

The above program was intended to help me troubleshoot the intermittent MPU6050 connection failures I have been experiencing for some time now.  The purpose of the new I2C sniffer project is to create a tool to log the actual I2C traffic between this program and the IMU.  The idea is that when a failure occurs, I can look back through the sniffer log to see what happened: did the Arduino Mega stop transmitting requests, did the IMU simply stop responding, or was it something else entirely?