Tag Archives: LIDAR

Wall-E3 Right Wall Following Trial

Posted 23 March 2022

Earlier this month I was able to demonstrate a multi-lap left-side wall tracking run by Wall-E3 in my office ‘sandbox’. This post describes my efforts to extend this capability to right-side wall tracking.

Since I already had the left-side wall tracking algorithm “in the can”, I thought it would be a piece of cake to extend this capability to right-side tracking. Little did I know that this would turn into yet another adventure in Wonderland – but at least when I finally made it back out of the rabbit-hole, the result was a distinct improvement over the left-side algorithm I started with. Here’s the left-side code:

The above code works, in the sense that it allows Wall-E3 to successfully track the left-side wall of my ‘sandbox’. However, as I worked on porting the left-side tracking code to the right side, I kept thinking – this is awful code – surely there is a better way?

After letting this problem percolate for a few days, I decided to see if I could approach the problem a little more logically. I realized there were two major conditions associated with the problem – namely, whether the robot’s initial position is inside or outside the desired offset distance K. In addition, the robot can start out parallel to the wall, or pointed toward or away from the wall. Ignoring the ‘started out parallel’ degenerate case, this reminded me of a 3-parameter Karnaugh map configuration, so I started sketching it out in my notebook, and then later in a Word document, as shown below:

As shown above, I broke the 3-parameter map into two 2-parameter Karnaugh maps, and the output is denoted by αT. After a few minutes it became obvious that the formula for αT is pretty simple – it’s either αR – αA1 or αR – αA2 depending on whether the robot starts out outside or inside the desired offset distance. In code, this boils down to one line, as shown at the bottom of the Karnaugh map above, using the C++ ‘?’ ternary operator, and choosing CW vs CCW is easy too, as a negative result implies CCW, and a positive one implies CW. The actual code block is shown below:
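Here’s a minimal sketch of that one-liner (the variable and parameter names here are illustrative, not the actual robot code):

```cpp
// Total turn angle needed to start the approach to the desired wall offset.
//   alphaR  : robot's current heading relative to the wall (deg)
//   alphaA1 : desired approach ('cut') angle when starting OUTSIDE the offset (deg)
//   alphaA2 : desired approach ('cut') angle when starting INSIDE the offset (deg)
// A negative result implies a CCW turn; a positive result implies a CW turn.
float CalcTotalTurnAngle(float alphaR, float alphaA1, float alphaA2, bool bStartedOutside)
{
    return bStartedOutside ? (alphaR - alphaA1) : (alphaR - alphaA2);
}
```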

Here’s a short video of Wall-E3 navigating the office ‘sandbox’ while tracking the right-side wall.

So, it looks like Wall-E3 now has tracking ability for both left-side and right-side walls, although I still have to clean things up and port the simpler right-side code into TrackLeftWallOffset().

25 March 2022 Update:

Well, that was easy! I just got through porting the new right-side wall tracking algorithm over to TrackLeftWallOffset(), and right out of the box I was able to demonstrate successful left-wall tracking in my office ‘sandbox’.

At this point I believe I’m going to consider the ‘WallE3_WallTrack_V3’ project ‘finished’ (in the sense that most, if not all, my wall tracking goals have been met with this version), and move on to V4, thereby limiting the possible damage from my next inevitable descent through the rabbit hole into wonderland.

Stay tuned,

Frank

Teensy 4.1 Replacement for Teensy 3.5?

A while back, I had some problems with damaged Teensy 3.5 main controllers on Wall-E3, my autonomous wall-following robot. I eventually traced the problem back to large voltage transients that occurred when I connected/disconnected the charging probe. These transients were conducted to the Teensy 3.5 pin used to detect the probe connection status, causing the Teensy to immediately reboot, and then eventually become unusable. I solved this problem with a non-conductive ‘photonic’ charger connect/disconnect detection scheme, coupling the charge LED on the TP5100 charger to a photoresistor.

Unfortunately, I ran through most of my Teensy 3.5 stock while I was figuring this out, and now this part is unavailable from PJRC or anywhere else – maybe a victim of the world-wide chip shortage? The good news, though, is that the Teensy 4.1, the successor to the Teensy 3.5, is available, so I purchased a few to evaluate as a replacement for the T3.5 on my robot.

The Teensy 4.1 has the same form factor and pin layout as the T3.5, which means it is a drop-in replacement in most cases. However, there are some gotchas. According to the comparison sheet on Paul Stoffregen’s PJRC site, the T4.1 is 5X faster (600MHz vs 120MHz) and has more memory. However, it doesn’t have any analog output DACs (T3.5 has 2), and more importantly, the T4.1 pins are not 5V tolerant!

To start my evaluation, I loaded a simple ‘blink’ program, and as expected it worked great. Here’s the test setup, utilizing my newly-discovered OONO breakout board.

Teensy 4.1 on the OONO breakout board

After this first test, I am convinced that the T4.1 will work nicely as a T3.5 replacement, except possibly for the 5V tolerance issue. To investigate this, I looked at my current system schematic.

Wall-E3 System Schematic

Wall-E3’s main controller is directly connected to two VNH5019 motor controllers, several LEDs (the Chg Stat Display module), two INA169 current sense modules, a Pulsed Light (now owned by Garmin) LIDAR system via its ‘Mode’ pin, the ‘HC-05 BT Module’ (now replaced by my new 5V Reg/Wixel board) via its TX/RX pins, and the battery pack via the ‘Chg Conn’ pin (this is the photonic connection discussed above). It is also connected to a MPU6050 IMU, a Teensy 3.2 running the IR charging beam detector, and another Teensy 3.5 running the 7-element VL53L0X distance sensing array, all via two different I2C ports.

The I2C ports are no problem, as they are either Teensy-to-Teensy or Teensy-to-MPU6050, which has an onboard regulator to drop 5V down to 3.3V, so the data lines are 3.3V. The LED panel is passive (it doesn’t generate any signals of its own), so that’s not a problem. The Wixel RF Transceiver inside my 5V Regulator/Wixel module runs on 3.3V, so the RX/TX lines are compatible with the Teensy 4.1. The ‘Irun’, ‘Itot’ and ‘BattV’ A/D inputs are all below 3.3V at their maximum values (the Irun and Itot lines max out at about 2V (2 amps through the current sensor), while BattV maxes out at about 2.4V (8.4V max battery voltage minus the 6V drop through a 6V zener diode)).

The ‘Mode’ line on the LIDAR could be an issue. The original Pulsed Light LIDAR was acquired by Garmin, so the original datasheets are no longer available. The Garmin datasheet says that the MODE pin output is limited to 3.3V, but I don’t know if that is the same for the original Pulsed Light model I’m using on Wall-E3. So, I hooked up my digital O’scope to the LIDAR’s MODE pin on Wall-E3 and measured it directly. As shown in the following scope grab, the output is indeed limited to 3.3V – yay!

Pulsed Light LIDAR-Lite MODE pin pulse output. Note max amplitude (Ma) = 3.308V

So, it looks like I can drop a Teensy 4.1 into Wall-E3’s system and it should do fine. I don’t really need the speed and/or memory improvements, but I do need a replacement for the now-defunct Teensy 3.5, in case I manage to kill yet another one.

21 March 2022 Update:

I took the time today to see how (or if) the Teensy 4.1 breakout module would fit on my current Wall-E3 robot. As shown below, it’s pretty big!

Based on the above photos, I don’t think this breakout board has a future on Wall-E3. Even though the module will fit, that doesn’t take into account the fact that the wiring will extend horizontally from the module, rather than vertically as it does with the plain ‘Teensy + Female header’ arrangement. It was a great idea at the time, but I guess the reality is that the breakout module will be relegated to testing duty.

07 September 2024 Update:

I was playing with my 4WD robot and noticed the MPU6050 IMU wasn’t responding correctly, so to start the troubleshooting process I pulled the MPU6050 off the robot and connected it to a Teensy 4.1 instead of the Teensy 3.5 I have in the robot, because I had a 4.1 available and didn’t have a 3.5. This led to a couple of interesting discoveries:

  • I discovered it is very difficult to get a Teensy 4.1 with pins to fit into my ‘small’ plugboard. No amount of finger pressure would get the pins to fit into the correct sockets. This continued until I discovered that, way back in the day when I first got a couple of 4.1’s to try, I had soldered header pins to all the pins on the 4.1, including the five pins across the breakout board – oops! A few seconds with a side-cutter to remove these pins and I was back in business.
  • The default Wire1 pins on a Teensy 4.1 are different from the ones on a 3.5/3.2, and this threw me for a bit of a loop. Eventually I figured out that Wire1.begin() gets aimed at the proper default pins based on the compile target – Teensy 4.1 vs Teensy 3.5 – as illustrated in the sketch below.
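Here’s a minimal sketch of the compile-target difference (pin numbers are taken from the PJRC pinout cards – verify against your card before wiring anything):

```cpp
#include <Wire.h>

// Wire1 default pins differ between Teensy models:
//   Teensy 3.5 : SCL1 = pin 37, SDA1 = pin 38
//   Teensy 4.1 : SCL1 = pin 16, SDA1 = pin 17
void setup()
{
  Serial.begin(115200);
  Wire1.begin();   // begin() picks the default SDA1/SCL1 pins for the selected board

#if defined(__IMXRT1062__)        // Teensy 4.x
  Serial.println("Teensy 4.x: wire the MPU6050 to pins 16/17");
#elif defined(__MK64FX512__)      // Teensy 3.5
  Serial.println("Teensy 3.5: wire the MPU6050 to pins 37/38");
#endif
}

void loop() {}
```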

After making these changes, I was able to compile and run my ‘Teensy_MPU6050_DMP6_V4.ino’ Arduino sketch and verified that my MPU6050 module was working fine.

Stay tuned,

Frank

Wall-E3 Replacing Mega 2560 With Teensy 3.5 Part VIII

Posted 19 February 2022

At this point in the evolution of Wall-E3, all the hardware seems to be working, so it’s time to get serious about wall tracking. Last fall I made another run at wall tracking with Wall-E2, and wound up with an algorithm that would first capture the desired wall offset, and then track it ‘forever’. This worked great, but the approach is at odds with the general processing architecture. The current tracking architecture is set up as a loop, where all pertinent parameters are updated every loop period, and the appropriate action is taken. In the case of wall tracking, the ‘action’ was one left/right motor speed update. This allows rapid recognition of, and adaptation to, various ‘error’ conditions, like being stuck or about to run into something. The algorithm developed last fall does none of this, so it can’t react properly (or at all, for that matter) to things like an upcoming wall.

I’m starting to think I can use a hybrid approach – use the current capture/tracking algorithm pretty much as it stands from last fall, but have it check for ‘error’ conditions each time through its own internal loop. If any unusual conditions are detected, then force an exit from the tracking routine and another pass through the main ‘traffic director’ function ‘GetOpMode()’. The updated mode assignment will then percolate back down through loop() and cause the appropriate handling function to be called.

22 February 2022 Update:

As a start, I ported the ‘TrackLeft/RightWallOffset()’ functions to Wall-E3 and, after the normal number of screwups and mistakes, I got the left side tracking algorithm working, as shown in the Excel plot and short movie clip below:

Here’s the complete code for ‘TrackLeftWallOffset()’:

The above algorithm works great, but it runs in an infinite loop once it has captured the wall offset. Based on my above comments about a hybrid approach, I could add tests for stuck and/or front or back obstacles to this loop (instead of the current ‘while(true)’). This would cause TrackLeftWallOffset() to exit, and the GetOpMode() function could assign the appropriate mode, which would then cause the proper function to execute.

Or, could I eliminate GetOpMode() entirely and put its logic in ‘loop()’? Actually, looking at the loop() function in FourWD_WallE2_V12.ino, my last iteration with the Arduino Mega2560, I see that GetOpMode() is called at the start of loop(), and then the OpMode switch statement comes pretty much immediately afterwards. Here’s the code:

The MODE_CHARGING and MODE_HOMING cases are self-contained, so no changes would be needed for them. The MODE_WALLFOLLOW case is sub-divided into TRACKING_LEFT and TRACKING_RIGHT cases. If all the inline code in TRACKING_LEFT was replaced with TrackLeftWallOffset() and that of TRACKING_RIGHT with TrackRightWallOffset(), with these two functions augmented by the current if(bIsStuck), if(bObstacleAhead) and if(bObstacleBehind) guard code (pretty much as it now stands), then that should work. I think I’ll give that a whirl and see what happens.

To start the process, I created yet another project – WallE3_WallTrack_V3 (to preserve the currently ‘working OK on left side’ status of WallE3_WallTrack_V2) – and tried porting the GetOpMode() and loop() code from FourWD_WallE2_V12.

02 March 2022 Update:

I now have a ‘loop() only’ version of WallE3 running that properly tracks the left side. Everything is basically the same as before, except the loop() function, shown below:

The ‘IR HOMING’ and ‘CHARGING’ blocks are essentially unchanged, and the ‘WALL TRACKING’ block is much simpler. All ‘anomaly’ conditions (robot stuck either forward or backward, robot approaching an obstacle ahead or behind, dead battery, etc.) are handled internally to the two ‘TrackLeft/RightWallOffset()’ functions. Here’s the (potentially infinite) ‘while()’ loop:
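A stripped-down sketch of that structure (CheckForErrorCondx() and NO_ANOMALIES are the names used in the project; the enum values and the stub functions shown here are just placeholders, not the actual robot code):

```cpp
enum AnomalyCode { NO_ANOMALIES, STUCK_AHEAD, STUCK_BEHIND, OBSTACLE_AHEAD, OBSTACLE_BEHIND, DEAD_BATTERY };

void UpdateWallFollowMotorSpeeds() {}                      // placeholder - one motor speed correction
AnomalyCode CheckForErrorCondx() { return NO_ANOMALIES; }  // placeholder - surveys all sensor inputs

AnomalyCode TrackLeftWallOffset()
{
    AnomalyCode errcode = NO_ANOMALIES;

    while (errcode == NO_ANOMALIES)      // the (potentially infinite) while() loop
    {
        UpdateWallFollowMotorSpeeds();   // one left/right motor speed update per pass
        errcode = CheckForErrorCondx();  // any anomalous condition forces an exit
    }

    return errcode;  // loop() matches the returned code to its handling function and starts over
}
```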

As the code above shows, the while loop will continue to execute as long as the ‘errcode’ value is ‘NO_ANOMALIES’. Internally the ‘CheckForErrorCondx()’ function surveys the inputs from all sensors and attempts to detect any anomalous behavior. Any return value except NO_ANOMALIES causes the while() loop to exit. Each potential anomaly condition has its own handling function, which exits back to ‘loop()’ and the process starts all over again. I believe this is a much cleaner approach than I had before with the ‘GetOpMode()’ function.

I also took the opportunity at this point to fix a long-standing problem with the code. The front-facing LIDAR unit returns distances in Cm, while all seven VL53L0X time-of-flight distance sensors report in mm. Not only did this torture me mentally (let’s see – is it Cm or mm here?), but it caused the new ‘CalcRearVariance()’ function to crater, because the 10x larger numbers, when squared, caused the ‘uint16_t’ type to overflow and produce crazy variance numbers. So, I changed the GetRequestedVL53L0XValues() function to convert mm to Cm, changed all the variable names from xxxxMM to xxxxCm, and carefully combed through the entire codebase, correcting the inevitable wash of ’10x’ errors. In the end though, I made the codebase much more consistent and understandable (I hope).
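The overflow itself is easy to demonstrate in isolation:

```cpp
// Why squaring mm-scale distances breaks a uint16_t variance calculation:
// 380 mm squared is 144,400, well past the uint16_t ceiling of 65,535, while
// the same distance expressed in Cm (38) squares to just 1,444.
void setup()
{
  Serial.begin(115200);
  uint16_t distMM = 380, distCm = 38;
  uint16_t sqMM = distMM * distMM;   // truncated modulo 65536 -> 13,328 (garbage)
  uint16_t sqCm = distCm * distCm;   // 1,444 - no overflow
  Serial.printf("mm squared = %u, Cm squared = %u\n", sqMM, sqCm);
}

void loop() {}
```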

Stay Tuned,

Frank

Wall Tracking Trials Using Office ‘Sandbox’ Part I

Posted 12 November 2020

Back in October I added a TIMER5 timer interrupt to my autonomous wall-following robot (Wall-E2) code to manage sensor updates. Since then I have made the timer interrupt the sole timing source for all sensor and tracking updates, and upped the update rate from 5Hz to 10Hz. In addition, I’ve been making some improvements to Wall-E2’s obstacle detection/response abilities, and this post describes the results of these enhancements.
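For reference, a bare-bones 10Hz TIMER5 ‘heartbeat’ on the Mega 2560 looks something like this (register values assume the Mega’s 16MHz clock and a /256 prescaler – a sketch of the technique, not the actual project code):

```cpp
volatile bool bTimeForUpdate = false;   // set by the ISR, polled in loop()

void setup()
{
  noInterrupts();
  TCCR5A = 0;                     // normal port operation
  TCCR5B = 0;
  TCNT5  = 0;
  OCR5A  = 6249;                  // 16MHz / 256 = 62,500 counts/sec -> 6,250 counts = 100 mSec
  TCCR5B |= (1 << WGM52);         // CTC mode
  TCCR5B |= (1 << CS52);          // /256 prescaler
  TIMSK5 |= (1 << OCIE5A);        // enable compare-match A interrupt
  interrupts();
}

ISR(TIMER5_COMPA_vect)
{
  bTimeForUpdate = true;          // 10Hz 'time for a sensor/tracking update' flag
}

void loop()
{
  if (bTimeForUpdate)
  {
    bTimeForUpdate = false;
    // update all sensors, compute the tracking correction, check for obstacles...
  }
}
```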

Wall-E2’s job is to autonomously track walls forever. This implies the ability not only to track walls, but also to deal with obstacles as they occur, and to recharge its batteries at one or more provided charging stations as needed. Wall-tracking per se has been the subject of several previous posts, and is now reasonably well managed using the ‘find parallel’ technique described here. This post deals with the effort to detect and respond to obstacles as they occur. Here’s a recent run in my office ‘sandbox’

In the above telemetry printout, the first obstacle encounter occurs at 7.65 sec, corresponding to about 4 sec into the video. The obstacle is recognized at 18 cm, well inside the desired offset distance of 30 cm. I believe this occurred because the robot had just started turning back toward the near wall with a target steering value of -WALL_OFFSET_TRACK_SETPOINT_LIMIT (-0.3, the maximum toward-wall steering value) which meant that the normal forward obstacle detection limit of WALL_OFFSET_TGTDIST_CM (30cm in this case) wasn’t in force and the backup limit of MIN_FRONT_OBSTACLE_DIST_CM (20cm in this case) triggered instead. This causes the following code block to execute:
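Here’s a simplified sketch of that guard logic (the constant value is from the project; the helper functions and enum are placeholders, not the actual robot code):

```cpp
const int MIN_FRONT_OBSTACLE_DIST_CM = 20;    // 'backup' forward obstacle limit

enum TurnDir { CCW, CW };
void StopBothMotors() {}                      // placeholder stubs
void SpinTurn(TurnDir dir, int deg) {}
void TrackLeftWallOffset() {}

void CheckFrontObstacle(int frontDistCm)
{
    if (frontDistCm <= MIN_FRONT_OBSTACLE_DIST_CM)
    {
        StopBothMotors();         // obstacle dead ahead - stop,
        SpinTurn(CW, 90);         // spin 90 deg away from the tracked (left-side) wall,
        TrackLeftWallOffset();    // then drop back into wall tracking
    }
}
```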

As can be seen in the above code snippet, this causes the robot to make a 90º ‘spin turn’ to the right, and then restart wall tracking.

At about 12.6 sec (about 10 sec into the movie) we see it detect the upcoming wall at about 30 cm (due to a bug in the code, the printed values are incorrect). This causes the following code block to execute:

This code executes a 90º ‘step turn’ (identical to a ‘spin turn’) to the right, and drops back into wall tracking mode.

At about 16 sec into the movie and 20 sec after program start, the robot again detects an upcoming obstacle at about 30cm, and again executes a 90º ‘step turn’ to the right to follow the new wall.

About 1.5 sec later, the robot detects one of the chair legs (I think it was the one nearest the wall in the movie) and tries to get away using another 90º ‘spin turn’, but then exhibits some abnormal behavior. When it attempts to find the parallel orientation to the new (non-existent) wall, it exits RotateToParallelOrientation(Left) with SteeringVal = -79.86, a very strange result. I believe this is because Wall-E2 detected the ‘stuck’ condition while it was attempting to complete the parallel orientation procedure, in this ‘while’ loop

So, it exited abnormally, thus the odd SteeringVal number, and then re-detected it in the main tracking loop because the front distance history array isn’t re-initialized after the first detection. This, apparently, is a ‘feature’, not a bug – who knew! ;-).

After the second ‘stuck’ condition detection, the robot attempts to disengage using the ExecuteStuckRecoveryManeuver(), which, in this case tries to back up and then execute an ‘end-around’ maneuver to get past the chair leg. It finished the backup portion of the maneuver successfully with 23cm remaining rearward, and then executed a 90º ‘spin turn’. Then it went forward 21cm using the front distance sensor (not shown in the video), and halted when I took over manual control.

All in all, this was a very successful ‘sandbox’ run. Lots of good data with clear indications of where things are working well and where things need to be modified/fixed. Here’s my list of fixes and follow-ups:

  • A bug in the telemetry display code for the ‘Wall Offset Limit’ detection printout (fixed).
  • In the situation at 7.65 sec where the obstacle detection occurred at 18cm vs 30cm, the robot should recognize that it needs to back up to the wall offset target before making the spin turn (done).
  • And, of course, porting all this new stuff to the right-side tracking sections.

16 November 2020 Update:

Tonight I got the first cut done at porting the TRACKING_LEFT algorithms over to the TRACKING_RIGHT case, and made what appears to be a successful right-side sandbox run, as shown in the following short video

And here is the telemetry from the run:

Here’s an Excel plot of the Right side center distance and L/R motor speeds vs time.

Comparing the times from the video and the telemetry, it appears the video time is about 3-3.5 sec lower than the telemetry values. In the video, Wall-E2 detects the first upcoming wall at about 9 sec, and this corresponds with the telemetry at 12.358 sec, where the wall is detected at a front distance of 19cm. The reason the wall didn’t get detected earlier is that the robot was tracking back toward the wall with a steering value of -0.3 at that point, and this causes the wall detection distance to be reduced to suppress false positives.

The robot then takes 2 sec to back up to 33 cm and turn 90 deg CCW, and then it starts tracking the right-side wall again. It detects the next wall at 17.1 sec (14 sec in video) and 30 cm (the steering value at that point was 0.10, so no reduction in upcoming obstacle detection distance). Because the detection occurred at 30 cm, the robot doesn’t need to back up; it just makes another 90 deg spin turn CCW and starts tracking the right-side wall again. The last segment clearly shows that Wall-E2 is capable of tracking to and capturing the desired offset of 30 cm.

All in all, a very successful right-side sandbox run.

Stay tuned!

Frank

Left Side Wall Tracking Success With VL53L0X Array, Part II

Posted 10 October 2020

After the left-wall tracking success described previously in this post, I made some more adjustments and also set up a ‘tracking sandbox’ in my lab to test Wall-E2’s ability to detect & respond to upcoming obstacles. Here’s a short video showing Wall-E2 in action

Tracking run demonstrating obstacle avoidance maneuvers

Here’s the raw output from the run:

And here is an Excel plot of just the movement sections of the above, highlighting the avoidance maneuvers.

left-side wall distances are shown in mm, while the front distance is shown in cm. Note 1-2 sec gaps during turns

Comparing the Excel plot to the video, the front distance plot shows a monotonically decreasing value and then a large jump after each obstacle avoidance turn. It appears that the robot acquires and tracks the 30cm offset target successfully on the first wall, but doesn’t do as well on the second one. It was much more successful on the third wall. The plot for the last wall is only about 2 seconds long.

All in all, this looks like a pretty successful run for Wall-E2. It tracked three different walls (the fourth wall was too short to track) and successfully avoided obstacles three times – woo hoo!

12 October 2020 Update:

On the above ‘sandbox’ run, I noticed that at the end of the third leg at about 14 seconds into the run, the ‘spin turn’ at the white foam core wall wasn’t a ‘step turn’, but a ‘backup and turn’ triggered by the front distance going below the front obstacle limit of 20 cm, rather than the tracking obstacle clearance limit of 30 cm. Here are two output lines that illustrate the difference

and

In the video, these events are at about 7 & 14 seconds respectively. From this I came to the conclusion that at least the front distance wasn’t getting updated enough to keep the robot from getting too close to the obstacle before it realized there was a problem. At the time, the update rate for the system was set at 5Hz or 200 mSec. If the robot is travelling at 50 cm/sec, it means that it will travel 10 cm between distance updates – ouch!

So, I changed the timer interrupt timeout value for a 10Hz rate, and ran the ‘sandbox’ run again. This time when I looked at the output I could see that each leg terminated with something like

and it was clear that the updates were happening about every 100 mSec. Here’s the output:

and a short video:

And an Excel plot showing the left wall and forward distances progressing through the run.

Note that the front distance is shown in cm, while the left wall distances are shown in mm

At this point, I’m pretty happy with Wall-E2’s new-found wall tracking superpowers, at least for the left wall case. Now I need to port the V7 left-side-only code back into the main program and also port it to the right wall case.

Stay tuned!

Frank

I2C Hangup bug cured! Miracle of Miracles! Film at 11!

Posted 06 July 2020

Miracle of miracles!  Arduino finally got off their collective asses and decided to do something about the well-known, well-documented, and long-ignored I2C hangup bug.  Thanks to Grey Christoforo of Oxford, England for submitting the pull request that started the ball rolling.  See this GitHub issue thread for all the gory details.  However, in a bizarre outcome, the needed timeout isn’t enabled by default! You have to modify your code to add a call to a new function, like the following:
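In its simplest form (using the timeout value and bus-reset flag discussed below):

```cpp
#include <Wire.h>

void setup()
{
  Wire.begin();
  Wire.setWireTimeout(3000, true);   // 3000 uSec timeout; 'true' = reset the I2C bus on timeout
}

void loop() {}
```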

Note that you have to explicitly add a timeout value (3000 in my example above) or the timeout feature will still not be enabled! The ‘true’ parameter tells the library to reset the I2C bus if a timeout is detected – surely something you will want to do.

I’m currently working on a ‘before/after’ post to demonstrate that the new timeout feature actually works with real hardware scenarios.  However, due to the intermittent nature of the I2C hangup bug, it takes a while (hours/days) to grind through enough iterations to excite the bug reliably, so it may be some time before I have a good demonstration.

One last thing; at some point the examples in C:\Program Files (x86)\Arduino\hardware\arduino\avr\libraries\Wire\examples (on my Win 10 machine) will probably be updated/expanded to show how to properly implement the new timeout feature, but this has not happened yet AFAICT.

The rest of this post describes my attempt to verify that the new timeout feature does, in fact, work as advertised.  The idea is to construct a “before-and-after” demonstration, where the ‘before’ configuration reliably hangs up using the Wire library without the timeout enabled, and an ‘after’ configuration that is identical to the ‘before’ setup except with the timeout enabled.

Before Configuration:

I actually started with a ‘before-before’ configuration using the SBWire library, as I have been working with I2C projects and the SBWire library ever since I gave up on the Arduino Wire library two years ago.  This configuration is patterned after Wall-E2, my current autonomous wall-following robot, which uses an Adafruit RTC, an Adafruit FRAM, a DFRobots MPU6050 IMU, and six VL53L0X time-of-flight proximity sensors (the ToF sensors are managed by a slave Teensy over the I2C bus).  For this test, I arranged all the I2C components on a plug board and connected to them using an Arduino Mega 2560 (the same controller I have on Wall-E2), as shown in the following photo.

From left to right; two VL53L0X ToF modules, FRAM module, DS3231 RTC module, MPU6050 IMU module

The software is a cut down version of the robot software, and in this first test all it does is print out time/date from the RTC and the relative heading value from the IMU.  After almost 13 hours, it was still running fine, as shown below:

So now I have a ‘known good’ (with SBWire) hardware configuration.  The next step is to change the software back from SBWire to Wire without the timeout implemented.  This should fail – the IMU readout should hangup within a few hours as it did before I originally switched to SBWire.

July 08 2020 Update:

After laboriously changing back from SBWire to Wire, I got the configuration shown in the following photo to work properly using the new Wire library without the new timeout feature enabled.

From left to right: MPU6050 IMU, DS3231 RTC, Adafruit I2C FRAM, and three VL53L0X ToF proximity sensors, all on the Mega 2560’s I2C bus

I programmed the Mega to access everything but the FRAM 10 times/second, and print out the results on the serial monitor, and then let it run overnight.  When I got up this morning I expected to see that it had hung up after a few hours, but discovered that it was still running fine after eight hours – bummer!  at 10 meas/sec that is 480 min * 60 sec/min * 10 = 288,000 I2C measurement cycles * 5 I2C transactions per cycle = 1,440,000 I2C transactions.  I was bummed out because it will be impossible to verify whether or not the timeout feature actually works if I can’t get a configuration that reliably hangs up. When I came back a few hours later, I saw that the printout to the serial monitor had stopped at around 700 minutes, but this turned out to be the monitor hanging up – not the I2C bus – double bummer.

So, I modified the program to only report results every second instead of 10/second so I won’t run out of serial monitor again, and restarted the ‘before’ configuration.

10 July 2020 Update:

I added the Sunfounder 20 x 4 I2C LCD display to the setup so I could display the IMU heading and proximity sensor distances locally, as shown below

I2C Test setup with Sunfounder 20 x 4 I2C LCD added

After getting this setup running, I was trying to figure out how to definitively demonstrate I2C bus hangups without the Wire library timeout feature (the ‘before’ configuration) and then demonstrate continued operation with timeouts enabled (the ‘after’ configuration).  In an email conversation, Grey Christoforo pointed me to another poster who was doing the same thing, by using an external transistor to short one I2C line to ground under program control, thereby demonstrating that the timeout feature allowed continued operation.  This gave me the idea that manually shorting one of the I2C lines to ground should do the same thing, and would allow me to demonstrate the ‘before’ and ‘after’ configurations.

The following code snippet shows the code necessary to enable the Wire library timeout feature

Although not entirely necessary, this is how I instrumented my code to capture timeout events and display them on my serial monitor
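Something along these lines (a trimmed-down sketch, not the exact test program; getWireTimeoutFlag()/clearWireTimeoutFlag() are part of the updated Wire library):

```cpp
#include <Wire.h>

void setup()
{
  Serial.begin(115200);
  Wire.begin();
  Wire.setWireTimeout(3000, true);   // tried several timeout values (see below); settled on 3000 uSec
}

void loop()
{
  // ... RTC / IMU / VL53L0X reads over the I2C bus go here ...

  if (Wire.getWireTimeoutFlag())     // capture and report any timeout events
  {
    Serial.println("I2C timeout detected - bus was reset");
    Wire.clearWireTimeoutFlag();
  }

  delay(1000);                       // report once per second to avoid swamping the monitor
}
```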

All my other hardware setup code has been removed for clarity.  Notice, though, that I tried a number of different timeout values, starting from the default value of 25000 (25 mSec) down to 2000, and then back up to 3000.  At least in my particular configuration, the 1000 value was too small – it caused a timeout flag to be generated on every pass through the loop.  This was an unexpected result, as the SBWire library uses 100 uSec (i.e. a timeout value of 100) as its default timeout value, and this setting has always worked fine in all my I2C projects.

In any case, here’s a short video that demonstrates that the Wire library can now recover from an I2C bus traffic interruption via the use of the new timeout feature.

 

Stay tuned!

Replacing HC-SR04 Ultrasonic Sensors with VL53L0X Arrays Part II

Posted 16 June 2020

In my previous post on this subject, I described my efforts to replace the ultrasonic ‘ping’ distance sensors with modules built around the ST Microelectronics VL53L0X ‘Time of Flight’ LIDAR distance sensor to improve Wall_E2’s (my autonomous wall-following robot) ability to track a nearby wall at a specified standoff distance.

At the conclusion of my last post, I had determined that a three-element linear array of VL53L0X sensors controlled by a Teensy 3.5 was effective in achieving and tracking parallel orientation to a nearby wall. This should allow the robot to initially orient itself parallel to the target wall and then capture and maintain the desired offset distance.

This post describes follow-on efforts to verify that the Arduino Mega 2560 robot controller can acquire distance and steering information from the Teensy 3.5 sensor controller via its I2C bus.  Currently the robot system communicates with four devices via I2C – a DF Robots MPU6050 6DOF IMU, an Adafruit DS3231 RTC, an Adafruit FRAM, and the Teensy 3.2 controller for the IR Homing module for autonomous charging.  The idea here is to use two three-element linear arrays of VL53L0X modules managed by a Teensy 3.5 on its secondary I2C bus, with the Teensy 3.5 in turn controlled by the Mega system controller over the main I2C bus.  The Mega would see the Teensy 3.5 as just a fifth slave device on its I2C bus, and the Teensy 3.5 would handle all the interactions with the VL53L0X sensors.  As an added benefit, the Teensy 3.5 can calculate the steering value for tracking, as needed.
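From the Mega’s point of view this is just another Wire.requestFrom() call; here’s a minimal sketch of the idea (the slave address, byte layout, and scaling are illustrative assumptions, not the actual protocol):

```cpp
#include <Wire.h>

const uint8_t TEENSY_SENSOR_ADDR = 0x20;   // illustrative slave address for the Teensy 3.5

void setup()
{
  Serial.begin(115200);
  Wire.begin();                            // Mega 2560 is the master on the main I2C bus
}

void loop()
{
  // ask the Teensy for front/center/rear distances (mm) plus a steering value,
  // packed here as four little-endian 16-bit words
  Wire.requestFrom(TEENSY_SENSOR_ADDR, (uint8_t)8);
  if (Wire.available() >= 8)
  {
    int16_t front     = Wire.read() | (Wire.read() << 8);
    int16_t center    = Wire.read() | (Wire.read() << 8);
    int16_t rear      = Wire.read() | (Wire.read() << 8);
    int16_t steerX100 = Wire.read() | (Wire.read() << 8);   // steering value x 100

    Serial.print("F/C/R = "); Serial.print(front); Serial.print('/');
    Serial.print(center); Serial.print('/'); Serial.print(rear);
    Serial.print("  Steer = "); Serial.println(steerX100 / 100.0, 2);
  }
  delay(200);
}
```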

The first step in this process was to verify that the Mega could communicate with the Teensy 3.5 over the main I2C bus, while the Teensy 3.5 communicated with the sensor array(s) via its secondary I2C bus.  To do this I created two programs – an Arduino Mega program to simulate the main robot controller, and a Teensy 3.5 program to act as the sensor controller (the Teensy program will eventually be the final sensor controller program).

Here’s the Mega 2560 simulator program:

And the Teensy 3.5 program

Here’s the hardware layout for the first test:

Arduino Mega system controller in the foreground, followed by the Teensy 3.5 sensor array controller in the middle, followed by a single 3-element sensor array in the background

The next step was to mount everything  on the  robot’s second deck, and verify that the Teensy 3.5 sensor array controller can talk to the robot’s Mega controller via the robot’s I2C bus.   Here’s the hardware layout:

Right-side three sensor array mounted on robot second deck. Teensy 3.5 sensor array controller shown at middle foreground.

After mounting the array and array controller on the robot’s second deck, I connected the Teensy to robot +5V, GND and the I2C bus, and loaded the VL53L0X_Master.ino sketch into the robot’s Mega system controller.  With this setup I was able to demonstrate much improved parallel orientation detection.  The setup is much more precise and straightforward than the previous algorithm using the ‘ping’ sensors.  Instead of having to make several turns toward and away from the near wall looking for a distance inflection point, I can now determine immediately from the sign of the steering value which way to turn, and can detect the parallel condition when the steering value changes sign.

I think I may even be able to use this new ‘super-power’ to simplify the initial ‘offset capture’ phase, as well as the offset tracking phase.  In earlier work I demonstrated that I can use a PID engine to drive a stepper motor to keep the sensor array parallel to a moving target ‘wall’, so I’m confident I can use the same technique to drive the robot’s motors to maintain a parallel condition with respect to a nearby wall, by driving the steering value toward zero.  In addition, I should be able to set the PID ‘setpoint’ to a non-zero value and cause the robot to assume a stable oblique orientation with respect to the wall, meaning I should be able to have the robot drive at a stable oblique angle toward or away from the wall to capture the desired offset.

19 June 2020 Update:

As a preliminary step to using a PID engine to drive the ‘RotateToParallelOrientation’ maneuver, I modified the robot code to report the front, center, and rear distances from the VL53L0X array, along with the calculated steering value computed as (F-R)/C.  Then I recorded the distance and steering value vs relative pointing angle for 20, 30, and 40 cm offsets from my target ‘wall’.

Here’s the physical setup for the experiment (only the setup for the 20 cm offset is shown – the others are identical except for the offset).

Experiment setup for 20 cm offset from target wall

Then I recorded the three distances and steering value for -40 to +40º relative pointing angles into an Excel workbook and created the plots shown below:

Array distances and steering value for 20 cm offset. Note steering value zero is very close to parallel orientation

Array distances and steering value for 30 cm offset. Note steering value zero is very close to parallel orientation

Array distances and steering value for 40 cm offset. Note steering value zero is very close to parallel orientation

The front, center, and rear distance and steering value plots all look very similar from 20-40 cm offset, as one would expect.  Here’s a combined distance plot of all three offset values.

Combined distance plots for all three offsets. Note that the ‘flyback’ lines are an artifact of the plotting arrangement

In the above chart, it is clear that the curves for each offset are very similar, so an algorithm that works at one offset should also work for all offsets between 20 and 40 cm.  The next chart shows the steering values for all three offsets on the same plot.

Steering values vs relative pointing angle for all three offset distances

As might be expected from the way in which the steering value is computed, i.e. (Front-Rear)/Center, the value range decreases significantly as the offset distance increases.  At 20 cm the range is from -0.35 to +0.25, but for 40 cm it is only -0.18 to +0.15.  So, whatever algorithm I come up with for achieving parallel orientation should be effective for the lowest range in order to be effective at all anticipated starting offsets.

To try out my new VL53L0X super-powers, I re-wrote the ‘RotateToParallelOrientation’ subroutine that attempts to rotate the robot so it is parallel to the nearest wall.  The modification uses the PID engine to drive the calculated steering value from the VL53L0X sensor array to zero.  After verifying that this works (quite well, actually!) I decided to modify it some more to see if I could also use the PID engine and the VL53L0X steering value to track the wall once parallel orientation was obtained.  So I set up a two-stage process; the parallel orientation stage uses a PID with Kp = 800 (Ki = Kd = 0), and the tracking stage uses the same PID but with Kp set to half the parallel search value.  This worked amazingly well – much better than anything I have been able to achieve to date.
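Conceptually, both stages use the same proportional-only correction (Ki = Kd = 0), just with different gains and base speeds. Here’s a sketch of the idea, with placeholder motor/sensor helpers – not the actual robot code:

```cpp
const float KP_PARALLEL_FIND = 800.0;   // stage 1: rotate in place until the steering value ~ 0
const float KP_TRACK         = 400.0;   // stage 2: track the wall at half the stage-1 gain

float GetSteeringVal() { return 0.0; }                 // placeholder: (Front - Rear)/Center from the array
void SetLeftRightMotorSpeeds(int left, int right) {}   // placeholder motor helper

// one pass of the PID correction, driving the steering value toward zero
void DriveSteeringValToZero(float Kp, int baseSpeed)
{
    float steerVal = GetSteeringVal();
    int corr = (int)(Kp * steerVal);
    SetLeftRightMotorSpeeds(baseSpeed + corr, baseSpeed - corr);
}
```

For the ‘parallel find’ stage the base speed would be zero (rotate in place); for the tracking stage it would be the normal forward cruise speed.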

The following short video is composed of two separate ‘takes’; the first one shows a ‘parallel orientation’ find operation with Kp = 800.  The second one shows the effect of combining the ‘parallel find’ operation with Kp = 800 with a short segment of wall tracking with Kp = 400.  As can be seen in the video, the tracking portion looks for all the world as if there were no corrections being made at all – it’s that smooth!  I actually had to look through the telemetry data to verify that the tracking PID was actually making corrections to keep the steering value near zero.  Here’s the video

and here’s the output from the two-step ‘find parallel and track’ portion of the video

From the telemetry it can be seen that the ‘find parallel’ stage results in the robot oriented parallel to the wall at about 360 mm or so.  Then in the ‘track’ stage, the ‘Steer’ value is held to within a few hundredths (0.01 to 0.07 right at the end), and the offset distance stays practically constant at 361 to 369 mm – wow!

The results of the above experiments show without a doubt that the three-sensor VL53L0X linear array is quite capable of accurate parallel orientation and wall tracking operations – much better than anything I’ve been able to do with the ‘ping’ sensors.

The last step in this process has yet to be accomplished – showing that the setup can be used to capture a desired offset after the parallel find operation but before the wall tracking stage. Based on the work to date, I think this will be straightforward, but…..

21 June 2020 Update:

I wasn’t happy with the small range of values resulting from the above steering value computation, especially the way the value range decreased for larger offsets.  So, I went back to my original data in Excel and re-plotted the data using (Front-Rear)/100 instead of (Front-Rear)/Center.  This produced a much nicer set of curves, as shown in the following plot

Steering values vs relative pointing angle using (Front-Rear)/100 instead of (Front-Rear)/Center

Using the above curve, I modified the program to demonstrate basic wall offset capture, as shown in the following short video

The video demonstrates a three stage wall offset capture algorithm, with delays inserted between the stages for debugging purposes.  In the first stage, a PID engine is used with a very high Kp value to rotate the robot until the steering value from the VL53L0X array is near zero, denoting the parallel condition.  After a 5 second delay, the PID engine Kp value is reduced to about 1/4 the ‘parallel rotate’ value, and the PID setpoint is changed to maintain a steering value that produces an approximately 20º ‘cut’ toward the desired wall offset value.  In the final stage, the robot is rotated back to parallel, and the robot is stopped.   In the above demonstration, the robot started out oriented about 30-40º toward the wall, about 60-70 cm from the wall.  After the initial parallel rotation, the robot is located about 50 cm from the wall.  In the offset capture stage, the robot moves slowly until the center VL53L0X reports about 36 cm, and then the robot rotates to parallel again, and stops the motors.  At this point the robot is oriented parallel to the wall at an offset that is approximately 28 cm – not quite the desired 30 cm, but pretty close!

The next step will be to eliminate the inter-stage pauses, and instead of stopping after the third (rotate to parallel) stage, the PID engine will be used to track the resulting offset by driving the VL53L0X array steering value to zero.

23 June 2020 Update:

After nailing down the initial parallel-find, offset capture  and return-to-parallel steps, I had some difficulty getting the wall tracking portion to work well. Initially I was trying to track the wall offset using the measured distance after the return-to-parallel step, and this was blowing up pretty much regardless of the PID Kp setting.  After thinking about this for a while, I realized I had fallen back into the same trap I was trying to escape by going to an array of sensors rather than just one – when the robot rotates due to a tracking correction, a single distance measurement will always cause problems.

So, I went back to what I knew really worked – using all three sensor measurements to form a ‘steering value’ as follows:
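Conceptually, the new steering value is the center distance plus the front/rear difference (give or take a scale factor on the difference term):

SteeringVal = CenterDist + (FrontDist – RearDist)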

To complete the setup, the PID setpoint is made equal to the measured center sensor distance at the conclusion of the ‘return-to-parallel’ step.  When the robot is parallel to the wall, Front – Rear = 0, so the Steering value is just the Center distance, as desired, and the PID output will drive the motors to maintain the offset setpoint.  When the robot rotates, the front and rear sensors develop a difference which either adds to or subtracts from the center reading in a way that forms a reliable ‘steering value’ for the PID.

Here are a couple of short videos demonstrating all four steps; parallel-find, approach-and-capture, return-to-parallel, and wall-tracking.  In the first video, the PID is set for (5,0,0) and it tracks pretty nicely.  In the second video, I tried a PID set for (25,0,0) and this didn’t seem to work as well.

At this point I’m pretty satisfied with the way the 3-element VL53L0X ToF sensor array is working as a replacement for the single ultrasonic ‘ping’ sensor.  The robot now has the capability to capture and track a specific offset from a nearby wall – just what I started out to do lo these many months ago.

25 June 2020 Update:

Here are a couple of short videos in my ‘outdoor’ (AKA entry hallway) range. The first video shows the response using a PID of (25,0,0) while the second one shows the same thing but with a PID value of (5,0,0).

The following Excel plot shows the steering value (in this case, just Fdist – Rdist), the corresponding PID response, and the center sensor distance measurement

I Googled around a bit and found some information on PID tuning, including this blog post.  I tried the recommended heuristic method, and wound up with a PID tuning set of (10,0,1), resulting in the following Excel plot and video

 

Stay tuned!

Frank

 

 

Replacing HC-SR04 Ultrasonic Sensors with VL53L0X Arrays

Posted 20 May 2020

In a recent post, I described the issue I had discovered with the HC-SR04 ultrasonic ‘Ping’ distance sensors – namely that they don’t provide reliable distance information for off-perpendicular orientations, making them unusable for determining when Wall-E2, my autonomous wall-following robot, is oriented parallel to the nearest wall.

In the same post, I described my discovery of the STMicroelectronics VL53L0X infrared laser ‘Time-of-Flight’ distance sensor, and the thought that this might be a replacement for the HC-SR04.  After running some basic experiments with one device, I became convinced that I should be able to use an array of three sensors oriented at 30 degree angles to each other to provide an accurate relative orientation to a nearby wall.

So, this post is intended to document my attempt to integrate two 3-sensor arrays into Wall-E2, starting with just the right side.  If the initial results look promising, then I’ll add them to the left side as well.

Physical layout:

The VL53L0X sensors can be found in several different packages.  The one I’m trying to use is the GY530/VL53L0X module available from multiple vendors.  Here’s a shot from eBay

GY530/VL53L0X test setup with Arduino. Note size relative to U.S. Dime coin

The basic idea is to mount three of the above sensors in an array oriented to cover about 60 degrees.  So, I designed and printed a bracket that would fit in the same place as the HC-SR04 ‘ping’ sensor, as shown below:

TinkerCad design for triple sensor mounting bracket

And here is the finished bracket with the center sensor installed (I haven’t received the others yet)

VL53L0X module mounted at the center position on triple-sensor bracket

System Integration:

The VL53L0X chip has a default I2C address of 0x29.  While this address can be changed in software to allow multiple V53L0X modules to be used, the chips don’t remember the last address used, and so must be reprogrammed at startup each time.  The procedure requires the use of the chip’s XSHUT input, which means that a digital I/O line must be dedicated to each of the six modules planned for this configuration, in addition to the power, ground and I2C SCL/SDA lines.  This doesn’t pose an insurmountable obstacle, as there are still plenty of unused I/O lines on the Arduino Mega 2560 controlling the robot, but since I want to mount the sensor arrays on the robot’s second deck in place of the existing HC-SR04 ‘ping’ sensors, those six additional wires will have to be added through the multiple-pin connector pair I installed some time ago. Again, not a deal-breaker, but a PITA all the same.

So, I started thinking about some alternate ideas for managing the sensor arrays.  I already have a separate micro-controller (a Teensy 3.5) managing the IR sensor array used for homing to charging stations, so I started thinking I could mount a second Teensy 3.2 on the second deck, and cut down on the requirement for intra-deck wiring.  Something like the following:

The Teensy would manage startup I2C address assignments for each of the six VL53L0X sensor modules via a secondary I2C bus, and would communicate distance and steering values to the Arduino Mega 2560 main controller via the system I2C bus.

22 May 2020 Update:

After some fumbling around and some quality time with the great folks on the Teensy forum, I managed to get a 3-sensor array working on a Teensy 3.5’s Wire1 I2C bus (SCL1/SDA1), using the Adafruit VL53L0X library.  The code as it stands capitalizes on the little-known fact that when a Teensy 3.x is the compile target, there are three more ‘TwoWire’ objects available – Wire1, Wire2, and Wire3.  So, I was able to use the following code to instantiate and configure three VL53L0X objects:
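A trimmed-down sketch of the approach (the XSHUT pin numbers and I2C addresses here are illustrative; the key point is passing &Wire1 to each sensor’s begin() call):

```cpp
#include <Adafruit_VL53L0X.h>

const int     XSHUT_PINS[3]   = { 33, 34, 35 };        // illustrative XSHUT pin assignments
const uint8_t SENSOR_ADDRS[3] = { 0x30, 0x31, 0x32 };  // new addresses (all sensors boot at 0x29)

Adafruit_VL53L0X sensors[3];

void setup()
{
  Serial.begin(115200);
  Wire1.begin();                           // secondary I2C bus (SCL1/SDA1) on the Teensy 3.5

  for (int i = 0; i < 3; i++)              // hold all sensors in reset
  {
    pinMode(XSHUT_PINS[i], OUTPUT);
    digitalWrite(XSHUT_PINS[i], LOW);
  }
  delay(10);

  for (int i = 0; i < 3; i++)              // wake them one at a time and assign unique addresses
  {
    digitalWrite(XSHUT_PINS[i], HIGH);
    delay(10);
    if (!sensors[i].begin(SENSOR_ADDRS[i], false, &Wire1))   // address, debug flag, I2C bus
    {
      Serial.print("VL53L0X #"); Serial.print(i); Serial.println(" failed to start");
    }
  }
}

void loop()
{
  VL53L0X_RangingMeasurementData_t measure;
  for (int i = 0; i < 3; i++)
  {
    sensors[i].rangingTest(&measure, false);               // get a single range measurement
    Serial.print(measure.RangeMilliMeter);
    Serial.print(i < 2 ? '\t' : '\n');
  }
  delay(200);
}
```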

January 18 2022 Note:

The ‘magic’ that creates the ‘Wire1’, ‘Wire2’, and ‘Wire3’ objects occurs in WireKinetis.h/cpp in the Arduino\hardware\teensy\avr\libraries\Wire folder.

Here’s some sample output:

And the physical setup:

Triple VL53L0X LIDAR array. The two outside sensors are angled at 30 degrees. Teensy 3.5 in background

The next big questions are whether or not this 3-sensor array can be used to determine when Wall_E2 is oriented parallel to the nearest wall, and whether or not the steering value proposed in my previous post, namely

SteeringVal = (Dl – Dr)/Dc

(where the ‘l’, ‘r’ and ‘c’ subscripts denote the left/forward, right/rearward, and center sensor measured distances respectively), will actually be useful for steering the robot.

23 May Update:

I’ve been improving my understanding of my triple-VL53L0X setup, and I think I’m to the point where I can try out the steering idea.  Here’s the experimental setup:

I have a wood barrier set up with a 40 cm offset from the center of the compass rose. Before getting started, I checked each sensor’s measured distance from the barrier by manually placing each sensor at the center of the compass rose, aligned as closely as possible with the perpendicular to the barrier.  When I did this, the measured offsets were within a few mm of the actual distance as measured by the tape measure.

The plan is to rotate the sensor bracket from -30 degrees to +30 degrees relative to the normal perpendicular orientation, while recording the left, center, and right measured distances.  Then I should be able to determine if the steering idea is going to work.  In this experiment, I manually rotated the array from 0 (i.e. the center sensor aligned with the perpendicular to the barrier) to -30 degrees, and then in 10 degree steps from -30 to 30 degrees.  The rotation was paused for a few seconds at each orientation.  As shown in the plot below, the Steering Value looks very symmetric, and surprisingly linear – yay!

Steering values resulting from manually rotating the triple VL53L0X array from -30 to 30 degrees

However, the above experiment exposed a significant problem with my array design.  The 30 degree angular offset between the center sensor and the outer two sensors is too large; when the center sensor is oriented 30 degrees off perpendicular, the line-of-sight distance from the ‘off’ sensor to the wall is very near the range limit for the sensors, with the 40 cm nominal offset chosen for the test.  I could reduce the problem by reducing the initial offset, but in the intended application the initial offset might be more than 40 cm, not less.

So, back to TinkerCad for yet another bracket rev.  Isn’t having a 3D printer (or two, in my case) WONDERFUL!  The new version has a 20 deg  relative angle rather than 30, so when the center sensor is at 30 degrees relative to the wall, the outside one will be at 50 deg.  Hopefully this will still allow good steering while not going ‘off the wall’ literally.

Revised sensor carrier with 20-deg offsets vs 30-deg

04 June 2020 Update:

In order to make accurate measurements of the triple-VL53L0X array performance, I needed a way to accurately and consistently rotate the array with respect to a nearby wall, while recording the distance measurements from each array element.  A few years ago I had created a rotary table program to run a stepper motor for this purpose, but I hadn’t documented it very well, so I had to take some time away from this project to rebuild the tool I needed.  The result was a nice, well-documented (this time) rotary scan table controlled by a Teensy 3.2, and a companion measurement program.  The rotary scan table program can be synchronized with the measurement program via control pins, and the current step  # and relative pointing angle can be retrieved from the scan table via I2C.

Here’s the test setup:

Test setup for triple VL53L0X angle sweeps. Board in background extends about .5m left and right of center.

And here’s a short video showing one scan (-20 to +20 deg)

I ran two scans – one from -30 to +30 deg, and another from -20 to +20 deg.  Unfortunately, my test ‘wall’ isn’t quite long enough for the -30/+30 scan, and in addition the left-most sensor started picking up clutter from my PC in the -30 position. In any case, both scans showed a predictable ‘steering’ value transition from positive to negative at about +10 deg.

-30 to +30 deg scan. Note the steering value crosses zero at about +10 deg

-20 to +20 deg scan. Note the steering value crosses zero at about +10 deg

09 June 2020 Update:

After this first series of scans, I discovered that my scan setup was not producing consistent results, and so all the data taken to this point was suspect.  So, back to the drawing board.

Based on a comment made by John Kvam of ST Microelectronics on an earlier experiment, regarding the 27 degree cone coverage of the photo beam from the LIDAR unit, I thought at least part of the problem was that the beam might be picking up the ‘floor’ (actually my desk surface in these first experiments).  As a way of illuminating (literally) the issue, I created a new stepper motor mount assembly with holes for mounting three laser diodes – one straight ahead, one tilted down at 14 degrees, and one tilted up at 14 degrees. The combination of all three allows me to visualize the extents of the VL53L0X beam coverage on the target surface, as shown in the following photos.

The second photo shows the situation with the ‘wall’ moved away until the beam extent indicator dots are just captured, with the distance measured to be about 220 mm.  The following short video shows how the beam coverage changes as the relative angle between the center sensor and the wall changes.

As shown in the video, the beam extents are fully intercepted by the 6″ wall only during the center portion of the scan, so the off-axis results start to get a little suspect after about 50 degrees off bore-sight.  However, for +/- 30 to 40 degrees, the beam extents are fully on the ‘wall’, so those measurements should be OK.  However, the actual measurements have a couple of serious problems, as follows:

  • As shown in the above Excel plots, the ‘steering’ value, calculated as (Front-Rear)/Center crosses the zero line well to the right of center, even though the sensors themselves are clearly symmetric with the ‘wall’ at 0 degrees
  • The scans aren’t repeatable; when I placed a pencil mark on the ‘wall’ at the center laser dot, ran a scan, and then looked at the laser dot placement after the ‘return to start position’ movement, the laser dot was well to the left of the pencil mark. After each scan, the start position kept moving left, farther and farther from the original start position.

So, I started over with the rotary table program. After some more research on the DRV8825, I became convinced that some or all of the problem was due to micro-stepping errors – or more accurately, to the user (me) not correctly adjusting the DRV8825 current-limiting potentiometer for proper micro-stepping operation.  According to Pololu’s description of DRV8825 operation, if the current limit isn’t set properly, then micro-stepping may not work correctly.  To investigate this more thoroughly, I revised my rotary table program to use full steps rather than micro-steps by setting the microstepping parameter to ‘1’.  Then I carefully set the scan parameters so the scan would traverse from -30 to +30 steps (not degrees). When I did this, the scan was completely repeatable, with the ‘return to starting position’ maneuver always returning to the same place, as shown in the following short video

The above experiment was conducted with the DRV8825 current limit set for about 1A (VREF = 0.5V).  According to information obtained from a Pololu support post on a related subject, I came to believe this current limit should be much lower – around 240-250 mA for proper microstepping operation, so I re-adjusted the current limiting pot for VREF = 0.126V.

After making this adjustment, I redid the full-step experiment and confirmed that I hadn’t screwed anything up with the current limit change – Yay!

Then I changed the micro-stepping parameter to 2, for ‘half-step’ operation, and re-ran the above experiment with the same parameters.  The ‘2’ setting for micro-stepping should enable  micro-stepping operation.  As shown in the following video, the scan performed flawlessly,  covering the -54 to 54 degree span using 60 micro-steps per scan step instead of the 30 previously, and returning precisely to the starting position – double Yay!

Next I tried a microstepping value of 8, with the same (positive) results.  Then I tried a stepping value of 32, the value I started with originally.  This also worked fine.

So, at this point I’m convinced that microstepping is working fine, with the current limit set to about 240 mA as noted above.  This seems to fully address the second bullet above, but as shown in the plot below (taken with microstepping = 32 and with data shown from -30 to +54 degrees), I still have a problem with the ‘steering’ value not being synchronized with the actual pointing angle of the array sensor.

Next I manually acquired sensor data from -30 to +30 degrees by rotating the de-energized stepper motor shaft by hand and recording the data from all three sensors.  After recording them in Excel and plotting the result, I got the following chart.

This looks pretty good, except for two potentially serious problems:

  1. The ‘steering’ value, defined as (Front Distance – Rear Distance)/Center Distance, is again skewed to the right of center, this time about 8 degrees.  Not a killer, but definitely unwanted.
  2. Rotation beyond 30 degrees left of center is not possible without getting nonsense data from the Front (left-hand) sensor, but I can rotate 60 degrees to the right while still getting good data, and this is with much more ‘wall’ available to the left than to the right, as shown in the following photo

-30 deg relative to center

+30 deg relative to center

-60 deg relative to center

+60 deg relative to center

After finishing this, I received a reply to a question I had asked on the Pololu support forum about micro-stepping and current limiting with the Pololu DRV8825.  The Pololu guy recommended that the current limit be set to the coil current rating (350 mA for the Adafruit NEMA 17 stepper motor I’m using), or 0.175V on VREF.  So, I changed VREF from 0.122V to 0.175V and re-verified proper micro-stepping performance.  Here’s a plot using the new setup, micro-stepping set to 32, -54 to +54 deg sweep, 18 steps of 6 deg each.

and a short video showing the sweep action.

In the video, note that at the end, the red wire compass heading pointer and the laser dots return precisely to their starting points – yay!

So, at this point everything is working nicely, except I still can’t figure out why the steering value zero doesn’t occur when the sensor array is oriented parallel to the ‘wall’.  I’m hoping John, the ST Micro guy, can shed some light (pun intended) on this 😉

11 June 2020 Update:

I think I figured out why the steering value zero didn’t occur when the sensor array was oriented parallel to the wall, and it was, as is usually the case, a failure of the gray matter between my ears ;-).

The problem was the way I was collecting the data with the scanner program that drives the rotary table. The scanner program wasn’t properly synchronizing the sensor measurements with the rotary table pointing angle. Once I corrected this software error, I got the following plot (note that the test setup for this plot still has the left and right sensors physically reversed).
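The fix boils down to reading the sensors only after the rotary table has actually arrived at (and settled on) each commanded angle, so each measurement gets tagged with the angle at which it was really taken. Here’s a bare-bones sketch of the corrected measurement loop – moveToAngle() and readSensorMM() are just placeholders for the real rotary table and VL53L0X functions:

// Corrected scan loop: move the rotary table, let it settle, THEN read the
// sensors, so each measurement is tagged with the angle at which it was
// actually taken. moveToAngle() and readSensorMM() are placeholders for the
// real rotary table and VL53L0X functions.
enum SensorID { FRONT_SENSOR, CENTER_SENSOR, REAR_SENSOR };
void moveToAngle(float deg);       // placeholder: command the rotary table to an absolute angle
int  readSensorMM(SensorID id);    // placeholder: read one VL53L0X, in mm

const float SCAN_START_DEG = -30.0, SCAN_END_DEG = 30.0, SCAN_STEP_DEG = 6.0;

void runScan()
{
  for (float angle = SCAN_START_DEG; angle <= SCAN_END_DEG; angle += SCAN_STEP_DEG)
  {
    moveToAngle(angle);            // command the new pointing angle
    delay(200);                    // wait for the table (and sensors) to settle
    int frontMM  = readSensorMM(FRONT_SENSOR);
    int centerMM = readSensorMM(CENTER_SENSOR);
    int rearMM   = readSensorMM(REAR_SENSOR);
    float steerVal = (float)(frontMM - rearMM) / (float)centerMM;
    Serial.print(angle);    Serial.print('\t');
    Serial.print(frontMM);  Serial.print('\t');
    Serial.print(centerMM); Serial.print('\t');
    Serial.print(rearMM);   Serial.print('\t');
    Serial.println(steerVal, 3);
  }
}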

As can be seen in the above plot, the steering value y-axis crossover now occurs very close to zero degrees, where the sensor array is oriented parallel to the wall.

12 June 2020 Update:

After numerous additional steering value vs angle scans, I’m reaching the conclusion that the VL53L0X sensors have the same sort of weakness as the ‘ping’ sensors – they aren’t particularly accurate off-perpendicular.  To verify this, I removed the sensors from my 3-sensor array bracket and very carefully measured the distance from each sensor position to the target ‘wall’ using a tape measure and a right-angle draftsman’s triangle, with the results shown in the following Excel plot.  Measuring the off-perpendicular distances turned out to be surprisingly difficult due to the very small baseline presented by the individual sensor mounting surfaces and the width of the tape measure tape.  I sure wish I had a better way to make these measurements – oh, wait – that’s what I was trying to do with the VL53L0X sensors! ;-).

With just the physical measurements, the steering value crosses the y-axis very close to zero degrees relative to parallel, plus/minus measurement error as expected. One would expect even better accuracy when using a sensor that can measure distances with millimeter accuracy, but that doesn’t seem to be the case. In the following Excel plot, the above tape measure values are compared to an automatic scan using the VL53L0X sensors. As can be seen, there are significant differences between what the sensors report and the actual distance, and these errors aren’t constant with angular orientation (otherwise they could be compensated out).

For example, in the above plot the difference between the solid green (rear sensor distance) and dashed green (rear sensor mounting surface tape measure distance) is about 80 mm at -30 degrees, but only about 30 mm at +30 degrees.  The difference between the measurements for the blue (front) sensor and the tape measure numbers is even more dramatic.

So it is clear that my idea of orienting the sensors at angles to each other is fundamentally flawed, as this arrangement exacerbates the relative angle between the ‘outside’ sensor orientations and the target wall when the robot isn’t oriented parallel to the wall.  For instance, with the robot oriented at 30 degrees to the wall, one of the sensors will have an orientation of at least 50 degrees relative to the wall.

So my next idea is to try a three-sensor array again, but this time with all three sensors oriented in the same direction (parallel to each other), as shown below:

The idea here is that this will reduce the maximum relative angle between any sensor and the target wall to just the robot’s orientation.

3-sensor linear array

I attached the three VL53L0X sensors to the linear array mount and ran an automatic scan from -30 to +30 degrees, with much better results than with the angled-off array, as shown in the Excel plot below:

VL53L0X sensors attached to the linear array mount. Note the nomenclature change

Triple sensor linear array automatic scan. The left/right curves are very symmetric

So, the linear array performance is much better than the previous ‘angled-off’ arrangement, probably because the off-perpendicular ‘look’ angle of the outside sensors is now never more than the scan angle; at -30 and +30 degrees, the look angle is still only -30 and +30 degrees. It’s clear that keeping the off-perpendicular angle as low as possible provides significantly greater accuracy. As an aside, the measured perpendicular distance from the sensor surface to the target ‘wall’ was almost exactly 250 mm, and the scan values at 0 degrees were 275, 238, and 266 mm for the left, center, and right sensors respectively, so the absolute accuracy isn’t great, but I suspect most of that error can be calibrated out.
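If the residual error really is mostly a constant per-sensor offset, it could be calibrated out by simply subtracting each sensor’s zero-degree error from its readings. Here’s a quick sketch of that idea using the numbers above (250 mm actual vs. 275/238/266 mm measured) – just an illustration, not something I’ve implemented yet:

// Per-sensor constant-offset calibration, using the 0-degree numbers above.
// Actual perpendicular distance was ~250 mm; the sensors read 275, 238, and
// 266 mm, so the offsets are +25, -12, and +16 mm respectively.
const int LEFT_OFFSET_MM   = 275 - 250;   // +25 mm
const int CENTER_OFFSET_MM = 238 - 250;   // -12 mm
const int RIGHT_OFFSET_MM  = 266 - 250;   // +16 mm

int calibratedLeftMM(int rawMM)   { return rawMM - LEFT_OFFSET_MM; }
int calibratedCenterMM(int rawMM) { return rawMM - CENTER_OFFSET_MM; }
int calibratedRightMM(int rawMM)  { return rawMM - RIGHT_OFFSET_MM; }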

13 June 2020 Update:

So, now that I have a decently-performing VL53L0X sensor array, the next step is to verify that I can actually use it to orient the robot parallel to the nearest wall, and to track the wall at a specified offset using a PID engine.

My plan is to create a short Arduino/Teensy program that will combine the automated scan program and the motor driver program to drive the stepper motor to maintain the steering value at zero, using a PID engine.  On the actual robot, this algorithm will be used to track the wall at a desired offset.  For my test program, I plan to skew the ‘wall’ with respect to the stepper motor and verify that the array orientation changes to maintain a wall-parallel orientation.
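In skeleton form, the planned test program looks something like the following. This is just a sketch using the stock Arduino PID_v1 library – the gains and the helper functions are placeholders, not my actual values or code:

#include <PID_v1.h>

// Skeleton of the planned parallel-tracking test: rotate the sensor array so
// the steering value (Front - Rear)/Center stays at zero. The gains and the
// helper functions are placeholders, not my actual values or code.
double steerVal = 0, stepperOutDeg = 0, setPoint = 0;   // setpoint of 0 = array parallel to wall
double Kp = 50, Ki = 0, Kd = 0;                         // placeholder gains
PID trackPID(&steerVal, &stepperOutDeg, &setPoint, Kp, Ki, Kd, DIRECT);

int  readSensorMM(int idx);          // placeholder: VL53L0X read (0 = front, 1 = center, 2 = rear)
void moveStepperDegrees(double deg); // placeholder: relative stepper move

void setup()
{
  trackPID.SetOutputLimits(-10, 10); // limit each correction to +/- 10 degrees
  trackPID.SetSampleTime(100);       // recompute every 100 ms
  trackPID.SetMode(AUTOMATIC);
}

void loop()
{
  int front  = readSensorMM(0);
  int center = readSensorMM(1);
  int rear   = readSensorMM(2);
  steerVal = (double)(front - rear) / (double)center;

  if (trackPID.Compute())              // returns true when a new output is available
  {
    moveStepperDegrees(stepperOutDeg); // rotate the array to drive steerVal back toward zero
  }
}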

14 June 2020 Update:

I got the tracking program running last night, and was able to demonstrate effective ‘parallel tracking’ with my new triple VL53L0X linear sensor array, as shown in the following video.

If anyone is interested in the tracking code, here it is:  It’s pretty rough and has lots of bugs, but it shows the method.

At this point, I’m pretty confident I can use the linear array arrangement and a PID engine to track a nearby wall at a specified distance, so the next step will be to replace the ‘ping’ ultrasonic sonar sensor on Wall-E2, my autonomous wall-following robot, with the 3-sensor array, and see if I can successfully integrate it into the ‘real world’ environment.

Stay tuned!

Frank

Off-perpendicular measurement problems with HC-SR04 Ping Measurements

Posted 14 May 2020,

For the last couple of months I have been working on updating my four-wheel autonomous wall-following robot. Unfortunately, I have been unable to really nail down an effective algorithm for capturing and then holding a specified offset distance from the nearest wall. If the robot starts with its body oriented parallel to the wall, it can successfully capture and then hold the desired distance. However, it has turned out to be more difficult than I thought to consistently obtain the required parallel starting orientation.

So, I came up with an idea that was sure to work: instead of making one continuous sweeping turn while looking for an inflection point in the distance reported by the HC-SR04 ‘ping’ sensor, I would make a series of N-degree turns. I wouldn’t know the absolute orientation of the robot with respect to the wall, but I would know the total angle turned by the robot, and (I thought) it should be much easier to determine the inflection point from the resulting Angle/Distance table.
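Finding the inflection point in the Angle/Distance table is simple enough in principle – just look for the spot where the point-to-point distance differential changes sign. Here’s a minimal sketch of that search (the array size and step spacing are placeholders):

// Find the inflection (minimum-distance) point in the Angle/Distance table by
// looking for the place where the point-to-point differential changes sign.
// The array size and contents are placeholders; the table would be filled in
// during the series of N-degree turns.
const int NUM_MEAS = 13;                 // e.g. 13 measurements, 10 degrees apart
float turnAngleDeg[NUM_MEAS];            // cumulative angle turned at each measurement
float distCm[NUM_MEAS];                  // ping distance at each measurement

float findInflectionAngle()
{
  for (int i = 1; i < NUM_MEAS - 1; i++)
  {
    float prevDiff = distCm[i] - distCm[i - 1];
    float nextDiff = distCm[i + 1] - distCm[i];
    if (prevDiff < 0 && nextDiff >= 0)   // distance was falling, now rising:
    {                                    // the sensor was (roughly) perpendicular to the
      return turnAngleDeg[i];            // wall here, i.e. the robot was parallel to it
    }
  }
  return turnAngleDeg[NUM_MEAS - 1];     // no clear inflection point found
}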

Unfortunately, when I tested this idea, it failed because the distances reported by the ping sensor didn’t vary significantly, even though I could plainly see that the actual distance from the robot’s ping sensor to the spot it was pointing at was changing significantly – what the heck? The following short video clip and Excel plot show the situation.

If you were to believe the Excel plot, the inflection point denoting parallel orientation would actually be at about 63 degrees off the perpendicular – clearly not right.

To further investigate the issue, I ran some simple manual ping vs tape measure and LIDAR vs tape measure tests.  The following photo shows the setup, and the Excel plots show the results.

Setup for LIDAR vs tape vs angle measurements. The ‘Ping’ vs tape measurements were set up the same way

As the above plots show, the ping sensor measurements are basically useless for anything more than a few degrees off perpendicular; as the ‘ping diff’ plot in the upper chart shows, the inflection point could be anywhere.  In contrast, the manual tape measurements show a distinct curve, and the point-point differential changes sign at very close to 0 deg off perpendicular.

The lower plot shows the same measurements but using LIDAR rather than the ultrasonic ping sensors. As the plot shows, the LIDAR measurements would actually be reasonably accurate; the inflection point for both the LIDAR measurement (the ‘LIDAR diff’ line above) and the tape measurement (‘Tape Diff’ line above) is at approximately 0 degrees off perpendicular. Unfortunately, the Pulsed Light ‘LIDAR-lite’ units are much more expensive than the ubiquitous HC-SR04 ‘ping’ sensor.

After noodling around on the web for a while, I found some references to a GY-530/VL53L0X ‘Time of Flight’ (ToF) sensor that looks like it might do the job.  From the Adafruit description of this neat device:

  • 3 to 120 cm in ‘default’ mode
  • As far as 1.5 to 2 meters on a nice white reflective surface in ‘long range’ mode
  • 3-5V compatible
  • Control via I2C (default addr = 0x29, but can be configured during setup)
  • Less than 40mA current draw

The big question, of course, is whether or not this device will be any better at off-perpendicular measurements than the ‘ping’ sensors.  I have some on order so hopefully I’ll be able to answer this question shortly.

18 May 2020 Update:

I got my GY-530/VL53L0X sensor in yesterday and I’ve now had a chance to play with it a bit. It’s super small and pretty responsive. Here’s a photo of the setup with an Arduino UNO.

GY-530/VL53L0X ToF sensor test setup. Note the sensor could fit on a U.S. dime with room to spare

I got it to play and produce basic distance data, but haven’t done much else with it yet.  However, while noodling around looking at data sheets, I ran across this demo video by an ST engineer, and it really started me thinking about different ways to use this device as one of an array of sensors; maybe this would make the ‘RotateToParallelOrientation()’ function easier – or even unnecessary!
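For anyone who wants to try one of these, the basic distance readout is about as simple as it gets with the Adafruit_VL53L0X library – the sketch below is essentially the library’s stock example, not my test code:

#include "Adafruit_VL53L0X.h"

// Minimal single-sensor distance readout -- essentially the Adafruit_VL53L0X
// library example, not my actual test program.
Adafruit_VL53L0X lox = Adafruit_VL53L0X();

void setup()
{
  Serial.begin(115200);
  while (!Serial) { }                // wait for the serial port on native-USB boards
  if (!lox.begin())                  // default I2C address is 0x29
  {
    Serial.println(F("Failed to boot VL53L0X"));
    while (1) { }
  }
}

void loop()
{
  VL53L0X_RangingMeasurementData_t measure;
  lox.rangingTest(&measure, false);  // 'false' = no debug printout
  if (measure.RangeStatus != 4)      // status 4 means 'out of range'
  {
    Serial.print("Distance (mm): ");
    Serial.println(measure.RangeMilliMeter);
  }
  else
  {
    Serial.println("out of range");
  }
  delay(100);
}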

19 May 2020 Update:

I was able to make some reasonably precise measurements today with the GY530/VL53L0X Time-of-Flight sensor. Here’s the setup:

Testing setup for GY530/VL53L0X ToF Sensor. Note wood board test surface in background

And here’s a plot of the measured distance vs off-perpendicular angle, along with the actual tape measure values for accuracy comparison.

While the measuring tape values and sensor distance values track very well over the -50 to +40 deg range, they aren’t the same.  The sensor measurements are consistently lower, by 10 mm or so.  That’s not a big deal, and there may be some calibration techniques available to zero out any constant error term.

I also noticed that the sensor values returned were occasionally dead wrong, reporting a value of 20 mm when the real value was more like 300-350 mm. Again, this might be addressable by averaging, but…
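One simple alternative to straight averaging would be a short median filter, which rejects isolated wild readings outright rather than letting them drag the average around. Here’s a sketch of the idea (untested on the VL53L0X, just an illustration):

// Median-of-5 filter for rejecting occasional wildly-wrong readings (e.g. a
// 20 mm report when the true distance is ~300 mm). Untested on the VL53L0X --
// just an illustration. Note the first few outputs are skewed by the zero-filled buffer.
const int MEDIAN_WINDOW = 5;
int ringBuf[MEDIAN_WINDOW] = {0};
int ringIdx = 0;

int medianFilterMM(int newReadingMM)
{
  ringBuf[ringIdx] = newReadingMM;
  ringIdx = (ringIdx + 1) % MEDIAN_WINDOW;

  int sorted[MEDIAN_WINDOW];
  memcpy(sorted, ringBuf, sizeof(sorted));
  for (int i = 1; i < MEDIAN_WINDOW; i++)   // simple insertion sort
  {
    int key = sorted[i], j = i - 1;
    while (j >= 0 && sorted[j] > key) { sorted[j + 1] = sorted[j]; j--; }
    sorted[j + 1] = key;
  }
  return sorted[MEDIAN_WINDOW / 2];         // middle element = median
}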

And lastly, as shown by the above plot, above about 40 degrees off-perpendicular the sensor measurements become unreliable, probably due to the low SNR of the return signal.

The good news is that the off-perpendicular plot does seem to behave fairly well in the -30 to +30 degree range, and has the expected quadratic bowl shape, so it should be usable for finding the parallel point, and maybe even for providing a steering value to a PID engine for offset hold operations once the proper offset has been captured.

To explore the ‘steering value’ idea, I simulated a 3-sensor setup by picking values out of the above plots 3 at a time. For instance, I assumed the -40, -10, and +20 degree values were taken at the same time by 3 different sensors. The following plot shows the result of the following calculation:

(Ml – Mr)/Mc

where the ‘l’, ‘r’, and ‘c’ subscripts refer to the left, right, and center sensors in a 3-sensor array angled at -30, 0 and +30 degrees respectively. Here’s the plot:

Simulation of a 3-sensor array created by picking values from the single-sensor sweep

This plot is quite exciting, as it shows a clear linear relationship between off-perpendicular orientation and steering values that could be used as the input to a PID engine.
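For reference, the ‘pick three values at a time’ calculation is just as easy in code as in Excel. In the sketch below, sweepMM[] stands in for the single-sensor sweep column (assumed to be sampled every 10 degrees from -50 to +50), and simulatedSteerVal(-10) would use the -40, -10, and +20 degree values mentioned above:

// Simulated 3-sensor steering value, built by picking values out of the
// single-sensor sweep three at a time. sweepMM[] stands in for the measured
// distance column, assumed to be sampled every 10 degrees from -50 to +50.
const int NUM_PTS = 11;                 // -50, -40, ..., +40, +50 degrees
float sweepMM[NUM_PTS];                 // single-sensor sweep values (filled in elsewhere)

int idxForAngle(int deg) { return (deg + 50) / 10; }   // angle (deg) -> array index

// Steering value for an array orientation of 'arrayDeg', with the left and right
// sensors angled at -30 and +30 degrees relative to the center sensor.
// Valid for arrayDeg in [-20, +20] in 10-degree increments.
float simulatedSteerVal(int arrayDeg)
{
  float Ml = sweepMM[idxForAngle(arrayDeg - 30)];   // left sensor line of sight
  float Mc = sweepMM[idxForAngle(arrayDeg)];        // center sensor
  float Mr = sweepMM[idxForAngle(arrayDeg + 30)];   // right sensor line of sight
  return (Ml - Mr) / Mc;
}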

Stay tuned,

Frank

Mid-2018 Wall-E2 Project Status

Posted 26 August 2018

It’s been a year and a half since I last described the status and challenges in my ongoing campaign to create Wall-E2, an autonomous wall-following robot. The name ‘Wall-E’ was taken from the 2008 movie of the same name. In the movie, Wall-E was an autonomous trash-compactor robot that had all sorts of adventures, and my Wall-E2 autonomous wall-following robot certainly fills that bill!

In the previous system status report in early 2017, I described the main subsystems and the tasks remaining for each. The robot system consists of the following main subsystems:

  • Battery and charging subsystem
  • Drive subsystem (wheels, motors and motor drivers)
  • IR homing subsystem for charging station
  • LIDAR for front ranging and ultrasonic SONAR for left/right ranging
  • I2C Sensor subsystem (MPU6050 6DOF IMU, FRAM, RTC)
  • Operating system

Battery and charging subsystem:

Since the last update, the battery and charging system has been updated from dual 1-Amp single-cell Adafruit PB1000C chargers utilizing a 5V source to a TP-5100 2-amp dual-cell charger utilizing a 12V source. This significantly simplified the entire system, as the battery pack no longer has to be switched between series and parallel operation. Also, the charging and supply leads are now independent, so the supply leads to the rest of the robot were upgraded to lower-gauge (heavier) wire to reduce IR drops when supplying motor drive currents. See this post for details.

Drive subsystem (wheels, motors and motor drivers):

The motors were upgraded to provide a better gear ratio, although this was done before I realized that most of the traction issues were caused by IR drops in the battery wiring. The motor driver modules are unchanged, but I may later swap them out for more modern 3V-capable drivers so that I can swap in an Arduino Due microcontroller for the Mega (the Due has the same footprint/IO as the Mega, but has a much faster CPU and more memory).

 IR homing subsystem for charging station:

The IR homing subsystem utilizes a pulsed IR beacon on the charging station coupled with dual IR sensors in a flared sunshade housing, backed by a Teensy 3.5 CPU configured as a null pattern matched-filter.   The Teensy reports left/right homing error as a value between -1 and 1 over an I2C bus to the main microcontroller, which drives the motors to null out the signal.   As the system stands today, the operating system can successfully home in on the charging station and connect to the charger. The robot knows its current battery voltage (charge condition) and therefore can decide to connect to the charger or to avoid it.
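In sketch form, the ‘null out the error’ part of the homing behavior is just proportional steering on the reported error value. The snippet below is only an illustration – the I2C address, message format, and motor functions are placeholders, not the actual Wall-E2 code (Wire.begin() and the usual setup/loop scaffolding are omitted):

#include <Wire.h>

// Proportional-steering sketch of the 'null out the error' idea: read the
// homing error (-1 to +1) from the Teensy over I2C and bias the left/right
// motor speeds to drive it to zero. The I2C address, message format, and
// motor functions are placeholders, not the actual Wall-E2 code.
const int   IR_TEENSY_I2C_ADDR = 0x08;   // placeholder address
const int   BASE_SPEED = 150;            // nominal PWM drive level
const float STEER_GAIN = 100.0;          // PWM counts per unit of homing error

void setLeftMotors(int pwm);             // placeholders for the motor-driver functions
void setRightMotors(int pwm);

float readHomingError()
{
  Wire.requestFrom(IR_TEENSY_I2C_ADDR, (int)sizeof(float));  // placeholder format: one 4-byte float
  float err = 0;
  if (Wire.available() >= (int)sizeof(float))
  {
    Wire.readBytes((char*)&err, sizeof(float));
  }
  return err;
}

void homeOnBeacon()
{
  float err = readHomingError();                         // sign indicates which side the beacon is on
  setLeftMotors (BASE_SPEED + (int)(STEER_GAIN * err));  // speed up one side and slow the other
  setRightMotors(BASE_SPEED - (int)(STEER_GAIN * err));  // to steer the error back toward zero
}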

LIDAR for front ranging and ultrasonic SONAR for left/right ranging:

The front/left/right ranging subsystem is one of the most mature subsystems on the robot. The subsystem can successfully follow walls, and detect/recover from ‘stuck’ conditions. The only thing this subsystem lacks is the ability to make consistent turns on different terrain, due to the lack of heading information (this will be supplied by the new tri-sensor module).

I2C Sensor subsystem (MPU6050 6DOF IMU, FRAM, RTC):

The I2C sensor subsystem is a new addition since the last update, and has yet to be fully integrated into the system. The subsystem consists of an InvenSense MPU6050 6DOF IMU (accelerometer/gyro), plus Adafruit FRAM (Ferroelectric RAM) and RTC (Real-Time Clock) modules. The MPU6050 gives the robot the ability to sense relative heading changes, which makes it capable of executing consistent N-degree turns on both hard flooring like the kitchen and atrium areas and the carpet in the rest of the house. The FRAM and RTC units should allow the robot to remember its charge/discharge history, even through power ON/OFF cycles.

The relative heading capability has been tested off-line from the main operating system, but has not yet been integrated into the OS. Same for the FRAM/RTC modules. Integration of this subsystem was stalled for quite a while due to problems with the Arduino I2C (Wire) library, but these problems were just recently resolved by switching to a more robust I2C library (SBWire). See this post for details.

Operating system:

The operating system has evolved quite a bit over the course of this adventure, but its current state seems pretty stable.   The OS is implemented as a set of modes, as follows:

  • MODE_CHARGING: Occurs when the robot is physically connected to a charging station
  • MODE_IRHOMING: Occurs when a charging station beacon signal is detected
  • MODE_WALLFOLLOW: Occurs when the robot isn’t in any other mode.
  • MODE_DEADBATTERY: Occurs when the sensed battery voltage falls below DEAD_BATT_THRESH_VOLTS volts
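In skeleton form, the mode structure is about as simple as it sounds – something along these lines (a sketch for illustration, not the actual Wall-E2 code; the threshold value and sensing functions are placeholders):

// Skeleton of the mode-based operating loop -- a sketch for illustration, not
// the actual Wall-E2 code. The threshold value and sensing functions are placeholders.
enum OpMode { MODE_CHARGING, MODE_IRHOMING, MODE_WALLFOLLOW, MODE_DEADBATTERY };

const float DEAD_BATT_THRESH_VOLTS = 6.0;   // placeholder threshold
float batteryVoltage();                     // placeholder sensing functions
bool  chargerConnected();
bool  beaconDetected();

OpMode getCurrentMode()
{
  if (batteryVoltage() < DEAD_BATT_THRESH_VOLTS) return MODE_DEADBATTERY;
  if (chargerConnected())                        return MODE_CHARGING;
  if (beaconDetected())                          return MODE_IRHOMING;
  return MODE_WALLFOLLOW;                        // default when nothing else applies
}

void loop()
{
  switch (getCurrentMode())
  {
    case MODE_CHARGING:    /* monitor the charge, disconnect when done */   break;
    case MODE_IRHOMING:    /* home in on the charging station beacon */     break;
    case MODE_WALLFOLLOW:  /* normal wall-following behavior */             break;
    case MODE_DEADBATTERY: /* stop and wait -- the battery is exhausted */  break;
  }
}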


Future Work Plans:

  • Complete the integration of the tri-sensor module: This entails adding the hardware and software required to sense loss of power so that the current date/time stamp can be written to the FRAM, along with the complementary ability to read out the last power cycle date/time stamp from the FRAM on power-up.   In addition, the current timed turn routines need to be replaced by the new heading-sensitive turn algorithms.
  • Investigate the idea of multiple charging stations with different IR beacon frequencies. The current matched filter algorithm forms a very narrow-band filter, to discriminate the desired IR beacon signal from unwanted ‘flooding’ from overhead lighting sources and sunlight. The center frequency of the filter is set in software on the Teensy microcontroller, so it should be possible to have the Teensy routinely check for beacon signals at other frequencies, as long as the frequencies are far enough apart to prevent overlap. The current filter center frequency was more or less arbitrarily set to 520Hz – high enough to be well away from, and not a multiple of, 60Hz, but low enough for the Teensy processing rate. Something like 435Hz (60 * 7.25) would probably work just as well, and is far enough away from 520Hz to be well outside the filter bandwidth (about +/- 10Hz IIRC).

Complete the implementation of the fixed charging station.

This task has been completed, and along the way the charging voltage was changed from 5V to 12V, to accommodate the new 12V on-board battery charging system. See this post for details.

Integrate the IR homing software from the 3-wheel robot into Wall-E2’s code base:

This task has also been accomplished.   See this post for details.