
Wall-E3 Right Wall Following Trial

Posted 23 March 2022

Earlier this month I was able to demonstrate a multi-lap left-side wall tracking run by Wall-E3 in my office ‘sandbox’. This post describes my efforts to extend this capability to right-side wall tracking.

Since I already had the left-side wall tracking algorithm “in the can”, I thought it would be a piece of cake to extend this capability to right-side tracking. Little did I know that this would turn into yet another adventure in Wonderland – but at least when I finally made it back out of the rabbit-hole, the result was a distinct improvement over the left-side algorithm I started with. Here’s the left-side code:
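
(The original post embeds the left-side code at this point; it isn't reproduced in this archive. The sketch below only suggests its general shape – a separate hand-tuned branch for each inside/outside, toward/away combination – with every name, angle, and threshold assumed for illustration rather than copied from the real WallE3 source.)

```cpp
// Hedged reconstruction of the 'awful' branching structure -- illustrative only.
enum TurnDir { CW, CCW };

const float WALL_OFFSET_TGT_CM = 40.0f;  // desired wall offset (from the posts)
const float CUT_ANGLE_DEG      = 30.0f;  // approach 'cut' angle (from the posts)

// Stubs standing in for the robot's real sensor/motion primitives
float GetLeftDistCm()                { return 25.0f; }  // current left-side distance, cm
bool  IsPointedTowardWall()          { return true;  }  // heading sense relative to wall
void  SpinTurn(TurnDir dir, float deg)    { /* in-place turn, elided */ }
void  MoveToDesiredLeftDistCm(float cm)   { /* drive until at the given offset */ }

void CaptureLeftWallOffset()
{
    bool outside = GetLeftDistCm() > WALL_OFFSET_TGT_CM;
    bool toward  = IsPointedTowardWall();

    // one hand-tuned branch per (inside/outside) x (toward/away) case
    TurnDir dir; float ang;
    if (outside && toward)       { dir = CCW; ang = CUT_ANGLE_DEG * 0.5f; }
    else if (outside && !toward) { dir = CCW; ang = CUT_ANGLE_DEG * 1.5f; }
    else if (!outside && toward) { dir = CW;  ang = CUT_ANGLE_DEG * 1.5f; }
    else                         { dir = CW;  ang = CUT_ANGLE_DEG * 0.5f; }

    SpinTurn(dir, ang);                            // set up the approach cut angle
    MoveToDesiredLeftDistCm(WALL_OFFSET_TGT_CM);   // close on the target offset
    SpinTurn(dir == CW ? CCW : CW, CUT_ANGLE_DEG); // turn back roughly parallel
}
```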

The above code works, in the sense that it allows Wall-E3 to successfully track the left-side wall of my ‘sandbox’. However, as I worked on porting the left-side tracking code to the right side, I kept thinking – this is awful code – surely there is a better way?

After letting this problem percolate for a few days, I decided to see if I could approach it a little more logically. I realized there were two major conditions associated with the problem – namely, whether the robot's initial position is inside or outside the desired offset distance K. In addition, the robot can start out parallel to the wall, or pointed toward or away from it. Ignoring the ‘started out parallel’ degenerate case, this reminded me of a 3-parameter Karnaugh map configuration, so I started sketching it out in my notebook, and then later in a Word document, as shown below:

As shown above, I broke the 3-parameter map into two 2-parameter Karnaugh maps, with the output denoted by αT. After a few minutes it became obvious that the formula for αT is pretty simple – it’s either αR – αA1 or αR – αA2, depending on whether the robot starts out outside or inside the desired offset distance. In code, this boils down to one line, as shown at the bottom of the Karnaugh map above, using the C++ ‘?’ ternary operator. Choosing CW vs CCW is easy too, as a negative result implies CCW, and a positive one implies CW. The actual code block is shown below:
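
(Again, the actual code block didn't survive the archive; here is a minimal sketch of the one-liner described above, with hypothetical names for the angle variables.)

```cpp
#include <cmath>

enum TurnDir { CW, CCW };
void SpinTurn(TurnDir dir, float deg) { /* motion primitive stub */ }

// alphaR  : robot's current relative heading
// alphaA1 : approach heading for the 'outside the offset' case
// alphaA2 : approach heading for the 'inside the offset' case
float TurnToApproachHeading(bool outside, float alphaR, float alphaA1, float alphaA2)
{
    // the single ternary expression the two Karnaugh maps reduce to
    float alphaT = outside ? (alphaR - alphaA1) : (alphaR - alphaA2);

    // the sign picks the turn direction: negative -> CCW, positive -> CW
    SpinTurn(alphaT < 0 ? CCW : CW, std::fabs(alphaT));
    return alphaT;
}
```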

Here’s a short video of Wall-E3 navigating the office ‘sandbox’ while tracking the right-side wall.

So, it looks like Wall-E3 now has tracking ability for both left-side and right-side walls, although I still have to clean things up and port the simpler right-side code into TrackLeftWallOffset().

25 March 2022 Update:

Well, that was easy! I just got through porting the new right-side wall tracking algorithm over to TrackLeftWallOffset(), and right out of the box was able to demonstrate successful left-wall tracking in my office ‘sandbox’.

At this point I believe I’m going to consider the ‘WallE3_WallTrack_V3’ project ‘finished’ (in the sense that most, if not all, my wall tracking goals have been met with this version), and move on to V4, thereby limiting the possible damage from my next inevitable descent through the rabbit hole into Wonderland.

Stay tuned,

Frank

Wall-E3 Multi-Lap Wall-Following Trial

Posted 06 March 2022

I’ve pretty much finished with transitioning my autonomous wall-following robot from the old Arduino MEGA 2560 main controller to the new Teensy 3.5 main controller, and now I am moving on to actually getting Wall-E3 to do the job it was created for – namely, to follow walls autonomously. This post describes the result of a multi-lap run through my little testing ‘sandbox’, consisting of a set of barriers forming a 2m X 2m rectangular ‘room’. Here’s a short video showing the run:

I captured the telemetry output from the run and went back through it pretty much line-by-line, trying to make sure I understood all the actions displayed in the video – particularly the little off-piste excursion at about 1:00 on the second lap, just before the end of the run (the run was cancelled when a portion of the wall fell over on Wall-E3).

First Leg: 36.0 – 38.9 Sec (5-8 sec in video)

In the first leg, the robot starts off parallel to the wall on the left, but too close (16cm instead of 40cm). It makes a 27.1º CW turn away from the wall to achieve a cut angle of 30º, moves forward, and then turns back 30º CCW to end up parallel to the wall, and about 36cm away – almost perfectly spaced 40cm off the wall.

Next it tracks down the wall from 14.992 sec to 17.390 sec, trying (and generally succeeding) to maintain the 40cm standoff distance.

At 17.390 sec it detects an upcoming obstacle (the obstacle detection distance was set to 20cm for this run), and stops (not quite getting all the way stopped before running into the wall – oops!).

First-to-Second Leg Transition:

The handling procedure for the ‘OBSTACLE_AHEAD’ case is to stop, back straight up to achieve the nominal wall offset distance (40cm here), then make a 90º turn (CW in this case) away from the wall to orient itself parallel to the wall again.

As can be seen from the above telemetry, that is exactly what happens, backing up to the point where the front LIDAR sensor shows 38cm and then making the required 90º turn with SpinTurn(CW, 90.00, 45.00). The ‘backup and turn’ evolution is completed in approximately 1.5 sec.
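
(A minimal sketch of that ‘backup and turn’ sequence follows, assuming helper names like StopBothMotors() and MoveToFrontDistCm(); only the SpinTurn(CW, 90.00, 45.00) call is taken verbatim from the telemetry.)

```cpp
enum TurnDir { CW, CCW };

const float WALL_OFFSET_TGT_CM = 40.0f;   // nominal wall offset

void StopBothMotors()                        { /* stub */ }
void MoveToFrontDistCm(float cm)             { /* backs up until front LIDAR reads cm */ }
void SpinTurn(TurnDir dir, float deg, float degPerSec) { /* stub */ }

// OBSTACLE_AHEAD handler: stop, back straight up, turn away from the wall
void HandleObstacleAhead()
{
    StopBothMotors();                       // halt before (hopefully) hitting the wall
    MoveToFrontDistCm(WALL_OFFSET_TGT_CM);  // back up until the front LIDAR shows ~40 cm
    SpinTurn(CW, 90.00, 45.00);             // 90º turn away from the left wall at 45 deg/sec
}
```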

Second Leg: 47.4 – 49.0 Sec (16-19 sec in video)

Second-to-Third Leg Transition:

Third Leg: 57.0 – 59.8 Sec (25-29 sec in video)

Transition and Fourth Leg: 61.0 – 70.4 Sec (30-40 sec in video)

Transition and Fifth Leg: 71.6 – 80.9 Sec (40-50 sec in video)

Fifth-to-Sixth Leg Transition: 82.1 – 88.0 Sec (50-58 sec in video)

Sixth Leg up to Anomaly: 88.9 – 89.8 Sec (57-59 sec in video)

On this leg an anomaly occurred. The robot detected a ‘Stuck Ahead’ condition, defined as the condition where the variance of the last N front LIDAR distance readings falls below a set threshold. This should never happen while the robot is actually moving, but it clearly did happen in this case (unfortunately I wasn’t recording the front distance measurement or the distance measurements to the other wall, so I can’t go back and see exactly what happened). The recovery procedure for a ‘stuck ahead’ condition is to make a 90º turn away from the nearest wall, move forward for 1 second, and then make another 90º turn to return to a parallel course, but offset 10-20cm from the previous track. In this case, the robot turned toward the nearest wall – clearly a mistake (a bug in TrackLeftWallOffset(), since fixed). However, it was not a disaster, as the robot made the second turn before hitting the wall, and from there on it returned to normal left-wall tracking mode.
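
(A sketch of what such a variance test might look like; the buffer length, threshold, and names here are all assumptions, not the actual WallE3 implementation.)

```cpp
const int   NUM_FRONT_DISTS  = 50;       // how many recent readings to keep (assumed)
const float STUCK_VAR_THRESH = 10000.0f; // detection threshold (assumed)

float frontDistsCm[NUM_FRONT_DISTS];     // circular buffer, refilled elsewhere each loop pass

bool IsStuckAhead()
{
    // mean of the last N front LIDAR distances
    float mean = 0.0f;
    for (int i = 0; i < NUM_FRONT_DISTS; i++) mean += frontDistsCm[i];
    mean /= NUM_FRONT_DISTS;

    // variance; float math sidesteps the uint16_t overflow pitfall
    // described in the 19 February post further down this page
    float var = 0.0f;
    for (int i = 0; i < NUM_FRONT_DISTS; i++)
    {
        float d = frontDistsCm[i] - mean;
        var += d * d;
    }
    var /= NUM_FRONT_DISTS;

    // readings barely changing while 'moving' -> the robot is stuck
    return var < STUCK_VAR_THRESH;
}
```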

Summary:

This first test of left wall tracking performance was very encouraging. The robot was able to continue tracking operations over several laps, including an instance where it recovered from an inadvertent ‘Stuck’ detection. The test could easily have continued until the batteries died, but had to be aborted when one section of the foam-core wall fell in on top of the robot – oops!

Also, this run pointed out the need for more focused telemetry. For this test I was only reporting the left-side and rear distances, but now I know I need to add the right-side measurements as well as the front and rear variance numbers.

09 March 2022 Update:

After cleaning up some messy initialization code, and improving telemetry readouts, I ran another complete left-side wall-tracking lap in my sandbox, as shown in the following short video:

09 March 2022 Left-side wall-tracking lap

The telemetry for this run is shown below:

From the telemetry, it takes about 8 sec for all the sensor hardware to initialize. After that the left/right/front/rear distance arrays are initialized, and the initial front/rear variances are calculated. All this is summarized on lines 26-27.

First leg: 12.866 – 15 sec (3-5 sec in video)

Left-side tracking starts on line 34. First (line 37) the robot turns to its initial offset capture heading, moves to the desired offset distance (not shown), and then turns back to parallel the wall (line 39). Actual tracking starts on line 43 at 12.866 sec elapsed time (this corresponds to about 3 sec into the video).

At line 65 (about 15 sec elapsed time) the robot ‘sees’ the upcoming wall, stops, and then backs up to 38cm (16.4 – 17.3 sec, 5-6 sec in video). At line 78 it makes a 90º CW turn to line up with the current wall section, and then navigates down the wall (18.6 – 21.3 sec, 8-11 sec in video).

Second leg: 18.6 – 21.3 sec (8-11 sec in video)

The robot ‘sees’ the upcoming wall on line 119 (21.010 sec, 11 sec in video), backs up (lines 123-129, 11-12 sec in video), and makes another 90º CW turn to follow the third wall.

The third and fourth legs are very similar to the first two, with the robot ending back where it started at 32.574 sec (23 sec on video), ‘seeing’ the upcoming wall at line 221. At this point I transmitted the ‘C’ character over the wireless link to enter manual control and terminate the run.

Summary:

This 4-leg run was pretty much perfect. I adjusted the MIN_FRONT_OBSTACLE_DIST_CM from 20 to 30cm, and this stopped the robot from banging its head against the walls – yay! Also, the telemetry readout changes made for a much more understandable output. I was happy to see that the front variance stayed well above 10,000 the whole time, but unhappy to see that the rear variance was essentially zero the entire time. The low rear variance is due to the fact that the rear VL53L0X sensor range is only about 100cm, and after that it always reports ‘819’. This is not a real problem – it just means that I can’t use the rear variance number to detect a ‘rear stuck’ condition unless it happens within a meter or so from a wall. Hmm, maybe I could use the information from both the front & rear variance numbers to create a more robust detection system.

Stay tuned,

Frank

Wall-E3 Replacing Mega 2560 With Teensy 3.5 Part VIII

Posted 19 February 2022

At this point in the evolution of Wall-E3, all the hardware seems to be working, so it’s time to get serious about wall tracking. Last fall I made another run at wall tracking with Wall-E2, and wound up with an algorithm that would first capture the desired wall offset, and then track it ‘forever’. This worked great, but the approach is at odds with the general processing architecture. The current tracking architecture is set up as a loop, where all pertinent parameters are updated every loop period, and the appropriate action is taken. In the case of wall tracking, the ‘action’ was one left/right motor speed update. This allows rapid recognition of, and adaptation to, various ‘error’ conditions, like being stuck or about to run into something. The algorithm developed last fall does none of this, so it can’t react properly (or at all, for that matter) to things like an upcoming wall.

I’m starting to think I can use a hybrid approach – use the current capture/tracking algorithm pretty much as it stands from last fall, but have it check for ‘error’ conditions each time through its own internal loop. If any unusual conditions are detected, then force an exit from the tracking routine and another pass through the main ‘traffic director’ function ‘GetOpMode()’. The updated mode assignment will then percolate back down through loop() and cause the appropriate handling function to be called.

22 February 2022 Update:

As a start, I ported the ‘TrackLeft/RightWallOffset()’ functions to Wall-E3 and, after the normal number of screwups and mistakes, I got the left-side tracking algorithm working, as shown in the Excel plot and short movie clip below:

Here’s the complete code for ‘TrackLeftWallOffset()’:
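
(The full listing is embedded in the original post; the sketch below shows only the overall shape being discussed – capture the offset, then steer in an endless loop – with the gains, speeds, and helper names assumed.)

```cpp
const float WALL_OFFSET_TGT_CM = 40.0f;  // target wall offset
const float KP                 = 5.0f;   // proportional steering gain (assumed)
const float BASE_SPEED         = 75.0f;  // nominal motor speed (assumed)
const int   MSEC_PER_LOOP      = 50;     // fixed loop period (assumed)

float GetLeftDistCm()                         { return 40.0f; /* stub */ }
void  CaptureLeftWallOffset()                 { /* turn/move/turn capture maneuver */ }
void  SetMotorSpeeds(float left, float right) { /* stub */ }
void  delay(int msec)                         { /* Arduino core supplies this on the robot */ }

void TrackLeftWallOffset()
{
    CaptureLeftWallOffset();  // maneuver to the 40 cm offset first

    while (true)              // the infinite loop discussed below
    {
        float err = GetLeftDistCm() - WALL_OFFSET_TGT_CM;  // + means too far from wall
        float adj = KP * err;

        // steer back toward the target: one left/right motor-speed update per pass
        SetMotorSpeeds(BASE_SPEED - adj, BASE_SPEED + adj);
        delay(MSEC_PER_LOOP);
    }
}
```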

The above algorithm works great, but it runs in an infinite loop once it has captured the wall offset. Based on my above comments about a hybrid approach, I could add tests for stuck and/or front or back obstacles to this loop (instead of the current ‘while(true)’). This would cause TrackLeftWallOffset() to exit, and the GetOpMode() function could assign the appropriate mode, which would then cause the proper function to execute.

Or, could I eliminate GetOpMode() entirely and put its logic in ‘loop()’? Actually, looking at the loop() function in FourWD_WallE2_V12.ino, my last iteration with the Arduino Mega 2560, I see that GetOpMode() is called at the start of loop(), and then the OpMode switch statement comes pretty much immediately afterwards. Here’s the code:
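
(Reconstructed here from the description rather than copied – the mode names below come from the surrounding text, everything else is assumed.)

```cpp
enum OpMode   { MODE_CHARGING, MODE_HOMING, MODE_WALLFOLLOW };
enum TrackDir { TRACKING_LEFT, TRACKING_RIGHT };

OpMode   GetOpMode()            { return MODE_WALLFOLLOW; /* surveys all sensors */ }
TrackDir GetTrackDir()          { return TRACKING_LEFT;   /* stub */ }
void     HandleCharging()       { /* self-contained, per the text */ }
void     HandleIRHoming()       { /* self-contained, per the text */ }
void     TrackLeftWallOffset()  { /* see the sketch above */ }
void     TrackRightWallOffset() { /* right-side twin */ }

void loop()
{
    switch (GetOpMode())          // 'traffic director' called at the top of loop()
    {
        case MODE_CHARGING:   HandleCharging();  break;
        case MODE_HOMING:     HandleIRHoming();  break;
        case MODE_WALLFOLLOW: // sub-divided into left/right tracking cases
            if (GetTrackDir() == TRACKING_LEFT) TrackLeftWallOffset();
            else                                TrackRightWallOffset();
            break;
    }
}
```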

The MODE_CHARGING and MODE_HOMING cases are self-contained, so no changes would be needed for them. The MODE_WALLFOLLOW case is sub-divided into TRACKING_LEFT and TRACKING_RIGHT cases. If all the inline code in TRACKING_LEFT were replaced with TrackLeftWallOffset() and that of TRACKING_RIGHT with TrackRightWallOffset(), with these two functions augmented by the current if(bIsStuck), if(bObstacleAhead) and if(bObstacleBehind) guard code (pretty much as it now stands), then that should work. I think I’ll give that a whirl and see what happens.

To start the process, I created yet another project – WallE3_WallTrack_V3 (to preserve the currently ‘working OK on left side’ status of WallE3_WallTrack_V2) – and tried porting the GetOpMode() and loop() code from FourWD_WallE2_V12.

02 March 2022 Update:

I now have a ‘loop() only’ version of WallE3 running that properly tracks the left side. Everything is basically the same as before, except the loop() function, shown below:
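
(Again a hedged reconstruction from the description that follows: three blocks, no GetOpMode(), with all anomaly handling buried inside the tracking calls. All names except TrackLeft/RightWallOffset() are assumed.)

```cpp
bool IsIRHomingSignalPresent() { return false; /* stub */ }
bool IsOnCharger()             { return false; /* stub */ }
bool bTrackRightSide = false;  // which wall to follow (assumed flag)

void HandleIRHoming()          { /* 'IR HOMING' block -- essentially unchanged */ }
void HandleCharging()          { /* 'CHARGING' block -- essentially unchanged */ }
void TrackLeftWallOffset()     { /* returns only when an anomaly is detected */ }
void TrackRightWallOffset()    { /* ditto */ }

void loop()
{
    if (IsIRHomingSignalPresent())  HandleIRHoming();
    else if (IsOnCharger())         HandleCharging();
    else if (bTrackRightSide)       TrackRightWallOffset(); // 'WALL TRACKING' block,
    else                            TrackLeftWallOffset();  // much simpler than before
}
```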

The ‘IR HOMING’ and ‘CHARGING’ blocks are essentially unchanged, and the ‘WALL TRACKING’ block is much simpler. All ‘anomaly’ conditions (robot stuck either forward or backward, robot approaching an obstacle ahead or behind, dead battery, etc.) are handled internally by the two ‘TrackLeft/RightWallOffset()’ functions. Here’s the (potentially infinite) ‘while()’ loop:
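
(A sketch of that loop; CheckForErrorCondx() and NO_ANOMALIES are named in the text that follows, while the error codes and helpers are assumed.)

```cpp
enum ErrorCode { NO_ANOMALIES, STUCK_AHEAD, STUCK_BEHIND,
                 OBSTACLE_AHEAD, OBSTACLE_BEHIND, DEAD_BATTERY };

ErrorCode CheckForErrorCondx()          { return NO_ANOMALIES; /* surveys all sensors */ }
void      UpdateWallFollowMotorSpeeds() { /* one left/right motor-speed update */ }
void      HandleAnomaly(ErrorCode ec)   { /* dispatch to the matching handler */ }

void TrackLeftWallOffsetLoop()
{
    ErrorCode errcode = NO_ANOMALIES;

    while (errcode == NO_ANOMALIES)     // potentially infinite if nothing goes wrong
    {
        UpdateWallFollowMotorSpeeds();  // normal tracking work each pass
        errcode = CheckForErrorCondx(); // any other return value exits the loop
    }

    HandleAnomaly(errcode);  // per-anomaly handler, then exit back to loop()
}
```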

As the code above shows, the while loop will continue to execute as long as the ‘errcode’ value is ‘NO_ANOMALIES’. Internally the ‘CheckForErrorCondx()’ function surveys the inputs from all sensors and attempts to detect any anomalous behavior. Any return value except NO_ANOMALIES causes the while() loop to exit. Each potential anomaly condition has its own handling function, which exits back to ‘loop()’ and the process starts all over again. I believe this is a much cleaner approach than I had before with the ‘GetOpMode()’ function.

I also took the opportunity at this point to fix a long-standing problem with the code. The front-facing LIDAR unit returns distances in cm, while all seven VL53L0X time-of-flight distance sensors report in mm. Not only did this torture me mentally (let’s see – is it cm or mm here?), but it caused the new ‘CalcRearVariance()’ function to crater, because the 10x larger numbers, when squared, overflowed the ‘uint16_t’ type and produced crazy variance numbers. So, I changed the GetRequestedVL53L0XValues() function to convert mm to cm, changed all the variable names from xxxxMM to xxxxCm, and carefully combed through the entire codebase, correcting the inevitable wash of ’10x’ errors. In the end, though, I made the codebase much more consistent and understandable (I hope).
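
(A quick standalone check of the arithmetic behind that failure, using an illustrative 819 mm out-of-range reading: squaring mm values blows past the 65,535 uint16_t ceiling and wraps, while the same reading in cm stays comfortably in range.)

```cpp
#include <cstdint>
#include <cstdio>

int main()
{
    uint16_t distMM = 819;            // VL53L0X out-of-range report, in mm
    uint16_t distCm = distMM / 10;    // same reading converted to cm (81)

    uint16_t sqMM = distMM * distMM;  // true value 670,761 wraps to 15,401
    uint16_t sqCm = distCm * distCm;  // 6,561 -- fits easily in uint16_t

    printf("mm^2 stored as %u, cm^2 stored as %u\n", sqMM, sqCm);
    return 0;
}
```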

Stay Tuned,

Frank