After a bit of a hiatus, I finally got around to some basic wall-following tests with the new 4WD robot (aka ‘Wall-E2’), and they seemed to go fairly well, with of course the normal number of screw-ups and minor disasters. As the wife and I were planning a weekend with the kids & grand-kids in St. Louis, and one of the grand-kids was also my fellow robot-master, I decided to take Wall-E2 along so he could strut his stuff in a different environment. While we were there, we got in lots of kitchen/dining room testing (turns out the breakfast room at their place has a wall layout just about perfect for the testing we were doing). During the testing, we ran down and killed off at least one significant, but very subtle, bug (guess what happens when you send -15 to the 8-bit Arduino D/A) in the motor driver routines, so that was real progress, and we also investigated a couple of advanced ‘pre-turn’ algorithms that showed promise for more natural wall-to-wall intersection navigation. All in all we had a great time, and Danny got to see (and influence!) the current state-of-play for the 4WD robot.
After returning home, I decided to try and document Wall-E2’s behavior with and without the new pre-turn algorithm, as a prelude to investigating modifications that might retain the advantages of the pre-turn algorithm while avoiding some of the problems we discovered. So, I made the two short videos shown below. The first video shows Wall-E2’s baseline behavior, without the pre-turn maneuver enabled, and the second one shows the same situation, but with the pre-turn maneuver enabled.
In ‘normal’ operation, as shown in the first video above, Wall-E2 has a very simple instruction set: follow the closest wall until it hits something. When an obstruction is encountered, it backs up, turns away from the closest wall, and then parties on. The idea of the ‘pre-turn’ is to give Wall-E2 more natural wall intersection behavior; instead of waiting until it hits the wall to react, the pre-turn maneuver anticipates the upcoming wall and makes the turn early. If done correctly, Wall-E2 should be able to navigate most wall-to-wall intersections, as shown in the second video above.
While this works great in the above situation, we discovered some significant ‘gotchas’ with this algorithm while testing it in Danny’s breakfast nook/Wall-E2 test range. Correct execution of the pre-turn maneuver assumes that Wall-E2 will be following the closest wall when the upcoming wall (the one on the other side of the upcoming corner) gets into the trigger window, but in several of our tests, Wall-E2 turned the wrong way, into the wall it was following instead of away from it. Upon closer observation we discovered this was due to Wall-E2 going by a nearby table leg at just the right distance from the upcoming wall. Just as Wall-E2 got into the trigger window, it switched control from the wall on the left to the table leg on the right, because at that exact instant, Wall-E2 was closer to the table leg than it was to the wall. And, because in the pre-turn maneuver Wall-E2 is programmed to turn away from the followed (i.e. closest) wall, it dutifully turned away from the table leg – and smack into the wall – oops!
Another major gotcha with the current algorithm is that the pre-turn maneuver is executed in the foreground, so nothing else can happen at the same time. In particular, no sensor measurements are taking place, so the duration and/or magnitude of the turn can’t be adjusted (extended or truncated) based on the actual corner geometry.
So, although the pre-turn maneuver works great when it works, and it works most of the time, it has real problems in even mildly cluttered environments, and creates a 1-2 second ‘blind spot’ for the sensors. We may be able to use filtering/averaging to handle clutter, and we may be able to segment the pre-turn maneuver sufficiently so that it can be adjusted on-the-fly to accommodate different corner geometries – we’ll see.