Tag Archives: LIDAR

Wall-E goes back to basics; Ping sensors and fixed forward-facing LIDAR

Posted November 14, 2015

Back in the spring of this year I ran a series of experiments that convinced me that acoustic multipath problems made it impossible to reliably navigate and recover from ‘stuck’ conditions using only acoustic ‘ping’ sensors (see “Adventures with Wall-E’s EEPROM, Part VI“).  At the end of that post I described some alternative sensors, including the LIDAR-Lite unit from Pulsed Light.

In that same article, I also hypothesized that I might be able to replace all the acoustic sensors with a single spinning-LIDAR system, using another motor and a cool slip-ring part from AdaFruit.  In the intervening months between April and now I have been working on implementing and testing this spinning-LIDAR idea, but I recently arrived at the conclusion that this idea too is a dead end (to paraphrase Thomas Edison: “I have not failed. I’ve just found two ways that won’t work.”).  I simply couldn’t make the system work.  To acquire ranging data fast enough to keep Wall-E from crashing into things, I had to get the rotation rate up to around 300 RPM (i.e. 5 rev/sec).  At that rate, however, I couldn’t process the data fast enough with my Arduino Uno processor, the data itself became suspect because the LIDAR was moving while each measurement was being taken, and the higher the RPM, the faster the battery ran down.  In the end, I realized I was throwing most of the LIDAR data away anyway, and was paying too high a price in battery drain and processing for the data I did keep.

So, in my last post on the spinning-LIDAR configuration I summarized my findings to date and described my plan to ‘go back to basics’: return to acoustic ‘ping’ sensors for left/right wall ranging, and replace the spinning-LIDAR system with a fixed forward-facing LIDAR.  Having just two ‘ping’ sensors pointed in opposite directions should suppress or eliminate inter-sensor interference problems, and the fixed forward-facing LIDAR should be much more effective than an acoustic sensor for obstacle and ‘stuck’ detection.

Over the last few weeks I have been reworking Wall-E into the new configuration.  First I disassembled the spinning-LIDAR system and all its support elements (Adafruit slip-ring, motors and pulleys, speed-control tachometer, etc.).  Then I re-mounted the left and right ‘ping’ sensors (fortunately I had kept the appropriate 3D-printed mounting brackets), and designed and 3D-printed a front-facing bracket for the LIDAR unit.  While I was at it, I 3D-printed a new left-side bumper to match the right-side one, and decided to retain the laser pointer from the previous version.  The result is shown in the pictures below.

Wall-E after being stripped down for remodeling.

Oblique view showing the new fixed front-facing LIDAR installation, complete with laser pointer.

Side view showing both ‘ping’ sensors. Note the new left-side bumper

After getting all the physical and software rework done, I ran a series of bench tests to check the feasibility of using the LIDAR for ‘stuck’ detection.  I was able to determine that by continuously calculating the mathematical variance of the last 50 LIDAR distance measurements, I could reliably detect the ‘stuck’ condition; while Wall-E was actually moving this variance remained quite large, but it rapidly decreased to near zero when Wall-E stopped making progress.  In addition, the instantaneous LIDAR measurements proved fast enough and accurate enough for obstacle detection (note that I am currently using the much faster ‘Blue Label’ version of the LIDAR-Lite unit).
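For illustration, here is a bare-bones sketch of that variance check.  This is my own sketch of the idea, not the actual Wall-E code; the buffer size matches the 50-element scheme described above, but the threshold and the helper function are placeholders.

```cpp
// 'Stuck' detector sketch: keep the last 50 LIDAR distances in a circular
// buffer and declare 'stuck' when their variance collapses toward zero.
const int NUM_SAMPLES = 50;
int  distBuf[NUM_SAMPLES];                 // last 50 LIDAR distances, cm
int  bufIdx  = 0;
bool bufFull = false;                      // true once all 50 slots hold valid data

int readForwardLidarCm() { return 100; }   // placeholder stand-in for the LIDAR-Lite read

void addDistance(int cm)
{
  distBuf[bufIdx] = cm;
  bufIdx = (bufIdx + 1) % NUM_SAMPLES;
  if (bufIdx == 0) bufFull = true;         // wrapped once - buffer fully populated
}

float distVariance()                       // population variance of the buffer contents
{
  float mean = 0;
  for (int i = 0; i < NUM_SAMPLES; i++) mean += distBuf[i];
  mean /= NUM_SAMPLES;
  float sumSq = 0;
  for (int i = 0; i < NUM_SAMPLES; i++) {
    float d = distBuf[i] - mean;
    sumSq += d * d;
  }
  return sumSq / NUM_SAMPLES;
}

bool isStuck()                             // moving = large variance; stuck = near zero
{
  return bufFull && distVariance() < 4.0;  // threshold would have to be tuned empirically
}

void setup() { Serial.begin(115200); }

void loop()
{
  addDistance(readForwardLidarCm());
  if (isStuck()) {
    Serial.println(F("Stuck - start back-up-and-turn recovery"));
    bufIdx = 0;  bufFull = false;          // restart detection after recovery
  }
  delay(50);
}
```

Note that the bufFull guard keeps a ‘stuck’ declaration from happening until the buffer holds 50 valid samples, which is consistent with the slow first detection noted in the video events below.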

Finally, I set Wall-E loose on the world (well, on the cats anyway) for some ‘field’ testing, with very encouraging results.  The ‘stuck’ detection algorithm seems to work very well, with very few ‘false positives’, and the real-time obstacle detection scheme also seems to be very effective.  Shown below is a video clip of one of the ‘field test’ runs.  The significant events in the video are:

  • 14 sec – 30 sec:  Wall-E gets stuck on the coat rack, but gets away again.  I *think* the reason it took so long (16 seconds) to figure out it was stuck is that the 50-element diagnostic array wasn’t fully populated with valid distance data in the 14 seconds from the start of the run to the point where it hit the coat rack.
  • 1:14:  Wall-E approaches the dreaded ‘stealth slippers’ and laughs them off.  Apparently the ‘stealth slippers’ aren’t so stealthy to LIDAR ;-).
  • 1:32:  Wall-E backs up and turns around for no apparent reason.  This may be an instance of a ‘false positive’ ‘stuck’ declaration, but I don’t really know one way or the other.
  • 2:57: Wall-E gets stuck on a cat tree, but gets away again no problem.  This time the ‘stuck’ declaration only took 5-6 seconds – a much more reasonable number.
  • 3:29:  Wall-E gets stuck on a pair of shoes.  This one is significant because the LIDAR unit is shooting over the toe of the shoe, and Wall-E is wriggling around a bit.  But, while it took a little longer (approx 20 sec), Wall-E did manage to get away successfully!

 

So, it appears that  at least some  of my original goals for a wall-following robot have been met.  In my first post on the idea of a wall-following robot back in January of this year, I laid out the following ‘system requirements’:

  • Follow walls and not get stuck
  • Find and utilize a recharging station
  • Act like a cat prey animal (i.e. a mouse or similar creature)
  • Generate lots of fun and waste lots of time for the humans (me and my grandson) involved

So now Wall-E does seem to follow walls and not get stuck – check.   It still cannot find/utilize a charging station, so that one has definitely not been met.  With the laser pointer left over from the spinning-LIDAR version, it is definitely interesting to the cats, and one of them has had a grand time chasing the laser dot – check.  Lastly, the Wall-E project has been hugely successful in generating lots of fun and wasting lots of time, so that’s a definite CHECK!! ;-).

Next Steps:

While I’d love to make some progress on the idea of getting Wall-E to find and utilize a charging station, I’m not sure that’s within my reach.  However, I do plan to see if I can get my new(er) 4WD chassis up and running with the same sort of ping/LIDAR sensor setup, to see if it does a better job of navigating on carpet.  Stay tuned!

Frank


Re-working Wall-E, Part I

Posted October 25, 2015

About a month ago I posted an article describing ‘the end of the road’ for the spinning-LIDAR implementation on my 2-motor Wall-E wall-following robot (see the post here).  Since then I haven’t had the time (or frankly, the inclination) to start the process of re-working Wall-E.  However, today I decided to start by removing the spinning LIDAR and all the supporting pieces, stripping Wall-E back to bare-bones as shown in the photo below.

Wall-E stripped back to bare-bones. Note removed spinning LIDAR parts on left.

The LIDAR itself (or its older silver-label cousin) will go back on, but in a fixed forward-looking configuration.

I will probably also take the time now to make a couple of other improvements, such as replacing the left-side (right side in the above photo) blue plastic bumper with the improved red plastic one, and replacing the charging jack so the 4WD and 2WD versions can share the same charging adapter.

Stay tuned!

Frank


Building up the New 4WD Robot – Part 2

Posted 10/13/15

After getting the battery pack and charger module assembled and working properly, it was time to integrate it into the 4WD chassis, and add the motor drivers and navigation controller subsystems.  The battery pack/charger module was constructed in a way that would allow it to be mounted in the motor bay with the motors, giving it some protection and getting it out of the way of the rest of the robot.  The DFRobotics 4WD kit comes complete with a power plug and SPDT power switch, and these were used for charging and main robot power switching, respectively.

Battery pack being installed in the motor bay. Note power switch and charging plug on left

Battery pack installed in motor bay

Maintenance access to motor bay and battery pack

Motor bay closed, with motor and power wiring shown

After getting the power pack installed and the motors wired, it was time to move on to the ‘main deck’ components – namely the Arduino Mega and the two dual-motor drivers.  We spent some quality time trying different component layouts, and finally settled on the one shown in the following photo.

Initial component placement on the main deck

The motor drivers and the Arduino were mounted on the main deck plate by first putting down a layer of adhesive-backed UHMW (Ultra-High Molecular Weight polyethylene) tape to insulate the components from the metal deck plate, and then a layer of double-sided foam tape to secure the components to the UHMW tape.  In the photo above, the Arduino is mounted toward the ‘front’ (arbitrarily designated as the end with the on/off switch and charging plug), and the motor drivers are mounted toward the ‘rear’.

After mounting the motor drivers and the Arduino, we added a terminal strip at the rear for power distribution, ribbon cables from the Arduino to the motor drivers, and cut/connected the motor wires to the motor driver output lugs.  The result is shown in the photos below.

Ribbon cable initial installation

Final ribbon cable routing

To test this setup, we borrowed a nice demo program created by John Boxall of Tronix Labs.  This demo was particularly nice, as it used the exact same flavor of L298-based motor driver as the ones installed on our robot, so very little adjustment was required to get the demo code to run our 4WD robot.  After fixing the expected motor-wiring reversal errors, and getting all the wheels to go in the same direction at the same time, we were able to video the ‘robot jig’ shown below.

 
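The demo code itself isn’t reproduced here, but the basic idea of exercising one channel of an L298-style driver from an Arduino looks something like the following sketch (pin assignments are placeholders, not our actual wiring, and this is not John Boxall’s code):

```cpp
// Exercise one channel of an L298-based motor driver: two direction inputs
// plus a PWM 'enable' pin that sets the speed.
const int IN1 = 8;    // direction input 1 (placeholder pin)
const int IN2 = 9;    // direction input 2 (placeholder pin)
const int ENA = 10;   // PWM enable pin for this channel (placeholder pin)

void setup()
{
  pinMode(IN1, OUTPUT);
  pinMode(IN2, OUTPUT);
  pinMode(ENA, OUTPUT);
}

void loop()
{
  // forward at ~50% speed for 2 seconds
  digitalWrite(IN1, HIGH);
  digitalWrite(IN2, LOW);
  analogWrite(ENA, 128);
  delay(2000);

  // reverse for 2 seconds; swapping IN1/IN2 reverses the motor, which is
  // also why miswired motor leads show up as wheels running backwards
  digitalWrite(IN1, LOW);
  digitalWrite(IN2, HIGH);
  delay(2000);

  // coast for 1 second
  analogWrite(ENA, 0);
  delay(1000);
}
```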

So, at this point we have a working robot, but with no navigational capabilities.  Now ‘all’ we have to do is add the XV-11 NEATO LIDAR on the sensor deck, get it to talk to the Arduino, and then get the Arduino smart enough to navigate.

Stay tuned! ;-).

Frank and Danny

Building up the New 4WD Robot – Part 1

Back in May of this year I purchased a 4-wheel-drive robot kit from DFRobots as a possible successor to my then-current 3-wheel Wall-E (2 drive motors and a small castering nose wheel).  I didn’t have time to do more than just assemble the basic kit (see this post), so it spent the intervening months gathering dust on my shelf.  Coincidentally, my wife arranged with our son to kidnap her grand-kids for a week (giving our son and his wife a much-needed break, and giving us some quality grand-kid time), so I decided to brush off the 4WD robot kit as a fun project to do with them, in parallel with re-working Wall-E to remove its spinning-LIDAR assembly and replace it with a hybrid LIDAR/sonar setup.

The plan with the 4WD robot is to incorporate another spinning-LIDAR setup – this one utilizing the XV-11 spinning LIDAR system from the NEATO vacuum cleaner.  This system rotates at approximately 300 RPM (5 RPS), so there is a decent chance it will be fast enough for effective wall navigation (the Wall-E LIDAR setup couldn’t manage more than about 200 RPM, and that just wasn’t good enough).

However, before we get to the point of determining whether or not the XV-11 LIDAR system will work, there is a LOT of work to be done.  At the moment I can see four major subsystems to be implemented:

  • Battery Supply and Charger
  • Motor Controller integration
  • XV-11 LIDAR controller
  • Navigation controller

Battery Supply and Charger

In preparation for the project, I purchased two 2000 mAh Li-Ion batteries and a supply of ‘basic Li-Ion charger’ modules from SparkFun.  In my previous work with Wall-E, I had devised a pretty decent scheme for charge/run switching, using a small 2-pole double-throw relay to switch the battery pack from a series connection for running the robot to independent-parallel for charging, so I planned to use the same setup here.  After the usual number of screwups, I wound up with a modular battery pack system that could be tucked away in the motor compartment of the 4WD robot.

Battery pack showing charging modules and switching relay. The tan capacitor-looking component is actually a re-settable fuse

In the above photos, the tan component that looks very much like a non-polarized capacitor is actually a re-settable fuse!  I really did not want to tuck this battery pack away in a relatively inaccessible location without some way of preventing a short-circuit from causing a fire or worse.  After some quality time with Google, I found a Wikipedia entry for the ‘polymeric positive temperature coefficient’ device (PPTC, commonly known as a resettable fuse, polyfuse, or polyswitch).  These devices transition from a low- to a high-resistance state when they get hot enough – i.e. when the current through them stays above a threshold level for long enough.  They aren’t fast (switching time on the order of seconds for 5X current-overload conditions), but speed isn’t really a factor for this application.  I don’t really care if I lose a controller board or two, as long as I don’t burn the house down.  Even better, these devices reset some time after the overload condition disappears, so (assuming the batteries themselves weren’t toasted by the overload) recovery might be as simple as ‘don’t do that!’.

Motor Controller Integration

When I got the 4WD robot kit, I also purchased the DFRobots ‘Romeo’ controller module. This module integrates an Arduino Uno-like device with dual H-bridge motor controllers, with outputs for all 4 drive motors.  Unfortunately, the Romeo controller has only one hardware serial port, and I need two for this project (one for the PC connection, and one to receive data from the XV-11 LIDAR).  So, I plan to use two Solarbotics dual-motor controllers and an Arduino Mega as the controller instead.

XV-11 LIDAR Controller

The XV-11 LIDAR from the NEATO vacuum has two interfaces: a 2-wire motor drive that expects a PWM signal from a FET driver, and a high-speed serial output that transfers packets containing RPM, status, and distance data.  A company called ‘Get Surreal’ has a really nice Teensy 2.0-based module and demo software that takes all the guesswork out of controlling the XV-11, but its only output is to a serial monitor window on a PC.  Somehow I have to get the data from the XV-11, decode it to extract RPM and distance data, and use the information for navigation.  The answer I came up with was to use the Get Surreal PCB without the Teensy as a sort of ‘dongle’ that allows me to control the XV-11 from an Arduino Mega.  XV-11 serial data is re-routed from the Teensy PCB connector to Rx1 on the Arduino Mega.  The Mega, running the same demo code as the Teensy, decodes the serial packets, extracts the RPM data, and provides the required PWM control signal back to the driver FET on the Teensy PCB.  Not particularly fancy, but it works!
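As a rough sketch of what the Mega side looks like, using the commonly documented XV-11 packet format (22-byte packets starting with 0xFA, with the motor speed in bytes 2 and 3 as a little-endian value in 1/64-RPM units).  This is my illustration, not the actual Get Surreal code, which does full packet validation and a proper PID loop; the pin and the correction gain are placeholders.

```cpp
// Read XV-11 packets on Serial1, extract the RPM field, and trim the motor
// PWM to hold ~300 RPM. Bare-bones proportional correction for illustration.
const int   MOTOR_PWM_PIN = 9;       // drives the FET on the controller PCB (placeholder)
const float TARGET_RPM    = 300.0;

int     motorPwm = 180;              // starting duty-cycle guess
uint8_t pkt[22];

void setup()
{
  Serial.begin(115200);              // USB host
  Serial1.begin(115200);             // XV-11 data stream
  pinMode(MOTOR_PWM_PIN, OUTPUT);
  analogWrite(MOTOR_PWM_PIN, motorPwm);
}

void loop()
{
  // hunt for the 0xFA start byte, then read the rest of the packet
  if (Serial1.available() && Serial1.read() == 0xFA) {
    pkt[0] = 0xFA;
    Serial1.readBytes(&pkt[1], 21);

    float rpm = (pkt[2] | (pkt[3] << 8)) / 64.0;   // little-endian, 1/64-RPM units

    // crude proportional speed trim (the real firmware uses a PID controller)
    motorPwm = constrain(motorPwm + (int)((TARGET_RPM - rpm) * 0.05), 0, 255);
    analogWrite(MOTOR_PWM_PIN, motorPwm);
  }
}
```

With only the RPM field needed for speed control, the distance payload in each packet can be ignored or decimated as required.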

XV-11 Controller V1.1 without the Teensy 2. The original connectors were replaced with right-angle versions

Side view of the XV-11. The right-angle connectors lower the height considerably

XV-11 connected to the ‘controller dongle’

The ‘before’ and ‘after’ versions of the XV-11 controller

During testing of the ‘XV-11 dongle’, I discovered an unintended consequence of the change from upright to right-angle connectors: the right-angle connectors caused the pin assignments to be reversed – yikes!  My initial reaction was to simply pull the pins from the XV-11 connectors and re-insert them in the now-proper places.  Unfortunately, this meant that I could no longer go back to controlling the XV-11 from the original controller module – bummer.  So, I wound up constructing two short converter cables to convert the now-reversed XV-11 cables back to the ‘normal’ sense for the Get Surreal controller module.  Nothing is ever simple….

Navigation Controller

The navigation controller for the 4WD robot will be an Arduino Mega 2560.  This board was chosen because it has LOTS of digital/analog/PWM I/O and multiple hardware serial ports.  You can’t have too many I/O ports, and I need at least two hardware serial ports (one for the USB connection to the host PC, and one to interface to the NEATO XV-11 spinning LIDAR).  The downside to using the Mega is its size – almost twice the area of the Arduino Uno.

Arduino Mega 2560 for LIDAR I/O and navigation control

End of the Road for Wall-E’s Spinning LIDAR System :-(

30 September, 2015

Sad to say, but I believe I have ‘hit the wall’ on development of an effective spinning-LIDAR navigation system for Wall-E.  Even with a souped-up spinning platform (approx 3 rev/sec), I have been unable to reliably navigate a straight hallway, much less recover from stuck conditions.  In addition, the combination of the drive-motor currents and the increased current required to drive the spinning platform at the higher rate drains the battery in just a few minutes.  So, I have reluctantly concluded that the idea of completely replacing the acoustic sensors with a spinning-LIDAR system is a dead end; I’m going to have to go back to the original idea of using the acoustic sensors for wall following, and a fixed-orientation LIDAR for stuck detection/recovery.

I started down the spinning-LIDAR road back in April of this year, after concluding a series of tests  that proved (at least to me) that the use of multiple acoustic sensors was not going to work due to intractable multi-path and self-interference problems.  In a follow-up post at the end of that month, I speculated that I might be able to replace all the acoustic sensors  with  a spinning-LIDAR system for both wall following and ‘stuck detection’.  At the time I was aware of two good candidates for such a system – the NEATO XV-11 robot vacuum’s spinning-LIDAR subsystem, and the Pulsed Light ‘LIDAR-Lite’ component.  I chose to pursue the Pulsed-Light option because it was much smaller and lighter than the XV-11 module, and I thought it would be easier to integrate onto the existing Wall-E platform.  Part of the appeal of this option was the desire to see if I could design and build the necessary spinning platform for the LIDAR-Lite device, based on a 6-wire slipring component available through AdaFruit.

In the time since that post at the end of April, I successfully integrated the LIDAR onto the Wall-E platform, but only recently got to the point of doing field tests.  The first set of tests showed me that the original spin rate of about 120 RPM (about 2 RPS) was way too slow for successful wall following using my original ‘differential min-distance’ technique, so I spent some time investigating PID control techniques as a possible way to improve navigation.  Unfortunately, this too proved unfruitful, so I then investigated ideas for increasing the LIDAR’s spin rate.  The LIDAR spinning platform is driven by a drive belt (a rubber O-ring) attached to a pulley on a separate 120 RPM DC motor.  My original setup used a pulley ratio of approximately 1:1, so the motor and LIDAR both rotated at approximately 120 RPM.  By increasing the diameter of the drive pulley, I was able to increase the LIDAR rotation rate to approximately 180 RPM (3 RPS), but unfortunately even that rate was too slow, and the increased drive-motor current rapidly drained the batteries – rats!

‘The Last Spinning LIDAR’ version. Note the large gray drive pulley. Gets the spin rate up to around 180 RPM, but at the cost of much higher battery drain.

So, while I had a LOT of fun, and learned a lot in the process, it’s time to say ‘Sayonara’ to the spinning-LIDAR concept (at least for Wall-E; I still plan to try the XV-11 module on the 4WD robot) and go back to the idea of using acoustic sensors for wall following and a forward-looking LIDAR for obstacle avoidance and ‘stuck’ detection.

Stay tuned!

PID Control Study for Wall-E

22 September, 2015

In my last post I described the results of some ‘field’ (long hallway) testing with Wall-E, with an eye toward validating my idea of using the ‘min distance angle’ as the primary input to my wall-following navigation algorithm.  While the initial results of static testing were encouraging, the results of a more realistic simulated wall-following run, where I manually pushed the robot along the hallway, were decidedly discouraging.  I became convinced that the ‘min distance angle’ just didn’t have sufficient resolution for good steering, and that getting better resolution would require a significant re-design of the tach sensor setup (in progress, but…).

So, I started thinking about using a PID (Proportional-Integral-Derivative) controller with just the min distance as the input.  PID controllers can be quite effective at dealing with complex electromechanical processes, but they can be tricky to ‘tune’ correctly.  After reading up on PID controllers in the Arduino world for a while, I ran across a nifty blog managed by Brett Beauregard (http://brettbeauregard.com/).  Brett is apparently a PID god, and he has graciously shared his knowledge with us mere mortals in the form of a number of introductory articles on PID design, a new Arduino PID library, a nifty PID Autotune library, and an active PID Google Group.   Thanks Brett!  I’d nominate you for sainthood, but I think you went past that a while ago ;-).
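To give a flavor of how the PID library is used (a generic illustration, not my robot code): the PID object gets pointers to the input, output, and setpoint variables, and Compute() is called on every pass through loop().  The gains, the setpoint, and the two helper functions below are placeholders of mine.

```cpp
// Minimal wiring-up of Brett Beauregard's Arduino PID library (PID_v1).
#include <PID_v1.h>

double Setpoint, Input, Output;
double Kp = 2.0, Ki = 0.5, Kd = 1.0;           // placeholder gains - tuning is the hard part

PID wallPID(&Input, &Output, &Setpoint, Kp, Ki, Kd, DIRECT);

// hypothetical stand-ins for the robot's sensor and steering code
double getMinLidarDistance() { return 30.0; }  // min distance from the spinning LIDAR, cm
void   steerRobot(double correction) { /* bias the motor speeds by 'correction' */ }

void setup()
{
  Setpoint = 30.0;                             // desired wall distance, cm (placeholder)
  wallPID.SetOutputLimits(-100, 100);          // steering correction range (placeholder)
  wallPID.SetMode(AUTOMATIC);
}

void loop()
{
  Input = getMinLidarDistance();
  wallPID.Compute();                           // updates Output at the library's sample interval
  steerRobot(Output);
}
```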

Anyway, after reading through a lot of the introductory material (and even understanding some of it), I decided to give the PID Autotune library and the associated example sketch a try.  I downloaded the library and fired up the example sketch using a spare Arduino Uno I had lying around.  After the requisite amount of fumbling around, I started getting some recognizable output on the serial monitor, and after a while I even figured out how to enable the auto-tuning feature.  The following printout and associated Excel plots show the results before and after tweaking the PID tuning constants.

PID Autotune library example. Data and Excel plot of before and after auto-tuning, with simulated input/output.

From the plots it is pretty obvious that the auto-tuned PID parameters do a much better job of acquiring and tracking the setpoint.

This is pretty awesome stuff, and I definitely plan to try a PID controller for Wall-E, my wall-following robot.  However, here in the real world there are a few flies in the PID ointment, especially with respect to auto-tuning.  Most significantly, the auto-tuning process takes about 9 cycles of input/output swings to come up with suggested tuning constants, and acquiring those 9 cycles without Wall-E wandering off into the middle of the living room, crashing into a wall, or (even worse) being snagged by the dreaded ‘stealth slippers from hell’ (aka the wife’s fuzzy slippers) is a tall order.   I may just have to suck it up on this one and tune the PID constants manually; we’ll see.

Stay tuned!

Frank

 

Field-Testing the Improved Spinning LIDAR system

Posted 17 September, 2015

After getting the improved tachometer assembly  integrated into Wall-E’s spinning LIDAR setup, I decided to repeat the hallway  field testing that I performed back in July of this year (see this post for the details).  My ‘theory of navigation’ for wall following is that I should be able to determine Wall-E’s orientation relative to a nearby wall by looking at where the spinning LIDAR’s minimum distance measurement occurs relative to Wall-E’s ‘nose’.

Just as I did back in July, I placed  Wall-E a short distance away from a long straight wall, in three different orientations – parallel, 45-deg nose-in, and 45-deg nose-out.  For each of these orientations I let the spinning LIDAR ‘look’ at the wall for about 10 revolutions, and then I manually changed the orientation. The distance and angle (actually the interrupt number, but that is the same as the angle) values were recorded in a text log.

The LIDAR Field Test Area. Note the dreaded fuzzy slippers are still lurking in the background

Wall-E oriented at approximately 45 degrees nose-out

Wall-E oriented at approximately 45 degrees nose-in

Wall-E in the ‘parallel’ configuration

LIDAR distance measurements to a nearby long wall

The text log was arranged as shown in the following screenshot.

A small portion of the data log for the wall orientation test. Note the line at the top describing the orientation sequence

Next, I wrote an Excel VBA script to parse the text log file and extract just the half of each revolution where the LIDAR was scanning the nearby wall, skipping over the other half.  For each such scan, I searched the distance data for the minimum value, capturing that value and its associated interrupt number (i.e. angle).  All the extracted distance values and the min-dist/min-angle numbers were written to the spreadsheet, and then I plotted the minimum-distance interrupt number (i.e. angle) vs. rev number for about 90 LIDAR revolutions.
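The parsing itself was done in Excel VBA; purely as an illustration of the core operation, here is the same minimum-search in Arduino-style C++ (my sketch, not the VBA code):

```cpp
// Find the minimum distance and its interrupt number in one wall-side
// half-scan (interrupts 1-10 cover tail-to-nose along Wall-E's left side).
const int WALL_SAMPLES = 10;

void findMinDist(const int dist[], int& minDist, int& minInterrupt)
{
  minDist      = dist[0];
  minInterrupt = 1;                       // interrupt numbers are 1-based
  for (int i = 1; i < WALL_SAMPLES; i++) {
    if (dist[i] < minDist) {
      minDist      = dist[i];
      minInterrupt = i + 1;
    }
  }
}
```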

If my theory holds water, then I should be able to see variations in the minimum interrupt number over time that corresponds to the orientation changes.  As shown in the following plot, that is exactly what happens.

Excel plot of the interrupt number corresponding to minimum distance vs rev number

As can be seen in the above plot, the Min-Dist Interrupt (MDI) starts out between 4 and 5, and stays there for the first 9-10 revolutions.  At about rev 11 it jumps to the 7-8 range, where it stays until about rev 20.  Then it drops back to 5-6 for 10 revs, and then drops again to the 2-3 range.  This pattern then repeats for the duration of the plot.  In my current spinning-LIDAR configuration, interrupt 1 starts at Wall-E’s tail and proceeds along Wall-E’s left side to interrupt 10 at the nose.  So an MDI of 4-5 should correspond to the parallel orientation, while a lower number should correspond to nose-out and a higher one to nose-in.  From the test-condition description, Wall-E was placed parallel for 10 revs, then nose-in for 10, then nose-out for 10, then back to parallel, repeat.  It is clear from the plot that the actual behavior of the MDI matches the predicted behavior quite nicely – EUREKA!! ;-).

Although the above test results are quite encouraging, it is still not entirely clear that this technique can actually be used for effective navigation at practical travel speeds.  There is undoubtedly some correlation between the spinning-LIDAR rotation rate and the maximum travel speed at which the LIDAR can provide information fast enough for effective steering.  For instance, the NEATO XV-11 spinning-LIDAR system rotates at about 300 RPM (5 RPS), and the vacuum seems to travel no faster than about 1-2 m/sec.  This might mean that my 120 RPM (2 RPS) spin rate would only support travel speeds in the 0.5-1 m/sec range.  In addition, my current 18-degree resolution may be too coarse for effective steering.  Again using the XV-11 as a baseline, it has a resolution of 1 degree – 18 times finer than mine.  With the much faster speed of the new V2 ‘Blue Label’ Pulsed Light LIDAR, I could probably double or even triple my current angular resolution, but 18X might be a bit much ;-).

Next up – analyzing the data from a simulated navigation test, where I manually pushed Wall-E along the hallway, simulating as closely as possible how I think Wall-E might navigate, assuming the spinning-LIDAR data is fast enough and accurate enough.  As I moved Wall-E along, I recorded the same distance and interrupt-number data as before, so it will be interesting to see if this data continues to support my current ‘theory of navigation’ – stay tuned!

22 September 2015 Update:   The data from the simulated navigation test was a mess – nothing recognizable as a pattern.  The one thing it did do was convince me that the ‘angle of least distance’ idea wasn’t going to work, and that something else would have to be done. What, I don’t know yet…

Frank

 

Wall-E gets an improved Tachometer

Posted 16 September, 2015

Back in June of this year I posted about my initial efforts to implement a tachometer subsystem as part of a spinning-LIDAR system (see ‘LIDAR-Lite Gets its Own Motor‘).  That implementation used a ‘plugged gap’ technique for detecting the index position of the LIDAR, as shown below.

Diagram of the tach wheel for Wall-E’s spinning LIDAR system

The idea was that when the time between gap interrupts was more than 2 gap durations (assuming a relatively constant rotation speed), the ‘plugged gap’ must have just passed between the IR LED and the photo-diode; a code sketch of this check follows the list below.  The design works OK, but has a couple of nagging drawbacks:

  • It depends on a fixed  open-loop timing delay; if the motor RPM varies enough, the fixed delay threshold might be too large or too small.
  • The plugged-gap technique removes two interrupt positions from the wheel, meaning that position information is missing during that 54-degree arc.
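For reference, here is a sketch of that original plugged-gap detection logic; the gap count, pin, and timing values are illustrative, not the exact Wall-E numbers.

```cpp
// One interrupt fires per tach-wheel gap. If the time since the previous
// interrupt exceeds about twice the nominal gap period, the plugged gap
// must just have passed between the IR LED and the photo-diode.
const byte          GAP_INT_PIN    = 2;       // gap photo-diode output (placeholder)
const unsigned long NOMINAL_GAP_US = 25000;   // e.g. 500 msec/rev at 120 RPM with 20 gaps

volatile unsigned long lastGapUs   = 0;
volatile byte          positionNum = 0;       // advances once per gap

void gapISR()
{
  unsigned long now = micros();
  bool indexFound = (now - lastGapUs) > 2UL * NOMINAL_GAP_US;
  lastGapUs = now;

  // Drawback #1 above: a fixed, open-loop threshold. If the motor RPM
  // drifts, 2x the nominal gap period may be too long or too short.
  if (indexFound) positionNum = 1;            // index found - reset position counter
  else            positionNum++;
}

void setup()
{
  pinMode(GAP_INT_PIN, INPUT);
  attachInterrupt(digitalPinToInterrupt(GAP_INT_PIN), gapISR, RISING);
}

void loop() { /* navigation code reads positionNum */ }
```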

So, after getting my V2 ‘Blue Label’ LIDAR from Pulsed Light and figuring out how to use it (see this post), I decided to see what I could do about addressing both of the above problems.  At first I thought I might be able to simply add a second photo-diode on the sensor side of the tach sensor assembly, coupled to the existing single IR LED via a slanted channel in the tach wheel, as shown below.  The idea was that when the interrupt fired at the edge of the index gap, the value of the ‘other’ sensor could be read; if the value was below a certain threshold (meaning more impinging IR energy), then that sensor must be lined up with the index hole.  This meant that the index hole needed to be offset from the index gap, so that the max-energy receive position would coincide with the position at which the interrupt fires – at the gap edges, not the center.

Tach Wheel V4

This turned out to be a miserable failure.  The IR LEDs have a very narrow illumination sector, and there wasn’t enough off-axis energy to reliably detect the index hole.

So, some five versions later, along with a complete redesign of the sensor assembly, I have what I think is a nicely working implementation.  The single LED slanted-hole design was scrapped in favor of a two-LED/sensor one, and the index sensing hole was replaced by a circumferential gap, as shown below.

Latest two-sensor/LED design. Note the circumferential gap is centered on one edge of the index gap, so the index sensor voltage will be minimum (max incident energy) when the gap-edge interrupt fires.


For the tach sensor assembly, the single IR LED was replaced by two independent IR LEDs, each with its own 240-ohm current-limiting resistor (I tried running them in series with a single resistor, but that didn’t work very well either).  The original enclosed sensor slot was replaced by an exposed ‘trough’ with two retainer bars (the trough without the retainer bars didn’t work either).  See below for the ‘new improved’ tach sensor assembly.

Tach sensor assembly showing the mounting holes for the two 3mm IR LEDs. The thin gaps visible in the background are the corresponding channels into the photo-diode trough


Tach sensor assembly showing the photo-diode trough. The backs of the photo-diodes are glued to a thin plastic carrier that is captured by the retaining bars

After running some preliminary tests, I mounted the new tach wheel and sensor assemblies and ran some O’scope tests to see how the new design worked.  I was very pleased to see that it appears to be working better than I could have hoped.  In the following O’scope photo, the top trace is the index sensor, and the bottom trace is the normal tach-gap sensor.  Both are 2 volts/cm and exhibit full 5-volt swings.

O’scope photo with the signal of interest highlighted. The top trace is the index sensor, and the bottom one is the tach-gap sensor.  The index interrupt will occur at the first rising edge of the bottom trace.

The gap interrupt of interest occurs at the first rising edge of the bottom trace.  As can be seen from the photo, reading the value of the index-gap sensor (top trace) at this point will retrieve a stable ‘0’ value – perfect for index gap detection!
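In sketch form, the improved index detection looks something like this (pins and threshold are placeholders, and the real ISR also triggers the LIDAR distance measurement):

```cpp
// At each gap-edge interrupt, sample the second (index) photo-diode. A
// near-zero A/D reading (max incident IR) means the circumferential index
// gap is lined up, so this edge is the index position (interrupt 20 in the logs).
const byte GAP_INT_PIN      = 2;    // tach-gap sensor (placeholder)
const byte INDEX_SENSOR_PIN = A0;   // index photo-diode (placeholder)
const int  INDEX_THRESHOLD  = 350;  // ~1.7V, midway between the ~0 and ~700 readings

volatile byte interruptNum = 0;

void gapISR()
{
  // lower voltage = more incident IR energy
  if (analogRead(INDEX_SENSOR_PIN) < INDEX_THRESHOLD)
    interruptNum = 20;                        // index gap - re-sync the counter
  else
    interruptNum = (interruptNum % 20) + 1;   // otherwise just advance 1..19
}

void setup()
{
  pinMode(GAP_INT_PIN, INPUT);
  attachInterrupt(digitalPinToInterrupt(GAP_INT_PIN), gapISR, RISING);
}

void loop() { /* the real sketch reads the LIDAR here, tagged with interruptNum */ }
```

Since an analogRead() takes on the order of 100 µsec, it adds very little to the single-digit-millisecond ISR service times noted in the test results below.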

The following photos show the new tach wheel and sensor assembly mounted on Wall-E, with the LIDAR assembly removed for clarity.

The new tach wheel and sensor assembly mounted on Wall-E (two views)

Stay tuned for more test results from this configuration!

Posted 17 September, 2015:

After re-installing the LIDAR and making all the right connections (not a trivial task, btw), I fired the system up using my ‘DAFAP_Plus’ sketch (that’s ‘Distance As Fast As Possible’, plus modifications for interrupt handling) and took some index-sensor measurements.  In the screenshot below, the ‘Sensor’ values are from the index sensor.  As expected, they are near the top of the 10-bit A/D range for interrupt numbers 1-19 (a reading of 700 implies about 3.5VDC).  However, the sensor reading for interrupt 20 is now much better than it was before: with the old LED driver and tach wheel layout the max readings were about the same, but the minimum reading was occasionally over 400 (i.e. about 2VDC), making it harder to reliably discriminate between the non-index and index gap cases.  Now the index gap reading is a reliable ‘0’, providing a 3.5VDC differential – more than double the earlier 1.5VDC differential – yay!!

Another item of note in the readout below is the ‘ISR Ms’ value.  This is the time required to service the associated interrupt, including the time required for the Pulsed Light ‘Blue Label’ LIDAR to take a distance measurement.  All of these times are in the single-digit range, meaning I could probably double the number of tach wheel gaps (i.e. double the system’s angular resolution) if I wanted to.  Note also that there is an extra value shown for each ‘interrupt 20’ line; this ‘Idx Ms’ value is the total time between index-gap appearances, i.e. the total rotation time.  So the LIDAR is rotating just a tad shy of 120 RPM (2 RPS), which should be fast enough for decent wall-following navigation.

Log from a 17 Sept 2015 test run; note the ‘0’ sensor value at interrupt 20


Frank


More work on the NEATO XV-11 LIDAR Module

Posted 08/24/2015

Progress on the Pulsed Light ‘Blue Label’ spinning LIDAR system has been put on hold for a few days pending the arrival of some 3mm IR LEDs  needed for an upgraded tach sensor wheel, so I am passing the time working on the alternative system, the XV-11 LIDAR module from the NEATO vacuum.

Back in May of this year I posted some results where I mated the XV-11 to the Teensy 2.0-based Get Surreal controller (see the post here) and got very good results with a LIDAR ‘image’ of the walls of a cardboard box.  After this effort I put the XV-11 aside, for two reasons: first, I received a ‘Silver Label’ (V1) LIDAR-Lite unit from Pulsed Light and was having too much fun designing and implementing a spinning-LIDAR system; and second, I couldn’t figure out how to get the XV-11 data into my robot controller (an Arduino Uno).  The Teensy 2.0 XV-11 controller parses the data stream from the XV-11, routes it upstream to the USB host, processes commands from the USB host, and maintains the XV-11 rotation speed using a PID controller.  This is all well and good, but the Uno isn’t a USB host, and adding that capability would be a major PITA.  So, I put the whole thing on the back burner, hoping that inspiration would strike at some later date.

Now, almost three months later, I had some ideas I wanted to try to achieve the  goal of getting the XV-11 streaming data onto the Uno robot controller so that it could be used for navigation.

The Problem(s):

The XV-11 LIDAR module streams angle, distance, data-quality, and RPM information over a serial connection to the Teensy 2.0 controller, and there is a LOT of it.  The XV-11 rotates at a nominal rate of 300 RPM, i.e. 5 RPS.  During every 200 msec rotation it sends at least 360 data groups, where each data group contains pointing angle (0-360), distance (in cm, I think), a data-quality metric, and RPM.  Each data group must be parsed to extract the needed information.  The Teensy firmware also provides a PWM waveform to control the XV-11’s rotation speed, based on the RPM values being reported over the serial port.

The Teensy 2.0 boasts two hardware serial ports, but one is used for the connection to the upstream USB host,  and the other one is used to receive the data from the XV-11.  So, no easy way to get the needed XV-11 data from the Teensy to the Uno – bummer :-(.  And, even if the Teensy had a third hardware serial port, the Uno only has one – and it is used to connect to its upstream USB host – double bummer :-(.

And, even if I could figure out a way to get the data over to the Uno, how was I going to keep the data stream from swamping the Uno’s (or the Teensy’s) very finite resources?  With the spinning-LIDAR system I only capture 18 data groups per revolution, and even that amount threatens to blow out the top of working memory.  There is no way the Uno can handle the 360 data groups from the XV-11.  Moreover, those data groups arrive at about twice the rate of the groups from the Pulsed Light spinning-LIDAR system.

The (partial) Solution – Software Serial to the Rescue!:

In between times when I was actively working on the Pulsed Light spinning-LIDAR project, I kept returning to the problem of how to get XV-11 data into the Uno, and in my wanderings through the online Arduino world I ran across references to ‘Software Serial’, where virtual serial ports can be implemented using two of the Arduino GPIO pins.  This sounded intriguing, but all the available libraries come with baggage of one sort or another: one can’t send and receive simultaneously; another can do that, but is sensitive to other interrupts…  Then, just the other day, I ran across ‘SimpleSoftSerial’, a version written by ‘Robin2’ just for the Uno (see the post here).  The whole thing is just a ‘.ino’ file, not a library at all, and it does exactly what I want – YAY!  Having the ability to add another serial port to the Uno solves part of the problem, so I decided to see if I could get just this part going, and maybe figure out the data-management issue at some future time.
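For illustration, here is what the same idea looks like with the stock Arduino SoftwareSerial library.  Robin2’s ‘SimpleSoftSerial’ is a plain .ino file with its own (different) API, but it fills the same role: a virtual serial port on two GPIO pins.

```cpp
// Pass-through between the Uno's hardware serial port (USB host) and a
// virtual serial port on two GPIO pins. Pin choices are placeholders.
#include <SoftwareSerial.h>

SoftwareSerial softPort(2, 3);   // RX on pin 2, TX on pin 3

void setup()
{
  Serial.begin(9600);            // hardware port to the USB host
  softPort.begin(9600);          // virtual port to the second device
}

void loop()
{
  // relay traffic in both directions
  if (softPort.available()) Serial.write(softPort.read());
  if (Serial.available())   softPort.write(Serial.read());
}
```

As the 8/25 addendum below makes painfully clear, the catch is speed: software serial tops out far below the XV-11’s 115200 bps.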

Robin2 also kindly provided a pair of Arduino sketches to demo the ‘SimpleSoftSerial’ capability. One part runs on a Uno (of course, as that is the target processor for this little hack), and the other ‘Partner’ program runs on (in his case) a Mega, as it requires multiple hardware serial ports.  I didn’t have a Mega handy, but I did have the Teensy 2.0 that came with the Get Surreal XV-11 controller, and it has two hardware serial ports.  So, I disconnected the XV-11 LIDAR module from the Teensy and temporarily re-purposed the Teensy for this test.

I loaded ‘DemoPartner.ino’ onto the Teensy, and ‘DemoSimpleSoftSerial.ino’ onto a spare Uno, connected the two using a pair of jumpers, and voila – it ‘worked’ but only in one direction.  I was able to see characters transmitted on the Uno  showing up on the serial monitor port on the Teensy, but not the other way around.  I had previously used my trusty O’scope to probe the pins of the connector from the Teensy to the XV-11 module, and  thought that I had determined which pin was Tx and which was Rx, but clearly something wasn’t adding up.  At first I thought I just had a loose connection with my jumper leads, but after some more fiddling, I became convinced that wasn’t the problem.  With my ‘scope,  I probed the actual Teensy board Tx pin (D3), and discovered that the Teensy  serial data was  there, but it wasn’t making it to the XV-11 connector pin!  Initally this seemed unlikely, as The Teensy/XV-11 combination was working fine – until  the realization hit me that the XV-11 is a transmit-only device – there is no serial traffic going from Teensy to XV-11, and therefore there is no need to have the Teensy’s Tx pin connected to the XV-11 connector!  After confirming this theory using a continuity checker, I bypassed the XV-11 connector entirely by soldering a 2-pin header onto the (fortunately side-by-side) Tx & Rx (D3 & D2 respectively) pins of the Teensy module.  Once I made this change, I started seeing bi-directional data transmissions as advertised.

The following photo shows the experimental setup, with the temporarily disconnected XV-11 module in the background.  I have also included a screenshot of the serial port monitors for both the Teensy module (running the ‘Partner’ sketch) and the Uno (running the ‘Demo’ sketch), on two separate instances of Visual Studio 2013/Visual Micro.

Experimental setup. Uno in foreground, Teensy and (temporarily disconnected) XV-11 LIDAR module in background

Screenshot showing serial port monitors from Uno (bottom) and Teensy (top).

Now that I have demonstrated the basic Uno virtual serial port capability, I plan to try and use this capability to get XV-11 serial data into my Uno motor controller by piggy-backing on the Teensy 2.0’s serial connection to the XV-11.

My plan is to return the Teensy module to its original configuration, connected to the XV-11 via its second hardware serial port and running the Get Surreal processing sketch.  Then I’ll put that same sketch on the Uno, but modify it to use the virtual serial port set up via the ‘SimpleSoftSerial’ capability.  If I do it correctly, I should be able to see the XV-11 data on both the Teensy 2.0 and Uno USB-host serial monitors.

Stay tuned!

Frank

8/25/2015 Late addendum.  Tried that trick and it didn’t work :-(.  It turns out the virtual serial port isn’t anywhere near fast enough; the advertised speed is 9600 bps, with some speculation that it will work at 14200, but the XV-11 runs at 115200.  So, I’ll either have to abandon the virtual-port idea (and the Uno processor!), figure out a way of slowing the XV-11 output down, or come up with something else entirely.  Bummer.

More Pulsed Light ‘Blue Label’ LIDAR testing

Posted 08/18/2015

Well, I have to say that Pulsed Light’s tech support has been fantastic as I have worked my way through ‘issues’ with both the V1 ‘Silver Label’ and V2 ‘Blue Label’ LIDAR units (see my previous posts here and here).  I know they must be thinking “how did we get stuck with this guy – he seems to be able to break anything we send to him!”  I keep expecting them to say “Look – return both units and we’ll give you twice your money back, as long as you promise NOT to buy anything from us ever again!”, but so far that hasn’t happened ;-).

In my last round of emails with Austin (apparently one of two support guys; Bob is the other one, but he is on vacation, so Austin is stuck with me), he mentioned that he had found and fixed a bug or two in the V2 support library, and suggested that I download it again and see if that fixes some or all of the issues I’m seeing with my ‘Blue Label’ V2 unit.

So, I downloaded the new library, loaded up one of my two newly-arrived ‘genuine’ Arduino Uno boards with the ‘Distance As Fast As Possible’ example sketch, and gave it a whirl.  After at least 45 minutes of run time so far, the ‘Blue Label’ unit is still humming along nicely, with no hangups and no dropouts – YAY!!

The ‘Distance As Fast As Possible’ sketch starts by configuring the LIDAR for a lower-than-default acquisition count to speed up the measurement cycle, and by increasing the I2C speed to 400 KHz.  Then, in the main loop, it takes one ‘stabilized’ measurement, followed by 100 ‘unstabilized’ measurements.  The idea is that re-stabilization (re-referencing?) isn’t required for every measurement in typical scenarios, so why pay the extra cost in measurement time?  This is certainly true for my wall-following robot application, where typical measurement distances are less than 200 cm and target surfaces are typically white-painted sheet-rock walls.

To get an accurate measurement-cycle time, I instrumented the example code with digitalWrite() calls to toggle Arduino Uno pin 12 at appropriate spots.  In the main loop() section the pin goes HIGH just before the single ‘stabilized’ measurement, and LOW immediately thereafter.  Then (after an intermediate Serial.print() statement) it goes HIGH again immediately before the start of the 100-count ‘unstabilized’ measurement loop, and LOW after all 100 measurements complete.  After another Serial.print() statement the loop() section repeats.
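In outline, the instrumented loop looks like this; readLidarDistance() is a hypothetical stand-in for the library’s measurement call, and this is not Pulsed Light’s actual example code.

```cpp
// Bracket the measurement calls with digitalWrite() toggles on pin 12 so
// the O'scope shows the measurement durations directly.
const int TIMING_PIN = 12;

int readLidarDistance(bool stabilized) { return 0; }   // stand-in only

void setup()
{
  pinMode(TIMING_PIN, OUTPUT);
  Serial.begin(115200);
  // the real sketch also sets the reduced acquisition count and 400 KHz I2C here
}

void loop()
{
  // one 'stabilized' measurement, bracketed HIGH/LOW
  digitalWrite(TIMING_PIN, HIGH);
  int d = readLidarDistance(true);
  digitalWrite(TIMING_PIN, LOW);
  Serial.println(d);

  // 100 'unstabilized' measurements, bracketed as a group
  digitalWrite(TIMING_PIN, HIGH);
  for (int i = 0; i < 100; i++) {
    d = readLidarDistance(false);
  }
  digitalWrite(TIMING_PIN, LOW);
  Serial.println(d);
}
```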

The following O’scope screenshots show the results.  The first one shows the single ‘stabilized’ measurement time, with the scope set for 0.2 msec/div.  From the photo, it appears this measurement completes in about 0.8 msec – WOW!!!  The second one shows the time required for 100 ‘unstabilized’ measurements, with the scope set for 20 msec/div.  From this it appears that 100 measurements take about 140 msec – about 1.4 msec per measurement – WOW WOW!!

0.2 msec/div. HIGH duration of about 0.8 msec is time required for one ‘stabilized’ measurement

20msec/div. HIGH duration of about 150 msec shows time required for 100 ‘unstabilized’ measurements

Hmm, from the comments in the code, the ‘stabilized’ measurements are supposed to take longer than the ‘unstabilized’ ones – but the scope measurements indicate the opposite – wonder what I’m getting wrong :-(.

I left the LIDAR and Arduino system running for most of a day while I played a duplicate bridge session and did some other errands.  When I got back after about 6 hours, the system was still running and was still responsive when I waved my hand over the optics,  but the timing had changed considerably.  Instead of 0.8 msec for the single ‘stabilized’ measurement I was now seeing times in the 3-6 msec range.  For the 100 ‘unstabilized’ measurements, I was now seeing around 325 msec or about 3.2 msec per measurement.  Something had definitely changed, but I have no idea what.  A software restart fixed the problem, and now I’m again looking at 0.8 msec for one ‘stabilized’ measurement, and 150 msec for 100 ‘unstabilized’ ones.

So, the good news is that the new V2 ‘Blue Label’ LIDAR is blindingly fast – I mean REALLY REALLY FAST (like ‘Ludicrous Speed’ in the Spaceballs movie).  The bad news is that it still seems to slow down A LOT after a while (where ‘a while’ seems to be on the order of an hour or so).  However, even at its ‘slow’ speed it is pretty damned fast, and still way faster than I need for my spinning-LIDAR project.  With this setup I should be able to change from a 10-tooth to at least a 12-tooth tach wheel (or even a 24-tooth one, if the photo-sensor setup is sensitive enough) and still keep the 120 RPM motor speed.

Interestingly, I have seen this same sort of slowdown in my V1 (‘Silver Label’) LIDAR testing, so I’m beginning to wonder if the slowdown isn’t really a problem with the Arduino I2C hardware or library implementation.  I can just barely spell ‘I2C’, much less claim any familiarity with the hardware/software nuances, but the fact that a software reset affects the timing strongly exonerates the LIDAR hardware (the LIDAR can’t know that I rebooted the software) and lends credence to the I2C library as the culprit.

Stay tuned,

Frank