Monthly Archives: July 2015

EMI Problems with LIDAR and Wall-E

After getting the Pulsed Light spinning LIDAR system working on Wall-E, I added motor control and navigation code to my LIDAR test code, just to see whether the LIDAR could be used for actual navigation.  As it turned out, I discovered two problems: one was related to missed LIDAR distance measurements that got loaded into the nav table as ‘0’s (see my previous post on this issue), and the other was that interrupts stopped occurring at some indeterminate time after the motors were enabled.  Of course, these glitches just had to occur while my control-systems expert stepson Ken Frank and his family were visiting.  I told him about the symptoms, and speculated that maybe noise from the motor control PWM pulse train was coupling into the high-impedance interrupt input and overloading the interrupt stack with spurious interrupts.  This input is driven by the analog signal from the tach wheel sensor (IR photodiode), and the signal line runs along the same path as one of the motor drive twisted pairs.  Without any hesitation, Ken said, “Well, if you had converted that analog signal to digital at the source, you wouldn’t be having this problem.”  He was absolutely correct, which was more than a little embarrassing, as I distinctly remember teaching him all about the perils of low-level analog signals in proximity to high-level motor currents!  I guess it’s better to receive one’s comeuppance from a loved family member and fellow EE, but it’s still embarrassing ;-).

In any case, it’s now time to address the EMI problem.  I’m not absolutely sure that the issue is motor currents coupling into the analog sensor line, but it has all the earmarks: it doesn’t happen unless the motors are engaged, and the sensor line runs in close proximity to one of the high-current motor drive twisted pairs for some of its length.  Moreover, I neglected to follow basic low-level analog handling protocol by using a twisted pair with a dedicated return line for this signal, so at the very least I’m guilty of gross negligence :-(.

Black loop is part of the analog signal run from the photodiode sensor on left to the Uno (not shown). Note the green/white motor drive twisted pair


In the photo above, the thin black wire (visible against the white stay-strap mounting square background) is the analog output line from the tach wheel sensor circuit.  This line runs in close proximity to one of the motor drive twisted pairs for an inch or so (extreme right edge of the image) until it peels off to the right to go to the Arduino Uno.

As shown below,  this circuit has an equivalent output impedance of about 20K ohms (20K resistor in parallel with the reverse bias impedance of the photodiode), so while it’s not exactly a low-level high-impedance output, it’s not far from it either.  The black wire in the photo is the connection from the junction of the 20K resistor and the photodiode to pin A2 of the Uno.

Although I have looked at the A2 input pin with an oscilloscope (my trusty Tektronix 2236) and didn’t see anything that might trigger spurious interrupts, it doesn’t have the bandwidth to see really fast transitions.  And, as I was told many years ago in the TTL days, “TTL circuits can generate and respond to sub-nanosecond signals.”  Although TTL has gone the way of the dinosaurs (and old engineers like me), the old saw is still applicable.


Portion of Digikey Scheme-It schematic showing tach sensor circuit


So, what to do?  Well, the obvious starting place is to replace the single-wire signal run with a twisted pair, adding a dedicated return wire.  In the past, just replacing a single line with an appropriately terminated twisted pair has proven remarkably effective in reducing EMI coupling problems, so I’m hoping that’s all I have to do.  The following photo shows the modification.

Single black tach sensor wire replaced with orange/black twisted pair


In the above photo, the orange/black twisted pair replaced the single-line tach wheel sensor signal line.  The orange wire is the signal wire and the black wire is the dedicated return line.  The return line is routed to a nearby ground pin on the Arduino Uno.  As an additional precaution, I installed a 0.01 μF cap between the signal input and the ground pin.

After these modifications, I fired up Wall-E with the motors engaged, and was relieved to find that tach wheel sensor interrupts appear to continue indefinitely, even with the motor drive engaged – yay!!

More LIDAR ‘Field’ testing with analysis

July 25, 2015

In my last LIDAR-related post (http://gfpbridge.com/2015/07/lidar-field-test-with-eeprom/), I described a test intended to study the question of whether or not I could use LIDAR (specifically the Pulsed Light spinning LIDAR system on Wall-E) to determine Wall-E’s orientation with respect to an adjacent wall, in the hopes that I could replace all the former acoustic sensors (with their inherent mutual interference problems) with one spinning LIDAR system.  In a subsequent field test where I used LIDAR for navigation, Wall-E fared very badly – either running into the closest wall or wandering off into space.  Clearly there was something badly wrong with either the LIDAR data or the algorithm I was using for navigation.

This post describes the results of some follow-on testing to capture and analyze additional LIDAR data from the same hallway environment.   In the last test, I used the Arduino Uno’s EEPROM to store the data, which meant I was severely limited in the amount of data I could capture for each run.  In this test I instead ran the program in DEBUG mode, with ‘Serial.print()’ statements at strategic locations to capture data.  To avoid contaminating the data with my presence, I ran a USB cable out an adjacent door.  I placed Wall-E in about the same place as in the previous post, oriented it parallel to the wall, and started collecting data  after I was safely on the other side of the adjacent door.  I collected about 30 seconds of data (50 or so 18-point datasets) to be analyzed.  The screenshot below shows some of the raw LIDAR data plus a few interesting stats.

Screenshot showing raw LIDAR data with some stats


Looking at the raw data it was immediately clear why Wall-E was having trouble navigating; I was using an algorithm that depended on the stability of the pointing direction (interrupt number) associated with the minimum distance value, and this was unstable to say the least.  The minimum distance value jumped between approximately 43 and 0, and the associated interrupt number jumped between  0 and either 14 or 15.  A  distance value of ‘0’ results from a LIDAR distance measurement failure where the corrected distance is less than zero.  Such values get replaced by ‘0’ before being loaded into the distance/angle array (and subsequently read out to the measurement laptop in this experiment).

So, what to do?  I decided to try some ‘running average’ techniques to see if that would clean up the data and make it more usable for navigation. To do this I wrote up some VBA code to perform an N-point running average on the raw data, and produced results for N = 1, 3, and 5, as shown below.

1-point running average (essentially just zero replacement with the preceding value)


3-point running average, with zero runs longer than 3 replaced with preceding value(s)


5-point running average, with zero runs longer than 5 replaced with preceding value(s)


LIDAR distance 'radar' plots of raw, 1, 3, and 5-point running average


Looking at the above results and plots, it is clear that there is very little difference between the 1, 3, and 5-point running average results.  In all three cases, the min/max values are very stable, as are the associated interrupt numbers.  So, it appears that all that is needed to significantly improve the data is just ‘zero-removal’.  This should be pretty straightforward in the current processing code, as all that is required is to NOT load a ‘bad’ measurement into the distance/angle table – just let the preceding one stay until a new ‘good’ result for that interrupt number is obtained.  With two complete measurement cycles per second, this means that at least 0.5 sec will elapse before another measurement is taken in that direction, but (I think) navigating on slightly outdated information is better than navigating on badly wrong information.

LIDAR Field Test with EEPROM

Posted 07/01/15

In my last post I described my preparations for ‘field’ (more like ‘wall’) testing the spinning LIDAR equipped Wall-E robot, and this post describes the results of the first set of tests.  As you may recall, I had a theory that the data from my spinning LIDAR might allow me to easily determine Wall-E’s orientation w/r/t a nearby wall, which in turn would allow Wall-E to maintain a parallel aspect to that same wall as it navigated.  The following diagram illustrates the situation.

LIDAR distance measurements to a nearby long wall


Test methodology:  I placed Wall-E about 20 cm from a long clear wall in three different orientations: parallel to the wall, pointed 45 degrees away from the wall, and pointed 45 degrees toward the wall.  For each orientation I allowed Wall-E to fill the EEPROM with spinning LIDAR data, which was subsequently retrieved and plotted for analysis.

The LIDAR Field Test Area.  Note the dreaded fuzzy slippers are still lurking in the background


Wall-E oriented at approximately 45 degrees nose-out


Wall-E oriented at approximately 45 degrees nose-in


Excel plots of  the three orientations.  Note the anti-symmetric behavior of the nose-in and nose-out plots, and the symmetric behavior of the parallel case


In each case, data was captured every 20 degrees, but the lower plot above shows only the three data points on either side of the 80-degree datapoint.  In the lower plot, there are clear differences in the behavior for the three orientation cases.  In the parallel case, the recorded distance data is indeed very symmetric as expected, with a minimum at the 80-degree datapoint.  In the other two cases the data shows anti-symmetric behavior with respect to each other, but asymmetric behavior with respect to the 20-to-140 degree plot range.

My original theory was that I could look at one or two datapoints on either side of the directly abeam datapoint (the ’80 degree’ one in this case) and determine the orientation of the robot relative to the wall.  If the off-abeam datapoints were equal or close to equal, then the robot must be oriented parallel.  If they differed sufficiently, then the robot must be nose-in or nose-out.  Nose-in or nose-out conditions would produce a correcting change in wheel speed commands.  The above plots appear to support this theory, but also offer a potentially easier way to make the orientation determination.  It looks like I could simply search through the 7 datapoints from 20 to 140 degrees for the minimum value.  If this value occurs at a datapoint less than 80 degrees, then the robot is nose-in; if more than 80 degrees it is nose-out.  If the minimum occurs right at 80 degrees, it is parallel.  This idea also offers a natural method of controlling the amount of correction applied to the wheel motors – it can be proportional to the minimum datapoint’s distance from 80 degrees.

Of course, there are still some major unknowns and potential ‘gotchas’ in all this.

  • First and foremost, I don’t know whether the current measurement rate (approximately two revolutions per second) is fast enough for successful wall following at reasonable forward speeds. It may be that I have to slow Wall-E to a crawl to avoid running into the wall before the next correction takes effect.
  • Second, I haven’t yet addressed how to negotiate obstacles; it’s all very well to follow a wall, but what to do at the end of a hall, or when going by an open doorway, or …  My tentative plan is to continually search the most recent LIDAR dataset for the maximum distance response (and I can do this now, as the LIDAR doesn’t suffer from the same distance limitations as the acoustic sensors), and try to always keep Wall-E headed in the direction of maximum open space.
  • Thirdly, is Wall-E any better off now than before with respect to detecting and recovering from ‘stuck’ conditions?  What happens when (not if!) Wall-E is attacked by the dreaded stealth slippers again?  Hopefully, the combination of the LIDAR’s height above Wall-E’s chassis and its much better distance (and therefore, speed) measurement capabilities will allow a much more robust obstacle detection and ‘stuck detection’ scheme to be implemented.

Stay tuned!

Frank

Field test prep – writing to and reading from EEPROM

Posted 6/30/2015

In my last post (see  LIDAR-in-a-Box: Testing the spinning LIDAR) I described some testing to determine how well (or even IF) the spinning LIDAR unit worked.  In this post I describe my efforts to capture LIDAR data to the (somewhat limited) Arduino Uno EEPROM storage, and then retrieve it for later analysis.

The problem I’m trying to solve is determining how the LIDAR/Wall-E combination performs in a ‘real-world’ environment (aka my house).  If I am going to successfully employ the Pulsed Light spinning LIDAR unit for navigation, then I’m going to need to capture some real-world data for later analysis.  The only practical way to do this with my Arduino Uno based system is to store as much data as I can in the Uno’s somewhat puny (all of 1024 bytes) EEPROM memory during a test run, and then somehow get it back out again afterwards.

So, I have been working on an instrumented version that will capture (distance, time, angle) triplets from the spinning LIDAR unit and store them in EEPROM.  This is made more difficult by the slow write speed of EEPROM and the amount of data to be stored.  A full set of data consists of 54 values (18 interrupts per revolution times 3 values), and each triplet requires 8 bytes, for a grand total of 18 * 8 = 144 bytes.

First, I created a new Arduino project called EEPROM just to test the ability to write structures to EEPROM and read them back out again.  I often create these little test projects to investigate  one particular aspect of a problem, as it eliminates all other variables and makes it much easier to isolate problems and/or misconceptions.  In fact, the LIDAR study itself is a way of isolating the LIDAR problem from the rest of the robot, so the EEPROM study is sort of a second-level test project within a test project ;-).  Anyway, here is the code for the EEPROM study project

All this program does is repeatedly fill an array of 18 ‘DTA’ structures,  write them into the EEPROM until it is full, and then read them all back out again.  This sounds pretty simple (and ultimately it was) but it turns out that writing structured data to EEPROM isn’t entirely straightforward.  Fortunately for me, Googling the issue resulted in a number of worthwhile hits, including the one describing ‘EEPROMAnything‘.  Using the C++ templates provided made writing DTA structures to EEPROM a breeze, and in short order I was able to demonstrate that I could reliably write entire arrays of DTA structs to EEPROM and get them back again in the correct order.

Once I had the EEPROM write/read problem solved, it was time to integrate that facility back into my LIDAR test vehicle (aka ‘Wall-E’) to see if I could capture real LIDAR data into EEPROM ‘on the fly’ using the interrupter wheel interrupt scheme I had already developed.  I didn’t really need the cardboard box restriction for this, so I just set Wall-E up on my workbench and fired it up.

To verify proper operation, I first looked at the ‘raw’ LIDAR data coming from the spinning LIDAR setup, both in text form and via Excel’s ‘Radar’ plot. A sample of the readout from the program is shown below:

Notice the ‘Servicing Interrupt 15’ line in the middle of the (distance, time, angle) block printout. Each time the interrupt service routine (ISR) runs, it actually replaces one of the measurements already in the DTA array with a new one – in this case measurement 15. Depending on where the interrupt occurs, this can mean that some values written to EEPROM don’t match the ones printed to the console, because one or more of them got updated between the console write and the EEPROM write – oops! This actually isn’t a big deal, because the old and new measurements for a particular angle should be very similar. The ‘Radar’ plot of the data is shown below:

LIDAR data as written to the Arduino Uno EEPROM


LIDAR data as read back from the Arduino Uno EEPROM


As can be seen from these two plots, the LIDAR data retrieved from the Uno’s EEPROM is almost identical to the data written out to the EEPROM during live data capture.  It isn’t  entirely identical, because in a few places, a measurement was updated via ISR action before the captured data was actually written to EEPROM.

Based on the above, I think it is safe to say that I can now reliably capture LIDAR data into EEPROM and get it back out again later.  I’ll simply need to move the ‘readout’ code from this program into a dedicated sketch.  During field runs, LIDAR data will be written to the EEPROM  until it  is full; later I can use the ‘readout’ sketch to retrieve the data for analysis.

In particular, I am very interested in how the LIDAR captures a long wall near the robot.  I have a theory that it will be possible to navigate along walls by looking at the relationship between just two or three LIDAR measurements as the LIDAR pointing direction sweeps along a nearby wall.  Consider 3 distance measurements taken from a nearby wall, as shown in the following diagram:

LIDAR distance measurements to a nearby long wall


In the left diagram the distances labelled ‘220’ and ‘320’ are considerably different, due to the robot’s tilted orientation relative to the nearby long wall.  In the right diagram, these two distances are nearly equal.  Meanwhile, the middle distance in both diagrams is nearly the same, as the robot’s orientation doesn’t significantly change its distance from the wall.  So, it should be possible to navigate parallel to a long wall by simply comparing the 220-degree and 320-degree (or the 040- and 140-degree) distances.  If these two distances are equal or nearly so, then the robot is oriented parallel to the wall and no correction is necessary.  If they are sufficiently unequal, then the appropriate wheel-speed correction is applied.

The upcoming  field tests will be designed to buttress or refute the above theory – stay tuned!

Frank