Tag Archives: MPU6050

Help! I’m Spinning out of Control!!

I have been running ‘field’ (i.e. in my house) tests with WallE3, my autonomous wall-following robot. Unfortunately, WallE3 has demonstrated a distressing tendency to lose its mind and start spinning out of control – “around and around the robot goes, and when he stops, nobody knows!”. After a number of trials where this happened, I realized I’m going to have to figure out how to detect this condition so I can get WallE3 to recover properly.

Fortunately, WallE3 knows its relative heading, thanks to its onboard GY-521 MPU-6050 3 Axis Accelerometer/Gyroscope. So, I thought I should be able to detect the ‘spinning’ condition by monitoring the relative heading numbers; if the relative heading values traversed a full 360⁰ within a reasonably short period of time like 3-5 sec, then the robot should be stopped and a recovery algorithm of some sort implemented.

As usual, a seemingly simple algorithm turns out to not be quite as simple as it seems at first glance. The first thing I tried to do was to use my new robot telemetry visualization tool to go back through my recorded telemetry files to find some runs where spinning occurred. Unfortunately, I couldn’t find any – bummer! Not to worry, I decided I could use Excel to ‘invent’ a spinning event by generating a series of monotonically increasing heading values. Then I used Excel and VBA to work out an algorithm for ‘help, I’m spinning’ detection. Shown below is a screenshot of the Excel spreadsheet, and a screenshot of the VBA code that does the detection.

Now to see if this idea actually works ‘in the wild’ (or at least ‘in the robot’)

13 June 2023 Update:

I wanted to capture data from a real ‘spinning’ event to further test the above algorithm, so naturally WallE3 has refused to cooperate, even after several trial runs. So, being the sneaky person I am, I decided to add ‘#define HEADINGS_ONLY’ and an associated code section to WallE3’s code base, so I can capture heading data while manually spinning the robot. This worked well, and because the new code lives inside the #ifdef block, it gets compiled out for normal operations. After getting that working, I captured a bunch of heading data and dropped it into my Excel setup to see how my VBA code worked with ‘real’ data. As it turned out, this exposed a bug in the algorithm – I had forgotten to handle the case where the cumulative heading is negative, but with a magnitude greater than 360. The fix was to compare the absolute value to 360, as shown in the revised code below:
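A rough C++ sketch of the same detection logic, absolute-value fix included, might look like this (my own illustration, not the actual VBA or robot code; the function name, the static bookkeeping, and the hard-coded threshold are assumptions):

// Sketch only: accumulate heading changes and flag a 'spin' when the cumulative rotation
// exceeds a full circle in either direction. The real algorithm would also limit the
// accumulation to a short time window (3-5 sec) before resetting.
bool CheckForSpin(float newHdgDeg)
{
  static float prevHdgDeg = 0.f;
  static float cumHdgDeg = 0.f;

  float delta = newHdgDeg - prevHdgDeg;
  prevHdgDeg = newHdgDeg;

  // unwrap the +/-180 deg discontinuity in the relative heading
  if (delta > 180.f) delta -= 360.f;
  else if (delta < -180.f) delta += 360.f;

  cumHdgDeg += delta;

  // compare the magnitude, so a CCW spin (negative cumulative heading) is caught too
  return fabs(cumHdgDeg) >= 360.f;
}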

Stay tuned,

Frank

WallE3 Wall Tracking, Revisited

Posted 12/23/22

In the last couple of months I have made some significant improvements in WallE3’s capabilities. I started by completely re-doing the compensation algorithms for the seven ST Micro’s VL53L0X time-of-flight distance sensors (two each 3-element side-looking arrays and one rear-looking distance sensor). I followed this with improvements to both the ‘spin turn’ and ‘rolling turn’ features.

Next, I went back through my ‘MoveToDesiredLeft/Right/Front/RearDistCm()’ family of subroutines and made sure they were all working properly with the much more accurate distance compensation algorithms. One interesting thing that came out of this effort was the realization that shorter measurement intervals (i.e. 50mSec vs 200mSec) produced an unintended side effect of making ‘Stuck’ detections much more prevalent. This occurs because the appropriate (front or rear) 50-element distance array fills up much faster at 50mSec/measurement than it does at 200mSec/measurement, so identical (or nearly identical) measurements cause a stuck detection sooner: at 5 measurements/sec a 50-element array fills in 10 sec, but at 20 meas/sec it fills in 2.5 sec. When I used 50mSec/meas in the ‘MoveToDesired…()’ routines, the robot would often exit the routine with a ‘stuck’ error code as it slowed down approaching the desired distance condition. These functions do fine with a coarser time interval (which eliminates the spurious ‘stuck’ declarations), so I went back to 200mSec/measurement.
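To make the interval dependence concrete, here is a rough sketch of the kind of check involved; the names and the simple max-min spread test are my assumptions (the actual code may use a different statistic), but the timing arithmetic is the point:

// Sketch only: the distance array holds one front (or rear) reading per measurement interval,
// so at 200mSec/meas a 50-element array spans 10 sec of travel, while at 50mSec/meas it spans
// only 2.5 sec - a slowly moving robot looks 'stuck' much sooner.
const int STUCK_ARRAY_SIZE = 50;
const float STUCK_DIST_SPREAD_CM = 2.f; // hypothetical threshold

bool IsStuck(const float dist[STUCK_ARRAY_SIZE])
{
  float lo = dist[0], hi = dist[0];
  for (int i = 1; i < STUCK_ARRAY_SIZE; i++)
  {
    if (dist[i] < lo) lo = dist[i];
    if (dist[i] > hi) hi = dist[i];
  }
  return (hi - lo) < STUCK_DIST_SPREAD_CM; // readings barely changed over the whole window
}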

Now I am going to try to incorporate the above improvements into my previous wall track testing program, ‘WallE3_WallTrackTuning_V4’. As usual, I will start by creating ‘WallE3_WallTrackTuning_V5’ as a clone of ‘_V4’ and start making changes from there.

WallE3_WallTrackTuning_V5:

I am going to try and make WallE3_WallTrackTuning_V5 as ‘clean’ as possible, removing as much ‘dead’ code as possible and consolidating things like sensor measurement intervals.

Timing intervals:

Searching through the code for ‘elapsedMillis’ objects, I see the following global declarations:

Then I did a search for “MSEC” all upper case and found:

The front LIDAR sensor starts to generate errors for long distance measurements when the measurement interval falls below 200mSec

The VL53L0X time-of-flight sensors need a ‘measurement time’ of 50mSec or greater. This is handled by the VL53L0X array Teensy, but it means that UpdateAllEnvironmentParameters() shouldn’t be called more often than every 50mSec (20Hz).

The MPU6050 can support an update interval of 30mSec or greater, and this time is used for all turning operations.

Telemetry readouts should occur no more than once every 200mSec.
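A consolidated set of timing constants and elapsedMillis objects reflecting the constraints above might look something like this (the names are mine, not necessarily the ones that ended up in WallE3_WallTrackTuning_V5):

#include <elapsedMillis.h>

const uint16_t FRONT_LIDAR_MEAS_INTERVAL_MSEC = 200; // long-range errors below 200mSec
const uint16_t VL53L0X_MEAS_INTERVAL_MSEC = 50;      // minimum 'measurement time' for the ToF array
const uint16_t MPU6050_UPDATE_INTERVAL_MSEC = 30;    // used for all turning operations
const uint16_t TELEMETRY_INTERVAL_MSEC = 200;

elapsedMillis sinceLastFrontLidar;
elapsedMillis sinceLastVL53L0X;
elapsedMillis sinceLastMPU6050;
elapsedMillis sinceLastTelemetry;

void setup() {}

void loop()
{
  if (sinceLastMPU6050 >= MPU6050_UPDATE_INTERVAL_MSEC)
  {
    sinceLastMPU6050 -= MPU6050_UPDATE_INTERVAL_MSEC;
    // update heading from the MPU6050 here
  }
  // ...similar guarded blocks for the LIDAR, VL53L0X array, and telemetry intervals
}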

MoveToDesiredFront/Back/Left/RightDistCm()

Based on my recent work on these functions, it looks like PID = (1.5, 0.1, 0.2) will work for all cases, so all I have to do is modify the existing ‘OffsetDistKp/Ki/Kd’ values. Note that in my testing these were parameters to the function call instead of program constants, but now I can go back to just having the desired offset as the only parameter.

So, I copied each of the above functions to WallE3_WallTrackTuning_V5 from WallE3_FrontBackMotionTuning_V1, removed Kp/Ki/Kd from the signature, and replaced all occurrences with OffsetDistKp/Ki/Kd. I also ported the CorrDistForOrient() function, as it is required by the MoveToDesiredLeft/Right() versions.

01 January 2023 Update:

After getting the ‘MoveTo…’ functions working, I discovered that ‘RotateToParallelOrientation()’ didn’t work well at all, and in fact found a note from my former self that the function was ‘fatally flawed’ – oops! So, I revisited my ‘WallE3_ParallelFind_V1’ part-task project to see if I could get it to work better now that VL53L0X distances are being reported as float vs integer objects, and after what I hope is much better sensor error compensation. As shown in this post, RotateToParallelOrientation() now works much better, albeit somewhat slowly, with PID = (20,4,0).

Offset Capture with ‘RotateToParallelOrientation’ at end

02 January 2023 Update:

Starting to make some full-up left wall tracking runs, using the updated code from earlier work. In particular, I am trying to see if my older idea of combining an offset-driven steering angle modifier with the PID tracking algorithm will work. The ‘offset_factor’ incorporates the distance error into the reported steering angle, which in turn is used in the PID machine to drive the combined steering angle to 0.
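In rough terms, the combined value fed to the PID engine looks like the sketch below; this is my own illustration, and the names, the weighting constant, and the sign convention for left-wall tracking are assumptions:

// Sketch only: fold the wall-offset error into the raw steering value, then let the PID
// engine drive the combined value toward a setpoint of zero.
const float WALL_OFFSET_TGTDIST_CM = 30.f;
const float OFFSET_FACTOR = 0.01f; // hypothetical weighting of the distance error

float GetCombinedSteerVal(float rawSteerVal, float leftCtrDistCm)
{
  float offsetErrCm = leftCtrDistCm - WALL_OFFSET_TGTDIST_CM; // + means too far from the wall
  return rawSteerVal + OFFSET_FACTOR * offsetErrCm;
}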

Here’s an early run:

This worked, ‘sorta’. Part of the problem with this run is the robot’s orientation with respect to the wall at the start of the run. This is supposed to be parallel with the wall, but it obviously isn’t, and I don’t know why. Here’s the data from the ‘RotateToParallelOrientation()’ step

This certainly looks good – with a front/rear distance difference of only 0.3cm, and a steering value of 0.02. However, as shown in the following screengrab of the above video, the robot’s orientation just after the parallel find operation is anything but parallel

movie frame grab just after ‘RotateToParallelOrientation()’

I re-instrumented the ‘RotateToParallelOrientation’ function to print out 10 sets of front/back distances directly after completing its ‘ParallelFind’ operation, and made another run. The photo below shows the ending orientation, followed by the data:

Robot orientation immediately after ‘RotateToParallelOrientation()’

According to the photo, the robot is definitely not oriented parallel to the wall. However, according to the telemetry data, it is (44.4 front, 44.2 rear, steerval = 0.02). Even curiouser, the actual physical measurements taken using a tape measure show that the front/rear distances are about 47/44cm, or a steerval of about 0.3! Something is definitely wrong here.

Uncommented the #DISTANCES_ONLY define and re-ran, with the robot position/orientation unchanged:

In the above data, the front distance varies from 42.6 to 44.6cm with an average of 43.7cm, and the rear distance varies from 44.1 to 45.5cm with an average of 44.8cm.

So the program thinks the front/back distances are closer together than the tape measure does (44.5/45.0 vs 47/44). This is a pretty big discrepancy. Rotating the robot to be physically parallel with front/rear distances = 40cm, I get:

When the robot is physically parallel, the reported front distance varies from 38.2 to 39.2cm with an average of 38.8cm, and the rear distance varies from 36.7 to 38.3cm with an average of 37.9. The left steering value varies from 0.02 (38.8/38.1) to 0.14 (38.9/37.5) with an average of 0.09.

Well, it looks like the average reported distances and steering values are pretty close to reality, so maybe my original calibration efforts aren’t entirely screwed up. However, it is abundantly clear at this point that my current ‘RotateToParallelOrientation()’ algorithm isn’t reliable, due to very noisy distance value measurements.

01/09/23 Update:

After getting ‘RotateToParallelOrientation()’ working better (now it just uses the array front/rear distance measurements to calculate the off-parallel angle, and then does a ‘SpinTurn’ by that amount), I resumed the effort (see the 02 January Update above) to determine whether my older algorithm, which combines the raw steering value with an ‘offset adjustment factor’ based on the robot’s distance from the desired offset, would now work better given the improvements I have made in VL53L0X sensor error compensation and off-parallel distance measurement compensation.

As it turns out, the answer seems to be ‘no’. After a multitude of runs with my test wall set up for two 30-45deg ‘breaks’, I couldn’t find any set of PID values that would allow the robot to track the wall – it always either took off for parts unknown, or crashed into the wall at some point.

So, back to the original algorithm of using the wall offset distance directly in the PID engine.

11 January 2023 Update:

I’m confused – not an unusual state for me to be in – but still…..

After all the above improvements, I still was unable to produce reasonable tracking performance using either the steering value or the offset distance as the parameter to be controlled. And, even more confusing, I have an entire post dedicated to demonstrating successful wall tracking using the orientation-angle-corrected distance to the wall as the input to the PID engine, with the desired wall offset as the setpoint, as shown here:

With this algorithm, I settled on PID(3,0,1) as the best parameter set, with the result shown in this short video (copied from the above post):

Wall tracking using corrected distance measure as input, and desired offset as the set-point

And then, I have another post demonstrating that using the steering value as input and 0 as the setpoint also works, as shown in this short video with PID(300,0,300)

Right-side wall tracking using steering value as input with PID = (300,0,300)

Here’s the data and short video from a run on my longer ‘4 meter’ test range with two 30º breaks:

Using steerval only. Note monotonically decreasing distance

Even more confusing, it appears that the earlier (September 2021) trial using the steering value input also used the measured center distance to modulate the steering value so the robot would tend to track the steering value but also trend toward the desired wall offset distance. Here’s the tracking code from FourWD_WallTrackTest_V3:

In the above code snippet, ‘Lidar_RightCenter’ is in mm, so WALL_OFFSET_TGTDIST_CM must be multiplied by 10 to match units.

At this point I am thoroughly confused, but hopeful, since I have evidence from an earlier version of myself that something (actually two somethings) actually works. I believe the next step is to see if I can use my WallE3_WallTrackTuning_V5 code to consolidate everything down to something that works.

12 January 2023 Update:

I went back and loaded up WallE3_WallTrack_V2.ino and ran it on my 4m ‘range’ with two 30º breaks. The robot tracked amazingly well, as shown in the following telemetry output and Excel plot

WallE3_WallTrack_V2’s tracking algorithm uses the difference between the desired and measured offset distances to ‘tweak’ the steering value, as discussed above, so clearly this works – or at least doesn’t screw things up too badly. In the above telemetry output, the ‘Steer’ column is the steering value after the offset distance adjustment shown here

So at the point where the robot hit the minimum center distance of about 154mm, the steering value adjustment would be (154-400)/1000 = -0.246. The total steering value term at this point was 0.23, which means the ‘raw’ steering value was +0.016 and the offset distance error term accounted for ~97% of the total. This is good evidence that including the distance offset term works.
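Based on the arithmetic above, the adjustment term boils down to something like the following sketch (my reconstruction, not a copy of the FourWD_WallTrackTest_V3 code):

// The LIDAR distance is in mm; the target offset in cm is multiplied by 10 to match units.
// At the minimum center distance in the run above: (154 - 400) / 1000 = -0.246
float AdjustedSteerVal(float rawSteerVal, float lidarRightCenterMm, float wallOffsetTgtCm)
{
  float offsetAdj = (lidarRightCenterMm - wallOffsetTgtCm * 10.f) / 1000.f;
  return rawSteerVal + offsetAdj; // this is the 'Steer' column in the telemetry
}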

My new new new plan is to focus on my October 2022 post that uses the orientation angle corrected offset distance as the input to the PID engine, and see if I can incorporate this, along with all my recent updates/bugfixes, into WallE3_WallTrackTuning_V5.

WallE3 Rolling Turn, Revisited

Posted 04 December 2022,

A few days ago I described my revisit to WallE3’s ‘parallel orientation search’ feature, a feature that had been more or less discarded due to poor performance. As described in this post and this post, I have been able to obtain much better accuracy and precision from the ST Micro VL53L0X sensor arrays.

With this in mind, I decided to revisit my old ‘Rolling Turn’ algorithm, to see if I could get its performance up to the point where it could be used instead of ‘Spin Turn’ for the ‘parallel find’ maneuver. The Spin Turn maneuver involves rotating one set of wheels backwards and the other forwards to effect a ‘spin in place’ action. The Rolling Turn maneuver involves rotating both sets of wheels in the same direction, but at different speeds. My going-in assumption was that a rolling turn should allow more accurate control of the turn rate, as there should be less acceleration involved.

In preparation for this project, I went back to the web and read some more about PID tuning. This time I found this well-written and informative post by ‘marco_c’ – Thanks marco!

I had dropped the original ‘RollingTurn’ function from the code some time ago, so I started this experiment by copying SpinTurn() and adapting it to move both sets of wheels in the same direction (forward in this case) instead of in opposite directions. The conversion from SpinTurn to RollingTurn took surprisingly little effort, as everything but the motor direction code is pretty much the same.

I created a part-task program by creating a new ‘WallE3_RollingTurn_V1’ arduino project in VS2022 Community, and then copying my WallE3_WallTrackTuning_V4. The WallE3_WallTrackTuning_V4 project is also a part-task test vehicle, and as such had almost all the required parameter entry code already in setup().

After working my way through the usual number of mistakes, I finally got the project working smoothly. I copied captured telemetry output to Excel and used its excellent plotting facilities to investigate possible PID triplet combinations. After a day or so of work, I finally homed in on a triplet PID(1.1, 0.1, 0) as providing a pretty nice, smooth turning action for a 180º turn at 30º/sec. Then I tested the same configuration using both a 60º/sec and a 90º/sec commanded turn rate. As shown in the plots below, the robot did well in all three of these conditions:

180º turn at 30º/sec
180º turn at 60º/sec
180º turn at 90º/sec

Here’s a short video showing the 180º turn at 90º/sec configuration:

180º turn at 90º/sec

My test program as described above only handles forward motion CCW & CW turns. If I decide to put this feature back into WallE3’s main codebase, I’ll have to expand it a bit to handle the backward motion case as well.

05 December 2022 Update:

I modified the RollingTurn() function to accommodate both forward and backward motion, and both CW & CCW turns, so four cases.
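A sketch of how the four cases can collapse into a single signed-velocity computation (my illustration, with hypothetical motor-driver functions; this is not the actual RollingTurn() code):

// Yaw rate is proportional to (vRight - vLeft), with CCW positive, whether the robot is
// rolling forward or backward - so the same math covers all four cases. speedDelta must be
// less than 2*baseSpeed to keep both sides turning in the same direction (a rolling turn).
void SetLeftMotorsSpeed(float speed);   // hypothetical motor-driver functions
void SetRightMotorsSpeed(float speed);

void SetRollingTurnSpeeds(bool bIsFwd, bool bIsCCW, float baseSpeed, float speedDelta)
{
  float vBase = bIsFwd ? baseSpeed : -baseSpeed;
  float vDiff = bIsCCW ? speedDelta : -speedDelta; // + makes the right side faster
  SetLeftMotorsSpeed(vBase - vDiff / 2.f);
  SetRightMotorsSpeed(vBase + vDiff / 2.f);
}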

Also, as I was doing this, it occurred to me that I might want to revisit my ‘SpinTurn()’ function in light of my better understanding of PID tuning. Maybe I can get SpinTurn to operate a little more smoothly.

Stay tuned,

Frank

Wall-E3 Right Wall Following Trial

Posted 23 March 2022,

Earlier this month I was able to demonstrate a multi-lap left-side wall tracking run by Wall-E3 in my office ‘sandbox’. This post describes my efforts to extend this capability to right-side wall tracking.

Since I already had the left-side wall tracking algorithm “in the can”, I thought it would be a piece of cake to extend this capability to right-side tracking. Little did I know that this would turn into yet another adventure in Wonderland – but at least when I finally made it back out of the rabbit-hole, the result was a distinct improvement over the left-side algorithm I started with. Here’s the left-side code:

The above code works, in the sense that it allows Wall-E3 to successfully track the left-side wall of my ‘sandbox’. However, as I worked on porting the left-side tracking code to the right side, I kept thinking – this is awful code – surely there is a better way?

After letting this problem percolate for a few days, I decided to see if I could approach the problem a little more logically. I realized there were two major conditions associated with the problem – namely, is the robot’s initial position inside or outside the desired offset distance K? In addition, the robot can start out parallel to the wall, or pointed toward or away from the wall. Ignoring the ‘started out parallel’ degenerate case, this reminded me of a 3-parameter Karnaugh map configuration, so I started sketching it out in my notebook, and then later in a Word document, as shown below:

As shown above, I broke the 3-parameter map into two 2-parameter Karnaugh maps, and the output is denoted by αT. After a few minutes it became obvious that the formula for αT is pretty simple – it’s either αR – αA1 or αR – αA2, depending on whether the robot starts out outside or inside the desired offset distance. In code, this boils down to one line, as shown at the bottom of the Karnaugh map above, using the C++ ‘?’ ternary operator, and choosing CW vs CCW is easy too, as a negative result implies CCW, and a positive one implies CW. The actual code block is shown below:

Here’s a short video of Wall-E3 navigating the office ‘sandbox’ while tracking the right-side wall.

So, it looks like Wall-E3 now has tracking ability for both left-side and right-side walls, although I still have to clean things up and port the simpler right-side code into TrackLeftWallOffset().

25 March 2022 Update:

Well, that was easy! I just got through porting the new right-side wall tracking algorithm over to TrackLeftWallOffset(), and right out of the box was able to demonstrate successful left-wall tracking in my office ‘sandbox’.

At this point I believe I’m going to consider the ‘WallE3_WallTrack_V3’ project ‘finished’ (in the sense that most, if not all, my wall tracking goals have been met with this version), and move on to V4, thereby limiting the possible damage from my next inevitable descent through the rabbit hole into wonderland.

Stay tuned,

Frank

Using Wire1 & Wire2 with the I2CDevLib & MPU6050 Libraries

While working on porting Wall-E2’s ‘second deck’ hardware (VL53L0X array and LIDAR) to Wall_E3, I started running into problems associated with using Wire1 on the main Teensy 3.5 master processor to communicate with the Teensy 3.5 slave processor that manages the seven VL53L0X time-of-flight distance sensors. As I worked to troubleshoot the issue, it soon became evident that “I wasn’t in Kansas anymore”, and in fact had once again gone down a rabbit hole into Wonderland, with nary a bread-crumb in sight. Basically, I have been trying to use both the ‘i2c_t3.h’ and ‘Wire.h’ Teensy libraries to utilize multiple I2C buses (Wire1, Wire2, etc) over the last few years, without really understanding what I was doing. In the process I have created a spaghetti mess of conflicting I2C library file locations and configurations.

So, this post is my account of what went wrong, and the steps taken to get things working properly again.

The problem – utilizing multiple I2C buses:

Teensy 3.x processors provide for multiple I2C buses, a feature I used originally to manage the 7-element VL53L0X array located on the ‘second deck’ of Wall_E2, my autonomous wall following robot. With the addition of a Teensy 3.5 for the main robot processor, the multiple I2C bus problem is now relevant to the main processor as well. The main processor now uses Wire to talk to the second-deck VL53L0X array manager (Teensy 3.5), and Wire1 for the IR homing beacon detector/demodulator and the MPU6050 IMU. Consequently, the main processor must utilize multi-Wire-capable library functions.

The 7-element VL53L0X array manager (Teensy 3.5)

This worked great, even though I had tried my best to screw it up. The main project file has already been changed to use <Wire.h> and although it uses a local copy of I2C_Anything.h, that copy uses <Wire.h> as well. Also, VL53L0X.h (local folder copy) also uses <Wire.h>. So, I made the following changes:

  • Removed VL53L0X.h/cpp and I2C_Anything.h from project references and deleted the local file copies
  • Changed #include “file.h” to <file.h> for both (not sure this is necessary)
  • Deleted the vl53l0x-arduino folder from the \Libraries folder so there would be only one version of VL53L0X.h/cpp available.
  • Re-scanned for libraries, did a File->SaveAll, and recompiled OK – yay!

So now at least the Teensy_7VL53L0X_Slave_V3 project has been cleaned up.

Main Wall-E3 Processor (Teensy 3.5)

This is where all the trouble with multiple I2C busses started. The main processor has to talk to the VL53L0X array manager via I2C on Wire1, which means that not only does the main processor code need to utilize multi-bus functionality, but I2C_Anything (which internally uses Wire for bit-wise comms) does as well. In addition, interfacing to the MPU6050 requires the use of Jeff Rowberg’s I2CDevLib stuff, specifically MPU6050_6Axis_MotionApps_V6_12.h, I2CDev.h, I2C_Anything.h, and a Wire1 capable version of I2C_Anything.h. To make all this work, I made the following changes:

The #include for MPU6050_6Axis_MotionApps_V6_12.h, I2CDev.h is aimed at \Libraries\MPU6050\, which is a (old) copy of C:\Users\paynt\Documents\Arduino\Libraries\i2cdevlib\Arduino\MPU6050\. A suggestion in Jeff Rowberg’s ReadME file regarding the use of symlinks instead of actual copies led me to this ‘how-to’ page on creating symlinks in Windows. So I deleted the C:\Users\paynt\Documents\Arduino\Libraries\MPU6050\ folder and instead created a ‘hard’ symlink from there to C:\Users\paynt\Documents\Arduino\Libraries\i2cdevlib\Arduino\MPU6050\. Then I similarly deleted the C:\Users\paynt\Documents\Arduino\Libraries\I2Cdev\ folder and created a ‘hard’ symlink from there to C:\Users\paynt\Documents\Arduino\Libraries\i2cdevlib\Arduino\I2Cdev\.

Here are the cmdline commands for both operations:

and here is the result of dir /A in the C:\Users\paynt\Documents\Arduino\Libraries\ folder

showing that the hard links were actually created properly.

So now when I right-click on include “MPU6050_6Axis_MotionApps_V6_12.h”, the file opens properly, and the location is shown as in the C:\Users\paynt\Documents\Arduino\Libraries\MPU6050 folder even though the file actually resides in C:\Users\paynt\Documents\Arduino\Libraries\i2cdevlib\Arduino\MPU6050\. Similarly, for include “I2Cdev.h” the file opens properly and the location is shown as C:\Users\paynt\Documents\Arduino\Libraries\I2Cdev\ even though it is actually in the C:\Users\paynt\Documents\Arduino\Libraries\i2cdevlib\Arduino\I2Cdev\ folder

This all worked, except now I’m getting errors that say that ‘I2C_PINS_18_19’ (and all the other Teensy I2C-specific enums) can’t be found – argggggghhhhh! They don’t seem to be defined anywhere in the C:\Program Files (x86)\Arduino\hardware\teensy\avr\ folder tree either – I’m at a loss

Well, maybe not. I’m beginning to think that the enums are <i2c_t3.h>-specific, and a more basic style of initialization is used with <Wire.h>. I sent off a plea to the Teensy forum – we’ll see.

In the meantime, I tried a couple of simple experiments that determined pretty conclusively that the <Wire.h> style does indeed work, but in a different (more constrained?) way than with <i2c_t3.h>. I created a ‘Wire_Slave_Sender’ VS2022 project by copying the ‘slave_sender.ino’ example and loaded it onto a T3.2. Then I created a ‘Wire_Master_Reader’ VS2022 project by copying the ‘master_reader.ino’ example, and loaded it onto a T3.5. With this setup I was able to demonstrate that ‘Wire.begin()’ facilitated I2C comms on pins 18 & 19 (the default pinouts for Wire0) on the T3.5 (pins 18 & 19 on the slave didn’t change), and ‘Wire1.begin()’ facilitated the same thing, but this time using pins 37 & 38 (the default pinouts for Wire1). Here are the programs:

and here’s a sampling of the output:

OK, now that I understand the <Wire.h> vs <i2c_t3.h> issues, I still have at least one more hurdle to clear. It appears that “MPU6050_6Axis_MotionApps_V6_12.h” is an older version of the DMP-enabled code for the MPU6050, and “MPU6050_6Axis_MotionApps612.h” (no ‘V’, no underscores in version number) is the latest and greatest. However, using this version also requires the correct version of i2cdev.h (I think). In any case, I was able to change the #includes on my T35_WallE3_V5 project to “MPU6050_6Axis_MotionApps612.h” and “I2Cdev.h” and get the program to compile for a T3.5 target. Whether or not it will actually behave as required is TBD.

To pursue this issue, I set up a simple T3.5 – MPU6050 plugboard configuration, and loaded an old MPU6050 test project – “Teensy_MPU6050_DMP6_V3”. This project uses the older “MPU6050_6Axis_MotionApps_V6_12.h” code (located in the project folder) along with I2Cdev.h/cpp, helper_3dmath.h, and MPU6050.h/cpp all in the project folder. It compiled right out of the box, and I was able to demonstrate successful interfacing with the MPU6050.

Next, I changed #include “MPU6050_6Axis_MotionApps_V6_12.h” to #include “MPU6050_6Axis_MotionApps612.h” whereupon it blew a whole bunch of compile errors. I was able to confirm that, due to the ‘hard’ symlink magic, the compiler thinks the “MPU6050_6Axis_MotionApps612.h” file is at …\Arduino\Libraries\MPU6050 rather than way down in the i2cdevlib tree – yay!

At this point, rather than continuing to modify the Teensy_MPU6050_DMP6_V3 project, I decided to create a Teensy_MPU6050_DMP6_V4 project and make all the changes there, so that when I screw up and get lost I can go back and start all over (ask me how I know to do this….). So I changed the include back to #include “MPU6050_6Axis_MotionApps_V6_12.h” and verified it still compiled OK, then I created a new project called Teensy_MPU6050_DMP6_V4 and copy/pasted the entire .ino file into it.

When I tried to compile the new project, it blew a bunch of errors about MPU6050_Base:: functions not being found. This sounds like the i2cdev.h/cpp that is being found is the later one, so I went ahead and changed #include “MPU6050_6Axis_MotionApps_V6_12.h” to #include “MPU6050_6Axis_MotionApps612.h” without doing anything else. This time it didn’t blow any errors about MPU6050_Base:: functions but did blow some similar to ‘I2C_PULLUP_EXT was not declared’. I think this is due to using <Wire.h> in the #include chain rather than <i2c_t3.h>, so I changed

With just that one change, the project now compiles for a Teensy 3.5 target – woohoo! In addition, I didn’t have to add any references to the project, which I have had to do in almost every other case – double woohoo!

Then I uploaded this project to the T3.5, and it actually worked – the MPU6050 is generating valid azimuth values – triple woohoo!

So now I have a working program using the latest/greatest DMP-enabled driver for the MPU6050, and a much simpler #include file/reference setup. Compare this:

to this:

Just for grins, I modified the _V4 project to see if I can move the MPU6050 from Wire (pins 19,18) to Wire1 (pins 37,38). This requires changing ‘MPU6050 mpu;’ to ‘MPU6050 mpu((uint8_t)0x68, &Wire1);’ and ‘Wire.begin()’ to ‘Wire1.begin()’. This worked like a champ, so at this point I think I have figured out all I need to know on this subject.
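For reference, the essential changes look like this (a minimal sketch, assuming the default 0x68 MPU6050 address and omitting the DMP packet-reading loop):

#include <Wire.h>
#include "I2Cdev.h"
#include "MPU6050_6Axis_MotionApps612.h"

MPU6050 mpu((uint8_t)0x68, &Wire1); // attach the MPU6050 to Wire1 instead of the default Wire

void setup()
{
  Wire1.begin(); // pins 37 (SCL1) and 38 (SDA1) on the Teensy 3.5
  mpu.initialize();
  mpu.dmpInitialize();
  mpu.setDMPEnabled(true);
}

void loop()
{
  // read DMP packets and extract yaw here, exactly as before - only the bus changed
}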

20 January 2022 – One last piece of the puzzle:

I use Nick Gammon’s wonderful I2C_Anything library (just two template functions, but…) a lot, but it doesn’t support multiple I2C buses. So, I decided to see if I could modify his template functions to accept a default argument that if present, specifies which I2C bus to use. Here’s the modified file, temporarily renamed to ‘I2C_AnythingMultiWire.h’

I used my previously constructed ‘Wire_Slave_Sender.ino’ and ‘Wire_Master_Reader.ino’ projects to test this, and it seems to work just as advertised. In ‘Wire_Slave_Sender.ino’ I use:

Which automagically uses ‘Wire’ because the 2nd argument is missing from the call, and in ‘Wire_Master_Reader.ino’ I use:

Which causes Wire1 to be used. Here are both demo programs in their entirety, along with the full ‘I2C_AnythingMultiWire.h’ file:

I made a ‘pull request’ to the I2C_Anything github repo so that everyone who uses it can benefit, but in the meantime feel free to use the I2C_AnythingMultiWire.h file.
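For anyone curious, the shape of the change is roughly as follows; this is my own sketch of the approach, and the parameter name and exact signatures are assumptions rather than a copy of the pull-request file:

#include <Arduino.h>
#include <Wire.h>

// Same idea as Nick Gammon's originals, but with a trailing TwoWire& argument defaulting to
// 'Wire', so existing calls compile unchanged and new calls can pass Wire1, Wire2, etc.
template <typename T> unsigned int I2C_writeAnything(const T& value, TwoWire& wirePort = Wire)
{
  wirePort.write((const uint8_t*)&value, sizeof(value));
  return sizeof(value);
}

template <typename T> unsigned int I2C_readAnything(T& value, TwoWire& wirePort = Wire)
{
  uint8_t* p = (uint8_t*)&value;
  for (unsigned int i = 0; i < sizeof(value); i++) *p++ = wirePort.read();
  return sizeof(value);
}

A call like I2C_readAnything(val) then uses ‘Wire’ as before, while I2C_readAnything(val, Wire1) reads from the Wire1 bus.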

25 January Update: Just one more ‘one last piece of the puzzle’

When I was using the ‘i2c_t3.h’ library, I noticed that I didn’t have to use external pullup resistors as long as I used the ‘I2C_PULLUP_INT’ treatment in the Wire.begin() call. This was somewhat contrary to the general run of the posts on the Teensy forum, so it was a bit disconcerting. However, I ran a series of experiments that clearly showed that I2C between two Teensy 3.5 processors didn’t need external pullups, and O’scope waveform analysis showed no difference between using external 2.2KΩ pullups and no external pullups with the ‘I2C_PULLUP_INT’ option.

However, when I switched to the ‘Wire.h’ library, all this changed. I had to use external 2.2KΩ pullups – and this was verified via O’scope analysis. This usually isn’t a big deal, but it turns out that in my case it’s going to be a real PITA to add the pullups. My hardware configuration uses all point-to-point jumpers and no PCB, so there just isn’t any easy way to do this.

You would think that, since there is obviously a way to enable the pullups on the I2C lines (obviously, because that is exactly what the i2c_t3.h library does), there must be a way to do the same thing with the Wire.h library. You would think that, but I’ll be darned if I can figure it out. I have posted this issue to the Teensy forum, but so far no luck finding a solution.

OK, I may have found a clue. Buried deep in i2c_t3.cpp is the following macro:

It is this macro that actually casts the magic spell over the currently defined SCL & SDA pins to enable (or disable) internal pullups.

29 January 2022 Update:

After a lot of forum and Google searching, I decided to try some small experiments regarding how to set an ‘open-drain’ or ‘input-pullup’ configuration on a Teensy 3.5 GPIO pin. Here’s the code:

And here’s some of the output:

In particular, I was able to indirectly measure the pin pullup resistor value by tying the pin to GND through a 10KΩ resistor. The voltage at the junction was about 0.78V, so the voltage divider equation Vr2 = V *R2/(R1+R2) when solved for Vr2 = 0.78V and R2 = 10K gives R1 ~33K, which agrees well with the known pullup resistor value for the Teensy 3.5.
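Spelled out, and assuming the Teensy 3.5’s 3.3V logic supply for V: solving the divider for R1 gives R1 = R2*(V - Vr2)/Vr2 = 10K*(3.3 - 0.78)/0.78 ≈ 32.3K, which rounds to the ~33K figure quoted above.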

From this it seems that I should be able to initialize the appropriate pins for I2C with ‘wirex.begin()’ and then follow that with the code to set the pins for input_pullup. We’ll see.

I uploaded a very basic I2C ‘master’ sketch to my T3.5, as shown:

And verified with an O’scope that the SDA & SCL pins were active, and that external pullup resistors were required.

When I added ‘pinMode(SCL1, INPUT_PULLUP);’ just after Wire1.begin(), the I2C activity was disabled – no signal at all on either line. I tried some other combinations of pinMode() and digitalWrite(), but nothing changed – clearly the use of pinMode() and/or digitalWrite() overwrites at least some of the required configuration for I2C output.

So, next I plan to try some direct port control and see if that will do the trick.

From Kurt E’s spreadsheet, SCL1 & SDA1 (pins 37/38) are PortC pin 10 & 11 respectively. So,

PORT_PCR_MUX(n): selects the ALTernate function for the pin in question. PORT_PCR_MUX(1) just selects the pin as a GPIO. For instance, to select T3.5 pin 37 as I2C1 SCL, we would use PORT_PCR_MUX(2). Thus, the code line might look like:

where ‘PORT_PCR_ODE’ is the defined constant that selects the ‘Open-Drain-Enable’ bit in the Port Control Register for Port C, bit 10, which is connected to T3.5 pin 37. ‘PORT_PCR_ODE’ is defined in Kinetis.h as:

The ‘0x00000020’ part, when translated to binary, is 0000 0000 0000 0000 0000 0000 0010 0000, which selects bit 5 in the 32-bit Port Control Register.

PORT_PCR_MUX(n) is defined in Kinetis.h as:

so PORT_PCR_MUX(2) –> (uint32_t)(((2 & 7) << 8)) –> (uint32_t)(((0010 & 0111) << 8)) –> (uint32_t)(((0010) << 8)) –> (uint32_t)(0010 << 8) –> 0000 0000 0000 0000 0000 0000 0000 0010 << 8 –> 0000 0000 0000 0000 0000 0010 0000 0000 –> 0x200

Based on the above information, I thought that I might be able to accomplish my goal by using the following construct in setup():

Unfortunately, this did not work; I2C activity on pins 37/38 ‘flatlined’ and that was that. However, after letting my mind work on the problem while the rest of me slept, I had a new thought when I woke up this morning. Maybe the problem with the above construct is the ‘PORT_PCR_MUX(1)’ fragment. Maybe selecting the default GPIO port overwrites the prior I2C function selection?

So, I decided to try again this morning, using ‘PORT_PCR_MUX(2)’ (I2C function selected) instead. The code looks like this:

And, “son of a gun” – it worked! pins 37/38 activity continued, and physical pullups aren’t required – YES!
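The working construct was along the following lines; this is my sketch, and the exact bit combination is an assumption (the essential points are MUX(2) to keep the I2C function selected, plus the pullup-enable bits), so the actual program may differ in detail:

#include <Wire.h>

void setup()
{
  Wire1.begin(); // sets up pins 37 (SCL1) and 38 (SDA1) for I2C on the Teensy 3.5
  // Re-write the pin config registers *after* begin(): keep the I2C (ALT2) mux selection and
  // open-drain mode, and turn on the internal pullups (PE = pullup enable, PS = pull select).
  CORE_PIN37_CONFIG = PORT_PCR_MUX(2) | PORT_PCR_ODE | PORT_PCR_PE | PORT_PCR_PS;
  CORE_PIN38_CONFIG = PORT_PCR_MUX(2) | PORT_PCR_ODE | PORT_PCR_PE | PORT_PCR_PS;
}

void loop() {}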

Here’s the full program:

I2C activity visible on scope with 2.2K pullup resistors (foreground under green wire jumper) disconnected

I went ahead and connected my plug-board ‘master’ with the Teensy 3.5 on my ‘second-deck’ plate to test end-end I2C comms without external 2.2KΩ pullups. I loaded the Wire library master_reader example on the plugboard Teensy 3.5, and the slave_sender example on the ‘second-deck’ Teensy 3.5. Then I connected Wire1 on the master to Wire (19/18) on the slave. I modified both the master and slave examples with the

On the affected lines (37/38 on the master, 19/18 on the slave). When I ran the examples, I got the following output:

I ran the above experiment with both ends modified for internal pullups, just one end modified, and one or both modified with external 2.2K pullups. The link worked perfectly for all these cases. Here’s a photo of the setup:

master/slave I2C connection with 6″ jumpers. Note 2.2K resistors NOT used

Frank

Wall-E3 Replacing Mega 2560 With Teensy 3.5 Part III

Posted 04 January 2022

At the conclusion of my last post on this subject, I had refurbished the charging station IR transmit subsystem with a new perfboard setup to replace the ugly terminal strip implementation. The next step in ‘bringing up Wall-E3’ is to verify IR Homing functionality. The IR transmitter is modulated by a very stable 520.8Hz square wave produced by the Teensy 3.2 waveform generator, and this signal is demodulated by a Teensy 3.2 on the robot (see this post for the details). This technique has been working for years with the Wall-E2 robot, and since I simply moved the entire IR homing subsystem from Wall-E2 to Wall-E3, I have some hopes that it will work correctly (fingers crossed!). Here’s the test setup:

The first step was just to verify that both phototransistors were receiving the IR signal, as shown in the scope photo below:

At this point I also verified that rotating the robot caused the left and right detector signals to vary appropriately, so that all works. The next step is to verify that the demodulator output still behaves properly. To do this, I ported the appropriate code from my Wall-E2 project into T35_WallE3_V4.

06 January 2022 Update:

When attempting to verify IR Demodulator output from the onboard Teensy 3.2, all I got was zeros across the board. After investigating some more, I came to the conclusion that the main Teensy 3.5 was for some reason unable to even talk to the IR Demod Teensy via I2C. After the usual amount of fumbling around and searching the Teensy forum, I began to think that the problem was caused by the lack of proper pullup resistors on the I2C bus connecting the two. In the case of Wall-E2, I had pullup resistors mounted on the Wixel shield board, but this did not get transferred over to Wall-E3. I was also misdirected a bit by the I2C_PULLUP_INT defined value for Wire.begin calls – this, at least in my mind, implied that the internal pullups available in the ARM processor would work with two Teensy processors – apparently not. So, in order to run this issue down, I created a small test bed consisting of a T3.5 ‘master’ and a T3.2 ‘slave’ on a plug-in board as shown below:

I2C Master/Slave Test Setup. Note 2.2K pullup resistors above ‘master’ T3.5

Then I loaded the ‘Basic_Master’ and ‘Basic_Slave’ i2c_t3 library sketches and verified that the master and slave were communicating properly. Then I removed the pullups and changed the ‘Wire.begin()’ statements to use I2C_PULLUP_INT to see if they would still communicate – and, contrary to some posts on the Teensy Forum, they did! I grabbed some scope photos of the SDA & SCL lines with and without external 2.2K pullups, as follows:

With external 2.2K pullups
Without external pullups

As can be seen from the above photos, there is no perceivable difference between the waveforms for the ‘with’ and ‘without’ external pullups cases. So, for at least this very simple Teensy-to-Teensy case, pullups don’t appear to be required.

Next, I changed the configuration to replace the ‘Slave’ T3.2 with the T3.2 mounted on the IR detector housing, as shown below:

Slave T3.2 replaced by T3.2 on IR detector housing. MPU6050 not connected to I2C

Then I loaded T35_WallE3_V4 onto the ‘master’ T3.5 and re-ran the experiment. This time the T3.5 detected the T3.2 and produced valid output, as shown below:

plot of IR detector module output while moving IR xmt back and forth past detector hood

As can be seen from the above, the computed ‘IRHomingValTotalAvg’ value varies with the alignment of the IR transmitter module. Moreover, since this value is computed using the N-path bandpass filter algorithm implemented earlier, it also verifies that the code is running properly on the IR detection/homing module, as well as the corresponding code in the main controller Teensy 3.5. Yay!

While the above validates the T3.5 main controller and the I2C bus between it and the T3.2 IR Demodulator module, it doesn’t explain why it didn’t work on the actual robot. The only difference between the configuration tested above and the configuration on Wall-E3 is the presence of the MPU6050 module on the same I2C bus. So, I added this piece back into the circuit as shown below:

MPU6050 added to I2C bus

With this configuration, the main controller failed to connect to the IR demod module, just as it did on Wall-E3. So, at least the failure is consistent with previous work. To further investigate, I looked at the I2C bus waveforms with and without the MPU6050 module on the bus, as shown below:

With the MPU6050 added to the I2C bus. Note the horizontal time scale (5 uSec/div)
Without the MPU6050 added to the I2C bus. Note the horizontal time scale (5 uSec/div)

After some more head-scratching, I finally narrowed the issue down to the double-layer connector on the MPU6050 module, where I found not just one, but two bad solder connections – oops! After repairing the connections, I reran the test with the MPU6050 on the I2C bus, and this time I got the proper behavior; case closed, and it only cost me most of a day! 🙁

07 January 2022 Update:

I re-installed the IR Homing/MPU6050 subsystem on Wall-E3 and verified proper operation. Next step will be to try some homing experiments.

Stay tuned,

Frank

Teensy, MPU6050 and Rowberg’s I2CDev Library, Take Two

Posted 17 November 2021,

This post describes my efforts, once again, to figure out how to get a Teensy running with Jeff Rowberg’s I2CDev library for the purpose of interfacing to a MPU6050 6-axis IMU with on-board DMP.

After some initial teething problems, I have been using the MPU6050 for years on my various autonomous wall-following robots using Arduino Mega 2560 controllers with great success. However, when I tried switching over from Arduino UNO/Mega controllers to the Teensy 3.x controller, Jeff Rowberg’s library wouldn’t compile at all. At the time I got the library to compile by replacing #include Wire.h with #include i2c_t3.h in i2cdev.h/cpp. See this post from almost two years ago (December 2019) for the gory details.

Unfortunately, two years later when I tried to change out the Mega 2560 for a Teensy 3.5 in Wall-E3, my latest autonomous wall-following robot, I ran into compile problems again – rats! Looking at the code, I saw that the maintainers had accepted my pull request to add an ‘I2CDEV_TEENSY_3X_WIRE’ identifier to I2Cdev.h to switch from <Wire.h> to <i2c_t3.h> as I had done previously, but I still couldn’t get the library to compile with a Teensy 3.x target.

So, back to basics. Fortunately, I had kept all the old versions of the required i2cdevlib files, so my two-year old project (Teensy_MPU6050_DMP6_V2.ino) still compiled and ran properly – whew. So now to figure out why it doesn’t compile with the newest version of the libraries.

So I created a new VS2019 Arduino project called Teensy_MPU6050_DMP6_V3, copied my code and other files from two years ago, and trimmed the program down to just the code required to print out MPU6050 headings every 200 mSec.

And here is a sample of the output:

This program, using the i2cdevlib files from two years ago, compiles and runs fine. So what has changed in i2cdevlib since then? Comparing the I2Cdev.h used in this program with the current I2Cdev.h, I see:

  • Changelog diffs
  • I2CDEV_TEENSY_3X_WIRE added for convenience in switching from Wire.h to i2c_t3.h
  • I2CDEV_TEENSY_3X_WIRE used to switch to i2c_t3.h (as opposed to manually, as before)
  • ‘void *wireObj=0’ parameter added to the end of every I2Cdev method signature

And the difference in the i2Cdev.cpp files:

  • ‘void *wireObj’ added to all I2Cdev class method implementations
  • All instances of ‘Wire.’ are replaced by ‘useWire->’, where ‘useWire’ is defined as a pointer to a TwoWire object

In the i2Cdev class methods that actually talk to the MPU6050 over the i2c bus (readBytes, readWords, writeBytes, writeWords) the new version uses a pointer to a ‘TwoWire’ object called ‘useWire’ instead of Wire. The ‘useWire’ object is defined at the top of each of these methods with the lines

but the declaration (invocation?) of the ‘TwoWire’ class is guarded by an #ifdef as shown

Declaration of ‘Wire’ as a ‘TwoWire’ object

Which means it isn’t included if ‘I2CDEV_IMPLEMENTATION == I2CDEV_TEENSY_3X_WIRE’ is used. In my old version of I2Cdev.cpp, this didn’t matter, because all the class method declarations & definitions use the ‘Wire.’ object technique for invoking an ‘i2c_t3’ class method. However, the new code uses the ‘useWire->’ pointer technique which does require the ‘TwoWire’ declaration, only it’s not available for ‘I2CDEV_IMPLEMENTATION == I2CDEV_TEENSY_3X_WIRE’ – ouch!
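Paraphrasing the new access pattern (my own condensed example, not an exact copy of any I2Cdev method), each bus-touching routine now does something like this, which is exactly what fails to compile when the TwoWire declaration is guarded out:

#include <Wire.h>

// 'exampleRead' is a made-up stand-in for the real I2Cdev read/write methods.
int16_t exampleRead(uint8_t devAddr, uint8_t regAddr, void *wireObj)
{
  TwoWire *useWire = &Wire;                  // default to the global 'Wire' object
  if (wireObj) useWire = (TwoWire *)wireObj; // or use the bus the caller passed in
  useWire->beginTransmission(devAddr);
  useWire->write(regAddr);
  useWire->endTransmission(false);           // repeated start
  useWire->requestFrom(devAddr, (uint8_t)1);
  return useWire->available() ? useWire->read() : -1;
}

Under I2CDEV_TEENSY_3X_WIRE the ‘Wire’ object is an i2c_t3 class instance rather than a TwoWire, so neither the ‘TwoWire *useWire’ declaration nor the cast will compile.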

Comparing the old and new MPU6050.h files, the only changes of significance are:

  • The name of the class was changed from MPU6050 in the old file to MPU6050_Base in the new one, with ‘, void * wireObj = 0’ added as the last parameter in the constructor
  • GetCurrentFIFOPacket, setFIFOTimeout, and getFIFOTimeout methods were added
  • The entire #ifdef MPU6050_INCLUDE_DMP_MOTIONAPPS20 section was removed
  • void *wireObj and uint32_t fifoTimeout objects were defined

The new GetCurrentFIFOPacket method added by Homer Creutz replaces the inline method I used in my older code, and does a nicer job to boot, and the wireObj declaration makes it possible to change all the method calls from ‘Wire.’ to ‘useWire->’ style.

Comparing the old and new MPU6050.cpp files, the only changes of significance are:

  • The names for all method implementations are changed from MPU6050:: to MPU6050_Base::, with a ‘,wireObj(wireObj)’ parameter added at the end of the constructor method and most other methods as well
  • The implementation code for GetCurrentFIFOPacket, setFIFOTimeout, and getFIFOTimeout methods were added

Comparing MPU6050_6Axis_MotionApps_V6_12.h (the old version) to MPU6050_6Axis_MotionApps612.h (the new version), there were a lot of changes:

  • The implementation code was split out from the header file into a new MPU6050_6Axis_MotionApps612.cpp file
  • A new MPU6050_6Axis_MotionApps612 : public MPU6050_Base class is declared and all the dmpxxx methods are now part of this class.
  • All the hard-coded MPU6050 DMP firmware image is gone (in the cpp file?)
  • At the very bottom is the line ‘typedef MPU6050_6Axis_MotionApps612 MPU6050;’ declaring ‘MPU6050’ to be an alias for the MPU6050_6Axis_MotionApps612 class

The new MPU6050_6Axis_MotionApps612.cpp file now contains the DMP firmware image and the definitions for all the methods declared in the .h file.

So, it looks like the whole problem with changing over from the old setup to the new one may be just the fact that the declaration (invocation?) of the ‘TwoWire’ class is guarded by an #ifdef that disables the line for ‘I2CDEV_IMPLEMENTATION == I2CDEV_TEENSY_3X_WIRE’. So tomorrow (neeeeeeedddddd, sleeeeepppp!) I’ll try again with the new setup, with the additional condition added to the #ifdef line.

So, I created yet another VS2019 Arduino project called ‘Teensy_MPU6050_DMP6_V3’, identical in every way to ‘Teensy_MPU6050_DMP6_V2’ except this project includes MPU6050_6Axis_MotionApps612.h instead of MPU6050_6Axis_MotionApps_V6_12.h. To make things simpler, I copied all the reference files from the i2cdevlib folder into the project’s local folder, as shown below:

After editing I2Cdev.h as shown below

to utilize the i2c_t3.h version of the Wire library, I compiled it for Teensy 3.2 and got the following output:

As shown, all the compile errors reference a missing declaration for TwoWire, which I hope means that adding the ‘I2CDEV_TEENSY_3X_WIRE’ identifier to the guard around the code that declares TwoWire in I2Cdev.cpp will fix the problem.

I edited i2Cdev.cpp to add the ‘I2CDEV_TEENSY_3X_WIRE’ identifier to the guard around the code that declares TwoWire in I2Cdev.cpp, and edited I2CDev.h to do the same thing for the guard around the ‘class TwoWire’ declaration, and recompiled. This time the output was:

So I commented out the offending line and recompiled, still getting way too many errors – time to punt.

20 November 2021 Update:

As mentioned above, I decided to punt on the idea of incorporating the newest I2CDevlib files, as I keep getting lost trying to sort out the ‘Wire/TwoWire/useWire’ mess.

However, I did discover that the new library versions work just fine with an Arduino UNO or MEGA2560 as the target. So, I created an Arduino project called ‘Arduino_MPU6050_6Axis_MotionApps612’ and used it to tune the rate-controlled turn PID for the new robot. After the requisite number of false-starts and screwups, I was able to get pretty decent 45º/sec and 90º/sec turns with:

So, at this point I have an Arduino program ( Arduino_MPU6050_6Axis_MotionApps612) targeting the Arduino UNO that works fine with the new versions of the i2cdevlib libraries, and an older Arduino program (Teensy_MPU6050_DMP6_V3) targeting the Teensy 3.x that works fine with the 2-year old versions of the i2cdevlib libraries. The next step I think is to move ahead with the Teensy-targeted program using old libraries on the assumption that eventually Jeff Rowberg and company will either fix the current libraries to work properly with Teensy, or show me the error of my ways so that I can fix the Teensy compilation problem.

Another Try at Wall Offset Tracking, Part II

Posted 22 June 2021

In my previous post on this subject, I described my effort to improve Wall-E2’s wall tracking performance by leveraging controlled-rate turns and the dual VL53L0X ToF LIDAR arrays. This post describes an enhancement to that effort, aimed at allowing the robot to start from a non-parallel orientation and still capture and track a desired wall offset.

The previous post showed that when starting from a parallel orientation either inside or outside the desired wall offset, the robot would make a turn toward the offset line, move straight ahead until achieving the desired offset, then turn down-track and start tracking the desired offset. The parallel starting condition was chosen to make things easier, but of course that isn’t realistic – the starting orientation may or may not be known. In previous work I created an entire function ‘RotateToParallelOrientation()’ to handle this situation, but I would rather not have to do that. It occurred to me that I might be able to eliminate this function entirely by utilizing the known characteristics of the triple-VL53L0X array. The linear array exhibits a definite relationship between ‘steering value’ (the difference between the front and rear sensor values, divided by 100) and the off-perpendicular orientation of the array. At perpendicular (parallel orientation of the robot), this value is nominally zero, and exhibits a reasonably linear relationship out to about +/- 40º from perpendicular, as shown in the following plot from this post from a year ago.

Array distances and steering value for 30 cm offset. Note steering value zero is very close to parallel orientation

As can be seen above, the ‘Steering’ value is reasonably linear, with a slope of about -0.00625/deg. So, for instance, the calculated value for an off-parallel angle of 30º would be -0.00625*30 = -0.1875, which is very close to the actual plotted value above for 30º.

So, it should be possible to calculate the off-perpendicular angle of the robot, just from the measured steering value, and from that knowledge calculate the amount of rotation needed to achieve the desired offset line approach angle.

The desired approach angle was set more or less arbitrarily by

and this assumes an initial parallel orientation (i.e. 0º offset in the above plot). If, for instance, the initial steering value was -0.1875, indicating that the robot was pointing 30º away from parallel, then the code to compute the total cut (assuming we are tracking the wall on the left side of the robot) would look something like this:

So, for example, if the robot’s starting orientation was offset 30º away from the wall, and 20cm inside the desired offset line of 30cm, we would have:

cutAngleDeg = WALL_OFFSET_TGTDIST_CM - (int)(Lidar_LeftCenter / 10.f);

cutAngleDeg = 30 – 10 = 20

adjCutAngleDeg = cutAngleDeg – 30 = -10º, so the robot would actually turn 10º CCW (back toward the wall) before starting to move to capture the desired offset line.

If, on the other hand, the robot was starting from outside the desired offset (say at 60cm), but with the same (away from wall) pointing angle, then the result would be

cutAngleDeg = WALL_OFFSET_TGTDIST_CM - (int)(Lidar_LeftCenter / 10.f);

cutAngleDeg = 30 – 60 = -30

adjCutAngleDeg = cutAngleDeg – 30 = -30 – 30 = -60º, so the robot would actually turn 60º CCW (back toward the wall) before starting to move to capture the desired offset line.
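Putting the pieces together, the computation looks something like this sketch; the names and left-side sign conventions are mine, and the slope constant is the right-side value from the plots above (the 26 June update below finds a different value for the left-side array):

// Sketch only: derive the off-parallel angle from the steering value, then fold it into the
// cut angle needed to capture the desired wall offset (left-side tracking assumed).
const float WALL_OFFSET_TGTDIST_CM = 30.f;
const float STEERVAL_PER_DEG = -0.00625f; // measured steering-value-per-degree slope

int GetAdjCutAngleDeg(float steerVal, float leftCtrDistMm)
{
  float offParallelDeg = steerVal / STEERVAL_PER_DEG; // + means pointing away from the wall
  int cutAngleDeg = (int)WALL_OFFSET_TGTDIST_CM - (int)(leftCtrDistMm / 10.f); // + means inside the offset line
  return cutAngleDeg - (int)offParallelDeg; // negative -> CCW (back toward the wall), positive -> CW
}

For the two worked examples above: (30 - 10) - 30 = -10º and (30 - 60) - 30 = -60º, matching the adjCutAngleDeg values shown.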

26 June 2021 Update:

I modified my test program to just report the rear, center, and front VL53L0X distances, plus the steering value and the computed off-parallel angle, using the -0.00625 slope value obtained from the above plots. Unfortunately, the results were wildly unrealistic, leading me to believe something was badly wrong somewhere along the line. So, I redid the plots, thinking maybe the left side VL53L0X sensors were different enough to make that much of a difference. When I plotted the steering value vs off-parallel angle for 10, 20, 30 & 40cm wall offsets, I got a significantly different plot, as shown below:

As can be seen in the above plot, the slope of steering values to off-parallel angles is nice and linear, and also quite constant over the range of wall offset distances from 10 to 40 cm; this is quite a bit different than the behavior of the right-side sensor array, but it is what it is. In any case, it appears that the slope is close to 1.4/80 = 0.0175, or almost twice the right-side slope derived from the previous right-side plots.

Using the value of 0.0175 in my GetSteeringAngle() function, and comparing the calculated off-parallel angle with the actual measured angle, I get the following Excel plot.

As can be seen in the above plot, the agreement between measured and calculated off-parallel angles is quite good, using the average slope value of 0.0175.

The above plot shows the raw values that produced the first plot.

So, back to my ‘FourWD_WallTrackTest’ program to develop Wall-E2’s ability to capture and then track a particular desired offset. In my first iteration, I always started with Wall-E2 parallel to the wall, and calculated an intercept angle based on the difference between the robot’s actual and desired offsets from the near wall. This worked very nicely, but didn’t address what happens if the robot doesn’t start in a parallel orientation. However, when I tried to use the data from a previous post, the results were wildly off. Now it is time to try this trick again, using the above data instead. Because the measured steering value/off angle slopes for all selected wall offsets were essentially identical, I can eliminate the intermediate step of calculating the appropriate slope value based on the current wall offset distance.

So, I modified my ‘getSteeringAngle()’ function to drop the ‘ctr_dist_mm’ parameter and to use a constant 0.0175 slope value, as shown below:

With this modification, I was able to get pretty decent results; Wall-E2 successfully captured and tracked the desired offsets from three different ‘inside’ (robot closer to wall than the desired offset) orientations, as shown in the following short videos:

‘Inside’ capture, with initial orientation angle > desired approach angle
‘Inside’ capture with initial orientation angle < desired approach angle
‘Inside’ capture with negative initial orientation angle

04 July 2021 Update:

After succeeding with the ‘inside’ cases, I started working on the ‘outside’ ones. This turned out to be considerably more difficult, as the larger distances from the wall caused considerable variation in the VL53L0X measurements (lower SNR?), which in turn produced more variation in the starting and ‘cut’ angles. However, the result does seem to be reasonably reliable, as shown in the following videos.

‘outside’ capture with initial outward angle
‘outside’ capture with initial inward angle
‘outside’ capture with small outward angle.

05 July Update:

After getting the left-side tracking algorithm working reasonably well, I ported the ‘TrackLeftWallOffset()’ functionality to ‘TrackRightWallOffset()’. After making (and mostly correcting) the usual number of mistakes, I got it going reasonably well, as shown in the following short videos:

Right wall tracking, starting inside desired offset, oriented toward wall
Right wall tracking, starting inside desired offset, oriented away from wall
Right wall tracking, starting outside desired offset, oriented toward wall
Right wall tracking, starting outside desired offset, oriented away from wall

Here is the complete code for my wall capture/track test program:

Here is a link to the above file, plus all required library & ancillary files.

Stay tuned,

Frank

Another Try at Wall Offset Tracking

Posted 22 June 2021

About nine months ago (October 2020) I made a run at getting offset tracking to work (see here and here). This post describes yet another attempt at getting this right, taking advantage of recent work on controlled-rate turns. I constructed a short single-task program to do just the wall tracking task, hopefully simplifying things to the point where I can understand what is happening.

One of the big issues that arose in previous work was the inability to synch my TIMER5 ISR with the PID library’s ‘Compute()’ function. The PID library insists on managing the update timing internally, which meant there was no way to ensure that Compute() would be called every time the ISR ran. I eventually came to the conclusion that I simply could not use the PID library version, and instead wrote my own small function that performs the same calculation as Compute(), but with the timing value passed in as an argument rather than being managed internally. This forces the PID calculations to actually update in synch with the TIMER5 interrupt. Here’s the new PIDCalcs() function:
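The actual function is shown in the screenshot; as a minimal sketch (the argument names and the exact persistent-state handling are placeholders of my own), it amounts to something like this:

// Sketch only: a bare-bones PID step with the timing value passed in as an
// argument instead of being managed internally as the Arduino PID library does.
// The caller owns the persistent 'lastInput' and 'errSum' variables and calls
// this once per TIMER5 interrupt, so the output is guaranteed to update every
// time the ISR fires.
void PIDCalcs(float input, float setpoint, float Kp, float Ki, float Kd,
              float dtSec, float& lastInput, float& errSum, float& output)
{
    float error = setpoint - input;
    errSum += error * dtSec;                     // integral accumulator
    float dInput = (input - lastInput) / dtSec;  // derivative on the measurement
    output = Kp * error + Ki * errSum - Kd * dInput;
    lastInput = input;
}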

As can be seen from the above, this is a very simple routine that just does one thing, and doesn’t incorporate any of the improvements (windup suppression, sample time changes, etc) available in the PID library. The calling function has to manage the persistent parameters, but that’s a small price to pay for clarity and the assurance that the output value will indeed be updated every time the function is called.

With this function in hand, I worked on getting the robot to reliably track a specified offset, but was initially stymied: while I could get it to control the motors so that the robot stayed parallel to the nearest wall, I still couldn’t get it to track a specific offset. I solved this problem by leveraging my new-found ability to make accurate, rate-controlled turns; the robot first turns by an amount proportional to its distance from the desired offset, moves straight ahead until the offset is met, and then turns the same number of degrees in the other direction. Assuming the robot started parallel to the wall, this leaves it facing the direction of travel, at the desired offset and parallel to the nearest wall. Here is the code for this algorithm.
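Here is a rough sketch of the maneuver for the left-wall case (the sensor/motor function names and the gain/limit values are placeholders of my own, not the actual routines):

// Sketch only: the 'turn, drive, turn back' offset capture idea.
// Placeholder prototypes standing in for the real robot functions:
float GetLeftWallDistCm();     // current offset from the left wall
void  SpinTurn(float deg);     // rate-controlled turn through 'deg' degrees
void  MoveForward();           // run the motors straight ahead briefly

void CaptureLeftWallOffset(float desiredOffsetCm)
{
    // cut angle proportional to how far the robot is from the desired offset
    float offsetErrCm = desiredOffsetCm - GetLeftWallDistCm();
    float cutAngleDeg = constrain(2.0f * offsetErrCm, -30.0f, 30.0f); // illustrative gain & limits

    SpinTurn(cutAngleDeg);     // turn toward the desired offset line

    // drive straight ahead until the desired offset is reached
    while (fabs(GetLeftWallDistCm() - desiredOffsetCm) > 1.0f)
    {
        MoveForward();
    }

    SpinTurn(-cutAngleDeg);    // turn back the same amount; now parallel at the offset
}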

Here are a couple of videos showing the ability to capture and track a desired offset from either side.

In both of the above videos, the desired wall offset is 30 cm.

Stay Tuned,

Frank

Turn Rate PID Tuning, Part III

In my previous post on this subject, I described my efforts to control the turn rate (in deg/sec) of my two-wheel robot, in preparation for doing the same thing on Wall-E2, my four wheel drive autonomous wall-following robot.

As noted previously, I have a TIMER5 Interrupt Service Routine (ISR) set up on my four wheel robot to provide updates to the various sensor values every 100 mSec, but was unable to figure out a robust way of synchronizing the PID library’s Compute() timing with the ISR timing. So, I decided to bag the PID library entirely, at least for turn rate control, and to insert the PID algorithm directly into the turn rate control code, removing the extraneous stuff that caused divide-by-zero errors when the setSampleTime() function was modified to accept a zero value.

To facilitate more rapid test cycles, I created a new program that contained just enough code to initialize and read the MPU6050 IMU module, plus a routine called ‘SpinTurnForever()’ that accepts PID parameters and causes the robot to ‘spin’ turn forever (or at least until I stop it with a keyboard command). Here’s the entire program.

This program includes a function called ‘CheckForUserInput()’ that, curiously enough, monitors the serial port for user input, and uses a ‘switch’ statement to execute different commands. One of these commands (‘q’ or ‘Q’) causes ‘SpinTurnForever()’ to execute, which in turn accepts a 4-parameter input that specifies the three PID parameters plus the desired turn rate, in deg/sec. This routine then starts and manages a CCW turn ‘forever’, in the ‘while()’ block shown below:
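As a rough sketch (the function and variable names below are placeholders of my own, not those in the actual program), the loop looks something like this:

// Sketch only: outline of a 'spin forever' turn-rate control loop.
// Placeholder prototypes standing in for the real robot/IMU functions:
float GetIMUHeadingDeg();                   // relative heading from the MPU6050
void  SetMotorSpeeds(int left, int right);  // signed wheel speeds
bool  UserInputAvailable();                 // true when a keyboard command arrives

void SpinTurnForever(float Kp, float Ki, float Kd, float targetDegPerSec)
{
    const float dtSec = 0.02f;              // 20 mSec computation interval
    float lastHeading = GetIMUHeadingDeg();
    float lastRate = 0, errSum = 0;

    while (!UserInputAvailable())           // spin until stopped from the keyboard
    {
        delay(20);                          // wait out the computation interval

        // measured turn rate from successive heading readings
        float heading = GetIMUHeadingDeg();
        float rate = (heading - lastHeading) / dtSec;
        lastHeading = heading;

        // same calculation as PIDCalcs(), kept inline so each term's
        // contribution to the output can be printed for instrumentation
        float error = targetDegPerSec - rate;
        errSum += error * dtSec;
        float dTerm = Kd * (rate - lastRate) / dtSec;
        float output = Kp * error + Ki * errSum - dTerm;
        lastRate = rate;

        // equal and opposite wheel speeds produce a CCW 'spin' turn in place
        SetMotorSpeeds(-(int)output, (int)output);
    }
}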

This routine mimics the PID library computations without suffering from the library’s synchronization problems, and also allows me to fully instrument the contribution of each PID term to the output. This program also allows me to vary the computational interval independently of the rest of the program, bounded only by the ability of the MPU6050 to produce reliable readings.

After a number of trials, I started getting some reasonable results on my benchtop (hard surface with a thin electrostatic mat), as shown below:

Average turn rate = 89.6 deg/sec

As can be seen in the above plot, the turn rate is controlled pretty well around the 90 deg/sec turn rate, with an average turn rate of 89.6 deg/sec.

The plot below shows the same parameter set, but run on carpet rather than my bench.

Average turn rate = 88.2 deg/sec

Comparing these two plots it is obvious that a lot more motor current is required to make the robot turn on carpet, due to the much higher sideways friction on the wheels.

The next step was to see if the PID parameters for 90 deg/sec would also handle different turn rates. Here are the plots for 45 deg/sec on my benchtop and on carpet:

Average turn rate = 44.7 deg/sec
Average turn rate = 43.8 deg/sec

And then 30 deg/sec on benchtop and carpet

Average turn rate = 29.8 deg/sec
Average turn rate = 28.8 deg/sec

It is clear from the above plots that the PID values (5,0.8,0.1) do fairly well for the four wheel robot, both on hard surfaces and carpet.

Having this kind of control over turn rate is pretty nice. I might even be able to do turns by setting the turn rate appropriately and just timing the turn, or even vary the turn rate during the turn. For a long turn (say 180 deg) I could do the first 90-120 deg at 90 deg/sec, and then do the last 60-90 deg at 30 deg/sec; that might make for a much more precise turn.
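Just to illustrate the idea (SpinTurnAtRate() is hypothetical, not something in the current code), a 180 deg turn might look like:

// Sketch only: a two-stage 'fast then slow' turn.  SpinTurnAtRate() is a
// hypothetical function that would run the rate-controlled turn until the
// relative heading change reaches the requested number of degrees.
void SpinTurnAtRate(float degPerSec, float turnDeg);

void TwoStageTurn180()
{
    SpinTurnAtRate(90.0f, 120.0f);  // first 120 deg at 90 deg/sec
    SpinTurnAtRate(30.0f, 60.0f);   // last 60 deg at 30 deg/sec for precision
}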

All of the above tests were done with a 20 mSec time interval, which is 5x smaller than the current 100mSec time interval used for the master timer in Wall-E2. So, my next set of tests will keep the turn rate constant and slowly increase the time interval to see if I can get back to 100 mSec without any major sacrifice in performance.

28 May 2021 Update:

I went back through the tests using a 100 mSec interval instead of 20 mSec, and was gratified to see that there was very little degradation in performance. The turn performance was a bit more ‘jerky’ than with a 20 mSec interval, but still quite acceptable, and very well controlled, both on the benchtop and carpet surfaces – Yay! Here are some plots to show the performance.

Average turn rate = 29.7 deg/sec
Average turn rate = 28.4 deg/sec
Average turn rate = 44.4 deg/sec
Average turn rate = 43.0 deg/sec
Average turn rate = 89.7 deg/sec
Average turn rate = 86.6 deg/sec

31 May 2021 Update:

I made some additional runs on benchtop and carpet, thinking I might be able to reduce the turn-rate oscillations a bit. I found that reducing the time interval back to 20 mSec and increasing the ‘D’ (differential) parameter helped. After some tweaking back and forth, I wound up with a PID set of (5, 0.8, 3). Using these parameters, I got the following performance plots.

Average turn rate = 87.3 deg/sec
PID = (5,0.8,3), 20mSec interval, 90 deg/sec

As can be seen in the Excel plot and the movie, the turn performance is much smoother – yay!

Stay tuned!

Frank