ESP32-CAMs Distance Measurement Study

Recently I returned to working with WallE, my autonomous wall-following robot, and started thinking again about the issue it has with reflective surfaces. At about the same time I ran across a post about using two ESP32-CAM modules for distance measurements, and I started to wonder if I could do something like that with WallE. I already have a visible red laser on the front, so maybe the two ESP32-CAMs could use the laser ‘dot’ for distance measurements? And would this technique have the same problem with reflective surfaces?

I just happened to have two ESP32-CAM modules in my parts bin, so I thought I would give this idea a try and see how it goes. I know next to nothing about image processing in general and about the ESP32-CAM in particular, so if nothing else it will be a learning experience!

After a bit of web research, I got my Visual Studio/Visual Micro development environment configured for ESP32-CAM program development with the ‘AI Thinker ESP32-CAM (esp32_esp32cam)’ target, and found a couple of examples that came with the newly-installed library. The first one I tried was the ‘CameraWebServer’ example (C:\Users\Frank\Documents\Arduino\Libraries\arduino-esp32-master\libraries\ESP32\examples\Camera\CameraWebServer\camera_pins.h), which turns the ESP32-CAM module into a webserver that can be accessed over the local network using any browser. The example provides for still images and real-time streaming – nice! However, I wasn’t interested in this capability, so after looking around a bit more I found an example that just takes still images and writes them to the SD card. I modified the code to convert the captured JPEG into BMP888 format so I could look at the individual color channels in isolation. I set the capture size to 128×128 pixels and captured a JPEG frame. The JPEG frame is just 2352 bytes, but the BMP888 conversion expands to 49206 bytes (128 x 128 x 3 = 49152 pixel bytes, plus the standard 54-byte BMP header – a 14-byte file header and a 40-byte DIB header). Here’s the code at present:

and here are the JPEG and BMP888 versions of the 128×128 pixel image captured by the camera:

Picture29.jpg
Picture29.bmp
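
As a quick sanity check on that size arithmetic, here’s a short host-side Python sketch (my own, not part of the ESP32 code) that computes the expected size of an uncompressed 24-bit BMP:

```python
# Sanity-check the BMP888 size arithmetic for a 128x128 image.
# A 24-bit BMP has a 14-byte file header plus a 40-byte DIB
# (BITMAPINFOHEADER) = 54 bytes, and each pixel row is padded
# out to a multiple of 4 bytes.

def bmp24_file_size(width, height):
    """Expected size in bytes of an uncompressed 24-bit BMP."""
    row_bytes = width * 3
    padded_row = (row_bytes + 3) // 4 * 4   # rows are 4-byte aligned
    return 54 + padded_row * height

print(bmp24_file_size(128, 128))  # -> 49206
```

A 128-pixel row is 384 bytes, already a multiple of 4, so no padding is needed and the 54-byte header exactly accounts for the 49206-byte file.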

Then I copied Picture29.bmp to another file byte by byte, zeroing out the Green & Blue bytes so that only the red channel was non-zero. However, when I viewed the resulting file, I got the following image:

Picture29_red.bmp

This doesn’t make any sense to me, unless the byte ordering in a BMP888 file is BGR or BRG instead of RGB. However, when I researched this on the web, all the info I found indicated the byte order in an RGB888 file is indeed R, G, B. It’s a mystery!

Here’s the code that produced the above results:

I posted the ‘why is my red channel blue?’ question to StackOverflow, and got the following comment back:

I think your problem is with the reference that you found. ISTR the colour order for RGB888 24 bits per pixel BMP is actually Blue, Green, Red. So your all “red” image will indeed appear blue if you have it backwards. See Wiki BMP & DIB 24 bit per pixel. BTW you can get some funny effects converting all red or all blue images from JPEG to BMP since the effective resolution at source is compromised by the Bayer mask sampling.

Well, at least I’m not crazy – my ‘red’ channel WAS actually the ‘blue’ channel – yay! Per the Wikipedia article, the actual byte order is “… blue, green and red (8 bits per each sample)”.
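
With the BGR ordering sorted out, the channel-isolation step I was attempting can be sketched host-side in Python (my reconstruction of the idea, not the actual Arduino code):

```python
# Isolate the red channel of a 24-bit BMP, remembering that each pixel
# is stored Blue, Green, Red -- not R, G, B.  The pixel-array offset is
# read from bytes 10-13 of the header rather than hard-coded.  Assumes
# rows need no padding (true when width*3 is a multiple of 4, as with
# 128-pixel rows).

def keep_red_only(bmp_bytes):
    data = bytearray(bmp_bytes)
    pixel_offset = int.from_bytes(data[10:14], 'little')
    for i in range(pixel_offset, len(data) - 2, 3):
        data[i] = 0        # blue byte
        data[i + 1] = 0    # green byte
        # data[i + 2] is the red byte -- leave it alone
    return bytes(data)
```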

17 February 2025 Update:

After figuring out the BGR sequence, I moved on to the idea of locating a red laser ‘dot’ on a black background; here’s the experimental setup:

Experimental setup for ‘red dot on black background’ test

And here is the 128×128 pixel image captured by the ESP32-CAM.

So now I needed to find the coordinates for the red dot in the black field. Rather than deal with the tedium of writing and debugging the search routine in Arduino, I decided to suck the image data into Excel, and write a VBA script to find the ‘dot’, as shown below:

This produced the following Excel spreadsheet (scale adjusted to show entire 128×128 pixel layout):

128×128 RGB pixel data with max value highlighted

For comparison purposes, I have repeated the ESP32-CAM image here:

So, it seems pretty clear that I can correctly extract pixel values from the ESP32-CAM image and find the laser dot – at least in this contrived experiment with a non-reflective black background. Also, it appears at first blush like the upper left-hand corner of the ESP32-CAM image corresponds to R1C1 in the Excel spreadsheet.
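
The dot search itself is just an argmax over the red bytes. Here’s a host-side Python sketch of the same logic the VBA script implements (my own code; note the red value is the third byte of each BGR triple):

```python
# Find the brightest red pixel in a raw 24-bit BGR pixel array.
# 'row' here is the order the rows appear in the file; BMPs normally
# store rows bottom-up, so this may need flipping to match the image.

def find_max_red(pixels, width, height):
    """pixels: raw BGR pixel bytes (no header), width*height*3 long."""
    best_val, best_rc = -1, (0, 0)
    for row in range(height):
        for col in range(width):
            red = pixels[(row * width + col) * 3 + 2]  # +2 = red byte
            if red > best_val:
                best_val, best_rc = red, (row, col)
    return best_val, best_rc
```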

The next step is to move the ‘dot’ to a significantly different location on the target and see how that affects the location of the max value in the grid – we need this to determine the orientation of the Excel data relative to the image data; maybe I got lucky, and maybe not 😉

02 March 2025 Update:

After setting this project aside for a few weeks, I figured out how to get the ESP32-CAM system to repeatedly grab images, convert them to BMP, and find the maximum red pixel value in the scene. Here’s the code:

When I ran this code in the following experimental setup, I was able to roughly map the row/column layout of the image, as shown:

As shown, the (0,0) row/column location is the upper right-hand corner of the image, and (127,127) is located at the bottom left-hand corner. At the 20cm spacing shown, the image boundaries are about 85mm height x 100mm width.
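
Since (0,0) turned out to be the upper right-hand corner, a simple mirror of the column index converts the camera’s coordinates into a conventional left-origin system (a small Python sketch of the bookkeeping):

```python
# Convert the camera's (row, col) -- origin at the upper-RIGHT corner --
# into x/y with the origin at the upper-left, for a 128x128 frame.
# A laser dot moving physically left-to-right then gives increasing x.

def to_left_origin(row, col, width=128):
    x = (width - 1) - col   # mirror the column axis
    y = row                 # rows already run top-to-bottom
    return x, y

print(to_left_origin(0, 0))      # -> (127, 0): upper-right corner
print(to_left_origin(127, 127))  # -> (0, 127): bottom-left corner
```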

The next step will be to mount two ESP32-CAM modules on some sort of a frame, with the laser mounted halfway between the two.
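
For the record, the measurement I’m hoping to make rests on the classic stereo-triangulation relation: the dot’s column shift (disparity) between the two cameras should vary inversely with distance. A Python sketch of the relation, using an illustrative (not measured) focal length in pixels:

```python
# Stereo triangulation: Z = focal_px * baseline / disparity, where
# disparity is the column shift of the dot between the two images.
# The 120 px focal length for a 128-px-wide frame is an illustrative
# guess, NOT a measured ESP32-CAM value; the baseline is the
# camera-to-camera spacing on the frame.

def distance_cm(disparity_px, baseline_cm=10.0, focal_px=120.0):
    if disparity_px <= 0:
        raise ValueError("dot must appear shifted between the cameras")
    return focal_px * baseline_cm / disparity_px

print(distance_cm(30))  # -> 40.0 cm at 30 px disparity
```

The practical implication is that at long range the disparity shrinks toward a pixel or two, which is why the dot needs to move visibly between the two images for this to work at all.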

06 March 2025 Update:

As part of my evil plan to use two ESP32-CAM modules to optically measure the distance to a red laser dot, I needed the two modules to talk to each other. The ESP32-CAM modules don’t really have the same sorts of two-wire communications facilities as the various Arduino and Teensy modules do, but I discovered there is an ‘ESP-NOW’ feature that provides ‘packet’ communications between ESP32 modules over the 2.4GHz WiFi radio. I found this tutorial that explains the feature, along with demo code for determining the MAC address for each unit and a separate program to demonstrate the technique. I modified the demo code to just repeatedly send a set of fake sensor values back and forth, to demonstrate to my satisfaction that this technique would work for my intended application. Here’s the code:

And here’s some typical output from the two ESP32-CAM units:

From one device:

From the other device:

A couple of ‘user notes’ about this demo program and its application to two different devices:

  • The MAC address display program has to be run twice – once for each unit to get that all-important information.
  • The demo program also has to be run twice, but the MAC address used for each device is the address for the ‘other’ device.
  • As can be seen from the output, I simply used fake sensor data. However, I made sure to use different sets of values (10,20,30 on one and 20,40,60 on the other) so I could verify that the data was actually getting from one to the other.
  • The user must be careful to make sure the two devices are programmed correctly. I found it really easy to program the same device twice – once with the MAC & data for the other unit, and again with the MAC and data for the unit being programmed (which will not work). I wound up putting clip-on labels on the cables going to the two devices, and then making sure the Visual Studio programming port was correct for the device I was programming. Doable, but not trivial.

21 March 2025 Update:

I broke a finger playing b-ball two days ago, so my typing speed and accuracy have suffered terribly; such is life I guess.

Since my last update I designed and printed a fixture to hold two ESP32-CAM modules and a laser diode so I could run some distance experiments. Here’s a photo of the setup:

10 to 80cm distance setup. Note I’m using only one ESP32-CAM module

I modified the firmware to simply print out the max value in the scene, along with the row/col coordinates for the max value. The firmware continues to save a red-only image as well. Here’s the hand-written results:

The numbers at the end of each measurement are the .bmp file suffixes (from picture_red58.bmp to picture_red87.bmp).

And here are the representative red-only photos (one per distance) for the selected measurement:

10cm: 114 @ (40,65) picture58_red.jpg
20cm: 241 @ (72,7) picture62_red.jpg
30cm: 215 @ (66,23) picture64_red.jpg
40cm: 215 @ (65,31) picture68_red.jpg
50cm: 225 @ (57,16) picture74_red.jpg
60cm: 199 @ (64,40) picture79_red.jpg
70cm: 255 @ (33,49) picture84_red.jpg
80cm: 255 @ (36,68) picture85_red.jpg

From the data and the photos, it is easy to see that the laser ‘dot’ doesn’t come into the view of the camera until the 20cm distance, and after 60cm the ‘dot’ is washed out by the normal overhead lighting. In between (20 – 60cm) the ‘dot’ can be seen to progress from the far left-hand edge of the scene toward the middle.

26 March 2025 Update:

I made another run, this time with two cameras, as shown in the following photos:

two ESP32-CAM modules mounted on the same frame, with the red dot laser mounted on the centerline

If my theory is correct, I should be able to see the location of the red dot move horizontally across the images, from left to right for the left cam, and right to left on the right cam. Unfortunately this wasn’t evident in the data. I loaded the above data into Excel and plotted it in various ways. The best I could come up with was to plot row & col locations from each camera vs distance, hoping to see a linear change in either the row or column values. The plots are shown below:

From the above plots, I could see no real progression in the row values, but if I used a lot of imagination I could sort of see a linear decrease in the column values for the left camera, and a much less distinct linear increase in the column values for the right camera.

For completeness, I have included the actual camera images used to produce the above data:

Looking at all the above images, I can’t discern *any* real horizontal shift in the position of the red dot. In addition, at 70cm, the reflection of the laser dot off the table surface is just as bright as the reflection off the target, leading to frequent mis-identification of the maximum location.

Conclusion:

Well, this was a nice try and a fun project, but there’s no escaping the conclusion that this ain’t gonna work!

Debugging XCSoar’s ‘Mapgen’ Website

Posted 07 December 2024

I recently came back to Condor virtual soaring after several years away, and also started using XCSoar on a tablet as an auxiliary navigation tool. One challenge in doing this is getting the scenery (.XCM) files associated with the various Condor sceneries. Some years ago ‘Folken’ (folken@kabelsalat.ch) created the https://mapgen.xcsoar.org/ web app to facilitate this process. The website accepts a map name, email address, and map bounds information and produces the corresponding .XCM file, which can then be dropped into XCSoar for navigation support – neat! Map bounds can be defined three ways – as manually entered max/min lat/lon values, as a rectangle on a dynamic world map, or as a waypoint file (.CUP or .DAT).

Unfortunately, as I and several others have found, the web app doesn’t actually support waypoint file bounds; it produces an ‘unsupported waypoint file’ error whenever a waypoint file is submitted. The developer has been unwilling/unable to work the problem due to other demands on his time, so I decided to take a shot at finding/fixing the problem.

First attempt: ‘backend code’ assessment: https://github.com/paynterf/XCSoarMapGenDebug

Last February (Feb 2024) I looked through the github repository, and because I am totally clueless regarding modern (or any age, for that matter) web app development, I decided to concentrate on the ‘backend code’ to either find the problem(s) or exclude this code as the culprit. To do this I created the above repo, and eventually worked my way through most of the backend code, without finding any issues – oh well.

Current attempt: Build and run the web app (ugh!):

After ignoring this problem again for almost a full year, I decided to take another shot at this. Folken’s website has a Readme that details the process of setting up a web server on a Debian linux box, and since I happen to have an old laptop with Debian installed, I decided to give this a whirl. The Readme describes how to use an ‘Ansible Playbook‘ to build and provision an XCMapGen web app. I tried this several times, and to say I got ‘a bit disoriented’ would be the understatement of the century. After having to reload Debian on my laptop several times, I reached out to Folken for help. Amazingly, he actually answered and was (and is) quite helpful. He told me he had gotten away from Ansible and was now using Docker for website build and provisioning. In addition, he gave me detailed steps on how to use Docker to bring up an XCSoarMapGen website on my Debian laptop. Here’s his email back to me:

Of course I had never heard of Docker (or Flask, or Cherrypy, or Ansible or…..), so I was in for some serious research work. Eventually I got the XCSoarMapGen website running at ‘localhost’ on my Debian linux laptop, and for me that was quite an achievement – from web ignoramus to web genius in 234 easy steps! 😎

After some more help from Folken, I got to the point where I could watch the activity between the web page and the backend code, but eventually I decided that this wasn’t getting me anywhere – I really needed to be able to run the backend code in debug mode, but of course I had no idea how to do that. After lots more internet searches for ‘Debugging web applications’ and similar, I found (and worked through) a number of tutorials, like Marcel Demper’s “Debugging Python in Docker using VSCode” and “Debugging flask application within a docker container using VSCode”, but I was never able to actually get my localhost XCSoarMapGen app to run under debug control – rats!

However, what I did get from the tutorials was the fact that the tutorials both referred to Flask (a software tool I’d never heard of before), while Folken’s XCSoarMapGen app uses CherryPy (another software tool I’d never heard of before). So this sent me off on another wild-goose chase through the internet to learn about debugging with CherryPy.

After another hundred years or so of web searches and non-working tutorials, I never figured out how to actively debug a CherryPy app, but I did figure out how to insert print (actually cherrypy.log()) statements into the code and see the results by looking at the ‘error.log’ file produced by (I think) CherryPy. Here’s the ‘parse_seeyou_waypoints()’ function in the code:

And here’s some output from ./error.log:

So, the run/debug/edit/run cycle is:

  • Make changes to the source code, insert/edit cherrypy.log() statements
  • Rebuild the affected files and restart the XCSoarMapGen website at localhost:9090 on my linux box with ‘sudo docker-compose up --build’
  • Reconnect to the error log output with ‘sudo docker exec -it mapgen_mapgen-frontend_1 bash’ followed by (at the #prompt) ‘tail -f ./error.log’

Here’s the startup command and resulting output:

And here’s the commands (in a separate terminal) to start the logging output:

So now I can run the web app at localhost:9090 on my linux box, and watch the action via cherrypy.log() statements – cool!

After a while I had narrowed down my search to the ‘parse_seeyou_waypoints(lines, bounds=None)’ function in the ‘seeyou_reader.py’ file (repeated here for convenience).

Here’s the email I sent off to Folken:

After sleeping on this for a while, I realized there were two other mysteries associated with this file.

  • I noticed the waypoint printout from the temporary for loop at the start of the function showed the letter ‘b’ prepended on each waypoint line, and that isn’t what’s actually in the waypoint file. I have no idea how that letter got added, or even if it is real (could be some artifact of the way linux handles string output) – it’s a mystery.
  • I realized there was a disconnect between the syntax of the call to ‘parse_seeyou_waypoints()’ in ‘parser.py’ and the actual definition of the function in ‘seeyou_reader.py’. In parser.py the call is ‘return parse_seeyou_waypoints(file)’, but in seeyou_reader.py the definition is ‘def parse_seeyou_waypoints(lines, bounds=None):’ So parser.py is sending a file object, but seeyou_reader.py is expecting a ‘lines’ object (presumably the list of lines read in from the waypoint (.CUP) file).

Just to make sure I wasn’t blowing smoke, I did a ‘git grep parse_seeyou_waypoints’ in the repo directory and got the following:

This shows there is exactly one call to parse_seeyou_waypoints(), so it’s clear that what is being sent is not what is expected. I’m not sure that this problem is the reason that waypoint files aren’t being processed, but it sure is a good bet.

So, what to do? It looks like both the parse_winpilot_waypoints() and parse_seeyou_waypoints() functions are called with a ‘file’ parameter but expect a ‘lines’ parameter, so it’s probably best to do the file->lines conversion in parse_waypoint_file(). Just as a sidenote, I noticed that ‘parse_winpilot_waypoints()’ doesn’t include a ‘bounds=None’ default parameter – wonder why?

08 December 2024 Update:

Well, I ran around in circles for quite a while, trying to get my head around the issue of ‘mixed signals’ – where the call from parser.py is:

but the definition of parse_seeyou_waypoints() in seeyou_reader.py is

To add to the mystery, it is apparently OK to treat the ‘lines’ argument as a list of strings, so:

and this loop correctly prints out all the lines in the selected .CUP file (albeit with a leading ‘b’ that I can’t yet explain). So clearly there is some Python magic going on where a ‘file’ object gets turned into a ‘lines’ object on-the-fly!
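
Actually, the ‘magic’ is that a Python file object is itself an iterable of lines, so code written for a list of lines also accepts an open file. And when the file is opened in binary mode, each line comes back as a bytes object, whose repr prints with a leading ‘b’. A quick demonstration (the waypoint lines here are made up):

```python
# A file object iterates line-by-line, just like a list of lines.
# Opened in binary mode, each line is a bytes object -- hence the
# leading 'b' in the log output.
import io

fake_file = io.BytesIO(b'"Ajdovscina",AJDOV,SI\n"Bovec",BOVEC,SI\n')
lines = list(fake_file)             # file object -> list of lines
print(lines[0])                     # prints with the b'...' repr
assert isinstance(lines[0], bytes)  # the source of the mystery 'b'
```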

09 December 2024 Update:

To try and clear up the ambiguity between ‘file’ and ‘lines’, I placed the following code into the ‘parse_waypoint_file(filename, file=None)’ function definition in parser.py:

When I refreshed the web app, I immediately got the printout of all the lines in the file, even though the ‘Waypoint File:’ box showed ‘no file selected’. I suspect this is because the file information from the last selection is cached.

I closed and re-opened the web site (‘x’ed out the tab and then retyped ‘localhost:9090’ in a new tab), and then re-started the web app. This time the only readout was ‘At the top of the index.html function, with params = {}’.

Then I entered a map name and my email address, clicked on the ‘Browse’ button, and selected the ‘Slovenia.cup’ file. This did not trigger the code in parser.py.

Then I clicked on the ‘Generate’ button and this triggered the ‘lines’ display. Then I pressed ‘F5’ to regenerate the page, and the line-list printout was triggered again. So, I think it’s clear that the site caches entry data.

OK, I think I might have gotten somewhere; so the line I added in parser.py/parse_waypoint_file(filename, file=None):

works as expected, and loads the ‘lines’ object with a list of lines in the file. In addition, I could now change the call to ‘parse_seeyou_waypoints’ as follows:

thereby (at least for me) getting rid of the headache I experienced every time I looked at the disconnect between the way parse_seeyou_waypoints() is called from parse_waypoint_file() and the way it is defined in seeyou_reader.py. When I run this configuration, I get the full waypoint list printout in both parse_waypoint_file() and parse_seeyou_waypoints() – yay!

However, we are still left with the original problem, which is that .CUP (and probably .DAT) files aren’t getting processed properly. I am now starting to believe that the ‘b’ character prepended on all the lines read in from the waypoint file is actually there. If that were in fact the case, that might well explain why the processing loop quits after line 1 (or maybe after line 2 – not entirely sure). When viewed in a normal text viewer like Notepad++ in windows, or in https://filehelper.com/view, I see:

but when I use ‘cherrypy.log(‘line%s: %s’ % (wpnum, line)) in a loop to display the lines in a linux terminal window, I get:

10 December 2024 Update:

So, the problem with the leading ‘b’ turned out to be an issue with the binary-to-ascii decoding used in the cherrypy.log() statements. Because no decoder was specified, I got no decoding – hence the leading ‘b’. Once I bought a clue from yet another StackExchange post, I prefixed my log statements with something like

The ‘ISO-8859-2’ decoder was used because some waypoints use eastern European accent marks.
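
For example (a made-up waypoint line, not one from the actual file):

```python
# Decoding the raw bytes before logging removes the b'...' repr and
# renders accented characters correctly.  ISO-8859-2 (Latin-2) covers
# the Eastern European letters that appear in some waypoint names.

raw = b'"Kobari\xe8 field",KOB,SI\n'   # 0xE8 is c-with-caron in Latin-2
text = raw.decode('ISO-8859-2').rstrip('\n')
print(text)   # -> "Kobarič field",KOB,SI
```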

At the end of the day today, I had worked my way through a number of other minor and not-so-minor problems (most caused by stupidity on my part), and arrived at the point where the code is properly processing the Slovenia3.cup file, at least as far as extracting field elements from the lines, as shown below:

As shown in the printout for row 26, the eastern European accent marks are being handled properly. The ‘parse_seeyou_waypoints()’ function responsible for this is shown below. Note that not all of this function is enabled yet – that’s next!

11 December 2024 Update:

I uncommented the rest of the parse_seeyou_waypoints(lines, bounds=None) function, and except for an oddity regarding the default parameter ‘bounds’, it all seemed to function OK. When parse_waypoint_file() calls parse_seeyou_waypoints() it doesn’t use the ‘bounds’ argument, so it is set to ‘None’ at the start of that function. And, there is nothing in the function to initialize ‘bounds’ to anything, so I’m not sure why it was included in the first place. The ‘bounds’ object is referenced twice, as follows:

but since ‘bounds’ is never initialized and is always ‘None’, these two ‘if’ statements always fail. This is definitely fishy – maybe the original coder had something in mind that never got included?

As I understand things so far, the purpose of ‘parse_seeyou_waypoints()’ is to create a list of waypoint objects and return it to the calling function with

Since the calling code in ‘parse_waypoint_file()’ is:

I think this means that whatever calls ‘parse_waypoint_file()’ receives the now-filled waypoint list. This appears to be the calling code in ‘server.py’:

So server.py calls parse_waypoint_file() with the filename and the file object (pointer?). parse_waypoint_file() extracts a list of lines from the file, then passes that list (for the .CUP file case) to parse_seeyou_waypoints() and gets a waypoint list back, after which the ‘get_bounds()’ method of the waypoint list class is called. The ‘get_bounds()’ method is declared in the ‘WaypointList’ class as follows:
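
The actual method isn’t reproduced here, but the idea is just a min/max sweep over the waypoint list. A minimal Python equivalent (my sketch, with an assumed (lat, lon)-tuple layout rather than the repo’s real waypoint class):

```python
# Minimal equivalent of the WaypointList.get_bounds() idea: sweep the
# waypoint list once, tracking min/max latitude and longitude.  The
# real class and attribute names in the repo may differ.

def get_bounds(waypoints):
    """waypoints: iterable of (lat, lon) tuples in decimal degrees."""
    lats = [lat for lat, lon in waypoints]
    lons = [lon for lat, lon in waypoints]
    bounds = (min(lats), max(lats), min(lons), max(lons))
    print('bounds (latmin, latmax, lonmin, lonmax):', bounds)
    return bounds
```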

When instrumented as shown above to print out the final min/max lat/lons, I got the following:

Comparing the above figures with the actual Slovenia map in Condor2, they cover the measured map extents quite nicely – yay!

12 December 2024 Update:

While looking through the code in server.py, I ran across the following ‘if’ statement:

The intent of the above boolean expression is to warn the user that the extension of the selected file is neither ‘.DAT’ nor ‘.CUP’. So, if the extension is .DAT then the expression is false immediately, and therefore the warning is not emitted. If the extension isn’t .DAT, then the second half of the expression is evaluated, and it is true only if the file extension is NOT .CUP. So, it works but it sure is confusing. A much better way of writing this would be
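
My exact replacement isn’t reproduced here, but one clearer form (a Python sketch with my own names, not necessarily what the repo uses) is:

```python
# "Warn unless the extension is .DAT or .CUP", written so the intent
# is obvious: one negated membership test instead of chained
# inequality comparisons.

def needs_warning(filename):
    return not filename.upper().endswith(('.DAT', '.CUP'))

assert not needs_warning('Slovenia3.cup')
assert not needs_warning('Slovenia3.DAT')
assert needs_warning('Slovenia3.txt')
```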

At the end of the day I had things running pretty well, and I added a line in server.py to print out the min/max lat/lon values calculated from parsing the ‘slovenia3.cup’ file, resulting in the following output:

18 December 2024 Update:

At this point, I have the website working to the point where it can successfully parse either a .CUP file or a .DAT file and print the bounds in the ‘error’ field on the web page. In order to get to this place, I actually wrote a small python script to convert back and forth between .CUP and .DAT formats, so that I could test MapGen using the same exact waypoints in both formats. The calculated bounds, of course, should also be identical.
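
The converter script itself isn’t shown here, but the heart of any .CUP <-> .DAT conversion is re-encoding the coordinates. A sketch of the .CUP-side latitude decoding (my own code, assuming the usual ddmm.mmm-plus-hemisphere field; longitude is the same idea with three degree digits):

```python
# SeeYou .CUP files store latitude as ddmm.mmm plus a hemisphere
# letter, e.g. '4623.456N'.  Decode that to signed decimal degrees.

def cup_lat_to_decimal(s):
    degrees = int(s[0:2])
    minutes = float(s[2:-1])
    value = degrees + minutes / 60.0
    return -value if s[-1] in 'Ss' else value

print(cup_lat_to_decimal('4623.456N'))  # ~46.39 degrees
```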

Here’s the bounds output from Slovenia3.dat:

01 January 2025 Update:

I figured out how to create a ‘Pull Request’ back to the original mapgen repo, and Folken actually looked at it – wow! After a few back-and-forths, I made the changes he requested and made a new commit to my repo (which he can now see via the PR).

STM32 Firmware Debug Study

Posted 10 November 2024

Last month I tried ‘Klipperizing’ my Flashforge Creator Pro 2 (FFCP2) IDEX 3D printer, and it was an unmitigated disaster. After uploading the Klipper firmware, the printer refused to boot up, and I eventually had to buy and install a new motherboard to regain functionality. Since then I have discovered that my original motherboard seems to be undamaged, but I can’t get it to boot into the FFCP2 firmware.

So, I have embarked on a quest to figure out how to restore FFCP2 functionality to my original STM32-based FFCP2 motherboard.

I started on this journey with one of the ‘blue pill’ devices I happened to have in my parts drawer. They are generally based on the STM32F1 series, so hopefully not different enough from the STM32F407 to matter.

To start with, I connected up my laptop to the ‘blue pill’ board using a ST-LINK clone and was able to program it via VS2022/VsMicro with the ST-LINK upload option selected, as shown in the following screenshot (note – this was done with the ‘blue pill’ jumpers set as shown in this photo):

And here is part of the ‘verbose’ build output:

I also tried some of the different upload modes advertised in the vMicro menu, as shown in the following conversation from the vMicro forum:

After receiving this input, I installed the JRE, confirmed it was actually there, and then tried the ‘STM32DuinoBootloader’ option again using the USB connector. It still failed, with the output shown below:

After passing this along, it was suggested I try this trick again, but after launching VS2022 in ‘Administrator’ mode. This made no difference – got the same error.

After some more thought and discussion, I came to the conclusion that the reason this was failing is because the ‘blue pill’ devices don’t have any (or at least, the proper) bootloader installed. This situation is discussed here, and also here

As an experiment, I changed the jumper back to the default location (same side for both jumpers) and tried again – same (bad) result.

After this, I also tried the ‘HID Bootloader 2.0’ upload method, also using the USB connector. It failed, with the following output:

This all led me to believe that my ‘blue pill’ devices either have no bootloader loaded, or have the wrong version.

Back to the books. From the original vMicro forum reply I went to their ‘STMicroelectronics STM32 Overview‘ page, and from there to the stm32duino ‘Arduino_Core_STM32‘ and Serasidis ‘STM_32_HID_Bootloader‘ github sites.

Upload methods site:

I had real trouble correlating the information on this site with my observations when working with my ‘blue pill’ devices. Apparently when I was able to program the device with the ST-LINK adapter I was using the ‘SWD’ method, described on the Overview site as:

12 November 2024 Update:

Based on what I have learned so far, STM32* MCU’s aren’t naturally compatible with the Arduino ecosystem. However, there are several workarounds that allow Arduino programs to work on STM32 devices. There are at least two hardware-facilitated methods for uploading Arduino programs to STM32 devices: one is by using an ST-LINK device (STM or ‘clone’) connected to a ‘Serial Wire Debug’ (SWD) port if one is available, and another is by using an FTDI (Future Technology Devices International) USB-serial adapter connected to an MCU serial port.

In addition to the ‘hardware-facilitated’ workarounds, there are at least two different software implementations that allow Arduino programs to be uploaded via the USB port. Both of these require that a ‘bootloader’ be installed into the STM32* MCU. One implementation is the ‘Maple’ bootloader, which comes in two flavors – the ‘original Maple bootloader’ and a modification of the original Maple bootloader called ‘STM32duino-bootloader’, or ‘bootloader 2.0’.

Serial Adaptor Method

The FTDI (serial adapter) method requires that the STM32* MCU be restarted in ‘native bootloader’ mode before attempting to program the device. This is accomplished (in the case of ‘blue pill’ devices) by moving the BOOT0 jumper from the ‘0’ setting to the ‘1’ setting, as shown below, and then pressing and releasing the RESET button:

Then the program can be uploaded via the Arduino IDE (in my case I’m using the Visual Studio 2022 Visual Micro extension for Arduino, so my ‘look and feel’ will be different).

I found a really good tutorial for this ‘serial’ mode here. It was created in 2018, so it is a bit out of date with respect to the state of development of arduino-compatible bootloaders allowing program upload via USB, but is by far the clearest, most readable treatment of FTDI-based serial adaptor program uploads. I copied the wiring diagram shown below from this tutorial, in case it goes away at some point:

The process for upload using Arduino and a serial adapter described here assumes you have the Arduino IDE installed and have the STM32 family of boards installed in the Arduino IDE. The procedure for installing the board information varies depending on the Arduino IDE version (I’m using Arduino 2 with the Visual Micro extension to Visual Studio 2022).

  • Wire up the blue pill in accordance with the above diagram, and connect a USB cable from the adaptor to your PC. Note the port number associated with this connection
  • Select the ‘serial’ upload method and the port number from above, as shown in the screenshot below
  • Move the blue pill BOOT0 jumper from ‘0’ to ‘1’ and press/release the RESET button. This places the MCU in ‘Program’ mode using the built-in uploader.
  • Compile/Upload the desired Arduino program. I strongly suggest you start with a simple ‘blink’ program. You should see the upload progress from 0 to 100%. If you don’t see upload progress, you have something wrong.
  • Move the BOOT0 jumper from ‘1’ back to ‘0’ and press/release RESET. Moving the jumper places the MCU back in ‘user’ mode and pressing/releasing RESET will start your user program running. Note that in my experience, the user program will start right away, even with the BOOT0 jumper in the ‘1’ position, but you must actually move the jumper or the next time you cycle power or press/release the RESET button the MCU will come back up in ‘Program’ mode and your user program will not run.
‘Serial’ upload method and ‘COM15’ selected for program upload

The output from a successful compile/upload cycle is shown below:

13 November 2024 Update:

OK, now I have learned how to upload Arduino programs to my ‘blue pill’ STM32F103C-based boards. I can program them using an ST-LINK adapter, and I can program them using an FTDI serial adapter. The serial option relies on STMicro’s built-in ROM bootloader to transfer the program binary to flash memory, while the ST-LINK option writes flash directly through the SWD debug port.

After successfully programming both my ‘blue pill’ devices, I decided to try my luck with my 3D printer motherboard. This board has both serial (UART) and SWD (ST-LINK) connectors, and I chose the SWD connector option. My first try at this failed, at which point I used vMicro’s Visual Micro Explorer to check for a STM32F40xx board selection, found the ‘STM32F4xx’ selection, and installed it.

This then shows up as ‘STM32 Discovery F407’ in the board selection entry field.

With this configuration, I was able to program a variation on my blue pill ‘blink’ program to direct a square wave to the buzzer on the motherboard. Amazingly, this worked like a champ, proving that my motherboard has not been bricked at all – Yay!!

Here’s the compiler/uploader output:

Looking through the above output, I realized that this line:

which points to ‘stlink_upload.bat’ shown below:

is where ‘all the magic’ happens. After the user program is compiled into a binary (in this case ‘BluePill.ino.bin’), this file is passed to an open-source version of STMicro’s ST-LINK utility, which then writes the binary to STM32 flash memory starting at location 0x8000000.

I think this means that I could just as easily use ST-LINK on my PC to upload BluePill.ino.bin to 0x8000000.

YESSSSS! Using STM’s ST-LINK on my laptop (for some reason I can’t get STM32CubeProgrammer to work) I uploaded BluePill.ino.bin to the FFCP2 board, and it worked!

Next, I tried uploading the original FFCP2 firmware onto the device, hoping that I would then have *two* working FFCP2 motherboards. Unfortunately, although the upload succeeded, and I was able to verify that the contents of the MCU’s flash memory were identical to the binary file I got from FlashForge Tech support, I saw no indication that the program was actually running (even though no actual printer hardware was connected, I had expected that at least the display and the buzzer would be active).

Alas, now I can no longer connect to the board using ST-LINK 🙁 I fear my journey is over, and not in a good way 🙁🙁🙁🙁

Starting Over with Windows 11

posted 04 August 2024

I recently purchased a new Dell XPS15-9530 with Windows 11 installed, and I have spent the time since that purchase trying to get Windows 11 to work like I want it to, and Windows 11 has spent that same amount of time trying to get me to work like it wants me to – GRRR!

Here are some of the things I want to change from the basic Win 11 Home package I received.

  • Win 11 photo viewer sucks, and the photo viewer from Office 2010 rocks. In the old viewer, I can move from photo to photo with the left and right arrows, and I can manipulate multiple photos at the same time.
  • The right-click context menu in the file explorer view now has multiple pages of context menu items, most of which aren’t useful. The ‘preview’ option, which I use a lot, is buried at the bottom of the second page.
  • Win 11 insists on storing my files in the ‘OneDrive’ (cloud) folder, and I hate that. Even if I ‘unlink’ my PC from ‘OneDrive’, it still tries to put stuff on the cloud – grr. See this link for information on how to adjust this
  • I now have multiple ‘Documents’ folders with multiple icons, and none of them point to my Documents folder.
  • Win 11 insists on using the first 5 characters of my email address as the name of the primary user folder (‘C:\users\[primary user name]’), and I want it to use my first name for this. See this link for some information on this. Also, this link seems to imply that I might be able to ‘change my primary alias’ in my Microsoft account (which might then change the default user account?). I was able to create a fake Outlook account (Frankabcede@outlook.com) and (although I didn’t do it this time) make it the primary alias. In theory, if I do this and then start over with Windows 11, I should wind up with ‘Frank’ as my default user account, and C:\users\Frank as my default user folder.
  • Win 11 insists on saving screenshots taken with Shift-Windows-S key combination to a screenshots folder in C:\Users\paynt\OneDrive\Pictures\Screenshots, even though I have unlinked my PC from OneDrive.
  • The private LAN connection between my old and new PC’s seems to come and go with the wind. At one point I got it working by setting it to ‘not use passwords’ or something like that.

So, for the Nth time, I’m starting over, and this time I plan to document all the steps, so when I have to do this again (on the (N+1)th redo), I’ll have a little bit better roadmap. To prepare for the ‘redo’, I printed out the list of apps currently installed on the new PC, as shown in the screenshot below:

4 August 2024 list of apps on new Win 11 PC

When I look at the ‘Home’ file explorer display on my old PC, I see the following:

‘Home’ display on my old PC

This shows that Downloads, Pictures, Videos, Desktop, Music and Documents all have ‘Stored Locally’ shown – so apparently I got that done correctly on my old PC. When I do the same thing on my new PC, I get the following:

‘Home’ display on new PC

Resetting to factory defaults while keeping personal files:

I followed the steps shown in this link to restore to factory defaults while keeping personal files intact. Unfortunately, when it came back up again, it still had ‘C:\users\paynt’ as the default folder, along with another one labelled Frank.Frank_9350, wherever the heck that came from.

Trying again, but this time I chose the option to download the OS from the web rather than restoring from a local copy.

This didn’t work either, so I elected to reset from web download, including ditching all my files and accounts (everything is backed up on my NAS, so shouldn’t be an issue).

On this run-through, I opted to not restore from my previous PC, instead opting to ‘set up as a new PC’. We’ll see how this goes. Also, I used my new ‘Frank_Paynter@outlook.com’ as my email address for my Microsoft account. Hopefully that will result in ‘C:\users\Frank’ (first 5 characters of email address) as my default user folder

Decided to skip ‘Let’s customize your experience’ and ‘Use your phone from your PC’. Accepted ‘Always have access to your recent browsing data’, skipped PC Game Pass, and then it went into updates.

Success! (with a small ‘S’). The default user folder is named ‘Frank’ instead of ‘paynt’, and there is only one of them. Also, Desktop, Downloads, Documents, Music, and Videos are ‘Stored Locally’. Unfortunately, ‘Pictures’ are still stored on OneDrive.

So, I found this:

How do I Unsync a picture folder from OneDrive?

Open OneDrive settings (select the OneDrive cloud icon in your notification area, and then select the OneDrive Help and Settings icon then Settings.) Go to the Account tab. Select Choose folders. In the Choose Folders dialog box, uncheck any folders you don’t want to sync to your computer and select OK.

And UNchecked all the folders. The first time I tried this, I couldn’t UNcheck the Pictures folder, and there was a message “we are unable to stop syncing some folders”. After I searched on this, I found another page that said:

May 11, 2021 — Can’t stop syncing folder · Right-click the OneDrive blue cloud icon in the system tray, click Settings. · Go to the Backup tab and click Manage Backup.

So, I did that and told Windows to stop backing up any folders to OneDrive. Then I was able to UNcheck the pictures folder (and all the other ones too), so hopefully I am almost fully weaned from OneDrive at this point. Curiously, when I went back to the ‘Choose Folders’ page to verify that everything was still UNchecked, it took a while (a minute or two) for the page to come up. When it did, however, everything was still UNchecked – Yay!

And, another success! When I took a screengrab of the ‘Choose Folders’ page, the storage location turned out to be “C:\Users\Frank\Pictures\Screenshots” – Yay Yay! I also confirmed it’s not actually necessary to bring the screengrab up to center screen and select ‘Save’, as screengrabs are automatically saved to the above folder – Yay Yay Yay!

Next, I unlinked this PC from OneDrive, using the procedure below:

To unlink your OneDrive account from a PC, you can do the following:

  1. Select the OneDrive cloud in your notification area to open the OneDrive pop-up
  2. Select the OneDrive Help and Settings icon
  3. Select Settings
  4. Go to the Account tab
  5. Select Unlink this PC

This actually worked, and now the OneDrive (cloud) icon has disappeared from the left side of File Explorer entirely – Yay Yay Yay Yay!

Windows 11 Pink Border on File Explorer

Apparently, Windows 11 has a weird sense of humor, as I have found that the border of the file explorer (and maybe others) dialog box is colored pink when it is selected, and gray when it isn’t. I hate the pink color, and naturally (because Windows 11) it can’t be changed! I found this page, where it says:

Windows 11 File Explorer uses Mica effect in the titlebar and toolbar and that’s why we can’t set any color in the titlebar using Personalization settings. In Windows 10, we could set any color in File Explorer’s titlebar by changing the accent color in Personalization settings. So, following the steps on this page, I downloaded ExplorerPatcher and tried to use it to get rid of the pink border around file explorer windows, but either Windows 11 or ExplorerPatcher has changed, as this trick didn’t work – Rats!

Among other posts on the i-net, I found this one complaining about ‘pink everywhere’. The response by a ‘Microsoft expert’ contained a link to a ‘known problem in win 11’, but the link is broken. Otherwise there was a long dissertation about display drivers (which I ignored, because I haven’t changed the drivers on my laptop and they worked fine with the original win 11 install).

Finally, while just randomly changing things on the color settings dialog, I switched ‘Transparency effects’ OFF, and voila! The pink border around file explorer windows was removed! Hallelujah! Here’s a screenshot of this particular dialog with the ‘Transparency effects’ switch highlighted:

File Sharing on Local Network:

Before I reset my new laptop, I had file sharing (somewhat) working between my new laptop, my old laptop, and my wife’s laptop, so I was hopeful that I could get it working again. In ‘Advanced Network settings’ I enabled ‘Network discovery’ and ‘File and printer sharing’ for private networks (and disabled them for public ones). I also disabled the ‘Password protected sharing’ option and enabled ‘Public folder sharing’. Here’s a screenshot of the setup:

New laptop sharing setup

Then I verified the above settings were the same for my old laptop (they were). In File Explorer I navigated to the C:\Users\Frank\Documents folder and in ‘Advanced Network Settings’ set it to share with full control by ‘Everyone’ as shown below:

Then I did the same thing with C:\Users\Public.

When I went back to my old laptop to verify sharing, I noticed that the ‘Documents’ folder wasn’t shared, but the ‘Public’ folder was properly shared with full control for ‘Everyone’. That might explain why I was having problems with local network sharing before. In any case, I set up sharing for ‘Documents’ and ‘Public’ the same as on the new laptop. Then I restarted both laptops.

When the laptops came back up, I double-clicked the network icon on both. On my old laptop I could see the NAS and Jo’s laptop, but not my new one. When I did the same on the new laptop, I couldn’t see any other devices, but there was a popup message at the top of the explorer window to the effect that network discovery had not been turned on, and to ‘click here’ to do so. I clicked, and after that I could see all the devices on my local network. I’m not sure why this happened, as I was sure I had already enabled network sharing, as shown in the ‘Advanced Network Settings – Advanced Sharing Settings’ screenshot above (maybe I didn’t click on OK?).

So, after rebooting both laptops, I can access folders on my new laptop from my old laptop, but not the other way around. I successfully copied a ~2MB folder from old to new Documents folders, but I can’t go the other way – strange. I worked through a ton of potential fixes for this, all without success. So, I’ve decided to bend to the inevitable and just go with the flow here.

Applications:

Windows Office – installed OK

Upgrade to Win 11 Pro – Per advice from CoPilot, navigated to Settings->System->Activation->Change Product Key -> Click on ‘Change’ -> enter generic Windows 11 Pro product key (VK7JG-NPHTM-C97JM-9MPGT-3V66T), and clicked OK. That was all there was to it. First time I’ve actually benefited from AI!

Activate Application Guard: Done

AJC Active Backup & AJCSync4: Done

Arduino & Teensyduino: Following this Teensy page, I installed Arduino IDE 2.3.2, copied in the URL, and then installed the Teensy-specific software as described. Everything seemed to go well, with the last line of the log = ‘Platform teensy:avr@1.59.0 installed’

Bridge Composer: Downloaded the 30-day trial, Installed and activated using emailed activation key

CopyTransControlCenter/CopyTransPhoto: For uploading videos from wife’s iphone – Done

DipTrace non-professional Standard License: Done

Movavi Video Editor 2024: Installed and activated using emailed activation key, but I don’t like the dark background – fix later

Notepad++: Done

P-touch Editor: Done

Prusa Slicer 2.8.0: Done

TeraTerm: Done

TrackIR5: Tried to install but was stopped by McAfee. Uninstalled McAfee – Done

Visual Studio 2022 Community Edition: Done

Wixel Configuration Utility: Done

Get Legacy Office Photo Viewer Back:

This site has the procedure for getting the old photo viewer back as a stand-alone app. Following the link to this site, I downloaded Microsoft SharePoint 2010 installer and launched it. Then I selected ‘Customize’. Then I set all options to ‘Not available’ except for ‘Microsoft Office Picture Manager’, which I set for ‘Run from My Computer’ (see screengrab below).

All options except ‘Microsoft Office Picture Manager’ set to ‘Not Available’

Then I clicked on ‘Install Now’ to install Picture Manager as a stand-alone app.

The next step is to restore the ‘Preview’ context menu option for photos. I found this site:

Procedure for restoring ‘Preview’ option to context menu for photos

However, I found that ‘Default Apps’ had been moved to Settings -> Apps -> Default apps. From there select ‘Photos’, and then set ‘Microsoft Office 2010’ as the default app for each photo extension (.jpeg, .jpg, .png). This worked great – and as promised, the ‘Preview’ option appeared on the context menu (unfortunately it appeared on the ‘second page’, so you have to first select ‘Show more options’ to see it).

Restore Windows 10 context menu with ‘Preview’ item near top:

Now that I have the old Office Photo Manager back, the next trick is to move the ‘Preview’ context menu item to the ‘front’ page of the context menu. After some research, it appears that the easiest way to do this is to simply restore the Windows 10 context menu style. This involves adding a key to the registry. There are a number of ‘HowTo’ videos on this – pick one. After editing the registry, this is my new context menu for photos:

Windows 10 context menu, with ‘Preview’ 3rd from top

End Game:

At this point I think I have things pretty well recovered, without all the crap about multiple ‘Document’ folders and wrong-named user folders. I’ll let this play out for a while and make any other adjustments as necessary. Hopefully I can now settle into my new laptop without cringing every time it opens a File Explorer window.

Stay Tuned,

Frank

Untangling gl_Left/Rightspeednum global/local variables

Posted 30 May 2024

While looking through the code for another reason, I discovered that I have committed the mortal sins of using the same name for a global variable, a local variable and a function definition parameter. Originally I defined global variables gl_Leftspeednum & gl_Rightspeednum thusly:

But then some years later in my code I see:

They’re everywhere! yikes!

So, what to do? The original (bad) idea was to have these variables ‘global’ so any part of the code could ‘see’ the current motor speeds. This was BAD because it also meant that any part of the code could change the motor speed (even if it shouldn’t), and figuring out who did that would be a nightmare. This is where I should have started thinking about building a ‘motor’ class to hide all this – but I didn’t, so….

Also, using a global symbol name in a function definition is at least moronic if not suicidally stupid – does that overwrite the original declaration? To add insult to injury, the function definitions above use ‘int’ as the type rather than ‘uint16_t’, so does that mean that motor speed can be negative, but just inside that function – ouch, my head hurts!

Alright – since I didn’t do the right thing and encapsulate this stuff in a motor class, and I don’t want to have to rewrite the entire 7K+ line program (at least not yet), I need to figure out a short-term non-idiotic fix (or maybe just close my eyes and have another beer?)

OK, so the functions involved in this debacle are:

  • void SetLeftMotorDirAndSpeed(bool bIsFwd, int speed)
  • void SetRightMotorDirAndSpeed(bool bIsFwd, int speed)
  • void RunBothMotors(bool bisFwd, int gl_Leftspeednum, int gl_Rightspeednum)
  • void RunBothMotorsBidirectional(int leftspeed, int rightspeed)
  • void RunBothMotorsMsec(bool bisFwd, int timeMsec = 500, int gl_Leftspeednum = MOTOR_SPEED_HALF, int gl_Rightspeednum = MOTOR_SPEED_HALF)
  • void RunBothMotorsMsec(bool bisFwd, int timeMsec, int gl_Leftspeednum, int gl_Rightspeednum)
  • void MoveReverse(int gl_Leftspeednum, int gl_Rightspeednum)
  • void MoveAhead(int gl_Leftspeednum, int gl_Rightspeednum)

SetLeft/RightMotorDirAndSpeed(bool bIsFwd, int speed):

This declaration should probably be (bool, uint16_t) as negative speed values aren’t allowed. I changed the speed declaration from ‘int’ to ‘uint16_t’ and the program still compiles OK. The ‘speed’ argument gets passed to ‘AnalogWrite’ which is declared as AnalogWrite(int pin, int value).

RunBothMotors(bool bisFwd, int gl_Leftspeednum, int gl_Rightspeednum):

RunBothMotors() is called just once in the code, by RunBothMotorsMsec(). RunBothMotorsMsec() in turn is called just four times – three times by HandleExcessSteervalCase() and once by RunToDaylight(). In all four cases the speed arguments are positive constant integers <= 1000 (Teensy analog output resolution is set to 12 bits, so the full range is 0–4095). It looks like RunBothMotors() and RunBothMotorsMsec() should declare their speed arguments to be uint16_t.

RunBothMotorsBidirectional(int leftspeed, int rightspeed)

RunBothMotorsBidirectional(int leftspeed, int rightspeed) just calls SetLeftMotorDirAndSpeed(); however, the speed arguments can be positive or negative, so the ‘int’ declaration is required in this case. The sign of each speed input argument is converted to the appropriate direction flag value, and a negative input speed is converted to a positive value for the SetLeftMotorDirAndSpeed() call.

RunBothMotorsMsec(bool bisFwd, int timeMsec, int gl_Leftspeednum, int gl_Rightspeednum)

All this function does is call RunBothMotors(), then delay for the requested amount of time, then stop the motors. Note that RunBothMotors() does not check the speed arguments for range or sign.

MoveReverse(int gl_Leftspeednum, int gl_Rightspeednum):

MoveReverse() is used extensively in ‘CheckForUserInput()’, but only twice elsewhere (both times in IRHomeToChgStn()).

MoveAhead(int gl_Leftspeednum, int gl_Rightspeednum):

Similar to MoveReverse(), but used more outside ‘CheckForUserInput()’. Once in ExecuteRearObstacleRecovery(), once in TrackLeftWallOffset(), once in TrackRightWallOffset(), once in IRHomeToChgStnNoPings(), once in IRHomeToChgStnNoPingsPID(), twice in IRHomeToChgStn().

int gl_Leftspeednum, int gl_Rightspeednum:

These symbols are everywhere in the code, in a global variable declaration, in the signature of many of the motor functions, and in the code itself as local variables in the functions that have those symbols in the signature.

As an experiment I commented the global uint16_t definitions out and re-compiled. I got a bunch of ‘was not declared in this scope’ errors, but they were all like the following snippet:

In the above code a local int16_t variable is declared because the result could be negative. Then the local variables are constrained into the range (0-255), loaded into the gl_Left/Rightspeednum global vars, and also passed to MoveAhead(). This only occurs in the two TrackLeft/RightWallOffset() functions.

gl_Left/Rightspeednum global vars are also used in the ‘OutputTelemetryLine()’ function.

So it looks like the usage in the above snippet is actually OK. The global vars wind up being loaded with the latest left/right speed values just before those values are sent to the motor driver. The usage in the telemetry output functions is also OK, as they just print the current left/right speed values.

gl_Left/Rightspeednum used in function declarations:

I re-educated myself on the fact that formal function declarations don’t actually need parameter names – just the type declarations, so:

could just as easily be written:

so maybe my use of the gl_Left/Rightspeednum names for these parameters wasn’t quite so scary bad as I thought. Still, defining the same symbol name in two different contexts as two different types (uint16_t and int) is demonstrably a bad idea, even if one of the symbol usages is ignored by the compiler (after all, this usage is what resulted in my current freakout). I changed these to ‘uint16_t leftspeednum’ and ‘uint16_t rightspeednum’, in both the formal declaration at the top of the program (required for default parameter declaration) and the ‘inline’ declaration.

I wound up changing the following lines:

In addition, there are a number of places where the output from the PIDCalcs() function is added to or subtracted from the current speed to produce the next speed value, but the initial adjustment is to a ‘uint16_t’ variable. This is problematic because the initial adjustment can result in a negative value being loaded into a uint16_t variable, with unexpected (if still well-defined) behavior. The fix for this is to change the type of the ‘local’ variable to ‘int’ vs ‘uint16_t’ to accommodate the potential for negative values, and only load the result into the global ‘uint16_t’ variable when it is certain the result is positive. This resulted in the following changes:

After all these edits, the program still compiles cleanly. As to whether or not it behaves cleanly, that is still a very open question. Only time will tell!

Stay tuned,

Frank

Python Script for Challenging Invalid Voter Registrations

Posted 31 May 2024

The folks at TrueTheVote.org (the organization that used cellphone geotracking to expose widespread voter fraud during the 2020 election) put together a database to expose huge numbers of invalid voter registrations across the country. Most of these invalid registrations are due to the voter having moved out of their original voting district/county, but not removed by the responsible election board. While this seems pretty innocuous (and was, in earlier, less troubled times), this now represents a huge opportunity for fraud in the upcoming 2024 election.

Although the TTV folks have the data, they can’t do much about it without the help of concerned citizens who actually vote in those regions because local laws require that any voter challenge be raised by a voting citizen in that particular region.

So, TTV generated a website called ‘IV3’ which allows concerned citizens from anywhere in the U.S. to create an account and query the IV3 database for problematic voter registration records for their voting district/county. For instance, I live and vote in Franklin county, Ohio and my page on the IV3 site looks like this:

If I click on ‘View Active’, I get a page displaying the first record that matches the criteria, i.e. a voter still registered in Franklin county but who has since moved to an address outside of the County, as shown below:

If I want to challenge this voter’s registration in Franklin county, I would click on ‘Challenge this record’, which would display ‘Cancel’ and ‘Submit’ buttons as shown below:

Clicking on the ‘Submit’ button would remove the record from the ‘Active’ list and place it on the ‘Challenged’ list, which could then be exported in .CSV format for submission to the Franklin county board of elections.

It sometimes takes more than 100 seconds for the site to display a new record after each challenge submission, so this gets old pretty fast. After several days of plugging along while working on other things, I had managed to challenge about 600 records from the more than 42,000, a mere ‘drop in the bucket’. So, I started to wonder if I might be able to automate this a bit with a Python script; a web-bot of sorts.

After some research, I discovered a web-page automation API called ‘Selenium’ that could be called from a Python script, so I started learning how to use Selenium to do what I wanted. After the usual number of mistakes and appeals to StackOverflow for guidance, I got a working Python script together, as shown below:

Note that in order to use this script, you must have Python3 and the Selenium extension installed on your computer.

Even though I used ‘FranklinCountyOhioChallenges’ as the name of the main function, this script should be usable for any other location (or you can simply change the name, as long as the two occurrences in the script have identical names).

After getting the script working, I can now run the script to challenge any number of voters with a very simple command, as shown below:

On my windows system (and I’m pretty sure this holds for *nix systems as well) all I have to do to run another batch of the same size is to click the ‘up-arrow’ key once and then hit ‘Return’. If a different batch size is desired, it’s ‘up-arrow’, edit the batch size, then ‘Return’.

I have found that doing a batch size of 100 takes about 90 minutes, so I can do several of these during the day while working on other things, and then I generally do a batch of 500 overnight. This allows me to do at least 1000 or so each day, so it will still take me around 42 days to challenge all the 42K or so registered voters who have moved out of the county. Your mileage may vary, of course :).

Each time I get a thousand or so challenges done, I click on the ‘View My Challenges’ button on the main page, and then on the ‘Export’ button as shown below, to download the challenges into a .CSV file that is directly readable in Excel (or any other modern spreadsheet program). I then use Excel to print out the entire batch (using Portrait mode and scaling to ‘fit all columns on one page’). Then I fill out and sign the cover form required by the Franklin County Board of Elections, attach the printed-out challenge records, and physically submit the form and data to the BOE. As a courtesy I also email the .CSV file to the responsible officer there, and so far they seem to appreciate the effort.

22 June 2024 Update:

My script started failing on me a few days ago, and I couldn’t see why. After using the issue as my ‘going to sleep puzzle’, I realized I could go back to my old manual process and see if it worked. If it did, then something in my script was bad. If it failed, then something had changed on the iv3 website.

As it turned out, IV3 had added a new ‘View Moved and Registered’ button, and moved all the qualifying records (which, it turned out, was all of them) into the new database. So, when I clicked on my normal ‘View Active’ button, I got ‘No Records Found’, which of course also killed my script :(.

So, the fix was to direct my script to the new button instead, and then all was well. I have updated the above script to the new version.

Stay tuned,

Frank

Improved Pill/Caplet Dispenser

Almost three years ago I designed and fabricated some pill/caplet dispensers for the half-dozen or so prescription meds I have managed to accumulate over the last decade or so. A while ago, one of my prescriptions changed its tablet to a much smaller size, so I decided to update my design while fabricating a replacement dispenser.

Between the last project and this one I’ve been playing with OnShape, a web-based 3D CAD package, so I thought I would use it to see if I could do better than last time. I really like OnShape because it uses a 2D ‘sketch’ based design philosophy, which makes tweaks and/or modifications much easier – change a few 2D sketches, and the entire design changes along with it.

The previous design implemented a smooth collar that was a press-fit for the pill bottle cap, which turned out to be kind of clunky. This time I thought I might try implementing internal threads on the collar, so instead of a press fit it would simply screw on like the original cap, and I discovered that a ‘ThreadCreator’ extension exists for OnShape – neat!

So, I worked my way through the process and came up with the following design, available to anyone with a free OnShape account.

This design has internal threads for a 37mm cap with the standard 4.7mm thread pitch, so it will screw directly onto the pill bottle, ‘eliminating the middleman’. Here are some photos of the finished product:

And here is a short video showing the dispenser in action:

10 June 2024 Update:

Last night I attempted to run two more prints of this model, as I have two additional pill bottles of the same diameter with older pill dispensers, but the prints failed catastrophically – bummer! I rounded up the usual suspects (bed temp, model arrangement, Z-axis tuning, etc.) and finally managed to get another print going, at least through the raft and first few layers. After bitching and moaning about this for a while, it occurred to me that if I had documented the layout and settings more aggressively from the first print, I wouldn’t have wasted all those hours last night and today. So, once I’m sure I have a consistent print configuration, I will document it here.

I got a good print with the following settings:

  • Flashforge Creator PRO II ID
  • Left Extruder – Red PETG, 240C, 80C Bed
  • Right Extruder – AquaSys120, 240C, 80C Bed
  • 2-layer raft using support filament (AquaSys120)

See the following images for the full setup:

After a 3-hour side-trip into the guts of the Flashforge to clear an extruder jam, I was able to get the second print underway. As I write this it is about 6% finished, but already all the way through the support-material parts (so it should finish OK).

17 June 2024 Update:

Not so fast! I realized that the threaded portion of the dispenser cap, while functional, was very poorly printed due to the lack of supports (and, as I found out later, also due to the resolution setting). In addition, the side walls of the V7 slide box were too thin and broke apart easily. After modifying the design, I attempted another print using the above settings, but the PVA dissolvable filament simply refused to stick to the print bed – arrrrrgggggghhhhh!

After going through the whole extruder & bed temperature search routine again yesterday, including replacing the heated bed PEI layer and even putting down blue painter’s tape, with no success, I was perusing google-space for clues and kept running across reports where dehumidifying the PVA filament worked. I didn’t see how that would help me, as we control the relative humidity in our house to about 50% +/-, but hey, what did I have to lose at this point?

So, before going to bed I dug out my filament dehumidifier rig and left my filament in it overnight (and until about noon the next day – a little over 12 hours total). Then I tried some prints, and although not successful at first, the results were encouraging. I finally got two really good prints with the following setup:

  • Slicer resolution: ‘0.15mm OPTIMAL’ setting in Prusa Slicer
  • Right (PVA) extruder: 220C
  • Left (PETG) extruder: 240C (this was constant throughout)
  • Bed: 40C
  • Layer of blue painter’s tape on top of the PEI substrate

So, I think the big takeaway from this episode is: PVA must be explicitly dehumidified BEFORE each print session. Otherwise the PVA will not stick to the print bed, no matter what you do.

Stay tuned,

Frank

07 July 2024 Update:

After getting the threaded pill bottle dispenser cap working, I decided to try my luck with my two 57mm twist-lock pill bottles. The twist-lock cap geometry was considerably harder to design. Rather than trying to design and print everything as one piece, I decided to separate the dispenser piece from the cap mating piece, as shown below:

Bottle Cap Mating Ring
Pill Dispenser body and slide
All three pieces together. Note that the cap mating ring fits into the dispenser body

Fix for Inadvertent Crimson Trace Laser Activation with M&P Bodyguard 380 in ‘Sticky’ Holster

Posted 05 May 2024

My daily carry pistol is the M&P Bodyguard 380 in a ‘Sticky’ brand Holster, as shown below:

I carry this in my jeans front pocket, and it works great. I regularly practice smoothly drawing the pistol, activating the Crimson Trace laser, and getting the gun on target. Unfortunately after a year or so of use I started seeing occurrences where the laser wouldn’t activate, and investigation showed that the laser battery was dead. The first time this happened I just wrote it off to the normal battery life, but the second and third times were definitely too close together to be a battery life issue. I finally figured out that the laser was being inadvertently activated in the holster – clearly not a good solution. The good news is, it made me more determined than ever to not count on the laser. Now I practice with and without the laser (although I much prefer the ‘with’ scenario).

Thinking about the problem, I inferred that the ‘Sticky’ holster is comparatively stiff when new, but softens after hundreds of cycles of inserting and removing it from my jeans pocket, and of course hundreds of dry-fire draw and shoot repetitions. Eventually the holster gets pliable enough that the normal inward pressure from my jeans pocket is enough to activate the laser at some point (and once is enough, as once activated it will probably stay ON until battery exhaustion). As you might imagine, replacing the battery in the laser module is a major PITA, as the pistol itself must be disassembled, and then the laser module removed to access the battery. Then the procedure must be run in reverse to re-assemble everything, and then the laser alignment must be checked and adjusted as necessary (another major PITA).

Thinking about solutions, I contemplated 3D-printing a holster insert that would restore the original holster stiffness (and I might still do this). However, I was struck by the idea that the real solution to the holster material pressing in on the laser activation button is to remove the holster material around the button; then the holster material thickness becomes an additional guard around the button. Instead of being the culprit, it now becomes the solution – cool! Here’s another photo showing the ‘Sticky’ holster with an (unfortunately crude) hole around the pistol’s laser button:

‘Sticky’ holster with material over laser activation button removed

After – once again – replacing the laser battery, I plan to run with this setup for a while and see how long the laser batteries last this time.

17 July 2024 Update:

From May of this year until now I had no problem with laser battery life, but yesterday I found the batteries low/dead again. This indicated I was still getting inadvertent laser activation even with the laser button cutouts shown above. So, I decided to see if I could improve on the design a bit.

I went into OnShape, my 3D design tool of choice, and designed a hollow ‘holster bump’ as shown in the screenshot below:

The idea, as shown below, is to protect the laser activation button on each side of the gun, so it can’t be activated when in the holster

I then used hot glue to affix them temporarily to the gun to confirm their positions coincided with the holes I had previously made in the holster. After this, I glued the bumps into the holes with superglue. We’ll see how this works.

Stay tuned,

Frank

Printing NY Times Crossword Puzzles Using Across Lite & AutoIt Script

Posted 05 May 2024

My wife and I are crossword puzzle addicts. To feed our habits, I signed up for the NY Times crossword puzzle archives and downloaded the Friday, Saturday and Sunday puzzles (the Mon-Thurs puzzles were too easy) for each week between January 2015 and December 2022.

Originally I would print out a puzzle as required by opening/printing the puzzle file using Across Lite, but this got old in a hurry. So, I decided I would create a program to automagically print an entire folder’s worth of puzzles in ‘batch mode’ utilizing two-sided printing – yay! Looking around for the best/easiest way to accomplish this, I ran across an application called ‘AutoIt’, specifically created as a shell-script generator to run Windows (or Mac) applications and system functions.

It took a while to work my way through the command reference and examples, but eventually I was able to create an AutoIt shell script to do what I wanted. The script prompts the user for a directory containing Across Lite *.puz files, and offers to print them all in equal N-puzzle batches, thereby allowing the user to print a batch and then move the printed puzzles from the output tray to the input tray for a double-sided result.
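For illustration, the batch-splitting logic the script performs can be sketched in Python (the actual AutoIt script drives Across Lite through the Windows GUI; the function name and parameters here are just illustrative, not from the original script):

```python
import math
from pathlib import Path

def make_batches(puz_dir, num_batches):
    """Split all .puz files in puz_dir into num_batches roughly equal
    batches. The idea is to print one batch, move the printed pages
    from the output tray back to the input tray, then print the next
    batch on the reverse sides for a double-sided result."""
    files = sorted(Path(puz_dir).glob("*.puz"))
    batch_size = math.ceil(len(files) / num_batches)
    return [files[i:i + batch_size] for i in range(0, len(files), batch_size)]
```

With 156 puzzle files and four batches, this yields four batches of 39 files each – matching the 2015 folder printout described below.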

This worked great, with the only downside being that the user’s PC cannot be used for anything else while the script is running, as it grabs the mouse cursor to launch Across Lite and print the current .puz file.

Here’s the script, as it stands now in May of 2024 (saved on my system as C:\Users\Frank\Documents\Personal\Crosswords\Print Pending\240504 PUZ Print Script.au3):

And here is the output log for printing the contents of the ‘C:\Users\Frank\Documents\Personal\Crosswords\Print Pending\2015’ folder containing 156 puzzle files, printed in four batches of 39 files each:

Return of the Robot – sort of

After some time away from my autonomous wall-following robot, I have started spending time with it again. The first thing that happened was I tried a long-term run in my home, only to find that the ‘mirrored-surface’ feature I added some time ago caused the robot to enter an infinite loop, even when encountering a non-mirrored surface – oops! This eventuality was such a bummer that I stopped working with the robot for several months.

When I worked up the courage to address the problem, the first thing I did was to back out the ‘mirrored-surface’ code, reverting to the state of affairs represented by the ‘WallE3_QuickSort_V5.ino’ Arduino project. This required quite a bit more work than I had anticipated; I had thought my process of incremental builds would shield me from that – NOT! Eventually I was able to use some ‘diff’ tools to work my way through the process.

After getting the code squared away without the ‘mirrored surface’ code, I decided to take my robot out for a walk – well actually it was the robot taking me for a walk ‘in’ for a walk around the house. Here’s a link to the (not-so-short) video showing the action. I have included this as a link to the video file on my Google Drive site, because it’s too long to fit on my WordPress blog page.