This project was proposed by a friend of mine. He asked if I could make an automatic launching system for a drag car. That was a challenge I couldn’t pass up. First, I had to learn something about the rules, regulations, and strategies of drag racing. If you don’t know about drag racing strategy, you might assume (like I did) that there is not much to it. In fact, there is a lot more to drag racing strategy than “step on the gas and go,” and I still can’t say I have a firm grasp of it. But I learned what I needed to for this project, and that’s enough.
After giving it some thought and doing some research, I came up with the basic design. A USB camera is mounted on the car and focused on the Christmas tree (which, I learned, is the common name for the structure that holds the signal lights). The video is processed on a Raspberry Pi 3B+ (this was before the Raspberry Pi 4 existed), which releases the transmission brake at just the right time. Instead of a real transmission brake, the car uses the technique explained in the video below. Basically, with this 4L80E transmission, it’s possible to do some minor rewiring so that when the driver presses a button, the transmission locks up between gears. When the button is released, it drops into gear and the car accelerates.
Instead of a button and relay, I have a transistor (APT45GP120BG) controlled by the Pi. There is actually still a button for the driver, but it functions as a safety. As long as the button is held, the Pi has control of the transmission. Releasing it restores control to the car’s computer. The button can be safely released as soon as the Pi “releases the brake” since it has essentially already given control back to the car. This is a picture of the first prototype before I finished wiring it.
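To give a rough idea of how little software sits on the output side, here is a minimal sketch of driving the transistor gate from a GPIO pin. The pin number, the active-high polarity, and the use of the pigpio library are my assumptions for illustration here, not the actual wiring or the production code.

```cpp
#include <pigpio.h>
#include <cstdio>

constexpr unsigned BRAKE_PIN = 17;   // hypothetical GPIO wired to the transistor gate

int main() {
    if (gpioInitialise() < 0) {
        std::fprintf(stderr, "pigpio init failed\n");
        return 1;
    }
    gpioSetMode(BRAKE_PIN, PI_OUTPUT);
    gpioWrite(BRAKE_PIN, 1);   // gate driven: transmission held ("brake" applied)

    // ... the vision code decides when to launch ...

    gpioWrite(BRAKE_PIN, 0);   // gate off: transmission drops into gear and the car launches
    gpioTerminate();
    return 0;
}
```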
I made the sheet metal box by hand. Not for this project, but a couple of years earlier. It turned out to be about the right size, if a little too large. It fits easily on a car seat or floor and has plenty of room inside. The box is hardwired to the vehicle’s 12 VDC and steps it down to 5 V for the Pi and screen (the two USB ports inside the box). The safety button is shown coiled up on the right. The Pi is normally mounted somewhere accessible to the driver, but it is shown here inside the case. It has a cheap 5″ touchscreen attached to it. It’s not great, but it works well enough for the price. That’s really all there is to the hardware. It’s pretty minimal.
The Raspberry Pi 3B+ is hardly a great candidate for this (I think I just had one lying around at the time and decided to try it), so I wanted to optimize the platform as much as I could. I used vanilla Raspbian, but with a Linux kernel I compiled myself with the PREEMPT_RT patch applied. The OpenCV version that shipped with Raspbian was super old. Fortunately, it could be built from source! Unfortunately, that takes several days to compile on a Raspberry Pi… or maybe it ran out of memory… I don’t remember now, but for whatever reason it didn’t work. I was able to cross-compile it instead, although I remember that being a bit difficult. But in the end I had Raspbian with a “real-time” kernel and OpenCV 4.0.1. Finally, time to process video!
Except I didn’t have video to process, or access to a drag strip. Fortunately, the Internet truly has everything. I may have watched the below video more times than anyone else.
I tried using a Haar cascade classifier to detect the Christmas tree, but that did not work well. I used a “pre-trained” model, in the sense that I ran the training on an AWS server with several GPUs and then loaded the trained model onto the Pi. It ran slowly but did sort of work at times. It was far from reliable, though, so after a lot of effort I abandoned this approach. I quickly discovered that all of this was unnecessary anyway.
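For the curious, the detection side of that experiment looked roughly like the sketch below. The cascade file name is a placeholder for the model trained on the AWS box; this uses the standard OpenCV cascade API, but it is not my exact code.

```cpp
#include <opencv2/objdetect.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/videoio.hpp>
#include <cstdio>
#include <vector>

int main() {
    cv::CascadeClassifier tree;
    if (!tree.load("christmas_tree_cascade.xml")) {   // hypothetical trained model
        std::fprintf(stderr, "failed to load cascade\n");
        return 1;
    }
    cv::VideoCapture cap(0);                          // the USB camera
    cv::Mat frame, gray;
    while (cap.read(frame)) {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        cv::equalizeHist(gray, gray);
        std::vector<cv::Rect> hits;
        tree.detectMultiScale(gray, hits, 1.1, 3);    // painfully slow on a Pi 3B+
        if (!hits.empty())
            std::printf("tree candidate at %d,%d\n", hits[0].x, hits[0].y);
    }
    return 0;
}
```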
What I wound up doing was to create a calibration process. The camera records a video (of a test run, for example), one frame of that video is selected, and that frame is used to calibrate the colors and relative locations of the lights. Then the camera is aimed at the lights and the software filters for those specific colors. The lamp position information from the calibration file is used to select ROIs (regions of interest) for processing. Once the final yellow light has been detected, all the information has been gathered and we can predict the least amount of time that must pass before the light turns green. The error comes down to the time between frames, since we don’t know exactly when within that interval a light actually came on. The prediction can be updated every time we detect a new light, up until the final yellow. The timing of course is important, which is why I bothered to patch the kernel. The PREEMPT_RT patch makes the kernel itself (mostly) preemptible, so a POSIX thread given a real-time priority actually gets the CPU with far lower latency than on the stock kernel.
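Here is a rough sketch of that idea. The ROI rectangles, the HSV thresholds, and the 0.5-second sportsman-tree interval are placeholder values standing in for what the calibration file would provide, and the real-time priority call at the top is the sort of thing the PREEMPT_RT kernel makes worthwhile. It is an illustration of the approach, not the DRT Eliminator code itself.

```cpp
#include <opencv2/imgproc.hpp>
#include <opencv2/videoio.hpp>
#include <pthread.h>
#include <sched.h>
#include <chrono>
#include <cstdio>
#include <thread>
#include <vector>

int main() {
    // With the PREEMPT_RT kernel, a real-time priority keeps frame handling latency
    // low and predictable (requires root or CAP_SYS_NICE).
    sched_param sp{};
    sp.sched_priority = 80;
    pthread_setschedparam(pthread_self(), SCHED_FIFO, &sp);

    cv::VideoCapture cap(0);
    // One rectangle per amber lamp, top to bottom, as the calibration step would record.
    std::vector<cv::Rect> amberRois = { {312, 140, 24, 24}, {312, 180, 24, 24}, {312, 220, 24, 24} };
    const double interval = 0.5;                 // seconds between lamps on a sportsman tree

    cv::Mat frame, hsv, mask;
    size_t next = 0;
    auto lastAmber = std::chrono::steady_clock::now();

    while (next < amberRois.size() && cap.read(frame)) {
        cv::cvtColor(frame(amberRois[next]), hsv, cv::COLOR_BGR2HSV);
        // Hypothetical HSV band for a lit amber bulb, also from calibration.
        cv::inRange(hsv, cv::Scalar(15, 120, 200), cv::Scalar(35, 255, 255), mask);
        if (cv::countNonZero(mask) > static_cast<int>(mask.total() / 4)) {   // lamp is "on"
            lastAmber = std::chrono::steady_clock::now();
            ++next;                              // move on to the next lamp's ROI
        }
    }

    if (next == amberRois.size()) {
        // Wait out the remaining interval, then release the transmission brake.
        // The uncertainty is one frame period, since the lamp lit somewhere inside it.
        std::this_thread::sleep_until(lastAmber + std::chrono::duration<double>(interval));
        std::printf("launch\n");
    }
    return 0;
}
```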
The next step of course was to hook up the video camera and try it live. I still didn’t have access to a drag strip and wasn’t committed enough to buy a Christmas tree (although I explored it). Fortunately, the scale does not matter!
I built a small-scale Christmas tree with an Arduino and white and yellow LEDs. It has the same timing as a full-size one. I mounted it on one end of a 2×4 and the camera on the other end. I ran the signal-out wire from the transistor to the Arduino. That way it can measure the time between when the light turns green and when the signal is received. I ran the calibration for the small tree, and the detection worked great on the first try!
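The firmware for the mini tree is simple. The sketch below is a hedged approximation rather than my exact code: the pin assignments are made up, it assumes a sportsman tree (ambers 0.5 seconds apart), and it assumes the transistor’s signal wire pulls the input pin low when the brake is released.

```cpp
const int amberPins[3] = {5, 6, 7};   // hypothetical pins for the yellow LEDs
const int signalPin    = 2;           // wire from the transistor ("brake released")

void setup() {
  Serial.begin(115200);
  for (int p : amberPins) pinMode(p, OUTPUT);
  pinMode(signalPin, INPUT_PULLUP);   // assumes the release signal pulls this pin low
}

void loop() {
  // Run the tree: ambers light 500 ms apart, then the green point.
  for (int p : amberPins) { digitalWrite(p, HIGH); delay(500); }
  unsigned long greenMicros = micros();        // the instant a green lamp would light
  while (digitalRead(signalPin) == HIGH) {}    // wait for the Pi's release signal
  Serial.print("launch delay (us): ");
  Serial.println(micros() - greenMicros);
  for (int p : amberPins) digitalWrite(p, LOW);
  delay(5000);                                 // reset before the next pass
}
```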
I’m calling the project DRT Eliminator. DRT stands for “driver reaction time,” though it doesn’t quite eliminate it. The first version of the algorithm was written to be robust for testing; the first version of “live mode” was something like five times faster. This one has been on the back shelf for a while now, but I think it was getting delays of about 0.5 seconds. The Pi hardware might be underpowered, but there is still a lot of room to improve the efficiency of the code. The whole thing is written in C++, of course.
I also built a second prototype with a bit nicer packaging. That one is standing by for installation in a car for testing at a real drag strip.