Urban DIY Robocar Build Notes

Intro

Here's my build log for a 1:28-scale self-driving RC car, built to complete something akin to the DARPA Urban Challenge, in my living room.

This will be something like a DonkeyCar, but driving at much slower speeds through a cardboard urban environment. It should handle figurine pedestrian/cyclist detection and traffic lights, as well as localization, dealing with other vehicles, etc.

Parts

Initial build

Just enough to verify I can drive-by-wire at slow speeds.

Part                                      Price
WLToys K989                               $50
L298N motor driver H-bridge               $7
Gear motor with encoder, 12 V, 977 RPM    $16

I already have a 12V battery (which will replace the 7.4V that comes with the car) and RPi.

Build

Step 1: Drive-By-Wire

The WLToys K989 is a ready-to-run (RTR) car, so there's a lot of stuff it comes with that I don't want/need and must strip out: the remote, the receiver hardware, and the integrated ESC.

Since I'm using a different motor, I plan to replace the battery with a 12V one and remove the old motor. The new motor will be driven by the H Bridge, connected to the 12V battery and controlled by PWM from the RPi. It's not immediately clear to me if the new motor will actually fit in the space available, so I'm crossing my fingers and hoping.

The steering servo will stay, but will be plugged directly into the RPi. Hopefully the PWM signals generated by the RPi are good enough for these purposes. If not, I'll need a dedicated servo driver board.
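To make the servo plan concrete, here's a sketch of the pulse-width math for a standard hobby servo, assuming 50 Hz PWM and a 1000–2000 µs pulse range (those numbers are typical assumptions, not measured values for the K989's servo):

```python
def steering_to_duty_cycle(angle, freq_hz=50, min_us=1000, max_us=2000):
    """Map a steering angle in [-1.0, 1.0] to a PWM duty-cycle percent.

    angle -1.0 = full left (min pulse), +1.0 = full right (max pulse).
    """
    angle = max(-1.0, min(1.0, angle))
    pulse_us = min_us + (angle + 1.0) / 2.0 * (max_us - min_us)
    period_us = 1_000_000 / freq_hz  # 20,000 us at 50 Hz
    return pulse_us / period_us * 100.0
```

On the Pi, a duty cycle like this could be fed to a software-PWM channel (e.g. RPi.GPIO's `ChangeDutyCycle`), though software PWM jitter is exactly the "good enough?" question above.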

Picture of the K989's steering servo connector

The steering servo looks like a standard servo including the connector, so this should be easy.

10/1 Update

After playing with the received car, change of plans. The built-in brushed motor can actually move slowly enough for my purposes, so I'll be leaving it in place (which greatly simplifies the build) and instead building drive-by-wire on top of the existing hardware. I tried to devise ways to control the existing ESC+receiver combo (it appears to have pinouts to reprogram it), but after an hour I gave up and ordered a standalone ESC and servo driver. This brings the build closer to the DonkeyCar setup, but it should remain fairly straightforward and reproducible with simple instructions. I'll find some other project to use the L298N and gear motor in.

10/6 Update

Drive-By-Wire success! Using the standalone ESC and servo driver, I was able to control traction and steering out of the box with DonkeyCar code. The only minor hiccup was that the servo motor actually needs another V+ pin connected on the PCA9685 in addition to VCC, which would have been intuitive if it weren't 12:30 am. Here's the traction motor working -- I'm using my Exceed DonkeyCar as a test platform because I don't have scaffolding printed yet for the WLToys, nor do I have another Pi ready to go.
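For reference, the PCA9685 slices each PWM period into 4096 ticks (it's a 12-bit chip), so translating a servo/ESC pulse width into the chip's on/off tick counts is just a ratio. A minimal sketch of that conversion (frequency values here are examples, not the board's defaults):

```python
def pulse_us_to_ticks(pulse_us, freq_hz=60):
    """Convert a pulse width in microseconds to a PCA9685 tick count.

    The PCA9685 divides one PWM period into 4096 ticks, so the "off"
    tick for a pulse starting at tick 0 is the pulse's fraction of the
    period times 4096.
    """
    period_us = 1_000_000 / freq_hz
    return round(pulse_us / period_us * 4096)
```

At 50 Hz, a 1500 µs neutral pulse lands around tick 307, which is in the same ballpark as the steering/throttle constants you see in DonkeyCar config files.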

I'm in desperate need of a workbench, too. But the good news is that it looks like getting a WLToys car hooked up with DonkeyCar requires just an ESC, and otherwise will cost notably less than a magnet car setup.

12/2 update: Localization plans

While I still need to finalize the chassis for the car, I have drive-by-wire working on a standalone unit and it's time to come up with concrete plans for how this should work. Projects like DonkeyCar are primarily motivated by staying within lane lines and traversing a track as quickly as possible. For this project, I need to stay in lane lines, but also track the location of the car to know where to expect stoplights, where to turn, etc.

Lane line tracking can be done with a segmentation network: figure out where the lane lines are in frame, then use a steering PID to correct the car back toward the center. Determining where the car is on the map poses new challenges because I want to limit hardware (no sonar, laser range finder, or indoor 'GPS' equipment). Here's what I'm thinking so far for an optical-only, coarse localization scheme:

Place visually distinct (easy to segment out of a camera frame) markers around the course. Use traditional CV or a CNN to find the x, y, and size of the marker in frame. Pass this into a network trained on pairs of in-frame marker observations and true vehicle positions. The vehicle's view would look something like this:

Localization view from the car's camera, showing a striped marker on the side of a road.
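The "find the x, y, and size of the marker in frame" step could be as simple as thresholding the marker's color and taking the centroid and pixel count of the matching region. A hypothetical sketch (the threshold mask is assumed to come from an earlier color-segmentation step; a real pipeline would threshold in HSV and clean up with morphology):

```python
import numpy as np

def find_marker(mask):
    """Given a 2-D boolean mask (True where the marker color matched),
    return (x, y, size): the centroid in pixels and the pixel area,
    or None if no marker is in frame."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return float(xs.mean()), float(ys.mean()), int(xs.size)

# Synthetic example: a 10x10 marker blob in a 120x160 frame.
frame = np.zeros((120, 160), dtype=bool)
frame[40:50, 70:80] = True
x, y, size = find_marker(frame)
```

The (x, y, size) triple is exactly the low-dimensional observation the localization network would consume.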

And the top-down view of the course would look something like this:

Top-down view of city roads, with a car at a marked pixel location and several striped markers on the map.
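For the lane-keeping half mentioned above, the steering PID could be sketched like this; the error is the lane-center offset in frame, and the gains here are placeholders that would need tuning on the real car:

```python
class SteeringPID:
    """Minimal PID controller for steering correction.

    error convention: negative = car is left of lane center.
    Output is clamped to [-1, 1], matching the steering range.
    """

    def __init__(self, kp=0.8, ki=0.0, kd=0.1):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, error, dt):
        self.integral += error * dt
        derivative = 0.0
        if self.prev_error is not None:
            derivative = (error - self.prev_error) / dt
        self.prev_error = error
        out = self.kp * error + self.ki * self.integral + self.kd * derivative
        return max(-1.0, min(1.0, out))
```

At low speeds a P-only controller (ki = kd = 0) may well be enough; the D term mostly damps oscillation around the lane center.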

Training the network would be done by tracking the X,Y position of a visually distinct marker on top of the car with an overhead camera, paired with views from the car's camera at that moment in time. The overhead camera would only be required for training and, if all works, the car will be able to (coarsely) localize itself within the map using the trained network.

We'll see if this works once I get the city built. Stay tuned.