Wednesday, June 7, 2017

Indoors Navigation Robot ("Driving Drawers") Post 2

This post is about the path planning algorithm for the robot.

Because the robot is a non-holonomic, non-radially symmetric, non-zero turn radius type of vehicle, the path planning problem was fairly difficult.

I was struggling to find a readymade solution to the following problem: plot a curve between obstacles, with a constraint on the sharpest allowable curvature in the path.

For a setting with no obstacles, with a minimum turn radius specified, and where the starting and ending states are specified as a coordinate and angle in 2D space, I found Reeds-Shepp curves, which were already implemented in Python on GitHub: https://github.com/ghliu/pyReedsShepp
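The full Reeds-Shepp solver handles dozens of path "words," including reversing segments, so I won't reproduce it here. But as a toy illustration of what a curvature-constrained path computation looks like, here is the length of just one forward-only word (Left-Straight-Left). This is a sketch of the idea, not the pyReedsShepp API:

```python
import math

def lsl_path_length(start, goal, r):
    """Length of the forward-only Left-Straight-Left word between two
    poses (x, y, heading), with minimum turn radius r.  The real
    Reeds-Shepp set has many more words, including reversing segments."""
    x0, y0, th0 = start
    x1, y1, th1 = goal
    # Centers of the left-turn circles tangent to the start and goal poses.
    c0 = (x0 - r * math.sin(th0), y0 + r * math.cos(th0))
    c1 = (x1 - r * math.sin(th1), y1 + r * math.cos(th1))
    dx, dy = c1[0] - c0[0], c1[1] - c0[1]
    d = math.hypot(dx, dy)                 # length of the straight segment
    phi = math.atan2(dy, dx)               # heading of the straight segment
    arc0 = (phi - th0) % (2 * math.pi)     # first left arc
    arc1 = (th1 - phi) % (2 * math.pi)     # second left arc
    return r * (arc0 + arc1) + d
```

For two poses facing the same way along a line, the LSL word degenerates into a straight drive, which makes for an easy sanity check.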

I briefly considered using Rapidly Exploring Random Trees to search for solutions in this problem space, but decided on something different instead after reading some literature and testing some code. I think some of the variants I've seen published of RRTs would work nicely, but I didn't want to implement them in Python from scratch. Two repositories I looked at when making this decision:
https://github.com/iamprem/rmp
https://github.com/ArianJM/rapidly-exploring-random-trees

I decided to go with A* for search. To prepare my problem for that, I first discretized the map of obstacles (represented by polygons) using a visibility graph. I built the visibility graph using another Python repository, pyvisgraph. I spent several days hunting down bugs in the code, turning up one rounding error and one logical error that were causing problems in my setup. I opened a few Issues through the GitHub portal so the author can fix them; I considered a pull request, but for just a few lines of code I figured the Issues would suffice. After fixing the bugs I added support for interior polygons and for adding obstacles to the visibility graph without recomputing it, so I should probably clean it all up and do a real pull request eventually.
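For anyone unfamiliar with visibility graphs: the nodes are the start, the goal, and the obstacle polygon vertices, and two nodes share an edge if the straight segment between them doesn't pass through any obstacle. Here's a minimal toy version for convex obstacles only (pyvisgraph is far more complete and robust than this sketch):

```python
import heapq, math

def _ccw(a, b, c):
    return (b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0])

def _segments_cross(p, q, a, b):
    # Proper crossing only; shared endpoints don't count.
    d1, d2 = _ccw(p, q, a), _ccw(p, q, b)
    d3, d4 = _ccw(a, b, p), _ccw(a, b, q)
    return d1 * d2 < 0 and d3 * d4 < 0

def _inside_convex(pt, poly):
    signs = [_ccw(poly[i], poly[(i+1) % len(poly)], pt) for i in range(len(poly))]
    return all(s > 0 for s in signs) or all(s < 0 for s in signs)

def visible(p, q, obstacles):
    """True if segment p-q doesn't cut through any convex obstacle."""
    mid = ((p[0]+q[0]) / 2, (p[1]+q[1]) / 2)
    for poly in obstacles:
        if _inside_convex(mid, poly):
            return False
        for i in range(len(poly)):
            if _segments_cross(p, q, poly[i], poly[(i+1) % len(poly)]):
                return False
    return True

def shortest_path(start, goal, obstacles):
    """Dijkstra over the visibility graph; returns the path length."""
    nodes = [start, goal] + [v for poly in obstacles for v in poly]
    dist = {start: 0.0}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            return d
        if d > dist.get(u, math.inf):
            continue
        for v in nodes:
            if v != u and visible(u, v, obstacles):
                nd = d + math.dist(u, v)
                if nd < dist.get(v, math.inf):
                    dist[v] = nd
                    heapq.heappush(pq, (nd, v))
    return math.inf
```

With a single square in the way, the shortest route hugs two of its corners instead of crossing it.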

I also had to discretize the robot angle at this point. I decided to go with 4 of what I called 'cones': North, East, South, West. The code has this parameter as K, so I call them K-cones (you can just as easily specify five cones, but then they are harder to refer to casually when describing the solution). Since Reeds-Shepp takes an exact angle, when computing the cost of a path from (start point, start cone) to (goal point, goal cone), I try all four combinations of the clockwise and counterclockwise limits of the start cone and goal cone. Then I take the worst score and use that as the cost. Paths intersecting the obstacle map are ignored in this step, so they don't count towards the worst score.
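In code, the cost of an edge between two (point, cone) states might be sketched like this. The function and parameter names are my own for illustration; `path_length` stands in for a call into pyReedsShepp plus an obstacle check, returning None for paths that hit the obstacle map:

```python
import math
from itertools import product

def cone_limits(cone_index, k=4):
    """Clockwise and counterclockwise angle limits of one cone when the
    circle is split into k cones (k=4 gives quadrant-style N/E/S/W)."""
    width = 2 * math.pi / k
    start = cone_index * width
    return (start, start + width)

def cone_to_cone_cost(p0, cone0, p1, cone1, path_length, k=4):
    """Worst-case cost between two (point, cone) states: try all four
    combinations of the two cones' limit angles and keep the maximum
    among the obstacle-free paths."""
    costs = []
    for a0, a1 in product(cone_limits(cone0, k), cone_limits(cone1, k)):
        length = path_length(p0 + (a0,), p1 + (a1,))
        if length is not None:   # obstacle-free paths only
            costs.append(length)
    return max(costs) if costs else None
```

Taking the worst score makes the A* edge cost pessimistic, which is the conservative choice given that the final angle inside each cone isn't decided until later.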

Then I searched over this space with A*, and found solutions in the form of a list of (location, cone). But resolving this into a navigable path still required resolving the cones (North, East, South, West) into angles. If there are n steps in the path, and we choose to sample d regularly spaced angles within a cone to find the best overall path length, then the size of the search space is d raised to the power n. It isn't easy or necessarily possible to make n any smaller than it already is--so keep d small. I found that even d=2 (checking just the most clockwise and most counterclockwise limits of each cone) could find a solution. The differences between the best total path lengths found were minimal anyhow.
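A brute-force sketch of that final angle-resolution step, again with a stand-in `path_length` for the real Reeds-Shepp call (this is d**n enumeration, which is exactly why d has to stay small):

```python
import math
from itertools import product

def resolve_angles(waypoint_cones, path_length, d=2, k=4):
    """Enumerate the d**n assignments of concrete angles to the n cones
    along an A* solution and return (best total length, best poses).
    waypoint_cones is a list of (x, y, cone_index)."""
    width = 2 * math.pi / k
    options = []
    for (x, y, cone) in waypoint_cones:
        if d == 1:
            samples = [cone * width + width / 2]
        else:
            # d regularly spaced angles spanning the cone's limits
            samples = [cone * width + i * width / (d - 1) for i in range(d)]
        options.append([(x, y, a) for a in samples])
    best = (math.inf, None)
    for poses in product(*options):
        total = sum(path_length(poses[i], poses[i + 1])
                    for i in range(len(poses) - 1))
        best = min(best, (total, poses))
    return best
```

With d=2 this checks only the cone limits, matching what I found was already good enough in practice.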

I used Shapely to handle geometry related tasks like union-ing polygons together, checking for intersections, and so forth. I used matplotlib for drawing things to the screen.

You had to read all that, so here are some pictures!


An example solution after the angles have been resolved.

An easy solution, where the goal is immediately reached from the start point with a single Reeds-Shepp curve.

Here's a messy picture of all the edges in the graph for a particular setup. The pink things represent the cones.

This picture that I saved at some point in my work represents paths from points on the visibility graph that intersect the walls of this empty room. The intersecting sections are shown in red.

The strongest thing I can say for my hacked-together system, "A* over Reeds-Shepp with K-Cones," is that it seems to work in all the cases I need it to work. I can't say anything about the optimality of the solution, or even about whether a solution will be found if one exists. Computing the visibility graph and its edge costs, expanded for the 4 cones on each point, takes about ten seconds on my laptop, for what that is worth. The result can be saved and reloaded for future use, assuming the obstacle map hasn't changed.

I'm taking a break from this project to prepare for the Advanced Topics in CS class that I will be teaching this summer (high school grade levels, but college level materials). When I come back to this project, I will take the hardware from the last post, the path finding from this post, and a few bits of math I've worked out in my notes, and put the whole thing together into a robot that navigates from room to room using ArUco tags.

Indoors Navigation Robot ("Driving Drawers") Post 1

[Foreword: you may notice the fonts are mismatched in this post. Or maybe you noticed that the images in all the posts in this blog, when viewed on a larger screen (as opposed to a small mobile device screen), are placed haphazardly within the body of the post. This is because Blogger leaves much to be desired when it comes to the WYSIWYG post editor. In part because WYS ("What you see") is not always WYG ("what you get") in the context of Blogger's post editor, but also because there are very limited options for aesthetic image layouts in the body of a post. I am planning a Jekyll based webpage for this blog in the general future that will fix these problems. Until then, this is a cheap and easy way to make sure I keep a blog at all.]


I had an idea for a chest of drawers that would drive to my location in a space (assumes no stairs).

Here's the first part of my progress on that project.

Physical Build

I started by dissecting my collection of Roombas and taking the wheels from two different models and hot gluing them, along with some support material, to the base of a plastic chest of drawers.








I purchased a Raspberry Pi 3 and a PiCam, and while I waited I put together a 3D pan-tilt design from Thingiverse. I'll provide a link (link to thing), but keep in mind I do not recommend this design. Unless your 3D printer is very precise, you'll have to do quite a bit of filing and sanding to get it to go together. The pan servo will absorb all the impact whenever it hits something, and will fail (mine did). If you can find a ring of the right thickness and diameter to stick in between the orange disc and blue plate in the photo, the problem is mitigated (I used a bearing I found in a bin--way overkill as it isn't even acting as a bearing--but a quicker solution than cutting my own ring on the lathe).




Not entirely certain of the layout I wanted, I just taped everything in place with masking tape. The battery is stored underneath, in the topmost drawer of the robot. It's a 30,000 mAh battery I bought for use with my smart phone. It has a port that will source 2.5A, which is needed by the Raspberry Pi. I paid about $75 for this model; you should be able to find comparable batteries by other brands if that one is not available (sometimes, when an item is out of stock, a few vendors will offer it for an inflated price, so beware. The price on Amazon for this model was briefly $399 before dropping to $69.99 again). 
I pulled the Arduino Mega out of a project I'm not working on at the moment, though it is of course overkill for this application. I wasn't sure how many sensors and actuators I wanted on board, so this 54-I/O-pin Arduino allows for quite a bit of room to grow the project. The Raspberry Pi 3 itself only has one PWM-enabled pin available to me, so the Arduino is convenient for handling all the low-level stuff. It talks to the Raspberry Pi over USB. The micro servos are powered from the Arduino Mega, which is in turn powered off the Raspberry Pi; the micro servos' stall current is low enough for this to be possible.




 The Roomba wheel motors are safe at 12 volts (the Roomba battery voltage), so I put another battery in the system just for them. The battery is a 3 cell Lithium Polymer battery, which measures in at roughly 11.1 volts when the battery needs to be recharged and 12.6V when the battery is fully charged. The motor drivers are L298 chips on those red-colored Sparkfun breakout boards, with the heatsinks mounted to them.







So at this point the robot was driving, but only in a straight line. Turns would drag at least one wheel and make a terrible noise. Only very slight turns worked. This was fairly predictable, but trying to make it work anyway was very much in keeping with my glue-and-tape, then iterate style of prototyping. Jesse helped me put together a steering mechanism in a very short amount of time. It worked, so that evening I took the robot out for an inaugural journey around Sector67, using it as a sort of telepresence robot as I controlled it from my desk.




Then I broke the steering mechanism gearmotor by switching it back and forth too fast when I got stuck in a corner. The gear before the final output shaft broke into a bunch of tiny pieces.



I replaced it with the gearmotor in the image above on the right that has the partly blue casing. Now that I had a working robot again, it was time to work on the high level path planning and code. I'll put that in the next post.



Sunday, May 7, 2017

FRC Machine Shop with Sector67

In April Sector67 volunteered to run a machine shop for both the FIRST Robotics Seven Rivers Regional and the St. Louis Worlds events. It had been a while since I participated in FRC in any form (though I did mentor an FTC team this year, which is another FIRST competition).

Here are some photos. Since I only took photos when there was a break in the work orders coming in to the shop (usually because there was some mandatory attendance event for teams), this photo set gives the impression that we were a lot less busy than we actually were.

Seven Rivers Regional one view of the machine shop

Seven Rivers Regional game field

Teaching lockpicking at the table photo left


St. Louis Worlds Machine shop front side, 3D printer and laser cutter with fume extractor

More stuff in the St. Louis machine shop setup

View of the St. Louis playing fields in the convention center arena



Monday, April 3, 2017

Giant Display and Home for my First Arduino


The extra large (6.5") seven segment display array pictured above has been at my desk for a long time now, but I never posted about it. Remembering how it came together is a fun walk down memory lane.

Central to this project is my very first Arduino. My boyfriend at the time bought it, spent an afternoon on it with a friend, and then decided he didn't want to do anything else with it and asked me if I wanted it. It is an Arduino Diecimila, and this must have been around 2008-2009, because I remember one of my first projects with it was interfacing with an ultrasonic rangefinder and an optical encoder that year. It later became Swervy's brain. Memories of my freshman dorm remind me why students need hackerspaces:


I was a really bad roommate...I shared this room with my unfortunate roommate. The dividing line is supposed to be roughly in between the black backpack and black storage cube.

My 'hackerspace' at the time was a $99 refurbished Ryobi drill+circle saw, a $30 set of Ryobi drill bits,  a Radioshack soldering iron, and misc. things taken out of the dumpster from USC, along with some sensors I got from my FIRST robotics team and the aforementioned Arduino Diecimila.

Sophomore year of college, at the beginning of the spring semester, Sparkfun held a big promotional event/server load test: they were giving away one hundred $100 free orders (Sparkfun Free Day). I skipped that morning's Calculus lecture and hit refresh on the page until I won my free order. The four 6.5" red seven segment displays were in my cart for that order, along with a few other things including a Simon Says kit that taught me how to do surface mount soldering.


At some point in 2015 I dug up these parts and put it all together. The circuit is just a bunch of BJT transistors and some buttons to set the input. I never figured out what I wanted it to do, so I put it up on the wall and update it as a calendar, manually, whenever I come in to my office. The Arduino doesn't have enough output ports to handle all the digits at the same time so it enables/sets one at a time, repeatedly and very quickly. Very common solution to a common problem, but it was a fun afternoon project anyway. The wiring harness took the most time. I think I glued the whole thing to the wall with no consideration for whether I'd want to move it in the future. I'll have to tear it off the wall eventually, since Sector67 is getting a new building. Chris jokes sometimes that it looks like what the TSA might think is a bomb; this clock slightly predates the recent clock incident. Which also reminds me of this page which left a huge impression on me when I first read it in college and turned me into a fan.

Friday, March 31, 2017

Telegraph Sounder Player

Yesterday I put the finishing touches on my latest project, the telegraph sounder player.

Video link first if you want to see it in action before you read: https://www.youtube.com/watch?v=vKzZMfzuw68



The device on the left is the telegraph sounder player, and the device on the right is the telegraph sounder. The telegraph sounder itself is a vintage telegraph module. The player allows for up to four sounders to be plugged in to it, and plays back random snippets from text files loaded up on an SD card. It uses American Morse Code (as opposed to international). I loaded the SD card with excerpts from my favorite novels as well as full novels from Project Gutenberg.

The project was made for the owner of the telegraph sounder, Don. Don used to work as a telegrapher for a railroad company (US land based telegraph operations used American Morse code). He had a first version of the player built for him during the most recent Build Madison event and came back to Sector67 with notes for some features he would like for the next version.

Features of this telegraph player include: SD card playback, support for 4 sounders, a knob for playback speed, a separate knob for inter-sentence interval length, a status LED and ON/OFF toggle, a CNC-milled hardwood box with brass corners, battery power (est. 10 hours of playback), and an onboard battery charging circuit.

Design process in brief:

I had four 5V relays I found in a bin here at Sector, so I chose to support up to 4 sounders and to use a 5V Arduino for the brains. The relays are each switched by a BJT transistor circuit, and when switched they connect the sounder port to the battery pack.

The ports are barrel jack connectors, each with a 10uF capacitor across the leads and a high-power-rating, low-resistance resistor. The resistor is mostly just in case the leads going to the telegraph are placed so that they are touching each other.

I had been playing around with trying to minimize the currents induced by the sounder turning off (the clack, as opposed to the click). A large ferrite choke close to the telegraph, with many windings of the connector cable, worked, but was a little unwieldy. The capacitor doesn't entirely solve the problem, but it helps make the clack sharper. With too large a value, the clack actually bounces, presumably as the capacitor charges on the induced current and then discharges back into the sounder, re-enabling it somewhat. In any case, the 'problem' is limited to the Arduino resetting, and only when it is connected to a computer during the disconnection of the sounder, so it was possible to ignore.

The SD card is connected through a bi-directional level converter. The Arduino has limited memory, so the code handles things by grabbing small chunks of the files into four small buffers for the four ports. The Morse codes are stored as 2-byte unsigned int objects, with the exception of a few special 4-byte unsigned long objects for some of the punctuation. The code can be found here: Arduino Code.
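The exact bit layout isn't described above, but a 2-bits-per-element scheme like the following sketch would fit the dot/dash sequences of American Morse letters into 2-byte ints. This is an illustrative guess in Python, not the actual Arduino player code:

```python
# Two bits per Morse element; a guessed encoding, not the player's real format.
DOT, DASH = 0b01, 0b10

def pack(elements):
    """Pack a dot/dash sequence into an int, most recent element in the
    low bits; 00 terminates the sequence."""
    code = 0
    for e in elements:
        code = (code << 2) | e
    return code

def unpack(code):
    """Recover the dot/dash sequence from a packed code."""
    elements = []
    while code:
        elements.append(code & 0b11)
        code >>= 2
    return list(reversed(elements))
```

Eight elements fit in 16 bits, enough for the letters; the longer punctuation codes would spill into 4-byte longs, much like the special cases mentioned above.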

Access to the Arduino is through the bottom of the box, which is attached with screws to corners that are glued inside the box.


The project uses two battery cells that each have overcurrent and undervoltage protection built in. The charging circuit is as described here.

You can find more photos of the project and an additional write up here: https://imgur.com/gallery/fIBFZ

And a video here:
https://www.youtube.com/watch?v=vKzZMfzuw68






Friday, March 17, 2017

Painting Robot Compilation

A while back somebody donated an old robot arm kit to Sector67. I built a small model of it with glue, tape, foamboard, and potentiometers, and hooked it up to an Arduino. The project is very simple, but it has been very popular at science fairs (and has survived many hours of use requiring minimal fixes only). Here's a compilation video.


Friday, March 10, 2017

Trapezohedra

Another set of magnetic tile polyhedra. This time, three different trapezohedra.

https://en.wikipedia.org/wiki/Trapezohedron

http://www.thingiverse.com/thing:2168615


https://github.com/eshira/polyhedra

I used this resource to define the kite and dihedral angles, but the hexagonal and heptagonal did not work out (and I believe the octagonal also has issues). Fixing them is on my to-do list. I plan to write a script that actually generates them from the antiprisms.


Tuesday, March 7, 2017

Talking Calipers 2

Preface: Voting on a contest I submitted this to is open right now. You can vote once a day through this link. http://review.wizehive.com/voting/view/infypublic2017/47311/4496312/0

Before going all in on the research for my minimal-cost, minimal-footprint, all-on-one-custom-PCB version of the Talking Calipers, I went through my bin of parts and made one more with just what I had already available. It uses:





The video demonstrates that the reading is captured when the button is pressed, so the display can change to show that the measurement has changed while the reading is still being spoken aloud. The button is a limit switch super-glued next to the thumb wheel.

It was a useful learning experience to build, and it was good to have it done quickly for demos and such, but in the end I built something very, very similar to the Adafruit Wave shield (https://www.adafruit.com/product/94) but slightly more expensive.

Feature list (all were tested on the breadboard, though some haven't yet migrated to the somewhat portable prototype):
  • Rechargeable battery on board, recharges via USB
  • Separate amp with lower gain for headphones for hearing safety
  • Headphone detection for automatic headphone/speaker output switching
  • SD card for easy file loading--can be made to work in any language by switching out files
  • Relay mutes headphones to reduce pop
  • Sleep mode that wakes up on request for a reading, for longer battery life
Features not yet added:
  • Filter to get rid of a hum caused whenever the SD card is read
  • Low battery indicator (through sound or vibration ideally!) (though thanks to a circuit built in to the battery header, it will turn itself off to protect from under-voltage to the battery)
  • The inches-mode protocol is different and currently reads off incorrectly
  • Audio files and code support for reading numbers more naturally (currently reads off each digit separately)
  • Change sleep mode to trigger only if the readings are constant and the button is not pressed for a certain time interval (right now just based on button)
I actually recorded my voice for this project since I couldn't find everything I wanted online. I used http://www.audacityteam.org/ to create the files and then used http://sox.sourceforge.net/ to make sure the files had a 44.1kHz sampling rate and 16-bit sample size, and then volume-adjusted them to a maximum volume that did not clip. (Note: the wav shield played almost any wav file I could find just fine, but the Teensy audio library was picky about the settings, necessitating the sox tool).
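If you're preparing files the same way, a quick sanity check with Python's standard wave module (my addition, not part of the original workflow) saves a round trip to the SD card:

```python
import wave

def check_wav(path):
    """Return True if the file is 44.1 kHz, 16-bit PCM -- the format the
    Teensy audio library turned out to require."""
    with wave.open(path, "rb") as w:
        return w.getframerate() == 44100 and w.getsampwidth() == 2
```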

I estimate the battery life at about 10 hours of use (assuming it never enters sleep mode). The main power switch does lose the zeroing, but is convenient to have (out of the box, these calipers draw enough current to drain the coin cell in half a year to a year, unless the battery is removed). Auto-entering sleep mode when the user is away helps extend the battery life even more.

Monday, February 20, 2017

ThinkPad X200 Tablet as peripheral device Attempt 1

The goal of this project is to turn a "Wacom Penabled" Thinkpad Laptop into a peripheral screen and input device for my main Windows PC machines.

This documents just the first attempt, as not everything is working yet and I need to move on to other projects for a while.

The starting point: I have a Wacom Penabled ThinkPad x200 Lenovo machine, pictured below. I read the following articles to figure out how I might go about my goal: article 1, article 2


The instructions from the articles above were not sufficient for my setup and included some steps I skipped. Below are the steps I took to get as far as I have so far.

I installed Ubuntu (latest desktop version) on the Thinkpad machine. It installed cleanly alongside my existing Windows 7 install without need for any other tools (such as WUBI  mentioned in the articles). The pen worked out of the box on this system.

I verified the port of the pen with the following command (the port is /dev/ttyS4 for me)
    dmesg | grep ttyS 
I installed ser2net:
sudo apt-get install ser2net
Then I edited the configuration file located at /etc/ser2net.conf, adding the following line to the bottom section:
    7000:telnet:600:/dev/ttyS4:38400 remctl NONE 1STOPBIT 8DATABITS -XONXOFF -RTSCTS -LOCAL 
I added my username to the group dialout with this line:
    sudo adduser myusername dialout
At this point, I was not able to see the output of the pen on port /dev/ttyS4, because all attempts to access it manually gave me a resource busy error. I fixed this by commenting out the second-to-last line in /lib/udev/rules.d/69-wacom.rules. This step requires a restart of the machine afterwards in order to take effect. The line to comment out reads as follows, with comments:
    # comment out the next line if your distribution does not provide systemd
    # If a /dev/ttySx device with the WACf name is detected, start the
    # wacom-inputattach service with the kernel name as parameter
    # SUBSYSTEM=="tty|pnp", KERNEL=="ttyS[0-9]*", ATTRS{id}=="WACf*", TAG+="systemd", ENV{SYSTEMD_WANTS}+="wacom-inputattach@%k.service"
Now it is also necessary to disable the Wacom pen devices in xinput. If this is not done, streaming the data out with the cat command will make the computer behave erratically, and it probably caused the kernel panic I experienced once as well. To list the devices, use the command "xinput", which returns a list with ID numbers to the right. I found three Wacom entries under Virtual Core Pointer: the stylus, eraser, and touch, with IDs of 12, 13, and 14 respectively. I disabled all three with the commands below. These should be put in a startup script, as otherwise they must be entered manually on each start up of the system, possibly on each login as well.
xinput disable 12
xinput disable 13 
xinput disable 14

At this point the pen won't work on Ubuntu anymore, which shouldn't be a problem because the goal is just to get the data over to the Windows machine anyway.

Now I can finally test the device locally. I entered the following command and saw data (unintelligible gibberish symbols as displayed) come out whenever I touched the stylus to the screen.
sudo cat /dev/ttyS4
Now I opened up a PuTTY session on the Windows PC. I set it up with 'Connection Type: Telnet', 'Port: 7000', and the host name as the IP of the Ubuntu machine on the network, which I identified using the command 'ifconfig' on the Ubuntu machine. The connected network interface, which for me is called wls1 and indicates my wireless card, states the address on its second line. It looks something like the following:
inet addr: 192.168.1.150
With the address entered into the 'Hostname (or IP address)' field, I hit the "Open" button and saw that whenever I moved the pen to the Ubuntu screen, unintelligible symbols appeared in the window. Excellent. [Note: close the PuTTY window after this test]
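The same smoke test can be scripted instead of using PuTTY, for example with a few lines of Python on the Windows side. One caveat I'd expect (an assumption, not something I verified): ser2net in telnet mode may emit 0xFF telnet negotiation bytes at the start of the stream, mixed in with the pen data.

```python
import socket

def read_pen_bytes(host, port=7000, n=64, timeout=5.0):
    """Read up to n raw bytes from the ser2net stream -- a scriptable
    alternative to the PuTTY test.  Returns whatever arrives before the
    connection closes or n bytes are collected."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.settimeout(timeout)
        data = b""
        while len(data) < n:
            chunk = s.recv(n - len(data))
            if not chunk:       # remote side closed the connection
                break
            data += chunk
        return data
```

Touching the stylus to the screen should make this return non-empty data, just like the gibberish in the PuTTY window.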

The only remaining steps were to get the Wacom modified tablet drivers to work and install HW VSP. I only made half-successful progress here, so I won't write it up yet in detail. I followed the instructions for the drivers from article 2. I tried this on two Windows machines each running Windows 7.

On my desktop computer, the Wacom drivers (installed per the instructions) seem to be the issue. I get a complaint about them every time I boot the machine, and HW VSP says "connected", but all the packets just add to the number in the Queued section, and the VSP fields other than Status remain blank. On the Windows 7 laptop, the Wacom drivers do not complain at me, and the HW VSP window shows values for all the VSP fields, as well as incrementing Rx and Tx packet counts in the VSP column. But the pen on the Ubuntu machine still does not move the pointer on the PC.

For now I'm shifting gears away from this project, but hopefully I can get it working in the future. The Windows drivers are clearly the problem, and I've tried a lot of things, including other drivers, as well as messing with the registry and the driver files. I might have more success going through a second Ubuntu machine and using Ubuntu tools and drivers here. But the point is in part to get this working for a Windows PC, just because of my software and OS use patterns (I develop on Ubuntu, but use design and entertainment software on Windows).

If you think you have a solution let me know!