Arm Swarm (#cmurobotcontest)

There was some servo chiseling at the door:

[photos of servo arms at the door]

There were more. Many more.


I tried to keep them out. I managed to keep most of the smaller ones off me, but then came the chief from the back of the pack.

The swarm needed a new head.

Now I’m recruiting humans for the arm swarm!


Will you join the arm swarm?

Happy Halloween!

Learn more about the Autonomous Cyborg Backpack project here.

Python Sensor Libraries

Robots often work better when you mod them with cheap, readily available sensors such as IMUs, digital compasses, and range sensors (e.g. ultrasonic). However, the cheap sensor packages you can buy online often have APIs that are poorly documented or nonexistent. Sometimes you have to write the sensor-polling code from scratch in C, working from the design docs. Wouldn’t it be nice to just have Python code that drives popular cheap sensors and hands you sensor readings?

I am releasing code for several popular sensors here. All code is implemented in underlying C, compiled as a shared library, and then ctypes is used to load the shared library into nice, friendly, high-level Python. I expect most users will stick to the Python, but you can also use the underlying C directly if you want. Code is available for the following products:

Download the code here. Enjoy automating your robots in Python and skip the lower-level stuff!
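The ctypes pattern described above can be sketched as follows. The library name `libcompass.so` and both C function names are hypothetical stand-ins, since each sensor's shared library exposes its own API:

```python
import ctypes

def load_compass(lib_path="./libcompass.so"):
    """Load a compiled sensor library and declare its C signatures.

    "libcompass.so", compass_init, and compass_read_heading are all
    hypothetical stand-ins for whatever shared library you compiled
    from the C sources.
    """
    lib = ctypes.CDLL(lib_path)
    # Tell ctypes the return types so values are marshalled correctly.
    lib.compass_init.restype = ctypes.c_int
    lib.compass_read_heading.restype = ctypes.c_double
    return lib

def read_heading(lib):
    """Poll the (hypothetical) digital compass for a heading in degrees."""
    if lib.compass_init() != 0:
        raise RuntimeError("compass init failed")
    return float(lib.compass_read_heading())
```

From there, calling `read_heading(load_compass())` in a loop is all the Python you need; the marshalling between C and Python stays hidden inside ctypes.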

CyFitNet: A Multi-Robot AI for Jogging Street Art

I. Introduction

One of the challenges in the fitness domain is to gamify fitness: to make it fun, exciting, and engaging so that people stay motivated. Oftentimes, the best encouragement occurs in groups, where teams of people play some kind of game or take part in a team challenge to collectively improve or meet their fitness objectives.

Multi-robot AI planning is concerned with creating algorithms to control and plan routes for swarms of robots, agents, cyborgs, etc., where the robotic agents often collaborate to accomplish some team objective. Multi-robot systems thus provide an algorithmic framework to help optimize both individual and team objectives in fitness.

In this study, we propose CyFitNet, a possible application that builds upon a popular meme on Facebook involving jogging street art. Using apps such as RunKeeper or MapMyRun, users create drawings and artwork with the geographic trajectories they jog in the real world. Users then post their runs/drawings for others to see (and possibly compete against). While most of the time these runs/drawings are made by individuals, we believe it is possible for a medium- to large-scale team of joggers to run/draw a larger image, such as a text message or a famous work of art. We believe that when individuals are presented with an individual objective (such as drawing one part of an artwork) that is part of a larger team objective (i.e. drawing the entire artwork), they are more likely to do their part.

To facilitate this objective, we develop a prototype system that allows users to upload any image they wish to draw while running. The system decomposes the overall image into a set of trajectories. Extracted trajectories are mapped onto the real world (in a geographic space selected by the user) so that a team of joggers can draw the image by running the trajectories. Finally, multi-robot algorithms are used to assign trajectories (from the full set) to a team of joggers to create the image. The algorithms flexibly take into account various preferences of the individual joggers, such as where they are starting their run, fitness objectives such as the total distance they want to run, and associated weights for these preferences, all while meeting the team objective of completing the picture.

II. System/Algorithm Design

The underlying system works through several algorithmic steps.

Step 1: The user uploads an image to the server that contains a drawing or message they wish to jog. We will use as our example an image of coins (shown in Figure 1).

Figure 1: Original Image of Coins

Step 2: A Canny Edge Detection algorithm is run to produce an edge map of the image. An edge map classifies each pixel in the image as being the location of an edge or not. The edge map can help identify edge pixels of the underlying drawing (results shown in Figure 2).

Figure 2: Edge Map from Canny Edge Detector
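The post uses a Canny detector for this step. As a rough sketch of the core idea only (real Canny adds Gaussian smoothing, non-maximum suppression, and hysteresis thresholding on top), here is a simplified gradient-magnitude edge map in plain NumPy:

```python
import numpy as np

def edge_map(img, thresh=0.25):
    """Binary edge map via central-difference gradient magnitude.

    A simplified stand-in for the Canny detector used in the post:
    Canny adds Gaussian smoothing, non-maximum suppression, and
    hysteresis thresholding on top of a gradient step like this one.
    """
    img = img.astype(float)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]   # horizontal gradient
    gy[1:-1, :] = img[2:, :] - img[:-2, :]   # vertical gradient
    mag = np.hypot(gx, gy)
    peak = mag.max()
    if peak == 0:
        return np.zeros(mag.shape, dtype=bool)  # flat image: no edges
    # Mark pixels whose gradient is a sizeable fraction of the maximum.
    return mag > thresh * peak
```

In practice you would just call OpenCV's Canny implementation; this sketch only shows why edge pixels end up where the image intensity changes sharply.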

Step 3: An Edge Extraction algorithm is run to extract continuous edges from the edge map. These extracted edges form a candidate set of trajectories that a team of joggers will run to draw out the image. Extracted trajectories/edges are shown in Figure 3.

Figure 3: Extracted Trajectories / Edges from Edge Map
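The post does not spell out its edge-extraction algorithm; one simple way to turn an edge map into continuous candidate trajectories is a greedy 8-neighbour walk over the edge pixels:

```python
import numpy as np

def extract_trajectories(edges):
    """Greedily trace connected edge pixels into polylines.

    `edges` is a 2-D boolean array. This greedy 8-neighbour walk is one
    simple way to extract continuous edges; it is a sketch, not
    necessarily the algorithm the post's system uses.
    """
    remaining = set(zip(*np.nonzero(edges)))
    trajectories = []
    while remaining:
        path = [remaining.pop()]          # start a new trajectory anywhere
        while True:
            r, c = path[-1]
            nbrs = [(r + dr, c + dc)
                    for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                    if (dr, dc) != (0, 0) and (r + dr, c + dc) in remaining]
            if not nbrs:
                break                     # dead end: trajectory complete
            nxt = nbrs[0]
            remaining.remove(nxt)
            path.append(nxt)
        trajectories.append(path)
    return trajectories
```

Every edge pixel ends up in exactly one trajectory, so the full set of polylines redraws the whole edge map.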

Step 4: The extracted trajectories/edges are mapped onto a geographic region in the real world. The trajectories are transformed into Latitude / Longitude space and transferred to a geographic location specified by the user. For our example, we map the trajectories to streets near Monta Vista High School and plot on Google Maps (see Figure 4).

Figure 4: Trajectories Mapped to Geography near Monta Vista
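A minimal sketch of the pixel-to-geography mapping, assuming a simple linear transform into a user-chosen lat/lng bounding box (snapping trajectories to actual streets would be considerably more involved):

```python
def pixels_to_latlng(points, img_shape, origin, span_deg):
    """Map (row, col) pixel points into a geographic bounding box.

    `origin` is the (lat, lng) of the image's top-left corner and
    `span_deg` its (lat, lng) extent; both are user-chosen.  Any
    coordinates used with this sketch are illustrative, not the
    post's actual Monta Vista values.
    """
    h, w = img_shape
    lat0, lng0 = origin
    dlat, dlng = span_deg
    # Image rows grow downward while latitude grows upward, hence the minus.
    return [(lat0 - r / h * dlat, lng0 + c / w * dlng) for r, c in points]
```

The resulting lat/lng polylines can then be plotted directly on Google Maps, as in Figure 4.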

Step 5: Once the drawing is specified, the locations of our jogging team are queried along with their preferences. In our example, we simulate N=20 joggers in the region ready to run our routes, along with individual jogger fitness objectives, such as the total distance each jogger wants to run today. The locations of the joggers, along with the set of possible trajectories, are plotted in Figure 5.

Figure 5: Simulated Joggers in Geographic Vicinity
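A sketch of how such a simulated team might be generated; the bounding box, mileage ranges, and field names here are illustrative, not the values from the Monta Vista experiment:

```python
import random

def simulate_joggers(n=20, lat_range=(37.31, 37.33),
                     lng_range=(-122.06, -122.04), seed=0):
    """Simulate joggers with a start location and a mileage window.

    Each jogger gets a random position in the bounding box plus a
    (min_miles, max_miles) fitness objective.  All ranges here are
    illustrative stand-ins for the post's simulation parameters.
    """
    rng = random.Random(seed)  # seeded for reproducible experiments
    joggers = []
    for i in range(n):
        lo = rng.uniform(1.0, 3.0)
        joggers.append({
            "id": i,
            "lat": rng.uniform(*lat_range),
            "lng": rng.uniform(*lng_range),
            "min_miles": lo,
            "max_miles": lo + rng.uniform(0.5, 2.0),
        })
    return joggers
```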

Step 6: A multi-robot planning algorithm analyzes the preferences of the joggers as well as the possible trajectories to be run, and comes up with an optimal assignment of trajectories to joggers. The underlying multi-robot solver is based on the binary integer programming solution to the Assignment Problem used in Multi-Agent Active Learning (Tandon 2012). We assume that the cost of the overall team assignment decomposes into the costs to the individual agents. The fitness function takes into account the following factors:

  1. The distance from the jogger’s starting position to the start of the trajectory.
  2. Whether the total distance traveled by the jogger is within the fitness objective specified by the jogger (i.e. if the user says they want to jog a minimum of 1 mile and a maximum of 2 miles, that constraint is taken into account in the route assignment).
  3. How much of the team picture gets completed (ideally all of it).

All these constraints can be plugged into the Assignment Problem; bintprog in MATLAB was used to solve the binary integer program. In terms of the formulation, the costs from factors #1 and #3 can be plugged into the objective vector f, while the distance bounds from factor #2 can be plugged into the constraint vector b, so you get an optimal solution from the solver without doing much else.
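As a minimal sketch of the optimization: the actual system solves a binary integer program with MATLAB's bintprog, but a small instance can be solved exactly by brute force over permutations. The field names and the penalty weight below are hypothetical, standing in for factors #1 and #2 (factor #3 is satisfied implicitly because every jogger is given a distinct trajectory):

```python
from itertools import permutations
import math

def assignment_cost(jogger, traj):
    """Hypothetical per-jogger cost: distance from the jogger's start to
    the trajectory's start (factor #1), plus a penalty if the trajectory
    length falls outside the jogger's mileage window (factor #2)."""
    cost = math.dist(jogger["start"], traj["start"])
    if not jogger["min_miles"] <= traj["miles"] <= jogger["max_miles"]:
        cost += 10.0  # illustrative penalty weight
    return cost

def assign(joggers, trajs):
    """Exact assignment by brute force; fine for small teams.
    (The post instead solves a binary integer program via bintprog.)"""
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(len(trajs)), len(joggers)):
        cost = sum(assignment_cost(j, trajs[t])
                   for j, t in zip(joggers, perm))
        if cost < best_cost:
            best_perm, best_cost = perm, cost
    return best_perm, best_cost
```

Brute force is factorial in team size, which is exactly why the real system formulates the problem as a binary integer program instead.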

Figure 6 shows an example optimal assignment of routes to the joggers, taking these constraints into account in our fitness function optimization.

Figure 6: Example Trajectory Assignment to Joggers

The end result is a trajectory for each jogger that satisfies their fitness objectives and route preferences while maximizing the overall team objective of completing the picture.

III. Conclusion

In this study, we prototype a useful multi-robot application for the fitness domain. Our system helps gamify jogging by having teams jog together to create jogging street art. The developed algorithms automatically extract trajectories from an uploaded artwork or message and assign them to a team of joggers to draw out the picture while running, all the while taking into account each individual jogger’s unique fitness objectives. We hope our system will encourage people to jog more and create beautifully stunning works of art.

[image: a jogged Mona Lisa]

Well, occasionally more stunning. But you get the point.

Ongoing challenges remain in improving the multi-cyborg AI:

1. Mapping to actual streets may prove a challenge for non-rectangular, non-square images. The application may work better in areas in the world with plenty of roundabouts for optimal angular directions and seemingly random projections of the data onto the world.

2. One of the major challenges with controlling robotic humans (i.e. cyborgs), as opposed to just robots, is that robots typically follow programmed directions. In contrast, cyborgs have two controllers (the original mind as well as the computer controller), so they may choose to override directions. This happened in some of our user tests: some joggers chose not to follow the paths given to them and instead drew human reproductive parts and obscenities. The challenge, then, is to stop cyborgs running uninstructed paths from messing up the picture. We hope to develop appropriate error detection and correction mechanisms so that the algorithms can be robust to cyborgs that corrupt the data. This double-brain problem is, in general, a major research challenge for cyborg systems that does not come up as much in classical robotics.
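One simple detection mechanism along these lines would be to flag runs that stray too far from their assigned trajectory. A sketch, with an illustrative tolerance (roughly 100 m at these latitudes), not a mechanism the system currently implements:

```python
import math

def max_deviation(run_points, assigned_points):
    """Worst-case distance from any logged GPS point to its nearest
    point on the assigned trajectory (both as lat/lng tuples)."""
    return max(min(math.dist(p, q) for q in assigned_points)
               for p in run_points)

def is_rogue(run_points, assigned_points, tol=0.001):
    """Flag a cyborg whose run strays more than `tol` degrees
    (~100 m of latitude) from its assigned trajectory.
    The tolerance is an illustrative guess."""
    return max_deviation(run_points, assigned_points) > tol
```

Flagged runs could then be excluded from the team picture, or their trajectories reassigned to more obedient cyborgs.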

Autonomous Robotic Helper Backpack

My latest maker obsession: A low-cost, open-source robotic helper backpack.



Watch the cyborg backpack in action!

Note: the pen dropped because I’m really bad at controlling the cyborg arms; I accidentally opened the gripper. It’s not the fault of the system.

Completely controlled with your Android phone and smartwatch! There’s an app for the world of cyborgs!

System Specs:
- 2 Dagu 6DOF arms
- SSC32 Servo Controller with custom enclosure
- Raspberry Pi B with portable USB battery and Adafruit case
- 2 ultrasonic sensors on the sides of the backpack detect obstacles, and your phone beeps if you get close to something; this helps you protect the extra arms from damage
- 2 webcams let you see behind you
- Tekkeon external battery powering the servo controller and Raspberry Pi
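The obstacle-beep logic can be sketched as follows; the threshold is an illustrative guess, not the value the backpack actually uses:

```python
def obstacle_alert(distance_cm, threshold_cm=50.0):
    """Decide whether the phone should beep for one ultrasonic reading.

    The 50 cm threshold is an illustrative guess.  Zero or negative
    readings are treated as sensor timeouts rather than obstacles,
    so they never trigger a beep.
    """
    return 0 < distance_cm < threshold_cm
```

In the real system, a reading like `distance_cm` would come from polling each side-mounted sensor in a loop, with the phone beeping whenever either side returns True.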

The Cyborg Distro is open source: Feel free to contribute to the codebase! We hope many more humans will join the distro. Our goal is to distro all humans by 2050!

A custom SSC32 Servo Controller enclosure keeps the controller safe inside the backpack but still easy to use:

[photos of the enclosure]

Also, there are many more arms where these came from: