The Teleoperated Drawing Robot


Brought to you by Dan Morris, Kirk Phelps, and Neel Joshi



Video ( .mpeg | .wmv )




This project is a vision-based teleoperation system that allows a user to draw using the PUMA.  The project uses vision-based tracking and feature detection to identify color and position.  The user holds one of two colored balls, red or green, and the robot draws a line of the same color while following the path of the ball as the user moves it.


The system is started by first running the tracker server; then the controller software on the PUMA side is started, connects to the server, and moves the PUMA to a predetermined start position.  When the user moves a ball in front of the camera, the tracker sends commands that the controller uses to put the pen down on the paper and begin tracking.  The PUMA then draws as it follows the path of the ball.  If the user changes the ball color, the system executes a pen-change sequence of pen-up, wrist rotation, and pen-down, and tracking and drawing then continue.
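The controller's reaction to tracker messages can be sketched as a small state machine.  The class, message names, and start position below are hypothetical (the original QNX controller code is not part of this writeup); the sketch only illustrates the pen-down-on-first-point and pen-change logic described above:

```python
START_POSITION = (0.0, 0.0)   # predetermined start pose in the drawing plane

class DrawingController:
    """Minimal sketch of the PUMA-side message handling."""

    def __init__(self):
        self.pen_down = False
        self.color = None
        self.target = START_POSITION

    def handle(self, msg):
        """Dispatch one message received from the tracker server."""
        kind = msg[0]
        if kind == "position":        # (x, y) of the tracked ball
            if not self.pen_down:
                self.pen_down = True  # first valid point: put the pen down
            self.target = msg[1]      # tracking follows this target
        elif kind == "pen_change":    # ball of a new color detected
            self.pen_down = False     # pen up
            self.color = msg[1]       # wrist rotates to the new pen
            self.pen_down = True      # pen back down, tracking resumes

ctrl = DrawingController()
ctrl.handle(("position", (0.1, 0.2)))
ctrl.handle(("pen_change", "green"))
```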




I. Vision Tracker



The vision-based tracker runs on Windows and uses Intel's OpenCV library to access camera frames.  The tracker runs a server to which the controller, running on a QNX machine, connects.  The server sends both position information and the commands used for pen color changes.
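The writeup does not specify the wire protocol between the tracker server and the QNX controller; as one plausible sketch, positions and commands could be sent as length-prefixed JSON messages:

```python
import json
import struct

# Hypothetical framing for tracker-to-controller messages: a 4-byte
# big-endian length header followed by a JSON payload. This is an
# assumption for illustration, not the project's actual protocol.

def pack(msg):
    """Serialize one message (position update or pen command)."""
    payload = json.dumps(msg).encode()
    return struct.pack("!I", len(payload)) + payload

def unpack(buf):
    """Decode one message from a byte stream; return (msg, remaining bytes)."""
    (n,) = struct.unpack("!I", buf[:4])
    return json.loads(buf[4:4 + n].decode()), buf[4 + n:]
```

With this framing, a position update might be `pack({"type": "position", "x": 120, "y": 80})` and a pen change `pack({"type": "pen_change", "color": "red"})`, both sent over the same TCP connection.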


The tracker processes each camera image by first slightly blurring and then normalizing it.  If there is no previous valid point, it performs a brute-force search over the entire image, classifying pixels as red or green matches.  It then chooses the current color to be the one with the most match points; the x and y locations of the matches are averaged to find the center of the match region, which is sent as the location of the ball.  A point for the ball is set and sent only if there is some minimum number of match points for one of the two colors.  Once one valid point is found, subsequent searches are restricted to a local region around the last ball location.  This optimization reduces compute time and thus increases the performance of the tracker.
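The match-and-average step can be sketched as follows.  This uses NumPy in place of the original OpenCV code, and the color thresholds, window size, and minimum match count are illustrative assumptions:

```python
import numpy as np

def find_ball(img, last=None, window=40, min_matches=50):
    """Locate a red or green ball in an HxWx3 uint8 RGB frame.

    Returns (color, (x, y)) or None. If `last` is given, only a local
    window around the previous location is searched.
    """
    h, w, _ = img.shape
    if last is None:
        y0, y1, x0, x1 = 0, h, 0, w             # brute force: whole frame
    else:                                       # local region around last point
        lx, ly = last
        x0, x1 = max(0, lx - window), min(w, lx + window)
        y0, y1 = max(0, ly - window), min(h, ly + window)
    roi = img[y0:y1, x0:x1].astype(int)
    r, g, b = roi[..., 0], roi[..., 1], roi[..., 2]
    red = (r > 150) & (g < 100) & (b < 100)     # crude per-pixel color match
    green = (g > 150) & (r < 100) & (b < 100)
    color, mask = max((("red", red), ("green", green)),
                      key=lambda kv: kv[1].sum())
    if mask.sum() < min_matches:                # require a minimum match count
        return None
    ys, xs = np.nonzero(mask)                   # average to find the center
    return color, (x0 + int(xs.mean()), y0 + int(ys.mean()))
```

For example, a frame containing a single red blob yields `("red", (cx, cy))` with the center of the blob, and passing that point back as `last` on the next frame restricts the search to the local window.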


In addition to position updates, the tracker sends commands to initiate a pen change when appropriate.  These are the pen-up, new-pen-color, and pen-down commands, with some delay in between to ensure each command completes smoothly.  A pen-change sequence is triggered when a ball of a new color is detected, i.e., when there are substantially more match points for the new color than for the old color.


II. Real-time PUMA Controller



The controller on the PUMA side provides several behaviors.  The tracking behavior is implemented as a velocity-saturated PD controller in which the desired coordinates in the control law are updated with the positions read off the network from the tracker server.  The update rate is approximately 30 times a second.  Other control laws were tried, such as cubic splines and linear interpolation between the points received every 30th of a second, but given the speed of the updates and the slowness and friction of the PUMA, the simple PD control had the best feel and performance.
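A velocity-saturated PD law of this kind can be sketched as below: the proportional term implies a desired velocity toward the target, which is clipped to a maximum speed before the damping term drives the arm toward it.  The gains and saturation limit are illustrative assumptions, not the values used on the PUMA:

```python
import numpy as np

KP, KV = 8.0, 2.0   # proportional and damping gains (illustrative)
V_MAX = 0.3         # velocity saturation limit, m/s (illustrative)

def control_force(x, v, x_des):
    """Velocity-saturated PD law.

    x, v: current position and velocity; x_des: the latest target
    position received from the tracker (updated ~30 times a second).
    """
    v_des = (KP / KV) * (x_des - x)    # velocity implied by the P term
    speed = np.linalg.norm(v_des)
    if speed > V_MAX:
        v_des *= V_MAX / speed         # saturate the desired velocity
    return KV * (v_des - v)            # damp toward the saturated velocity
```

Far from the target this behaves like a constant-speed approach at `V_MAX`; close to the target it reduces to ordinary PD control, which matches the smooth "feel" the simple controller gave on the PUMA.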


The controller performs the pen-up and pen-down behaviors by rotating the base joint out of and into the drawing plane, moving the pen off and onto the paper.  The degree of rotation of the base joint during pen-down and during tracking is governed by force control: the force sensor on the wrist is used to keep the pen in contact with the paper with an appropriate amount of force, and slight rotations of the base joint regulate that force.  The pen-change behavior is implemented by rotating the wrist by 180 degrees; so that the rotation can occur unobstructed, this command is executed only when the pen is up.
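The contact-force regulation can be sketched as a simple proportional adjustment of the base-joint angle from the wrist force reading.  The gain and force setpoint are illustrative assumptions, and the real controller may use a more elaborate scheme:

```python
F_DESIRED = 1.0   # target pen-on-paper contact force, N (illustrative)
K_FORCE = 0.002   # radians of base rotation per newton of error (illustrative)

def adjust_base(theta_base, f_measured):
    """Rotate the base slightly toward the desired contact force.

    f_measured comes from the wrist force sensor; a positive correction
    rotates the base further into the paper plane (presses harder).
    """
    error = F_DESIRED - f_measured
    return theta_base + K_FORCE * error
```

Run once per control cycle, this nudges the pen into the paper when contact is too light and backs it off when the pen presses too hard.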


III. The Setup



To have a proper surface to draw on, we built a large wooden structure, designed around the PUMA setup in B30 in Gates, that mounts paper a small distance from the plane of the robot.


Our setup also included an easel, used to mimic a common drawing environment and to provide a good background for the color segmentation from the camera.  The camera is a Labtec webcam running at 352 x 288 resolution and 30 frames per second.  The tracker ran on a 1.13 GHz Pentium III laptop running Windows 2000, and the controller ran on a 233 MHz Pentium running QNX.




The teleoperated system performed up to expectations.  When drawing with a colored ball, the user feels a true one-to-one correspondence between his motions and those of the robot, and the lag between user movement and PUMA movement is minimal.  While the general correspondence is good, the accuracy could be improved.  Due to friction in the motors of the arm, inaccuracies from alignment and lack of calibration, and variation in the location of the match point on the ball, the current system provides a level of accuracy that works well for less detailed drawing; more detailed work, such as writing words, exposes the system's inaccuracy.  This could be improved with careful calibration and a more controlled environment.  The system does show that a vision-based approach is a viable alternative to the designs used by other current teleoperation systems.