Adityan Srinivasan, Joan Davis and Munzir Zafar
Project Abstract:
We aim to teach a humanoid robot sword-fighting. The robot we intend to use is Krang, in Mike's Golem lab. Our goal is to develop a simulation of this behavior using the software codebase for dynamic visualization currently being developed by GVU at Georgia Tech. We would like to see the work extended to Krang after the simulation is developed this semester.
Related Work:
- G. Welch and G. Bishop, "An Introduction to the Kalman Filter", Technical Report, University of North Carolina, Chapel Hill, 1995.
This report helped us learn and implement the Kalman filter for our prediction algorithm. We used the filter to estimate the sword's position and orientation at a future time step. The filter's recursive estimation is carried out by updating the error covariance matrix; the covariance we used depends on the measured sword length.
- S. Lee and Y. Kay, "A Kalman Filter Approach for Accurate 3-D Motion Estimation from a Sequence of Stereo Images", CVGIP: Image Understanding, Vol. 54, No. 2, pp. 244-258, 1991. http://dx.doi.org/10.1109/ICPR.1990.118073
In order to force the Kalman filter to converge quickly, we used the technique described in this paper for motion estimation from stereo images.
Proposed Work:
- Planning Challenges
- Sword fighting is an interesting planning problem from a human-robot interaction perspective. Robots can be made to fight each other with a finite set of attack plans and default responses to those attacks. Humans, however, are unpredictable, and a clever opponent will choose an immediate move as part of a plan that looks several steps into the future. The robot therefore has to incorporate an accurate prediction algorithm and respond appropriately.
- Proposed Solutions
- Our solution consists of three steps: 1) tracking the sword, 2) predicting the sword's motion, and 3) responding to the predicted move (a sketch of this pipeline appears at the end of this section). The robot will have default responses to common sword attack trajectories and will have to 'learn' new response maneuvers and add them to its default list. The response itself can be divided into three aspects: determining where to start the response trajectory, determining whether the default response will be sufficient, and determining when contact has been made.
- Implementation Challenges
- We used the Robot Operating System (ROS), together with OpenNI and the Kinect camera, for tracking and predicting the sword, and Gazebo for simulation. An initial sword-tracking algorithm and a prediction algorithm were implemented in MATLAB. After testing them on image sequences, we converted them to C++ so that they could be integrated into the ROS environment. Our initial idea was to use the Skeleton Tracking package available in OpenNI to track the sword, but this algorithm confused the sword with the hand holding it. Since the package was not open source, we decided to implement our own sword-tracking algorithm. Another implementation challenge was finding and adapting a robot simulation in Gazebo for our purposes by adding a sword to the robot arm.
- Proposed Solutions
- We will develop our own sword-tracking algorithm for the project. We will test it in MATLAB first and then port it to ROS as C++ programs. We will also try out the Pioneer, PR-2, and Shadow Robot arm simulations in Gazebo and determine which works best for our project.
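As noted in the Proposed Solutions above, the overall flow is a track/predict/respond loop. A minimal sketch of that loop is given below; the class and function names are hypothetical and purely illustrative, not taken from our codebase.

```cpp
// Hypothetical skeleton of the track -> predict -> respond loop.
// Names and the 0.3 s look-ahead are illustrative, not our real parameters.
#include <iostream>

struct SwordState {
    double x = 0, y = 0, z = 1;    // estimated tip position (m, camera frame)
    double vx = 0, vy = 0, vz = 0; // estimated tip velocity (m/s)
};

// Stage 1: extract the current sword tip from the latest sensor data.
SwordState trackSword() {
    SwordState s;                  // placeholder for the real tracker output
    return s;
}

// Stage 2: extrapolate the state dt seconds ahead (a Kalman filter in the full system).
SwordState predictSword(const SwordState& s, double dt) {
    SwordState p = s;
    p.x += s.vx * dt;
    p.y += s.vy * dt;
    p.z += s.vz * dt;
    return p;
}

// Stage 3: choose a default parry for the predicted attack, or flag it as a
// new trajectory whose response must be learned and added to the default list.
void respond(const SwordState& predicted) {
    std::cout << "parry toward (" << predicted.x << ", "
              << predicted.y << ", " << predicted.z << ")\n";
}

int main() {
    for (int cycle = 0; cycle < 10; ++cycle) {      // one planning cycle per frame
        SwordState now = trackSword();
        SwordState future = predictSword(now, 0.3); // look 0.3 s ahead
        respond(future);
    }
    return 0;
}
```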
Timeline
| Week | General Task | Adityan | Joan | Munzir |
|---|---|---|---|---|
| Week 1 | ROS and literature review | Do literature review of Mike's paper | Get familiar with ROS | Get familiar with ROS |
| Week 2 | Learn to use the Kinect with OpenNI and ROS | Learn to use the Kinect with OpenNI and ROS | Learn to use the Kinect with OpenNI and ROS | Learn to use the Kinect with OpenNI and ROS |
| Week 3 | Try the Skeleton Tracking package for sword tracking | Skeleton tracking for sword detection | Skeleton tracking for sword detection | Skeleton tracking for sword detection |
| Week 4 | PCL for image tracking | Learn to use PCL with OpenNI and ROS for feature extraction | Learn to use PCL with OpenNI and ROS for feature extraction | Learn to use PCL with OpenNI and ROS for feature extraction |
| Week 5 | MATLAB image tracking | Sword tracking in MATLAB | Literature review on prediction techniques | Sword tracking in MATLAB |
| Week 6 | C++ image tracking with OpenCV | Learn OpenCV; convert MATLAB image tracking to C++ | Learn OpenCV; literature review on Kalman-filter prediction | Learn OpenCV; convert MATLAB image tracking to C++ |
| Week 7 | Prediction in MATLAB and C++ | Convert prediction to C++ | Prediction in MATLAB | Convert prediction to C++ |
| Week 8 | Gazebo PR-2 simulation; integrate sword tracking, prediction, and the Gazebo response simulation | Learn to use Gazebo with ROS; publish and request sword-tracking points to and from Gazebo | Learn to use Gazebo with ROS; publish and request sword-tracking points to and from Gazebo | Learn to use Gazebo with ROS; publish and request sword-tracking points to and from Gazebo |
| Week 9 | Gazebo simulation of response; report and presentation | Gazebo simulation of response | Presentation and report | Presentation and report |
Week 1
We decided that our final project in RIP would be implementing a planning strategy for sword-fighting. This choice was inspired by the following work: Strategic Chess: Dynamic Planning for Robot Motion. To get started, we decided to learn ROS and do a literature review of the paper to see how our approach should differ from it.
Week 2
The Microsoft Kinect was chosen as our vision system for tracking the sword because the OpenNI library for the Kinect is integrated with ROS, and the stack has a good support community and well-documented code. We began learning how to use the OpenNI package to obtain depth images and point clouds in RViz, and we learned how to publish information from ROS nodes.
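A minimal example of the kind of node we wrote while getting familiar with the stack is sketched below: it subscribes to the Kinect point cloud published by the OpenNI driver and reports the cloud size. The topic name assumes the default launch configuration and may differ on other setups.

```cpp
// Minimal ROS node: subscribe to the Kinect point cloud and log its size.
#include <ros/ros.h>
#include <sensor_msgs/PointCloud2.h>

void cloudCallback(const sensor_msgs::PointCloud2ConstPtr& msg) {
    // Organized clouds from the Kinect arrive as width x height grids.
    ROS_INFO("cloud: %u x %u points", msg->width, msg->height);
}

int main(int argc, char** argv) {
    ros::init(argc, argv, "kinect_listener");
    ros::NodeHandle nh;
    // Topic name assumes the default OpenNI launch files.
    ros::Subscriber sub = nh.subscribe("/camera/depth/points", 1, cloudCallback);
    ros::spin();
    return 0;
}
```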
Week 3
There is a Skeleton Tracking package that uses the OpenNI interface in ROS. We used this to track the movements of the human holding the sword. Unfortunately, the skeleton-tracking algorithm confused the sword with the human arm holding it. Since the code was not open source, we could not modify it for our project. After searching for several alternatives and contacting several people in the field, we decided that we would need to write our own algorithm for tracking the sword.
Week 4
We began learning the Point Cloud Library thinking that we might be able to get the sword points from the point cloud in the vicinity of the arm. But when we tried to implement this, there were several issues and we had to abandon this approach. We were back to devising our own sword-tracking algorithm.
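For reference, the sketch below shows the kind of point-cloud cropping we attempted with PCL before abandoning the approach: keep only the points in a depth band around the arm and treat them as candidate sword points. The field name and depth limits are placeholders, not our actual parameters.

```cpp
// Sketch of PCL pass-through filtering to crop the cloud to a region of interest.
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/filters/passthrough.h>

pcl::PointCloud<pcl::PointXYZ>::Ptr
cropToSwordRegion(const pcl::PointCloud<pcl::PointXYZ>::Ptr& cloud) {
    pcl::PointCloud<pcl::PointXYZ>::Ptr cropped(new pcl::PointCloud<pcl::PointXYZ>);
    pcl::PassThrough<pcl::PointXYZ> pass;
    pass.setInputCloud(cloud);
    pass.setFilterFieldName("z");    // depth axis in the camera frame
    pass.setFilterLimits(0.5, 1.5);  // placeholder band in front of the arm
    pass.filter(*cropped);
    return cropped;
}

int main() {
    pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
    cropToSwordRegion(cloud);        // empty input yields an empty cropped cloud
    return 0;
}
```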
Week 5
We implemented the sword-tracking algorithm in MATLAB. We were able to obtain the start points and end points of the sword. But the start points were erroneous in many cases. So we decided to find the centroid of the sword and then find the central (x,y,z) coordinates in the sword's depth image. This was more accurate. We also began to look into prediction techniques for sword motion.
Week 6
We converted the sword-tracking code from MATLAB to C++. The image-processing functions were implemented in C++ using OpenCV.
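A simplified version of the centroid step from Week 5, as it looks in C++ with OpenCV, is sketched below. It assumes a binary sword mask has already been segmented from the frame (the hard part, which is not shown), and the demo data in main is purely synthetic.

```cpp
// Compute the sword centroid in image coordinates and look up its depth.
#include <opencv2/opencv.hpp>
#include <iostream>

bool swordCentroid(const cv::Mat& swordMask,   // CV_8U, nonzero = sword pixel
                   const cv::Mat& depth,       // CV_32F depth image (meters)
                   cv::Point3f& centroid) {
    cv::Moments m = cv::moments(swordMask, true);
    if (m.m00 < 1.0) return false;             // no sword pixels found
    int u = static_cast<int>(m.m10 / m.m00);   // centroid column
    int v = static_cast<int>(m.m01 / m.m00);   // centroid row
    centroid = cv::Point3f(static_cast<float>(u),
                           static_cast<float>(v),
                           depth.at<float>(v, u));
    return true;
}

int main() {
    // Synthetic mask and depth image just to exercise the function.
    cv::Mat mask = cv::Mat::zeros(480, 640, CV_8U);
    cv::rectangle(mask, cv::Rect(300, 100, 10, 200), cv::Scalar(255), -1);
    cv::Mat depth = cv::Mat::ones(480, 640, CV_32F);
    cv::Point3f c;
    if (swordCentroid(mask, depth, c))
        std::cout << "centroid: " << c << std::endl;
    return 0;
}
```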
Week 7
We implemented a prediction algorithm in MATLAB using a Kalman filter. This was then converted to C++ so that it could be integrated into ROS.
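The sketch below shows a constant-velocity Kalman filter of the kind used for the prediction, written here with OpenCV's cv::KalmanFilter for brevity. The noise covariances are placeholders and do not reflect our tuning (which ties the covariance to the measured sword length, as described under Related Work).

```cpp
// Constant-velocity Kalman filter over the sword tip: state (x,y,z,vx,vy,vz),
// measurement (x,y,z). Noise values are placeholders.
#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    const float dt = 1.0f / 30.0f;              // roughly the Kinect frame period
    cv::KalmanFilter kf(6, 3, 0);

    kf.transitionMatrix = (cv::Mat_<float>(6, 6) <<
        1, 0, 0, dt, 0,  0,
        0, 1, 0, 0,  dt, 0,
        0, 0, 1, 0,  0,  dt,
        0, 0, 0, 1,  0,  0,
        0, 0, 0, 0,  1,  0,
        0, 0, 0, 0,  0,  1);
    cv::setIdentity(kf.measurementMatrix);      // we observe position only: H = [I3 | 0]
    cv::setIdentity(kf.processNoiseCov, cv::Scalar::all(1e-3));
    cv::setIdentity(kf.measurementNoiseCov, cv::Scalar::all(1e-2));
    cv::setIdentity(kf.errorCovPost, cv::Scalar::all(1.0));

    // One predict/correct cycle with a dummy tip measurement (meters).
    cv::Mat measurement = (cv::Mat_<float>(3, 1) << 0.1f, 0.2f, 1.0f);
    cv::Mat predicted = kf.predict();           // a priori estimate
    kf.correct(measurement);                    // fuse the new measurement
    std::cout << "predicted tip: " << predicted.rowRange(0, 3).t() << std::endl;
    return 0;
}
```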
Week 8
We began to learn how to use Gazebo in ROS to simulate the robot's sword responses to the human opponent's moves. We needed to choose a robot-arm simulation for our project, so we tried the PR-2, Shadow Robot Arm, and Pioneer 5-DOF arm packages. The PR-2 had the best-documented simulation package for Gazebo, so we decided to use it. However, the package had many functionalities that were not relevant to our project, so to increase the speed of response we stripped down the URDF file to keep only what we needed.
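The publishing side of the hand-off to the simulation might look like the sketch below: a node that publishes the predicted sword tip for the simulated arm's controller to consume. The topic name, message type, and frame are our assumptions, not the actual interface.

```cpp
// Publish the predicted sword tip so a Gazebo-side controller can subscribe to it.
#include <ros/ros.h>
#include <geometry_msgs/PointStamped.h>

int main(int argc, char** argv) {
    ros::init(argc, argv, "sword_prediction_publisher");
    ros::NodeHandle nh;
    ros::Publisher pub =
        nh.advertise<geometry_msgs::PointStamped>("/sword/predicted_tip", 10);

    ros::Rate rate(30);                       // roughly the Kinect frame rate
    while (ros::ok()) {
        geometry_msgs::PointStamped tip;
        tip.header.stamp = ros::Time::now();
        tip.header.frame_id = "camera_link";  // assumed sensor frame
        tip.point.x = 0.1;                    // placeholder prediction
        tip.point.y = 0.2;
        tip.point.z = 1.0;
        pub.publish(tip);
        rate.sleep();
    }
    return 0;
}
```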
Week 9
We will complete the response algorithm and integration with the sword-tracking and prediction. We will also work on the report and the presentation.