Assignment 2: Sensor Data

Part 1: Making Maps

Our map discretizes the world into a grid of cells. The relation between the grid and the continuous world is defined by five parameters:
  • origin_x and origin_y - the world coordinates of the bottom-left corner of the grid
  • resolution - the side length of each (square) grid cell, in world units
  • size_x and size_y - the number of grid cells in each dimension

Python Only

Coordinate Conversion

Write code for converting between world coordinates and grid coordinates.
def to_grid(x, y, origin_x, origin_y, size_x, size_y, resolution):
Convert the coordinate (x, y) (a pair of floats in world coordinates) by returning the pair of integer coordinates of the grid cell that contains it. If the point falls outside the defined grid, the function should return None.

def to_world(gx, gy, origin_x, origin_y, size_x, size_y, resolution):
Convert the coordinate (gx, gy) (a pair of integers in grid coordinates) by returning a pair of float coordinates representing the center of that grid cell in world coordinates.
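
A minimal sketch of one possible implementation, assuming cell (0, 0) sits at the origin corner and using floor so that points just below the origin do not truncate into the grid:

import math

def to_grid(x, y, origin_x, origin_y, size_x, size_y, resolution):
    gx = int(math.floor((x - origin_x) / resolution))
    gy = int(math.floor((y - origin_y) / resolution))
    if 0 <= gx < size_x and 0 <= gy < size_y:
        return (gx, gy)
    return None

def to_world(gx, gy, origin_x, origin_y, size_x, size_y, resolution):
    # The + 0.5 offsets move from the cell's corner to its center.
    x = origin_x + (gx + 0.5) * resolution
    y = origin_y + (gy + 0.5) * resolution
    return (x, y)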

Bresenham's Algorithm

Write code that implements Bresenham's algorithm in assignment02/src/assignment02/geometry.py, given the function definition:
def bresenham(x0, y0, x1, y1):
It should return a list of coordinate tuples covering every cell on the line between the two endpoints, inclusive. For example:
print bresenham(2,2,4,2)
[(2, 2), (3, 2), (4, 2)]
You may assume that the inputs to the function are integer coordinates. Your implementation can be tested with assignment02/src/line_test.py.
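
A sketch of the standard integer error-accumulation variant; any formulation that visits every cell on the line, endpoints included, should pass the test:

def bresenham(x0, y0, x1, y1):
    # Classic integer Bresenham, handling all octants.
    points = []
    dx = abs(x1 - x0)
    dy = abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx - dy
    while True:
        points.append((x0, y0))
        if x0 == x1 and y0 == y1:
            break
        e2 = 2 * err
        if e2 > -dy:
            err -= dy
            x0 += sx
        if e2 < dx:
            err += dx
            y0 += sy
    return points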

ROS Python

Map Maker v1

Fill in the details of the class in assignment02/src/assignment02/map_maker.py to create a map from Odometry and LaserScan messages.

For version one of the map maker, you can assume that the origin of the laser is at the coordinates indicated by the odometry. Whenever you receive a laser reading, use the most recent odometry message (and the Python functions above) to clear out the empty space in the map (by setting those cells to 0) and mark the occupied cells (with 100). Every cell of the map is initially -1 (unknown).

The map is built in an OccupancyGrid message. Instead of a two-dimensional grid, the data is stored in a one-dimensional vector (OccupancyGrid.data). You can translate a pair of integer grid coordinates into an index into this vector with the to_index(x, y, size_x) function in geometry.py.
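
OccupancyGrid.data is laid out row by row, so to_index presumably reduces to the usual row-major flattening (check the provided geometry.py for its exact definition):

def to_index(x, y, size_x):
    # Row-major: all of row 0, then all of row 1, and so on.
    return y * size_x + x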

There are two ways to test your code. For version 1, download this bag file. You can process the logged data without running roscore by running assignment02/src/make_map.py. This version will print out an ASCII version of the map. Alternatively, you can run assignment02/src/ros_map.py to make the map with ROS. This allows you to also run rosrun rviz rviz -d assignment02/map.rviz which will display the laser data as well as the map as you build it.

Note that you should not mark a cell as occupied if the reported range is greater than or equal to range_max (from the LaserScan message); such readings mean the sensor detected nothing within its maximum range, not an obstacle.
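
Putting the pieces together, the per-scan update might look roughly like the sketch below. Names such as process_scan, self.grid, and self.origin_x are assumptions about how you might structure the class, not part of the provided code:

import math

def process_scan(self, scan, rx, ry, rtheta):
    # (rx, ry, rtheta) is the robot pose from the latest Odometry message.
    start = to_grid(rx, ry, self.origin_x, self.origin_y,
                    self.size_x, self.size_y, self.resolution)
    if start is None:
        return
    for i, r in enumerate(scan.ranges):
        angle = rtheta + scan.angle_min + i * scan.angle_increment
        hit = r < scan.range_max            # at/above range_max: no return
        d = min(r, scan.range_max)
        end = to_grid(rx + d * math.cos(angle), ry + d * math.sin(angle),
                      self.origin_x, self.origin_y,
                      self.size_x, self.size_y, self.resolution)
        if end is None:
            continue
        for gx, gy in bresenham(start[0], start[1], end[0], end[1])[:-1]:
            self.grid[to_index(gx, gy, self.size_x)] = 0    # free space
        if hit:
            self.grid[to_index(end[0], end[1], self.size_x)] = 100  # obstacle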

You can also change the resolution of the map with the -r argument. By default it is 0.5 meters; try 0.25 and 0.05 as well.

Map Maker v2

The second bag file is made with a slightly different robot configuration, where the laser scanner is offset from the odometry coordinates by a small translation and rotation. Hence, you need to transform each laser reading from the laser scanner's frame ('base_laser_link') into the robot's frame ('base_link'). This can be done by creating a PointStamped message:
from geometry_msgs.msg import PointStamped

p = PointStamped()
p.header.frame_id = '/base_laser_link'
p.point.x = x_coordinate  # relative to sensor
p.point.y = y_coordinate

# Returns a PointStamped where the point is relative to
# the frame /base_link
p2 = self.transformer.transformPoint('/base_link', p)

When testing with the second bag file, use the same scripts as above, but add the -p2 flag, which changes the size of the grid and inserts the offset.
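
Only the front of the v1 computation changes: convert each range reading to a point in the laser's own frame, transform it into base_link, and proceed as before. A sketch, under the same naming assumptions as above:

import math
from geometry_msgs.msg import PointStamped

def reading_in_base_link(self, scan, i):
    # Hypothetical helper: returns reading i of the scan in the base_link frame.
    d = min(scan.ranges[i], scan.range_max)
    angle = scan.angle_min + i * scan.angle_increment
    p = PointStamped()
    p.header.frame_id = '/base_laser_link'
    p.point.x = d * math.cos(angle)   # point in the laser's own frame
    p.point.y = d * math.sin(angle)
    p2 = self.transformer.transformPoint('/base_link', p)
    return p2.point.x, p2.point.y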

Part 2: Data in THREEEEEEE DEEEEEEEEEEEEE

Data Collection

For this portion of the assignment, you will need to collect Kinect data in the lab. Rather than using the Turtlebots, you will use a calibrated Kinect connected to the computer ariel (front row, farthest to your left when you walk in the door of the lab).

To start the Kinect, run the command roslaunch cse550_kinect kinect.launch. This starts the Kinect driver and a visualization. After that, run (in a separate terminal) the command rosrun rqt_reconfigure rqt_reconfigure, navigate to camera/driver, and ensure that the depth_registered box is checked.

You need to record three messages for each set of data. These will be stored in a ROS bag file. The command to do this is rosbag record -l 1 /camera/rgb/image_rect_color /camera/depth_registered/hw_registered/image_rect_raw /camera/depth_registered/points -O NAME
(This records one message (-l 1) from each of the three topics and writes the output to NAME.bag.)

You will create at least three bag files. The first is just the background, with no people in the frame. Then, for EACH GROUP MEMBER, create one bag file of the person standing with their arms out to the side and another of the person in some different pose. These will be used in this and later labs.

You can make the files smaller by running the command rosbag compress *.bag. Upload or email these files to the professor as part of your final submission.

Data Processing

Your goal for this section is to segment the background out of the data so that only the person remains. You can test on your own data, or you can use these test bag files.

The first step is simple image subtraction, implemented by extending assignment02/src/subtract_image.py. The script should read in three parameters: the background bag file, the target bag file, and the name of an image file to write the difference image to. Your algorithm should compare the colors of corresponding pixels and, if they are similar enough, replace that pixel with a blue pixel. Your subtraction need not be perfect but should isolate most of the person. Note that the Image message in Python stores its data as a string, which makes it hard to manipulate individual elements, so the script includes code to convert it to an array of integers and back.
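
The core comparison might look roughly like this sketch, which assumes the pixel data has already been converted to a flat list of ints with three bytes per pixel; the threshold value and the channel order of the blue replacement depend on the actual image encoding:

def subtract(background, target, width, height, threshold=30):
    # background and target are flat lists of ints, 3 bytes per pixel.
    out = list(target)
    for i in range(0, width * height * 3, 3):
        diff = (abs(background[i] - target[i]) +
                abs(background[i + 1] - target[i + 1]) +
                abs(background[i + 2] - target[i + 2]))
        if diff < threshold:
            out[i:i + 3] = [0, 0, 255]   # similar to background: paint it blue
    return out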

The second step is to subtract one PointCloud2 from another in assignment02/src/subtract_points.py. The script should read in two source bag files and the name of a bag file to write the resulting PointCloud2 to. Your algorithm should iterate over the points, compare the colors of and distances between corresponding points, and remove points that are similar.
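
One way to structure the comparison, assuming both clouds come from the same fixed camera so that point i in one cloud corresponds to point i in the other. The distance threshold is a placeholder to tune, NaN readings fail the comparison and are dropped, and the color comparison the assignment also asks for would extend the same loop:

from sensor_msgs import point_cloud2

def subtract_points(background_cloud, target_cloud, threshold=0.05):
    kept = []
    bg = point_cloud2.read_points(background_cloud, skip_nans=False)
    tg = point_cloud2.read_points(target_cloud, skip_nans=False)
    for b, t in zip(bg, tg):
        # Each point is a tuple beginning with (x, y, z, ...).
        dist = abs(b[0] - t[0]) + abs(b[1] - t[1]) + abs(b[2] - t[2])
        if dist > threshold:             # the point moved: keep it
            kept.append(t)
    return point_cloud2.create_cloud(target_cloud.header,
                                     target_cloud.fields, kept)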

Grading

  • to_grid - 5 points
  • to_world - 5 points
  • bresenham - 5 points
  • Map Maker v1 - 30 points
  • Map Maker v2 - 15 points
  • Data Collection - 10 points
  • subtract_image - 10 points
  • subtract_points - 20 points
Due Date: March 6, 2015, 11:59pm CST.