CubeSat Sub-Project: Image Selection and Compression

This project tackles the image processing component of the CubeSat project, in particular the selection of valuable image data and the effective compression of that data.

The Project

Project Timeline: ISC_Timeline.docx

Project Poster: UGR_cubsat2_poster_SP11.ppt

Project Report

Project abstract

The goal of this project is to examine the performance of different lossless compression algorithms and determine which would be most suitable for a CubeSat-based communication system. Because the CubeSat has high data collection potential but relatively low data transmission throughput, the downlink is a key bottleneck for overall system performance, making effective compression essential. We select an optimal subset of the images generated by the IR camera mounted on the CubeSat and compress that subset. The compression algorithms we investigated and tested are run-length encoding, difference encoding, and Huffman coding. Initially, we implement these algorithms in MATLAB; later, we will use the datasets developed in the first part of the CubeSat project.

Background

The CubeSat is a micro-satellite that is relatively cheap to launch and provides great research opportunities for universities and other institutions operating on a tight budget. Being micro-satellites, CubeSats are naturally small devices (10 cm × 10 cm × 10 cm). While this is one of the factors that reduces cost, it also significantly limits the communication capabilities of the device. Because a CubeSat can only stay in space for a few weeks before burning up in the atmosphere upon reentry, an effective communication link is key to retrieving valuable research data. Currently the maximum bandwidth is limited by the hardware to approximately 240 Mb per day. To put this in perspective, a small IR camera capturing images from the CubeSat could easily generate 240 Mb of images in a few minutes. This is a roadblock to effective use of the CubeSat platform and can be addressed on both the hardware and software sides.
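
As a rough sanity check of that budget, the sketch below estimates how many uncompressed frames fit into one day of downlink. The image resolution and bit depth are assumptions for illustration only, not project specifications:

  % Rough downlink budget estimate (assumed camera parameters, not project specs).
  width  = 640;            % pixels (assumed)
  height = 480;            % pixels (assumed)
  bpp    = 16;             % bits per pixel (assumed)

  bitsPerImage = width * height * bpp;   % ~4.9 Mb per raw frame
  dailyBudget  = 240e6;                  % ~240 Mb/day, taking Mb as megabits

  fprintf('Raw frame: %.1f Mb, uncompressed frames per day: ~%d\n', ...
          bitsPerImage/1e6, floor(dailyBudget/bitsPerImage));

Under those assumptions, only about 48 raw frames fit in the daily budget, which is why lossless compression matters here.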

Overview of compression methods

  • Run-length encoding (RLE): takes advantage of repeats in the data; it stores a data element followed by an encoding element stating how many times that element repeats (a MATLAB sketch of this and of difference encoding follows this list).
  • Variable run-length encoding (VRLE): similar to RLE, but adds one bit after each data element indicating whether an encoding element follows; this saves space when the data contain few repeats.
  • Difference encoding (DIFF): takes advantage of small differences between neighboring values; it stores one data element as a seed value, and each following value is represented by an encoding element holding the difference between the current value and the previous one.
  • Huffman coding (Huffman): uses the probability distribution of the data values to construct a uniquely decodable map of variable-length bit codes, one per data value; the more frequent values get short codes and the less frequent values get long codes.
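
As a concrete illustration of the first and third methods, here is a minimal MATLAB sketch of run-length and difference encoding on a 1-D vector of pixel values (saved as function files, or as local functions in a script). Function names are illustrative; this is not the project's source code (see the zip file under "Work completed" for that).

  % Run-length encoding: each run of identical values becomes (value, run length).
  function [vals, runs] = rle_encode(x)
      starts = [true, diff(x) ~= 0];             % logical mask marking run starts
      vals   = x(starts);
      runs   = diff([find(starts), numel(x) + 1]);
  end

  function x = rle_decode(vals, runs)
      x = repelem(vals, runs);                    % expand each value by its run length
  end

  % Difference encoding: keep the first sample as a seed, then store successive deltas.
  function [seed, deltas] = diff_encode(x)
      seed   = x(1);
      deltas = diff(x);
  end

  function x = diff_decode(seed, deltas)
      x = cumsum([seed, deltas]);                 % rebuild the signal from seed + deltas
  end

For example, rle_encode([5 5 5 7 7 2]) returns vals = [5 7 2] and runs = [3 2 1], while diff_encode of the same vector returns seed = 5 and deltas = [0 0 2 0 -5].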


Work completed

The following image compression algorithms have been implemented and tested for accuracy in MATLAB:

  • run-length encoding
  • variable run-length encoding
  • difference encoding
  • Huffman coding

Source code (zipped): Cubesat_compression_source_SP11.zip

Note: the code is decently commented, but it may still be a bit confusing. For simplicity, run the script called "Compression test". It prompts the user for an image to process and, given a valid filename, performs an array of compression tests; after finishing, it displays the results of the operations graphically.


Problems encountered

The main problems I ran into had to do with using MATLAB. It seems that MATLAB needs to store data as contiguous arrays, with little support for other abstract data types. Because of the high memory overhead of the algorithms, particularly Huffman, I would often run out of memory when working with a large pixel bit size or a large image. I never found a good solution to this problem, but I expect it to be alleviated in a different language. Another problem I encountered was MATLAB's "native" tree structure support, which I found hard to work with and somewhat lacking in functionality (in particular, it was hard to build a tree from the bottom up, which Huffman coding requires). My solution was to write a small tree class function library.
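
For reference, here is one way to build Huffman codes bottom-up in MATLAB without an explicit tree object, by treating each node simply as the set of leaf symbols beneath it. This is a sketch of the general technique under that representation, not the tree class library written for this project:

  function codes = huffman_codes(probs)
      % Returns a cell array of bit strings, one code per symbol, given the
      % symbol probabilities. Each "node" is just the indices of its leaves.
      n      = numel(probs);
      codes  = repmat({''}, 1, n);
      groups = num2cell(1:n);          % start with one node per symbol
      p      = probs(:)';

      while numel(groups) > 1
          [~, order] = sort(p);        % pick the two least probable nodes
          a = order(1);  b = order(2);
          for i = groups{a}, codes{i} = ['0', codes{i}]; end   % prefix 0 onto one subtree
          for i = groups{b}, codes{i} = ['1', codes{i}]; end   % prefix 1 onto the other
          groups{a} = [groups{a}, groups{b}];                  % merge b into a
          p(a)      = p(a) + p(b);
          groups(b) = [];
          p(b)      = [];
      end
  end

The symbol probabilities can come straight from the image histogram, e.g. something like p = histcounts(double(img(:)), 0:256) / numel(img) for an 8-bit image.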

Experimental setup

To test how effective each algorithm was at compressing a variety of images, I selected a few random pictures and ran each compression algorithm on every one, with varying encoding element sizes. I then looked at which algorithm had the best compression performance, that is, the smallest compression ratio (compressed size / original size). I also ran the algorithms on images generated by code from Michael Scholl; those images are meant to be somewhat representative of the kind of images we expect to receive.
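
A sketch of that comparison, reusing the RLE and difference encoders from the earlier sketch, is shown below. The filename, pixel depth, and encoding element size are illustrative; the actual experiments varied the encoding element size and used the project's own implementations.

  img = imread('test_image.png');        % any grayscale test image (name is illustrative)
  x   = double(img(:))';                 % flatten to a 1-D pixel vector

  pixBits = 8;                           % assumed bits per original pixel
  encBits = 4;                           % encoding element size under test (varied in the experiments)

  [vals, runs]   = rle_encode(x);        % from the sketch above
  [seed, deltas] = diff_encode(x);

  origBits = numel(x) * pixBits;
  rleBits  = numel(vals) * pixBits + numel(runs) * encBits;  % data elements + run lengths
  diffBits = pixBits + numel(deltas) * encBits;              % seed + deltas (overflow handling ignored)

  fprintf('RLE  ratio (compressed/original): %.3f\n', rleBits  / origBits);
  fprintf('DIFF ratio (compressed/original): %.3f\n', diffBits / origBits);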


Results

Compression performance varied from image to image, but Huffman coding was most consistently the best in terms of raw compression. However, as stated before, its memory and processing overhead was much higher: while RLE could execute in fractions of a second, Huffman would take many seconds. Furthermore, for optimum compression performance with Huffman there is also a communication overhead, since the code table built for each image must be sent along with the data, and this can render any compression benefit useless. (See the poster PowerPoint for example results.)
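
To make that communication overhead concrete, the sketch below adds a rough code-table cost to the Huffman payload for a small image tile. All numbers are assumptions for illustration, not measured results from this project:

  nPixels    = 64 * 64;              % small image tile (assumed)
  rawBits    = nPixels * 8;          % 8-bit pixels (assumed)
  avgCodeLen = 6.2;                  % average Huffman code length in bits (assumed)
  nSymbols   = 256;                  % 8-bit alphabet (assumed)
  tableBits  = nSymbols * (8 + 5);   % rough per-entry cost: symbol + code length (assumed scheme)

  payloadBits = nPixels * avgCodeLen;
  fprintf('Without table: %.3f   with table: %.3f (compressed/original)\n', ...
          payloadBits / rawBits, (payloadBits + tableBits) / rawBits);

With those assumed numbers the ratio worsens from about 0.78 to about 0.88, and for smaller tiles or per-image code tables the table cost can erase the savings entirely.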


Future work

  • Analyze the expected time complexity and space complexity of the algorithms.
  • Implement the algorithms in a more general-purpose language like C.
  • Get the code running on actual hardware and examine factors like power consumption, processing speed, etc.

Related Works

  1. CubeSat - Image modeling and selection (Michael Scholl, Phani)
  2. CubeSat Projects (Michael, Alex G., Andrew)
  3. Saint Louis University CubeSat Project (http://cubesat.slu.edu)