Longboard sensor module log
This week we began physical planning and code/data planning. Additionally, we created our BOM and Gantt chart.
Physical planning: deciding where the parts will go in the box, what size the box should be, and where the box should be mounted on the board.
Code/Data Planning: We started planning how much data we were going to capture, where to store it, and how to interpret it. We also started looking into communication protocols.
We continued our project planning. The majority of this was finalizing the parts we were going to use and making sure all of the parts were compatible with each other. Another constraint for the parts was size: we tried to find the smallest and most accurate (while still cheap) version of each part, so we could ensure good data while keeping a small form factor on the board. The final (and possibly most important) constraint for the sensors was that they had to be compatible with the Raspi. While writing our own library for a sensor is possible, it would take away from time we could spend making the final result of the project better overall.
We've started developing the CAD model of all of the components for a box that holds the sensors and the Raspi. This started with modeling each individual component that will go in the box, so that the enclosure can accurately be built around them. The model also has to take into account the restrictions of 3D printing.
We've ordered all the components and have started working on learning the libraries necessary to communicate between the Raspi and the external sensors.
Additionally we've prepared our presentation for the class: https://uploadfiles.io/ibkna
Files are now backed up on Danny Andreev's git repo under a larger electric longboard project. All electrical components were CADed.
<a href="https://github.com/lolomolo/LongboardMarkII/tree/master/Ev%20Sensor%20Module">Git Repo</a>
The components are fully and accurately modeled in our CAD file for the box.
The code planning is almost complete and ready to be started next week. We've decided to take 3 samples/second from every sensor and store them in a local SQL DB. We are aware that some of the sensors will provide the equivalent of "null" data for some of those samples, so this approach will allow us to "smooth" or average the data to ~1 data point/second for the user to view. Finally, we will use a Python library to chart the data locally on the Raspi and view it over HDMI. This will allow us to quickly iterate through different sample rates and smoothing algorithms and ultimately decide what data the user will have access to.
A very basic case for the box is complete. Needs some cosmetic touch-ups and a few more port holes before printing.
The code stubs and database have been made. The code still needs communication protocols implemented to the sensors and then we can begin reading data and testing.
Box model is complete, and the bottom portion of it has been printed. The electronics are all soldered together, tested, and working. The Raspi is set up, and a local SQL server is running on it. Database input and output via Python have been built and tested.
Now we just need to read in sensor data and store it to the server. I2C output (accelerometer) to the Raspi is up and running. Once UART (GPS) is in, we can start adding data points to the DB. The code is currently on the Raspi, which doesn't interact well with WUSTL WiFi, but it will be on GitHub (and linked here) ASAP.
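The I2C read itself needs the hardware attached, but the decoding step is pure arithmetic and can be sketched. This assumes a hypothetical 16-bit accelerometer register pair (big-endian bytes, ±2 g full scale at 16384 LSB/g, as on common MEMS parts); the actual datasheet values may differ:

```python
# Hypothetical decoding of a 16-bit accelerometer axis read over I2C as two
# register bytes. Scale factor (16384 LSB/g for a ±2 g range) is an assumption;
# check the sensor's datasheet.

def to_signed16(high: int, low: int) -> int:
    """Combine two register bytes into a signed 16-bit (two's-complement) value."""
    value = (high << 8) | low
    return value - 65536 if value & 0x8000 else value

def raw_to_g(high: int, low: int, lsb_per_g: int = 16384) -> float:
    """Convert a raw register pair into acceleration in g."""
    return to_signed16(high, low) / lsb_per_g

print(raw_to_g(0x40, 0x00))  # 16384 counts -> 1.0 g
print(raw_to_g(0xC0, 0x00))  # -16384 counts -> -1.0 g
```

On the Raspi the two bytes would come from a bus read (e.g. via the `smbus2` library) before being fed through `raw_to_g` and inserted into the DB.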
Additionally NumPy is now on the RasPi but we have to switch the RasPi OS to one with a GUI to get the graph output we desire.
The Raspi can read data over I2C and UART from the sensors. The data needs to be distilled and the relevant information displayed on a UI. The model for the enclosure has been updated slightly and needs to be reprinted.
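Part of that distilling is turning the GPS module's raw UART output into usable coordinates. GPS receivers typically emit NMEA sentences; a minimal parser for the common `$GPGGA` fix sentence might look like this (the sample sentence is a standard illustrative one, not captured data):

```python
# Minimal parser for the NMEA $GPGGA sentence a UART GPS typically emits.
# Converts ddmm.mmmm latitude / dddmm.mmmm longitude into signed decimal degrees.

def nmea_to_decimal(value: str, hemisphere: str) -> float:
    """Convert NMEA degrees+minutes plus an N/S/E/W flag into decimal degrees."""
    dot = value.index(".")
    degrees = float(value[:dot - 2])   # digits before the two-digit minutes
    minutes = float(value[dot - 2:])   # mm.mmmm
    decimal = degrees + minutes / 60.0
    return -decimal if hemisphere in ("S", "W") else decimal

def parse_gpgga(sentence: str):
    """Return (latitude, longitude) from a $GPGGA sentence."""
    fields = sentence.split(",")
    return (nmea_to_decimal(fields[2], fields[3]),
            nmea_to_decimal(fields[4], fields[5]))

example = "$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47"
lat, lon = parse_gpgga(example)
print(lat, lon)  # roughly 48.117, 11.517
```

In practice the sentence would be read line-by-line from the UART (e.g. with `pyserial`) and the parsed fix written to the DB alongside the accelerometer samples.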
The issues with the SPI interface were resolved. GPS, accelerometer, and barometer data were successfully gathered. A basic program was finished which pulls images of Google Maps through the Google Maps API and overlays the GPS locations onto the picture. This will be further revised and the GUI updated.
The data is processed, and using the Google Maps API we can compile an image which shows the path, velocity, and position during the travel of the device.
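The overlay step boils down to projecting each GPS fix onto the fetched map image. A sketch of that projection, using the Web Mercator math Google's map tiles are based on (the center coordinates, zoom, and image size below are illustrative parameters, not the project's actual values):

```python
# Project a GPS fix to a pixel position on a static map image centered at a
# known lat/lon and zoom, using standard Web Mercator tile math.
import math

TILE = 256  # Google map tile size in pixels at zoom 0

def mercator(lat: float, lon: float):
    """Project lat/lon into Web Mercator 'world' coordinates (one 256px tile)."""
    x = TILE * (0.5 + lon / 360.0)
    siny = math.sin(math.radians(lat))
    y = TILE * (0.5 - math.log((1 + siny) / (1 - siny)) / (4 * math.pi))
    return x, y

def to_pixel(lat, lon, center_lat, center_lon, zoom, width, height):
    """Pixel position of (lat, lon) on a width x height map image."""
    scale = 2 ** zoom
    px, py = mercator(lat, lon)
    cx, cy = mercator(center_lat, center_lon)
    return (width / 2 + (px - cx) * scale,
            height / 2 + (py - cy) * scale)

# Sanity check: the map's own center lands in the middle of the image.
print(to_pixel(38.648, -90.305, 38.648, -90.305, 15, 640, 640))  # (320.0, 320.0)
```

Once each fix maps to a pixel, drawing the path is just connecting successive points on the image (e.g. with Pillow), and velocity can be encoded as line color.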