Vybz Log

From ESE205 Wiki
Revision as of 07:18, 5 May 2018 by Jdfeher (talk | contribs) (Protected "Vybz Log" ([Edit=Allow only administrators] (indefinite) [Move=Allow only administrators] (indefinite)))


Week 1 (Jan 26th - Feb 2nd)

All Members: We brainstormed ideas and settled on a plausible project idea that would be useful in its functionality (30 min). Our goal is to create a speaker that processes the noise in a room and self-adjusts its output, maintaining the social atmosphere of the space. The professor mentioned briefly that the Raspberry Pi would be the most suitable processing unit for this type of project. With this in mind, we conducted individual and group research on the use of the Raspberry Pi, microphone-based noise input, and candidate speakers compatible with the Raspberry Pi (30 min). In developing this idea we met with our TA, Sam Chai, and the professor to further our search for preliminary materials and to refine our goals for the upcoming weeks (1 hour). During this meeting we discussed the role of an A/D converter in processing sound, the coding challenges of communication between the microphone and the Raspberry Pi, and adding more elements to our Wiki page.

Week 2 (Feb 2nd - Feb 8th)

All Members: We had our weekly meeting with Sam Chai on February 7 to discuss the progress of our project. During the meeting we had the chance to discuss our newly generated Gantt Chart, learn a possible approach to analyzing the sound input using the Fast Fourier Transform, and begin the preliminary steps of setting up our Raspberry Pi (1 hour 30 minutes). We also continued working on the project proposal, including additions to the budget items, challenges, and objectives (45 min).

Daniel: Searched online for potential A/D converters that would be compatible with the Raspberry Pi and a good overall fit for our project (30 minutes). Found the MCP3008, which can sample faster than the required 40,000 samples per second.

Benjamin: Researched microphones and speakers that are compatible with the Raspberry Pi and would fit the input and output needs of the project (1 hour). Found both a quality speaker and a microphone compatible with the Raspberry Pi, both well within our budget. Worked on the Google Slides presentation for the project proposal (45 min).

Isaac: Set up the Raspberry Pi and worked with the lab monitors (2 hours).

Benjamin and Daniel: Generated the Gantt Chart, which detailed the goals and deadlines for the project (2 hours). Assigned roles to each group member and divided tasks among the team.

Week 3 (Feb 9th - Feb 15th)

Isaac (Feb 12): Researched the Fast Fourier Transform. Found Matlab code that could potentially be useful (1 hour). (Feb 15): Found libraries that can be used from Python and are necessary for the FFT program to run: PyAudio, PySerial, and NumPy (1 hour).


All Members (Feb 13): Created project proposal in Google Slides. Drafted a rudimentary budget, objectives, and challenges, and included an updated Gantt Chart (1 hour). Met with Professor Mell to discuss the project; brought up the FFT and strategized how to apply it by adding and subtracting sine waves (30 min). Set the Raspberry Pi to autostart with VNC Viewer (30 min). Began learning Python via Codecademy.
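The adding-sine-waves strategy discussed with Professor Mell can be sketched with NumPy: build a signal by summing two sine waves, then recover the component frequencies from the FFT. The sample rate, frequencies, and amplitudes below are illustrative values, not the project's actual settings.

```python
import numpy as np

SAMPLE_RATE = 44100          # samples per second (illustrative)
DURATION = 1.0               # seconds of signal

t = np.arange(int(SAMPLE_RATE * DURATION)) / SAMPLE_RATE

# Test signal: sum of two sine waves at 440 Hz and 880 Hz.
signal = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 880 * t)

# FFT of the real signal; keep the positive-frequency half.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1.0 / SAMPLE_RATE)

# The two largest spectral peaks land at the component frequencies.
peaks = freqs[np.argsort(spectrum)[-2:]]
print([round(f) for f in sorted(peaks)])  # → [440, 880]
```

Decomposing a mixed signal back into its component frequencies like this is exactly what the team later needed for the audio spectrum analyzer.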

Ben and Daniel (Feb 14): Enabled SSH on the Raspberry Pi and connected personal laptops to the Pi via its IP address (2 hours). Configured the Pi to run a bash script that emails its IP address 30 seconds after the Pi reboots.

Link to Project Proposal: https://docs.google.com/presentation/d/1-aceL_Ulm3FMN4_c8DnqJogrqBLV7KN6ToSFz62DLEk/edit?usp=sharing

Week 4 (Feb 15th - Feb 22nd)

All members: This week we purchased all of the materials needed to progress into the next stage of our project (30 minutes). Each group member allocated time to learning the Python programming language (2 hours each). We met with Sam on Wednesday and discussed the next steps in hooking up our microphone to the Raspberry Pi (1 hour). At the same meeting, we had the chance to catch up with Professor Mell about our progress.

Week 5 (Feb 23-Mar 2nd)

All Members: Kept learning the foundations of Python via Codecademy (1 hour 30 min). Met with Professor Mell on Wednesday, who prompted us to look for a driver to connect the Raspberry Pi to the microphone. Met with Sam on Thursday and discussed how to use the microphone with our Raspberry Pi.

Daniel: Found an audio recording software program that receives input from our microphone. Installed it on our Raspberry Pi and was able to see sound waves from our microphone (1 hour). Purchased an aux-jack-to-USB converter as well as a power hub for the speakers, as they draw too much power to be plugged directly into the Pi.

Isaac and Ben: Programmed the Raspberry Pi to turn on LED lights using the breadboard, wires, and resistors (1 hour).

Isaac: Researched code to analyze input on the Raspberry Pi, specifically the FFT code (1 hour).

Ben: Learned basic functions of Raspberry Pi that will come in handy when coding future programs on the Pi (1 hour).

Goals for Next Week: Figure out how to use the Audacity data and be able to modify it. Find out how to record audio from the microphone continuously.

Week 6 (Mar 3-Mar 9)

All Members: Worked on learning the basics of Python with Codecademy (1 hour per member).

Ben and Daniel: During a meeting with Professor Mell, learned of the Python library "sounddevice" as a way to analyze and modify the input we receive from the microphone.


Week 7 (Spring Break)

All Members: Worked on learning the basics of Python with Codecademy (2 hours/member).

Week 8 (Mar 16 - Mar 23)

All Members: We met in the lab and started working with the Fast Fourier Transform, applying it with the Pi's Python capabilities. We found online software that we could use to analyze the input from the microphone in real time (1 hour 30 minutes). We also met with Sam to discuss our project progress and the steps we should take to ensure that the FFT is working by next week (1 hour). Spent roughly 2 hours each learning Python.

Wave.JPG

Isaac: Experimented with the microphone to obtain a signal that accurately represents the volume of the noise in the room (2 hours).

Ben: Started researching taking a running average of the volume of the noise in the room, which will be compared directly to the power output of the speakers (2 hours).
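The running average Ben describes can be kept with a fixed-size buffer of recent volume readings; this is a minimal sketch, and the window size below is a placeholder rather than the value the project settled on.

```python
from collections import deque

class RunningVolume:
    """Moving average of the most recent volume readings."""

    def __init__(self, window=10):
        # A deque with maxlen drops the oldest reading automatically.
        self.samples = deque(maxlen=window)

    def add(self, volume):
        self.samples.append(volume)

    def average(self):
        return sum(self.samples) / len(self.samples)

meter = RunningVolume(window=3)
for v in (1.0, 2.0, 3.0, 4.0):
    meter.add(v)
print(meter.average())  # → 3.0 (average of the last three readings: 2, 3, 4)
```

Averaging over a short window smooths out momentary spikes, so the speaker output tracks the overall noise level rather than a single loud sample.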

Isaac and Ben: Experimented with Audacity to see if we could use its built-in FFT for our project (2 hours).

Week 9 (Mar 23 - Mar 29)

Ben and Isaac: Found multiple real-time FFT audio-analysis programs online, including Friture and PulseAudio, but were not able to get them working on our Raspberry Pi; the code would not run directly from the libraries. We are having trouble dissecting what the code means and how to make modifications if needed. Outside help is likely needed (3 hours).

Daniel: Connected the speakers (output) to the Raspberry Pi; we still need a way to change the output without doing it manually (1 hour). Helped search for FFT sound code and found helpful code to: 1) analyze microphone input, 2) apply the FFT to the input, and 3) graph the sound in a usable format (2 hours).

All Members: Met with TA Sam on Wednesday and found helpful sound libraries like Friture and PulseAudio.

Ben and Daniel: Met with Professor Mell on Friday and requested additional help with coding.

Week 10 (Mar 30 - Apr 6)

All Members:

  • We met with Professor Mell and discussed the next steps of the project in order to stay on track for a successful project (30 minutes). During the meeting, we generated ways to simplify implementing the code by first testing it on our laptops instead of the Raspberry Pi.
  • The team met to update the Wiki page with progress and to narrow our search for both audio input and output code. More specifically, we are looking into how to read the decibel level generated by our microphone and then amplify the speaker using a volume controller/equalizer (1 hour 30 minutes).
  • Met with Sam, who showed us a video on creating an audio spectrum analyzer. This processes our audio input in real time and displays the sound as a graph. The most important part is that it displays the data in real time, which is vital to our demo and project. The next step is implementing this on our Raspberry Pi and then adjusting our audio output based on the data from this spectrum analyzer (1 hour).

Ben: Worked with a Fast Fourier Transform using single tuning forks and a combination of tuning forks. I was able to better understand and visualize the workings of the FFT, along with how the algorithm could contribute to the project (3 hours).

Week 11 (Apr 6 - Apr 13)

Ben and Daniel: Met with Sam on Saturday, April 7 to work on generating an audio spectrum analyzer on a computer instead of the Raspberry Pi (1 hour). We were successful in creating the audio spectrum analyzer on a Windows computer but were unable to get it working on a MacBook. On Thursday, April 12, we went to the lab to implement the audio spectrum analyzer code on the Raspberry Pi (4 hours). While we were unable to produce a clean graph of the real-time volume capture, we were able to successfully get the data we need using the standard deviation of the samples.
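Using the standard deviation of a block of audio samples as a loudness measure, as described above, can be sketched like this; the signal parameters are illustrative, not the project's actual capture settings.

```python
import numpy as np

def block_volume(samples):
    """Loudness estimate for one block of audio samples: their standard
    deviation, which grows with the amplitude of the signal."""
    return np.asarray(samples, dtype=float).std()

# A higher-amplitude (louder) tone yields a larger standard deviation.
t = np.linspace(0, 1, 1000, endpoint=False)
quiet = 0.1 * np.sin(2 * np.pi * 440 * t)
loud = 0.9 * np.sin(2 * np.pi * 440 * t)
print(block_volume(quiet) < block_volume(loud))  # → True
```

Because audio samples are centered around zero, the standard deviation acts much like an RMS level, giving a single number per block even when no graph is available.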

All Members (Tuesday, April 10): Began working on the poster by compiling relevant information to have printed for the demo (1 hour).

All Members: Met with Sam on Sunday, April 8 to further discuss the progress of the project and work on implementing the Python code on a laptop rather than the Raspberry Pi (1 hour 30 min).

Isaac: Worked on making code written on macOS run on the Raspberry Pi on Friday, April 13 (2 hours).

Isaac: Continued working on poster (1 hour)

Python Code and Audio Spectrum Analyzer on the RPi

Week 12 (Apr 14 - Apr 20)

All Members: Met with Professor Mell to redesign the project proposal. We are now using the sound spectrum analyzer as a volume-detection system to tell when people are playing music too loudly in dorm rooms during social events. Planned a practice demonstration in the Lopata Lobby on May 19th.

Daniel: Finished creating the poster, which was approved by Sam Chai. Added pictures of the project as well as a Mission/Introduction section, a Programming Methodology section, and a Challenges/Solutions section (2 hours). Edited the project proposal in both the Project Page and the Log of the wiki. Updated objectives and challenges, and added a link to the poster on the project page (1 hour). Designed the project website (1 hour).

Isaac: Added details to poster. Made edits to both the Project Page as well as the Project Log. Prepared for presentation. (3 hours)

Ben: Created a Python program that runs the audio spectrum analyzer when a start icon is pressed. When the microphone input reaches a level loud enough to be heard outside the dorm room, the program prints "You're being too loud the RA can hear you!"; otherwise it prints "Party Smart". Developed a website to display this code in a visually appealing way. Configured our program to notify users via text message when they are being too loud (15 hours).
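The message logic Ben describes reduces to a simple threshold check on the measured volume. The cutoff below is a hypothetical placeholder; the real program would calibrate it against what can actually be heard outside the room.

```python
# Hypothetical loudness cutoff; the project's actual threshold differs.
LOUD_THRESHOLD = 0.5

def volume_message(volume):
    """Return the message the program prints for a given volume reading."""
    if volume >= LOUD_THRESHOLD:
        return "You're being too loud the RA can hear you!"
    return "Party Smart"

print(volume_message(0.8))  # → You're being too loud the RA can hear you!
print(volume_message(0.2))  # → Party Smart
```

Feeding this function the running volume measure from the spectrum analyzer is all that is needed to drive both the on-screen messages and the text-message notifications.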

Project Poster

Week 13 (Apr 21 - Apr 28)

Ben: Worked on tutorial (3 hours)

Daniel: Worked on the log and the Project page. Set up reimbursement forms and got materials checked out with Sam. (4 hours)

Isaac: Worked on Project Page. Added videos, pictures, and code from our project onto the page. (4 hours)