Revision as of 11:14, 27 April 2019


Weekly log: Laser_Harp_Weekly_Log

How-to tutorial: Playing multiple sounds at once

GitHub: [https://github.com/ESE205/laser-harp.git laser-harp repository]

Presentation slide show: File:Laser Harp.pdf



Although instruments have come a long way from their origins, they still have room to grow. Inspired by the transition from acoustic to electronic instruments, the laser harp strives to introduce a new way of experiencing music: playing musical scales by touching rays of light. Since the harp offers different keys and scales, the system must be programmed carefully so the user can operate the instrument easily. Moreover, the sensors must be properly installed and synced with disturbances in the lasers' paths for the project to succeed.

Team Members

  • Taylor Howard
  • Jennifer Fleites
  • Yoojin Kim
  • TA: Chance Bayles
  • Instructor: Jim Feher


Objectives

  • Learn how to use the Raspberry Pi and Python.
  • Build a circuit connecting the laser diodes, photoresistors, and LEDs to the Raspberry Pi.
  • Build a frame for the harp through woodworking.
  • Determine which notes or sounds are feasible based on execution of code.
  • Create code to run on the Raspberry Pi (this includes code for determining when a note is played, turning on LEDs and playing sounds as notes are played, increasing volume as a note is held, playing back a composition that the player wishes to record, and uploading that composition to an AWS server).



Challenges

  • Writing and understanding code that is executed in the different cases that occur when a laser is tripped
  • Reliably aligning lasers with photoresistors
  • Cord management



Materials

  • Buttons
  • Wood
  • Analog-to-digital converter
  • Wires
  • Raspberry Pi


Budget

Tax: $3.65

Total purchased: $18.45

Gantt Chart


Proposal Presentation

Design and Solutions


Determining when a laser beam breaks

The amount of light that the photoresistor senses is converted to a digital value using an analog-to-digital converter (ADC). Using this light value, the system determines whether the laser beam has been interrupted by checking whether the value lies within a certain range. If a beam has been triggered, the sound associated with that beam is played.
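The threshold check above can be sketched as a small helper. This is a sketch, not the project's actual code: the ADC read is represented by a stand-in callable, and the threshold value is an assumed 10-bit reading that would need calibration against the real photoresistors.

```python
# Sketch of the beam-break check. The threshold is a hypothetical
# 10-bit ADC reading and would need calibration on real hardware.

BEAM_THRESHOLD = 600  # assumed reading when the laser fully hits the sensor

def is_beam_broken(light_value, threshold=BEAM_THRESHOLD):
    """A blocked laser leaves the photoresistor darker, so the
    reading falls below the calibrated threshold."""
    return light_value < threshold

def poll_beams(read_channel, num_beams=8):
    """Return the indices of beams currently broken.
    read_channel(i) is a stand-in for the real ADC read."""
    return [i for i in range(num_beams) if is_beam_broken(read_channel(i))]
```

On the real harp, `read_channel` would be replaced by a call into the ADC driver, one channel per photoresistor.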

Playing different notes concurrently

pygame.mixer.Channel() and pygame.mixer.Sound were used in the code so that multiple notes could be played at once. With this method, different sounds are routed to different channels, where they can be played independently without interfering with each other.
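A minimal sketch of this channel-per-note idea follows. The wav file names are placeholders, and the one-channel-per-note mapping is an assumption about how the channels were assigned; pygame is imported inside the playback function so the mapping helper runs without audio hardware.

```python
# Sketch: each note gets its own pygame Channel, so playing one
# note does not cut off another. File names are hypothetical.

NOTE_FILES = ["c4.wav", "d4.wav", "e4.wav"]  # placeholder wav files

def channel_for(note_index):
    # Assumed scheme: one dedicated channel per note keeps sounds independent.
    return note_index

def play_note(note_index):
    import pygame  # local import keeps the helper above testable without audio
    pygame.mixer.init()
    sound = pygame.mixer.Sound(NOTE_FILES[note_index])
    pygame.mixer.Channel(channel_for(note_index)).play(sound)
```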

Final Codes

Recording the composition

The composition is created by recording the audio output. Using PulseAudio and the subprocess module, the system records what is sent to the headphones. If the user wishes to record their composition, they press the record button, and as each note or collection of notes is played, the produced sounds are added to the recording. When the user ends the recording, the finalized composition is uploaded to an AWS server, where the user can access the file online through a static IP address on another device.
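One way this PulseAudio-plus-subprocess approach might look is sketched below. The monitor source name is an assumption (the real one can be listed with `pactl list sources short`); recording from a `.monitor` source captures whatever PulseAudio is playing back.

```python
# Sketch of recording the audio sent to the headphones via PulseAudio.
# The monitor source name below is an assumed placeholder.

import subprocess

MONITOR_SOURCE = "alsa_output.platform-soc_audio.analog-stereo.monitor"  # assumed

def build_record_command(out_path, source=MONITOR_SOURCE):
    # parecord captures from a PulseAudio source; a .monitor source
    # carries the audio being played back.
    return ["parecord", "--device", source, out_path]

def start_recording(out_path):
    # Launch the recorder in the background; keep the handle to stop it later.
    return subprocess.Popen(build_record_command(out_path))

def stop_recording(proc):
    proc.terminate()
    proc.wait()
```

A record-button press would call `start_recording("composition.wav")`, and a second press would call `stop_recording` before the upload step.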

Codes for recording

Uploading the composition to an AWS server

We first wrote code using requests to upload the files to different folders on the local host. Once we succeeded in writing code that would ask the server to find a file and upload it to a different folder, we moved the code to an online host using AWS Lightsail, a cheap and easy hosting service, to upload the recorded files to a static IP address associated with our AWS account.
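The upload step might be sketched as below. The host IP (a documentation-range placeholder, not the project's real Lightsail address), the endpoint path, and the multipart-POST server behavior are all assumptions; the third-party `requests` library is imported locally so the URL helper runs without it.

```python
# Sketch of uploading a recording to a Lightsail-hosted endpoint.
# Host, path, and server behavior are hypothetical.

UPLOAD_HOST = "http://203.0.113.10"   # placeholder static IP
UPLOAD_PATH = "/upload"               # hypothetical endpoint

def upload_url(host=UPLOAD_HOST, path=UPLOAD_PATH):
    return host.rstrip("/") + path

def upload_recording(file_path):
    import requests  # local import keeps upload_url usable without the dependency
    with open(file_path, "rb") as f:
        resp = requests.post(upload_url(), files={"file": f})
    resp.raise_for_status()  # surface HTTP errors instead of failing silently
    return resp
```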

AWS Codes

Website to access the recording

Building the Frame

We sketched the case in SolidWorks, where proper measurements were made for the lengths and angles needed to cut the plywood used in the final design. After the wood was cut, the parts were drilled and fastened together and the wiring was installed. In wiring the lasers and photoresistors to the Raspberry Pi, the lasers were connected to the power supply provided by the Pi so that they could turn on, and the photoresistors were connected to the analog-to-digital converter so that the amount of light hitting them could be read and used to execute the code programmed into the Pi.


How Results Compare to Original Objectives

For the most part, all objectives were met: the case was sketched and built, the wiring was properly executed, and the code was able to handle multiple notes being played at once. The objectives that were not met include altering the volume as the user holds a note, a switch that lets the user choose which scale to play, and LEDs that blink when a note is played.

Limitations that affected the result

Throughout the project, the main factor that delayed our progress was figuring out how to play notes concurrently. Because of this delay, recording the composition and altering the volume became harder to implement.

To set the volume, it would have been best to create Sound objects and set the volume of those objects to a certain value. However, the way we implemented playback was to call the pygame mixer's play function directly on a wav file without keeping a reference to a Sound object. Given the structure of our main code, it was therefore difficult to change the volume without continuing to explore other ways to play the sound. A good alternative would have been to start with a wave function, where the sound files would result from altering that function; this might have made it easier to change the volume while calling the function used to play multiple sounds.

Installing blinking LEDs would have been possible within our timeline, but when we initially planned whether to install them, we decided they were an "extra" feature to consider if time remained at the end. Based on that decision, we designed the harp frame without LED lights. When reconsidering at the end, we concluded it was best to leave the lights out: they would not fit aesthetically with the frame, since we would have had to drill additional holes, which might also have affected our wiring and the setup of the Pi.
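The Sound-object alternative described above might look like the sketch below. The starting volume, ramp rate, and file name are assumptions; pygame's `Sound.set_volume` takes a value from 0.0 to 1.0, and the import is kept local so the ramp helper runs without audio hardware.

```python
# Sketch of volume that grows while a note is held, using a Sound
# object rather than a direct play call. Ramp parameters are assumed.

def held_volume(seconds_held, start=0.4, rate=0.2, cap=1.0):
    """Volume grows linearly while the beam stays broken, capped at full volume."""
    return min(cap, start + rate * seconds_held)

def play_with_ramp(wav_path, seconds_held):
    import pygame  # local import keeps held_volume testable without audio
    pygame.mixer.init()
    sound = pygame.mixer.Sound(wav_path)
    sound.set_volume(held_volume(seconds_held))  # pygame volumes run 0.0-1.0
    sound.play()
    return sound
```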

Playing Notes Concurrently

This part of the project took the most iterations. To get multiple notes to play at once, we attempted combinations of pydub, pygame.audio, pygame.mixer.music, multiprocessing, multithreading, and swmixer before finally settling on pygame.mixer.Channel() and pygame.mixer.Sound. Pydub and pygame.audio were somewhat successful at playing notes individually, but the run time needed to execute the lines of code responsible for playing a note introduced a lag in the sound produced, making these options less viable. To fix the lag, we tried pygame.mixer.music; the lag was resolved, but only one note could be played at a time. We also found that pygame.mixer.Sound was better at supporting multiple simultaneous sounds, so we replaced pygame.mixer.music with pygame.mixer.Sound when finalizing the code.

Working toward the goal of playing notes concurrently, we tried multiprocessing and multithreading. Multiprocessing came first, since it made sense for the sounds to play completely independently of each other. However, when executing the code, we observed that multiprocessing would not produce the commanded sounds because the sounds were overwriting each other. We then tried multithreading because, unlike processes, threads can communicate with each other, and we believed the sounds would no longer overwrite each other. Despite this change, the notes could still only be played one at a time. Multithreading is more widely used in Java than in Python, so the programming language we used may have been a limitation. To get notes to play concurrently, we also experimented with another library, swmixer, instead of pygame.mixer, as swmixer claimed to allow sounds to play concurrently. The notes did play concurrently, but we could not stop a sound after removing a finger from the laser. The sound also became distorted and no longer resembled the original wav file, which made the produced notes unpleasant to hear.

Code using multiprocessing

Code using multithreading and swmixer

After looking at the pygame library in more depth, we discovered pygame.mixer.Channel(). Implementing pygame.mixer.Channel() allowed multiple sounds to play at once by routing each sound to its own channel instead of forcing them all into the same one, as was the case with multithreading. Placing pygame.mixer.Channel() and pygame.mixer.Sound within the if statements corresponding to each case where a beam is triggered allowed the code to execute properly. Even with the sounds playing concurrently, there is sometimes buffering in the audio, and there is still some lag between playing a note with a finger and hearing the sound. However, given the constraints of the programming language and equipment we used, we believe the lag is unavoidable, and based on our extensive experiments with various combinations of libraries and methods, using .Channel() is the best approach for playing notes concurrently.
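The per-beam dispatch described above can be sketched as follows. This is an illustration, not the project's actual code: the wav names are placeholders, the beam-state reader is a stand-in for the ADC poll, and the state diff is factored into a pure helper so it can run without pygame.

```python
# Sketch: each broken beam plays on its own channel; releasing a
# beam stops that channel. Hardware reads and files are placeholders.

NOTE_FILES = ["c4.wav", "d4.wav", "e4.wav", "f4.wav"]  # hypothetical

def next_actions(prev_broken, now_broken):
    """Pure diff of beam states: which notes to start, which to stop."""
    start = sorted(now_broken - prev_broken)
    stop = sorted(prev_broken - now_broken)
    return start, stop

def run_loop(read_broken_beams):
    import pygame  # local import so next_actions stays testable
    pygame.mixer.init()
    sounds = [pygame.mixer.Sound(f) for f in NOTE_FILES]
    prev = set()
    while True:
        now = read_broken_beams()            # e.g. the ADC threshold poll
        start, stop = next_actions(prev, now)
        for i in start:
            pygame.mixer.Channel(i).play(sounds[i])   # new beam broken: play
        for i in stop:
            pygame.mixer.Channel(i).stop()            # beam released: silence
        prev = now
```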

Code using channels

Recording the Composition

To keep track of the notes being played, the composition was initially recorded by concatenating files as they were played. However, as different methods were implemented so that multiple notes could play concurrently, concatenation became more complex. Therefore, after pygame.mixer.Channel() was implemented, the composition was created by a different method: recording the audio output. Using PulseAudio, the system records what is sent to the headphones, and when the user ends the recording, the composition is sent to an AWS server where the user can access it from whatever device they wish.

Next Steps

  • A switch that would allow the user to choose between different scales.
  • A change of volume as the user holds a note.
  • LEDs that light up when a note is played.
  • Multiple buttons where recordings could be saved and replayed as they are touched.