Guy – Research Project – Dominico

For my research project, I wanted to try making a composed piece of music, something I had never done before. Unfortunately, I didn't have access to a DAW more advanced than Audacity, due to my own poor planning and Carnival. That being said, I think the result turned out pretty well.

I was interested in using the human voice as an instrument and in seeing how far I could push it to create interesting sounds. However, I wanted to use languages I was unfamiliar with, since using English would have, for me, shifted the focus from the sound of the voice itself to its meaning. In the end, I found a recording of a man speaking a Dominican language called Lindala. I liked the sound of it so much that I decided to limit myself to exclusively this recording and try to build a piece of music out of it.

Goodnight Sweet Prince

Group Members: Steven MacDonald, Tamao Cmiral, Coby Rangel, Samir Gangwani
The Building Process:
All of us shared the responsibility of building the instrument, and we all participated in the process. None of us had much experience using Arduino, so putting the piece together was a long process. At the very start, Steven obtained the wooden plank that became the base for our instrument. We knew we wanted to attach bits and pieces to the plank and use solenoids, motors, and/or servos to create an instrument, but we weren't too sure how. We tried out a lot of things, and, to put it simply, our biggest struggle was getting them to do what we wanted: not only programming the Arduino so the motors behaved as intended, but then applying those motors to our instrument in a way that would produce a significant sound. This was a challenge for us.
We went to Home Depot and found parts to add to the plank. Of the various items we picked up, the metal sheets eventually became our main source of sound for the instrument. We built the piece by attaching two metal sheets to the plank, each connected to a servo that twisted and turned it in place. Between the sheets, we attached another servo with a metal rod connected to it. From this rod we dangled strings with little pieces of metal at the bottom. The idea was to have these swing at the metal sheets to create a unique "wobbly" sound effect. We then attached a contact mic to each sheet and hooked both up to Max; as the strings swung, the sound was picked up by the contact microphones and processed through Max for ambisonics. On the other end of the plank, we drilled a hole and inserted a motor. The idea was to have something descend from the plank and then rotate to hit the sheets as well, but this became a problem because it was too difficult to hold the object in place. Eventually we detached this item and gave it a different purpose: Moosh moosh.
Pictures of the process:
The Performance:
Our group was aware that our project/instrument did not have much value on its own and would not be able to carry much of a performance by itself. Our lack of experience with Arduino made the whole process very slow, so we knew we wanted to add more of our own. That's when the idea of creating a live performance was introduced. Our instrument reminded us of a baby mobile, so we decided to act out a sketch on the day of the performance. Tamao was able to get the air mattress he had used for his previous project, and we put together a short skit in which Tamao and Steven played the roles of father and son. The performance went very well.

Chaos | Order: a robotic musical compilation

Robot Sound Project | Arduino Theremin

Group Members | Adrienne Cassel, Amy Rosen, Patrick Miller-Gamble, Seth Glickman

Initial Brainstorming

IMG_1365      IMG_1363

Our project began with no shortage of creative, raw design ideas.  Flexing sheets of aluminum, shaking tambourines, playing an assortment of drums and percussion instruments, spinning and striking metal cylinders, throwing objects into operating blenders, motoring air pumps into buckets of water (of various sizes), and constructing a Rube Goldberg machine were all part of spirited brainstorming sessions.  Conjuring grandiose robotic visions, it would seem, was well within our collective skill set coming into the project.  Any experience or innate sense of how to build the components of these visions was unfortunately not.

Table of Initial Collected/Tested Tools

IMG_1300      IMG_1301

IMG_1302      IMG_1303

IMG_1306      IMG_1299

Use of Saw + Foot Cymbal Video

We began with a “golden spike”—a proof of concept that the four team members could together build a simple robotic musical device.  Starting with a “motor-test” patch, we removed the multi-directional code so the Arduino would spin an external motor in a single direction, at a desired speed.  To the end of the motor, we attached a liquid dropper, modified to contain a cutoff of a standard pencil connected at a perpendicular angle.  The motor and its attachments were placed inside a metal cylinder, which rang loudly as the motor spun the makeshift contraption.
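The single-direction, desired-speed control described above boils down to scaling a speed percentage into the 8-bit duty-cycle value an Arduino's `analogWrite()` expects. A minimal Python sketch of that mapping (the function name and percentage interface are illustrative, not the group's actual code):

```python
def speed_to_pwm(percent):
    """Map a desired motor speed (0-100%) to the 8-bit value
    an Arduino analogWrite() call expects (0-255).

    Hypothetical helper for illustration; the original motor-test
    patch presumably hard-coded its speed value.
    """
    if not 0 <= percent <= 100:
        raise ValueError("speed must be between 0 and 100 percent")
    return int(percent * 255 / 100)
```

With the directional code stripped out, the sketch only ever needs one such value per run, which is what made this a good first "golden spike."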

“Pencil Metal Thing”

IMG_1308      IMG_1309

IMG_1759     IMG_1882

From there we aimed a degree larger: the air pump.  Removing the motorshield, we connected the Arduino to a more robust external power supply and programmed instructions from a modified “blink” patch.  We connected an air pump found in the shop to the circuit and successfully got a degree of air pressure out of the pump.  Unfortunately, however, the air pressure was not powerful enough to blow out a candle, let alone push through a bucket of water.  Our second attempt was indeed successful: we replaced the existing pump with a powersync and connected that bottleneck to a pump capable of more significant air power.

“Air Pump”

IMG_1409      IMG_1410

IMG_1412      IMG_1413

IMG_1746      IMG_1749

IMG_1751      IMG_1756

Pump Video

“Air Pump as Sound Activator” – Movement Hitting Other Instruments

IMG_1757      IMG_1752

IMG_1755      IMG_1753

Amidst other trials, we began constructing the beginnings of a narrative to guide the preparation for our eventual performance.  We listened closely to each prototype and began to appreciate various aspects of the sounds they created.  To us, they were robots in a given space, interacting, conversing, even fighting with one another.  We designed Arduino code to operate servos at various speeds and delays, and combined these with the growing collection of other orphaned robot musicians.

“Robot Arguments”

IMG_1778     IMG_1777

Robot Argument Video

Meanwhile, one of the prototype developments exceeded our anticipation and expectations.  Using a breadboard, a light sensor and an external speaker, Adrienne constructed a system that would translate and scale light input data into a variable audible frequency.  She’d essentially created a performable Arduino-driven theremin, which quickly became the narrative denouement of the project.
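The core of the theremin described above is a scaling step: translate a light-sensor reading into an audible frequency. A minimal Python sketch of one plausible linear mapping (the 220–880 Hz range and 10-bit sensor range are assumptions for illustration; Adrienne's actual scaling is not documented here):

```python
def light_to_frequency(reading, f_min=220.0, f_max=880.0):
    """Linearly scale a 10-bit light-sensor reading (0-1023)
    to an audible frequency in Hz.

    The 220-880 Hz output range is an assumed two-octave span,
    not the actual range used in the project.
    """
    reading = max(0, min(1023, reading))  # clamp to the ADC's range
    return f_min + (reading / 1023) * (f_max - f_min)
```

On the Arduino side, the resulting frequency would drive the external speaker; shadowing the sensor with a hand sweeps the pitch, which is what makes the instrument performable.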

Theremin Video 1

Theremin Video 2

Amy designed the staging such that the Arduinos and instruments were placed on “pedestals” and highlighted as sculptural entities.  Originally, each of the four group members was going to play one of the instruments; however, after parsing and pruning a variety of performance configurations with the organic and robotic instruments, we eventually curated the setup to highlight the theremin and use our various prototypes as accompaniment.  The cables and cords were carefully strung through the “pedestal” boxes to create a clean and composed performance.

“Robot Sculptures”

IMG_1782      IMG_1784


“Robot Band”

IMG_1799      IMG_1798      IMG_1796      IMG_1795


Test of Theremin with Vocals

Final Staging + Display of Tools



The experimental piece was performed at the Hunt Library’s Media Lab on Wednesday, April 6, 2016.

Glass Solenoid Structure

Group: Breeanna Ebert, Erik Fredriksen, Kaalen Kirrene, and Sean Yang


For our piece, we wanted to create a robotic instrument that generated sound using bottles. The final design was composed of eight bottles arranged in a circle, each paired with a solenoid that would strike it. Each bottle also had a tube that delivered water into it through a device we 3D-printed; changing the water level changed each bottle's pitch, altering the pitch of the composition.

Instrument Building Process:

During brainstorming, we decided to build an instrument that was focused around having sound derived from bottles. First, we thought of using wind to blow over the bottles, but we decided to design it around having something strike the bottles instead. Our initial idea was to have a motor spin in the middle, with an arm attached to it that would strike the bottles. However, after a period of consideration, we decided that using solenoids would allow us to have more precise control of when the bottles would be struck.

After deciding on the idea, we created a platform to hold the bottles. Then we 3D-printed a device that routed water to the different bottles. This device had a stepper motor attached to it, which rotated the water tubing over various holes to deliver the water.  We cut acrylic pieces to put together a structure to hold the stepper motor and the 3D-printed device. This structure had two levels: the top level held the basin of water, which was a milk jug, and the middle one held the 3D-printed water delivery system.
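The stepper's job is to park the water tube over one of eight holes, one per bottle. A minimal Python sketch of the step calculation (the 200-steps-per-revolution resolution is a common stepper spec assumed here for illustration, not the group's actual motor, and the one-directional rotation is likewise an assumption):

```python
def steps_to_hole(target_hole, current_hole, n_holes=8, steps_per_rev=200):
    """Return the number of stepper steps needed to rotate the water
    tube from the current delivery hole to the target hole.

    n_holes=8 matches the eight bottles; steps_per_rev=200 is an
    assumed motor resolution. Rotation is always in one direction,
    wrapping around the circle.
    """
    steps_per_hole = steps_per_rev // n_holes   # 25 steps between holes
    delta = (target_hole - current_hole) % n_holes
    return delta * steps_per_hole
```

The modulo wrap means moving from hole 7 back to hole 0 costs only one hole's worth of steps rather than a near-full reverse revolution.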


The 3d Printed Water Delivery System


Wooden Platform for holding bottles


Acrylic Pieces for Structure


Final Design


Close up Of Water Delivery System

Composition and Presentation:

We focused on creating different polyrhythmic loops using Arduino, Ableton, and Max. Most of it was hard-coded in Arduino. However, a day before we were to present, the stepper motor stopped working, and we couldn't resolve the issue. The solenoids also stopped functioning properly. On the day of the performance, the code for the solenoids spontaneously began working again, but we were unable to get the stepper motor working properly.
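A polyrhythm like the ones described can be laid out as two striking voices with different periods over one shared tick clock. A small Python reconstruction of that idea (the actual loops were hard-coded in Arduino; this sketch and its two-voice shape are illustrative):

```python
from math import lcm

def polyrhythm(period_a, period_b):
    """Build one full cycle of a two-voice polyrhythm as a list of
    (solenoid_a_fires, solenoid_b_fires) booleans, one pair per tick.

    Periods 3 and 4, for example, give the classic 3-against-4
    pattern over lcm(3, 4) = 12 ticks before the cycle repeats.
    """
    length = lcm(period_a, period_b)
    return [(t % period_a == 0, t % period_b == 0) for t in range(length)]
```

On the Arduino, each tick would correspond to a fixed delay, with each solenoid fired whenever its entry in the pattern is true.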

Screenshot 2Screenshot 1


Screenshot 3Screenshot 4

Robot Folk


Guy De Bree: Instrument Design and Creation

Mark Mendell: Instrument Design, Software/Hardware

Matthew Turnshek: Sound Design, Software

Making the instruments themselves was largely Guy's responsibility. The design for the string portion of our instrument is based heavily on primitive folk instruments such as diddley bows, albeit with a tuning peg. Matt obtained the wooden frame used for the string instrument, and it happened to provide a lot of structural convenience for us. Essentially, we attached a taut string to our instrument along with a strumming and tuning mechanism, which would be controlled by the Arduino.

The glass tube used as a percussion instrument was Guy's. We decided to use it since it's a visually interesting object that produces decent sound. Initially, we wanted to use servos since they seemed like the best option for swinging some kind of mallet. Upon testing them, we found that the sound of the servos themselves also added a lot of body to the overall sound of the robot. We ended up using hot glue sticks for the mallets themselves, since they could bend, making them safer to use when hitting glass.

Some of the craftsmanship of the string portion of the instrument could have been better. Unfortunately, Guy had to fall back on hot glue and tape, which aren't generally good for heavy mechanical use in a machine like this. Fortunately, they held up this time, and it wouldn't take much more work to upgrade the construction and make it more reliable.


Next, we needed to synchronize the movements of five servos. To do this, we connected them all to an Arduino Uno and a separate 5V power supply with the help of a breadboard. The Arduino was connected via USB to a computer running Max and outputting serial data. The code on the Arduino interpreted the numbers 0–180 as degrees for the tuning peg servo, while 181–184 were commands to strike/pluck with the corresponding servo. For plucking, the Arduino would alternate between the left and right sides of the string each time it got the signal. For striking, the servo would move toward the object and back a fraction of a second later.
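The serial protocol described above can be sketched as a small dispatcher. This Python reconstruction is illustrative only: which of the codes 181–184 is the pluck command (181 here) and the exact mallet indexing are assumptions, since the write-up specifies only the 0–180 tuning range, the four strike/pluck codes, and the alternating pluck sides:

```python
class StringBotProtocol:
    """Interpret one byte of serial data from Max:
    0-180 -> tuning-peg angle, 181-184 -> strike/pluck commands."""

    PLUCK = 181  # assumed: 181 plucks; 182-184 drive the mallet servos

    def __init__(self):
        self.pluck_side = "left"

    def handle(self, value):
        if 0 <= value <= 180:
            return ("tune", value)          # degrees for the tuning peg servo
        if value == self.PLUCK:
            side = self.pluck_side          # alternate sides on each pluck
            self.pluck_side = "right" if side == "left" else "left"
            return ("pluck", side)
        if 182 <= value <= 184:
            return ("strike", value - 182)  # mallet index 0-2
        return ("ignore", value)
```

Keeping the whole command in a single byte is what lets Max drive all five servos over one plain serial stream.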


Matt designed the algorithm ultimately used to control the string and mallets. We wanted to create a sound that changed over time and showcased the interesting sounds our instrument was capable of producing. As such, we focused on changing tempo frequently, heavy and light sections, and twangy string sounds.

The algorithm switched between three phases with different weights and average tempos in Markov Chain fashion, with equal likelihood to enter each phase from each other phase. Each phase had a different tempo range and likelihood at each beat for a mallet to swing or string to be strummed or tuned.
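The phase structure described above can be sketched in a few lines of Python. The phase names, tempo ranges, and per-beat probabilities below are invented for illustration; only the structure (three phases, equal transition likelihood, independent per-beat event rolls) comes from the write-up:

```python
import random

# Illustrative phases: tempo ranges in BPM plus per-beat event probabilities.
PHASES = {
    "sparse":  {"tempo": (40, 70),   "p_mallet": 0.2, "p_pluck": 0.3, "p_tune": 0.10},
    "driving": {"tempo": (100, 140), "p_mallet": 0.7, "p_pluck": 0.5, "p_tune": 0.05},
    "twangy":  {"tempo": (70, 100),  "p_mallet": 0.3, "p_pluck": 0.6, "p_tune": 0.40},
}

def next_phase(current, rng=random):
    """Markov transition: equal likelihood of entering either other phase."""
    return rng.choice([p for p in PHASES if p != current])

def beat_events(phase, rng=random):
    """Roll each event's probability independently for one beat."""
    cfg = PHASES[phase]
    return {name[2:]: rng.random() < cfg[name]
            for name in ("p_mallet", "p_pluck", "p_tune")}
```

Because each phase pairs its own tempo range with its own event weights, switching phases changes both how fast and how densely the instrument plays, which is what produced the heavy and light sections.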


The strange acoustic noises our instrument produced, and the methodical yet sporadic way it delivered them, gave some of our listeners the impression of music from an exotic culture. This, along with the intimidating robotic visuals, led us to dub our project ‘Robot Folk’.

fish fugue

Ticha Sethapakdi: Concept, Software and Hardware Design
Kyoko Inagawa: Sound Design, Performance
Melanie Kim: Sound/Set Design, Experience Design

Fish Fugue employs computer vision to enable a soloist to perform with a goldfish-controlled toy piano accompaniment. A webcam mounted above the goldfish tracks the fish with Processing code, while an Arduino dictates the notes played on a toy piano. As the goldfish moves to a different quadrant of the tank, the melody changes to reflect the fish’s position.

GitHub repository here.

The circuit diagram:


Arduino code (click on each to view in detail):

Processing code (click on each to view in detail):


There are eleven solenoids connected to eleven keys on the toy piano (D, high D, E, high E, G, high G, A, high A, B, high B, high C). The Processing code divides the webcam feed into four sections, and the Arduino “plays” notes based on which section the fish is in. We therefore composed the four accompaniment parts using only these eleven notes, and made sure they would flow into one another in case the fish moved erratically between quadrants. The performer sees on the monitor which quadrant the fish is in and improvises her solo to best fit the accompaniment; the monitor effectively becomes her musical score. The performance lasts three minutes, after which the code is set to stop playing the piano.
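The quadrant lookup at the heart of the system is a simple comparison against the frame's midlines. A Python sketch of it (the 640×480 frame size and the row-major, 0 = top-left numbering are assumptions for illustration; the actual Processing implementation is in the GitHub repository):

```python
def fish_quadrant(x, y, frame_w=640, frame_h=480):
    """Map the fish's tracked (x, y) pixel position to a quadrant
    index 0-3, numbered row-major with 0 at the top-left.

    Frame size and numbering scheme are assumed, not taken from
    the project's actual Processing code.
    """
    col = 0 if x < frame_w / 2 else 1
    row = 0 if y < frame_h / 2 else 1
    return row * 2 + col
```

Each index then selects one of the four composed accompaniment parts for the Arduino to play.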



The set evokes playfulness with the childlike and minimalist colors. We strove to hide most of the “scary” guts of the circuits and electronics, as they would distract from the performance. One of the feelings we wanted to impart was a “miniature concert.”


Special thanks to Fred the Fish for being such a hardworking swimmer and Jesse for providing us the solenoids.

The Performance

Kaalen Kirrene, Kabir Manthra, Mark Mendell


The main concept for the story was to portray the frustrations that come with learning a new instrument. Kaalen and Kabir were both playing “Heart and Soul” on instruments they had no experience with apart from learning how to make basic sound. We also wanted to play with the audience’s expectations and make it ambiguous whether missteps in the performance were intentional or not. For this, we took inspiration from a Mr. Show sketch called The Audition. The only unplanned misstep was the buffering of the videos which interfered with the pacing of the ending. Our general timeline was that Kaalen and Kabir would get an awkward start interrupted by feedback; this would make Kaalen and then Kabir look up a YouTube tutorial while Mark fixed the feedback; they would then sound like a good performance of Heart and Soul until it was revealed to be a lipsync, at which point their true sour notes would cut through; from here towards the end, things got more chaotic: the sour note looped, more and more effects were applied to the instruments, and a malevolent YouTube tutorial started criticizing their playing.

Audio Processing

For the live processing, Mark used a Max patch with some basic granular synthesis, reverb, and a looper. The gain on the French horn was turned up at the beginning to cause feedback. The looper was turned on right when the lipsync finished, so that the true sour note would play over and over. Granular synthesis and reverb made the instruments sound strange as things went crazy.

Kaalen created the “haywire insults” part, i.e., all of the effects on the audio where the YouTube tutorials were yelling at them, and he did all of it in Audacity. We started by ordering the insults to increase in intensity over time, shifting from insults about their playing to insults focused more on their character. Once they were organized, Kaalen started playing around with different effects by copying the original tracks and applying effects to the copies. He wanted the intensity of the effects to mirror the intensity of the insults, so in the beginning he used only pitch shifting, which sounded creepy without being overwhelming. From then on he mostly used different combinations of pitch shifting and sliding pitch shifting. For the parts with an echo, he would copy the sound, pitch-shift it, and play it slightly later than the original, giving it that creepy ascending echo; he also added some preverb and reverb for a ghostly sound. At the end, he pitch-shifted Brooke’s (the girl’s) voice up very high and then took smaller and smaller cuts of her sound until it became that annoying ringing sound. For Christian at the end, Kaalen wanted a lower sound to contrast the high-pitched ringing, so he didn’t pitch-shift him; he just took cuts of his voice. The final effect he added was a 250 Hz sinusoid, a pitch that (according to the internet) causes nausea and headaches, which increased in volume to add to the tension of the insults. For that, he took inspiration from Gaspar Noé’s Irréversible, whose entire opening sequence uses a similar pitch to cause discomfort in the audience.
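The copy-and-offset echo technique described above (duplicate a track, attenuate it, and mix it back in slightly later) can be sketched directly on raw samples. A minimal Python illustration, with the pitch-shift step omitted for simplicity and all names hypothetical:

```python
def delayed_copy_echo(samples, delay, gain=0.6):
    """Mix a delayed, attenuated copy of a track over the original.

    samples: list of float audio samples
    delay:   offset of the copy, in samples
    gain:    attenuation applied to the copy (0.6 is an arbitrary choice)
    """
    out = list(samples) + [0.0] * delay   # room for the copy's tail
    for i, s in enumerate(samples):
        out[i + delay] += s * gain
    return out
```

In Audacity the same operation is done by duplicating the track, applying the pitch shift and gain to the duplicate, and dragging it later on the timeline.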


The video component of our project was the part closest to the heart of our concept. Since the rise of YouTube, videos have become one of the most ubiquitous forms of media. One could also say they are one of the most democratic, since almost everyone (in the US) has at least one video camera and anyone can upload to YouTube. This perceived democratization leads to a greater trust in the authenticity of the content, which is why tutorial videos work in the first place: “if they can do it, so can I.” But this isn’t necessarily the case, as we’ve seen countless times. Many everyday, ‘authentic’ vloggers turn out to be professional setups designed to trap audiences (this has become a problem on Instagram as well). Our videos explore the almost surreal extent to which video media, and YouTube in particular, can control our self-perception as well as the standards we compare ourselves to.

The actual process of making the videos happened in two stages. The first was shooting the videos with our actors in front of a green screen. We did this in four different takes: one for each of the instructional videos at the start, one for the second instructional video, and one containing only criticism. The script was fairly loose; we planned out the start and end of each video as well as major cues, such as when Brooke reacts to Kabir playing a note. The rest was improvised.
The second stage was the video editing. The instructional videos were the easiest: Kabir just needed to find believable locations for them to be set in. The aesthetic he was going for was believable but slightly fake, foreshadowing the chaos about to ensue. For the second video, the effect we were going for was more uncontrolled and surreal: the video takes on a life of its own, and our tools end up controlling us.