The Performance

Kaalen Kirrene, Kabir Manthra, Mark Mendell


The main concept for the story was to portray the frustrations that come with learning a new instrument. Kaalen and Kabir were both playing “Heart and Soul” on instruments they had no experience with, apart from learning how to make a basic sound. We also wanted to play with the audience’s expectations and make it ambiguous whether missteps in the performance were intentional. For this, we took inspiration from a Mr. Show sketch called “The Audition.” The only unplanned misstep was the buffering of the videos, which interfered with the pacing of the ending. Our general timeline was as follows: Kaalen and Kabir would get off to an awkward start interrupted by feedback; this would prompt Kaalen, and then Kabir, to look up a YouTube tutorial while Mark fixed the feedback; they would then sound like a polished performance of “Heart and Soul” until it was revealed to be a lipsync, at which point their true sour notes would cut through; from there to the end, things would get more and more chaotic: the sour note looped, more and more effects were applied to the instruments, and a malevolent YouTube tutorial started criticizing their playing.

Audio Processing

For the live processing, Mark used a Max patch with basic granular synthesis, reverb, and a looper. The gain on the French horn was turned up at the beginning to cause feedback. The looper was switched on right as the lipsync finished so that the true sour note would repeat over and over. Granular synthesis and reverb made the instruments sound increasingly strange as the performance descended into chaos.
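The behavior of such a looper can be sketched in a few lines. This is only an illustration of the idea (the actual effect was built in Max; the class and method names here are made up): while recording, input passes through and is captured; once the loop is closed, the captured buffer replays indefinitely.

```python
# Minimal looper sketch (hypothetical names; the real looper lived in a Max patch).
class Looper:
    def __init__(self):
        self.buffer = []
        self.recording = True
        self.pos = 0

    def process(self, sample):
        if self.recording:
            self.buffer.append(sample)
            return sample                      # pass input through while recording
        out = self.buffer[self.pos]
        self.pos = (self.pos + 1) % len(self.buffer)
        return out                             # replay the captured loop

    def close_loop(self):
        """Stop recording; subsequent calls replay the buffer (the 'sour note')."""
        self.recording = False
        self.pos = 0
```

Closing the loop right as the lipsync ends means everything after that point hears only the captured material, which is exactly how the sour note kept returning.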

Kaalen created the “haywire insults” section, i.e., all of the effects on the audio while the YouTube tutorials were yelling at the performers, and he did all of it in Audacity. We started by ordering the insults so that their intensity increased over time, shifting from insults about the playing to insults focused more on the performers’ character. Once they were organized, Kaalen began experimenting with effects by copying the original tracks and applying effects to the copies. He wanted the intensity of the effects to mirror the intensity of the insults, so at the beginning he used only pitch shifting, which sounded creepy without being overwhelming. From there he mostly used different combinations of fixed and sliding pitch shifts. For the echo sections, he would copy the sound, pitch shift the copy, and play it slightly later than the original, giving it that creepy ascending echo. He added some preverb (reversed reverb) and reverb for a ghostly quality. Near the end, he pitch shifted Brooke’s (the girl’s) voice up very high, then took smaller and smaller cuts of the sound until it became that annoying ringing. For Christian at the end, Kaalen wanted a lower sound to contrast with the high-pitched ringing, so he did not pitch shift him and instead only took cuts of his voice. The final effect was a 250 Hz sinusoid, which (according to the internet) is a pitch that can cause nausea and headaches, increasing in volume to add to the tension of the insults. For that he took inspiration from Gaspar Noé’s Irréversible, whose opening sequence uses a similar tone to cause discomfort in the audience.
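The pitch-shifted echo can be sketched in a few lines. This is not the actual Audacity project; the naive resampling shift and every parameter below are illustrative. The copy is shifted up by resampling (which also shortens it) and mixed back in slightly after the original, producing the ascending echo.

```python
# Sketch of the pitch-shifted echo; ratio, delay, and gain are made-up values.
def pitch_shift(samples, ratio):
    """Naive resampling pitch shift: ratio > 1 raises pitch (and shortens)."""
    n = int(len(samples) / ratio)
    return [samples[int(i * ratio)] for i in range(n)]

def echo_up(samples, ratio=1.5, delay=4, gain=0.5):
    """Mix a pitch-shifted copy of `samples` back in `delay` samples later."""
    copy = pitch_shift(samples, ratio)
    out = list(samples) + [0.0] * max(0, delay + len(copy) - len(samples))
    for i, s in enumerate(copy):
        out[delay + i] += gain * s
    return out
```

Repeating the same trick on each echoed copy, each time shifted a little higher, is what gives the ascent its unsettling quality.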


The video component of our project was the part closest to the heart of our concept. Since the rise of YouTube, videos have become one of the most ubiquitous forms of media. One could also say they are one of the most democratic, since almost everyone (in the US) has at least one video camera and anyone can upload to YouTube. This perceived democratization leads to greater trust in the authenticity of the content, which is why tutorial videos work in the first place: “if they can do it, so can I.” But this isn’t necessarily the case, as we’ve seen countless times. Many everyday, ‘authentic’ vloggers turn out to be professional setups to trap audiences (this has become a problem on Instagram as well). Our videos explore the almost surreal extent to which video media, and YouTube in particular, can control our self-perception as well as the standards we compare ourselves to.

The actual process for making the videos happened in two stages. The first was shooting the videos with our actors in front of a green screen. We did this in four takes: one for each of the instructional videos at the start, one for the second instructional video, and one containing just criticism. The script was fairly loose: we planned out the start and end of each video as well as major cues, such as when Brooke reacts to Kabir playing a note. The rest was improvised.
The second stage was the video editing. The instructional videos were the easiest; Kabir just needed to find believable locations to set them in. The aesthetic he was going for was believable but slightly fake, foreshadowing the chaos about to ensue. For the second video, the effect we wanted was more uncontrolled and surreal; the video takes on a life of its own, and our tools end up controlling us.



A Performance by Adrienne Cassel, Breeanna Ebert, and Erik Fredriksen


The music, composed primarily by Adrienne, was built around the idea of transitioning between natural and mechanical soundscapes. Each section was based around a certain feel, which influenced the choice of effects and lighting. The beginning motif was the first thing written; many of the remaining motifs were improvised as a group and then written down into a structure. Below is an initial sketch of the score, followed by Adrienne and Breeanna on guitar and kalimba, respectively. The kalimba was tuned to an E minor scale.

[Photos: initial score sketch; Adrienne on guitar; Breeanna on kalimba]


Recordings, all made by Breeanna, were used throughout the piece. Since the theme was the contrast between natural and mechanical sounds, we recorded both machines and animals.

The mechanical sounds were recorded in the Hamerschlag Mechanical Engineering lounge, and included:

  • Refrigerator
  • Heat Exchanger
  • Thermalfins Fan
  • Radiation and Convection tube
  • Sink

Natural Sounds recorded included:

  • A tortoise eating
  • A turtle eating
  • A stool being pulled across the floor

The recordings were made with an omnidirectional Zoom recorder and were cut to an appropriate length in Audacity.


We built a Max patch to run the guitar and kalimba through, to act as a hub for controlling the volumes of the recorded sounds, and to control the lights. We operated the patch live with TouchOSC, running on an iPad Mini handled by Erik. Below we describe the patch itself and the technical process of shaping the sounds.

Full Picture

This is an image of the full patch, which can be divided into six parts: Ambisonics, Guitar, Kalimba, Munger1 (Granular Synthesis), Sounds, and Lights.

TouchOSC


This was the interface Erik used during the performance. The lights were controlled with the sliders at the top right, the sounds were cued with the toggles in the top center, the 1D sliders controlled the wet/dry mix of the effects on the guitar and kalimba, and the 2D sliders controlled the spatial placement of the instruments.



We used the Ambisonics library patch that we were given in class to control where each sound sat spatially in the Hunt Media Lab’s 8.1 speaker system. TouchOSC was hooked up to the first two inputs (the guitar and kalimba, respectively) so that Erik could control their spatial position in real time. The other inputs were fixed in position.
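The basic idea of placing a source by azimuth in a speaker ring can be illustrated with a simple equal-power pan law. This is only a sketch under assumed speaker positions (speaker i at i × 45°); the actual patch used the Ambisonics library, which encodes sources quite differently.

```python
import math

def ring_gains(azimuth_deg, n_speakers=8):
    """Equal-power pan between the two speakers adjacent to the azimuth."""
    spacing = 360.0 / n_speakers
    pos = (azimuth_deg % 360.0) / spacing        # fractional speaker index
    lo = int(pos) % n_speakers
    hi = (lo + 1) % n_speakers
    frac = pos - int(pos)
    gains = [0.0] * n_speakers
    gains[lo] = math.cos(frac * math.pi / 2)     # equal-power crossfade
    gains[hi] = math.sin(frac * math.pi / 2)
    return gains
```

Because the crossfade is equal-power (the squared gains always sum to 1), sweeping the 2D slider moves a source around the ring without the loudness dipping between speakers.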



The inputted guitar sound was manipulated in a few ways. The objects on the left provide a dry/wet control over how much of the guitar signal is mixed with a rectangle wave at E4 (the piece being in E minor whenever the effect was used), resulting in amplitude modulation. The rectangle wave adds a jaggedness to the guitar sound, effectively creating a digital distortion. Since the modulation frequency is the tonic, the effect responds powerfully to Adrienne’s playing.
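The rectangle-wave amplitude modulation can be sketched as follows. This is a Python approximation of the Max signal chain; the sample rate and function names are assumptions, not part of the original patch.

```python
import math

E4 = 329.63  # Hz; the modulator sits on the tonic of E minor

def rect(t, freq):
    """Rectangle (square) wave in {-1, +1}."""
    return 1.0 if math.sin(2 * math.pi * freq * t) >= 0 else -1.0

def am_rect(samples, sr=44100, freq=E4, wet=1.0):
    """Amplitude-modulate `samples` by a rectangle wave, with a dry/wet mix."""
    out = []
    for n, s in enumerate(samples):
        modulated = s * rect(n / sr, freq)
        out.append((1 - wet) * s + wet * modulated)
    return out
```

Multiplying by a rectangle wave flips the signal's sign at the modulator rate, which is where the jagged, distorted character comes from; the `wet` parameter plays the role of the TouchOSC dry/wet slider.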

The objects on the right provide a dry/wet control for a delay effect. The delay time is approximately twice the beat length of the beginning section of the piece.

Both of these parameters were controlled through TouchOSC. An additional output sends the guitar signal to the Munger1 object.
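The delay effect can be sketched as a simple feedback delay line with a dry/wet mix. All parameters here are illustrative stand-ins for the values in the patch.

```python
def delay_mix(samples, delay_samples, wet=0.5, feedback=0.4):
    """Feedback delay with a dry/wet mix (parameter values are made up)."""
    buf = [0.0] * len(samples)
    out = []
    for n, s in enumerate(samples):
        echoed = buf[n - delay_samples] if n >= delay_samples else 0.0
        buf[n] = s + feedback * echoed       # what gets written into the delay line
        out.append((1 - wet) * s + wet * echoed)
    return out
```

An impulse fed in reappears every `delay_samples` samples, decaying by the feedback factor each pass, which is the echo tail the dry/wet slider fades in and out.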



The kalimba input is manipulated in a few ways, similar to the guitar. It is mixed with a modulation frequency for another amplitude modulation, this time a sawtooth signal whose frequency is controlled through TouchOSC within the bounds 0–1000 Hz. Afterward, the signal is sent to the Munger1 object. When the frequency is 0, the input plays unchanged; at 1000, the notes sound distorted and shifted up a couple of octaves. When the frequency is changed quickly and the output runs through Munger1, it creates a very intense mechanical sound in a huge cave-like space that works very well in the climax of the piece. The dry kalimba signal is also sent to a delay with a wet/dry control driven by TouchOSC.

Munger1 (Granular Synthesis)


These are the settings for the Munger1 object we used. These settings created a huge underwater cave-like space, and the feedback would make a deep rumble. The levels of the guitar and kalimba going into the object were being controlled through TouchOSC.



The sounds that Breeanna recorded were all hooked up in a similar manner. When the patch loads, all of the sounds begin looping silently; toggles in the TouchOSC interface then gate them, setting a sound’s volume to 1 when its toggle is on and 0 when it is off. A few of the sounds (a machine, the chair moving, and bubbles) were mixed with a modulation frequency to make them more distorted and harder to discern, contrasting with the more natural-sounding recordings, such as the flowing water or the electronic drum kit.
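The toggle-to-volume wiring can be sketched like this (the class and sound names are made up; the real routing lives in the Max patch). The point is that "starting" a sound is really just unmuting a loop that never stopped.

```python
# Every recorded sound loops continuously; a toggle only sets its gain to 1 or 0.
class LoopMixer:
    def __init__(self, names):
        self.volumes = {name: 0.0 for name in names}   # all loops start silent

    def set_toggle(self, name, on):
        self.volumes[name] = 1.0 if on else 0.0

    def mix(self, frames):
        """frames: dict of sound name -> current sample of that loop."""
        return sum(self.volumes[n] * s for n, s in frames.items())
```

A side effect of this design is that re-enabling a sound resumes it mid-loop rather than from the beginning, which keeps the soundscape feeling continuous.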



All of the lights were controlled live through TouchOSC. The basic Hunt Media Lab patch was hooked up to four sliders and a button in TouchOSC. The first three sliders were mapped to the green, blue, and red components of the lights, which for convenience’s sake became the three main settings. The fourth slider controlled the saturation of the lights, which ended up not being used. The button randomized the color and saturation of each individual light, and it was hit as fast as humanly possible during the climax of the piece. The biggest limitation of this system was that after the lights were randomized, moving any of the sliders would resend the old color values (the green, blue, and red values were saved as a single 3-tuple) rather than adjusting each light relative to its current color.
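The randomize button and the stale-tuple limitation can be sketched as follows. This is a hypothetical model with invented names; the real control ran through TouchOSC and the Media Lab lighting patch.

```python
import random

class LightRig:
    def __init__(self, n_lights=8):
        self.stored = [0, 0, 0]                        # the shared slider RGB tuple
        self.lights = [tuple(self.stored)] * n_lights

    def slider(self, channel, value):
        """channel 0/1/2 = green/blue/red; resends the whole stored tuple."""
        self.stored[channel] = value
        self.lights = [tuple(self.stored)] * len(self.lights)

    def randomize(self):
        """Give every light an independent random color."""
        self.lights = [tuple(random.randint(0, 255) for _ in range(3))
                       for _ in self.lights]
```

Because every slider resends the single stored tuple, one slider touch after `randomize()` collapses all the independently randomized lights back to the same stale color, exactly the limitation described above.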



Rest was intended to explore the mental space of a person half-awake in the morning, repeatedly ignoring their alarm. It’s a half-dream space where this person’s rational thought is pitted against their environment.

We lacked direction until Patrick joined the group, bringing new ideas. He ended up designing most of the experiential aspects of the piece, including the use of a mattress as an instrument, and the performative play that became attached to this.

As a source of structural inspiration for the piece we turned to works like Terry Riley’s In C. We were interested in what would happen if you turned In C into a mechanical process, i.e., if the musicians’ progressions were driven by random inputs (in our case, tennis balls hitting a mattress).
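The mechanical process can be sketched as follows. The bar contents are placeholders for the composed bars, and the class is purely illustrative: each musician holds a position in the shared sequence and advances only when a random external event (a tennis ball strike) is attributed to them.

```python
BARS = ["bar1", "bar2", "bar3", "bar4"]   # stand-ins for the composed bars

class Musician:
    def __init__(self):
        self.index = 0

    def current_bar(self):
        return BARS[self.index]

    def ball_hit(self):
        """A tennis ball strike advances this musician, stopping at the final bar."""
        self.index = min(self.index + 1, len(BARS) - 1)
```

As in In C, the musicians drift out of sync because each one advances at a different (here, random) rate while playing from the same ordered material.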

I ended up composing the set of bars to be progressed through. I went with fairly spacious, simple material to keep heavy layering of the bars from becoming unintelligible. They are a mix of C major and C minor, and they ended up sounding dreamily happy, which I think matched our general theme.

Cleo was interested from the beginning in adding texture to the piece from a sound-editing station. Her main contribution was the entire Max system she developed and controlled during the performance. Here are some pictures of it she sent me:





Tamao helped generally run the preparations for the performance. Unfortunately, my main camera failed during the performance, so I had to resort to taking a partial video with my phone. This is entirely my (Guy’s) fault.

Overall we think the piece went well; even though the mattress falling was unplanned, Patrick recovered well and it became part of the performance. We did feel that the piece should have escalated more over time, but we didn’t have enough time to rehearse thoroughly before the performance, so we missed this problem.

Story of the Wind

At the start of this project, we threw around multiple proposals before settling on the idea of a hybrid acoustic/electronic performance whose two halves were intertwined and interdependent. We wanted it to seem as if one person was putting on an electronic performance while the other two, playing acoustic instruments, were simply part of the first performer’s “tool belt,” or, more appropriately, instrument rack. We designed the piece so that the performer (Ticha) could change something digitally in TouchOSC (explained below) that would affect the live instrumentalists (Matt and Coby).



It was also decided pretty early on that we would use a melody and accompaniment Matt composed in Logic as the “backing track” of our piece. This track would be ever-present, and Ticha would be able to layer audio effects from Ableton Live onto it to change its sound and mood.


The other half of the sound would be covered by Matt and Coby on guitar and bassoon, respectively. They would improvise a new melody over the backing track, but only when Ticha “switched them on.” She could do this, as well as adjust the intensity of their playing, all from the TouchOSC interface. Below, Matt and Coby are represented by the Warrior and the Philosopher.



The bulk of the difficulty in this project lay in integrating TouchOSC with Ableton and, later, the Media Lab lighting grid. Our goal was for Ableton to recognize OSC as just another MIDI controller and let us map various buttons within Ableton to the OSC interface, which was difficult in itself even before we decided to do it wirelessly. Ticha was invaluable during this stage: she was the one familiar with TouchOSC and with the “software sketchbook” Processing, which allowed her (after many hours of programming wizardry) to link TouchOSC’s buttons to the on/off switches of the audio effects in Ableton. After many more hours of wizardry, she figured out how to control two lights in the grid individually through TouchOSC.
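The heart of such a bridge is a translation table from OSC addresses to MIDI control-change messages that Ableton has been MIDI-mapped to. The sketch below illustrates that mapping layer only; the OSC addresses and CC numbers are invented, and the real bridge was written in Processing, not Python.

```python
# Hypothetical TouchOSC addresses mapped to hypothetical MIDI CC numbers.
OSC_TO_CC = {
    "/effects/reverb": 20,
    "/effects/delay":  21,
    "/lights/1":       30,
    "/lights/2":       31,
}

def osc_to_midi(address, value):
    """Translate an OSC on/off message into a 3-byte MIDI CC message on channel 1."""
    cc = OSC_TO_CC[address]
    return bytes([0xB0, cc, 127 if value else 0])   # status, controller, value
```

Once messages arrive in this form, Ableton's MIDI-map mode can learn each CC like any hardware controller, which is what makes the wireless iPad look like "just another MIDI controller."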




It took us a while to agree on how we wanted to present our piece: how we would convey our efforts to the audience, how we would demonstrate that Matt and Coby were simply part of Ticha’s electronic performance, and how, if at all, we would let the audience interact with and affect the way the piece sounded. We ended up with a simple narrative in which Ticha’s performance would appear to tell a story, with Matt and Coby representing characters in that story. We were not trying to be too complex, but we also needed a way for the audience to see the TouchOSC interface in order to follow the story more easily. Seeing as it had also received a huge chunk of our attention up to this point, we wanted to display it in a prominent spot throughout the performance.


We worked through several options, but strangely enough the only feasible one was mounting a projector upside down on a tripod and using an Apple TV to mirror Ticha’s iPad screen to the projector.


This way, the audience could see what Ticha was doing in TouchOSC, and immediately see what effect it was having on the performance as a whole. This was our final product:


The roles for this project were as follows:

Ticha Sethapakdi – Software Programmer/Experience Director

Matthew Turnshek – Sound Designer/Experience Director

Coby Rangel – Sound Editor/Scribe

Live Performance – The Trip



We had a tough time getting started. The number of options available was overwhelming, and we had trouble focusing on one concept.


Our first attempts at planning the performance were fruitless – we needed to get in a closed space and make some noise.


We shared music we had been listening to recently, and we all liked the vibe of Daniel Caesar’s Death and Taxes. There was something unmistakably trippy about it, and it made us all rock back and forth. We knew we wanted to take advantage of all the effects in Ableton, so simulating a drugged-out experience was our best option.


We started off with one soundbite from Death and Taxes, then built on it, creating vocal riffs and melodic harmonies with the violin. We recorded our instruments with a Zoom recorder and transferred the recordings directly into Ableton Live. After recording these bits, we added effects that worked well with our concept, and we repeated this process until we were satisfied with the end result.


The beginnings of the background visuals: we liked the idea of Arnelle and me being the voices in Kyoko’s head. It accidentally took the shape of a French flag.




Things really took off once we hit the media lab.

In addition to our pre-recorded sound, we wanted to put live effects onto both the vocals and the violin. So we placed a DPA mic onto the violin, and had microphones for both vocalists. We then chose multiple effects that worked well with our ongoing soundbite of Death and Taxes for each instrument.

Rather than building up and intensifying, we decided to start our musical performance at the climax and then gradually break it down, subtracting effects as we went.

Our next step was mapping each effect onto a MIDI keyboard. Once this was done, we labeled the keyboard and played around with the timing of the effects for the performance.



Our Ableton file.


Although we got video footage of our performance, the camera failed to record sound. We worked around this by layering the pre-recorded Ableton tracks over the live performance video.


By Seth Glickman, Melanie Kim, and Elliot Yokum

A closer look (without a million people):

The accompanying music and the appropriate Max patch were composed by Seth, the individual notes on the pentatonic scale were produced by Elliot, and most of the visual elements were designed by Melanie. All of us did the wiring and circuitry.

Our controller was a Makey Makey. We used a combination of alligator clips and (lots and lots of) conductive tape to connect the Makey Makey to each of the five “altars.” The symbols on the altars were drawn on construction paper with an invisible blacklight-ink pen. Whenever a person touched the two strips of tape on either side of a symbol at the same time, they completed the circuit, triggering that altar’s unique musical note and its attached light through Max. The light would fluctuate between a color and blacklight, which would reveal the symbol on the paper.
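Since the Makey Makey presents itself to the computer as a keyboard, closing an altar's circuit delivers an ordinary key press. The trigger logic can be sketched like this; the key assignments and the choice of an A minor pentatonic scale are illustrative (the real note-and-light routing was done in Max).

```python
# One altar per key, one pentatonic note and one light per altar (all assumed).
# A minor pentatonic: A4, C5, D5, E5, G5.
PENTATONIC_HZ = {"up": 440.00, "down": 523.25, "left": 587.33,
                 "right": 659.26, "space": 783.99}

def altar_touched(key):
    """Return the (note frequency, light index) triggered by a key press."""
    notes = list(PENTATONIC_HZ)
    return PENTATONIC_HZ[key], notes.index(key)
```

Using a pentatonic scale means any combination of altars touched simultaneously still sounds consonant, which matters when several audience members play at once.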








Early concepts and tests for Makey Makey, control scheme, and the symbols: