Shut The Door!

By Kyoko Inagawa, Kabir Mantha, Breeanna Ebert

Our original idea was to create a collage of recordings of vocal majors and gradually transform the piece into white noise over time. But when we saw the prop door in the lab, we used it as the inspiration for our project instead.

Sound Production:

We all decided to attend recitals, and we recorded two vocal recitals and a violin recital. From these recordings, I extracted samples that I felt could be looped continuously throughout the piece. Each sample lasted anywhere from 1 to 30 seconds.


Sound Editing:

I used Audacity to cut and combine the different samples I found. I used extracts from both recitals and played around with Paulstretch, reversal, reverb, pitch and time changes, and so on. I reversed some of the sounds to create phasing between the wave frequencies, and I made sure the samples I chose would loop cleanly on their own without clicks or sudden volume changes.

This required frequently zooming into the loops and working with the sound at a very small scale. I redrew parts of the waveform by hand to remove clicks and any noise caused by wind or bumps against the microphone. Any sudden noises or clicks I did want to keep had to be lined up precisely, within a few milliseconds. At the peak of the piece, I had to make sure the peak transitioned into the final sound smoothly enough that it didn't end too abruptly, and that the first sound of the piece didn't seem out of place. I was then able to loop the entire piece.
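As a rough illustration of the loop-smoothing step, here is a minimal sketch in Python. The actual editing was done by hand in Audacity; the file names and the 20 ms crossfade length are assumptions.

```python
# Sketch: crossfade a sample's tail into its head so it loops
# without a click. File names are hypothetical.
import numpy as np
import soundfile as sf

audio, rate = sf.read("recital_sample.wav")  # mono sample to loop

fade_len = int(0.02 * rate)                  # assumed 20 ms crossfade
fade_in = np.linspace(0.0, 1.0, fade_len)
fade_out = 1.0 - fade_in

# Blend the last fade_len samples into the first fade_len samples,
# so the jump at the loop point is smoothed away.
head = audio[:fade_len] * fade_in + audio[-fade_len:] * fade_out
looped = np.concatenate([head, audio[fade_len:-fade_len]])

sf.write("recital_sample_loop.wav", looped, rate)
```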


Concept Change:

We then decided to change our concept and focus on doors and changes in environment. We went out and recorded door sounds, as well as how sound changes when passing through a door. Going through the door recordings, we decided we wanted to move from loud door sounds to more of a focus on environment changes, and end with a sudden door slam.


Sound Editing:

I didn't edit any of the door sounds, but instead processed the original piece to emphasize the environment changes. Each time a new door sound played, a new effect was applied to the piece.

I made splits in the track so that each change landed exactly on the peak of the door sound; the change is visible in the waveform at each split, and the volume was also affected. The soundtrack starts off with a door sound to catch the listener off guard. After each door sound the piece sounds closer to the original, giving the impression that we're approaching a more open environment. There are a couple of moments in the piece where we backtrack slightly and end up somewhere more confined. For example, one of the door sounds moves the listener from their environment into a bathroom, then back outside.

For the transition into the bathroom, I applied a high-pass filter to the soundtrack, giving it a constrained feeling. The piece then continues unedited until we reach the final door sound, which tries to catch the listener off guard again.
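A hedged sketch of that filter step, assuming a simple Butterworth high-pass (the actual filtering was done in the editor, and the 500 Hz cutoff is an invented example):

```python
# Sketch: high-pass a segment of the soundtrack to make it feel
# thin and boxed-in, like hearing it from inside a small room.
import soundfile as sf
from scipy.signal import butter, sosfilt

audio, rate = sf.read("soundtrack_segment.wav")  # hypothetical file

# Assumed 4th-order Butterworth high-pass at 500 Hz.
sos = butter(4, 500, btype="highpass", fs=rate, output="sos")
filtered = sosfilt(sos, audio, axis=0)

sf.write("bathroom_segment.wav", filtered, rate)
```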


Project 2: Extended Music Performance

For this project you will execute a music/sound performance that is extended through electronics, including electronic sound generation, sound spatialization, electronic processing of acoustic instruments/sources, visuals (video/lighting), or some combination of these. Your roles are open-ended: you may compose, perform, edit, mix, program, design, etc. Every group member is expected to perform an equal share of the work and to share their contributions in class and in the project documentation.

Sound

I have linked a music video I think is very cool, and I would love to try to make something like it. The visuals in this video are very helpful for understanding how it was actually put together, although I doubt making something like this is as simple as just timing golf balls to hit certain things at certain times.


Two interesting musical pieces

This is an experimental art installation by musician and composer Craig Colorusso. What struck me about it, apart from being a really interesting way to represent sound, was its emphasis on walking around and exploring the soundscape. Since this is exactly what I wanted to happen with my first project, I was immediately drawn to this installation. Each speaker has a different guitar sample hooked up to it, so when the sun charges the speaker enough, it plays its sample.

As for this song: I have always been fascinated by turning other forms of media into music, so I am a huge fan of mashups. This video takes things further by turning certain scenes from the movie Pulp Fiction into a song.

Project 1: Ambisonic Environments

Presented Monday, February 8, 2016

Experience Director: Kaalen Kirenne

Our project definitely evolved over time with regard to the story we wanted to tell; however, we kept one theme constant throughout the entire project. We wanted to present our audience with different environments to explore.

We wanted our project to be interactive in the sense that a person could walk around the room and hear different parts of the current environment. For example, in a cave an experiencer could hear a bat fly across the room, or underwater a fish swimming in the corner. People were intended to move around the room to try to experience the entire environment. It was more of an installation than a musical project. The lighting and the sounds were meant to transport a person to that environment, creating a sonic virtual reality.

We had many iterations of our project as we learned more about its details and the tools we had available. Our first idea was replicating the experience of a pianist's mind wandering through different places. He would imagine himself in different locations, and we would replicate that by changing the reverb and filtering of the piano track Tamao recorded for us. Our original story featured, in order: a normal room, a cave, a beach, underwater, a forest that eventually catches fire, a desolate scene, and finally a concert hall. Each room would smoothly transition to the next as sounds transformed from one into another.

However, when we learned that we had to record all of our sounds ourselves, we had to rethink our environments, because none of us are trained Foley artists. We did not want to give up on our idea entirely, so we started thinking about what sounds we could actually make. Once we had our sounds, we narrowed our idea down to three locations: a cave, underwater, and an "extreme" scene. Some of the nuance of our project was lost, but we kept the overarching idea of creating a space that people could explore and discover.

Tamao, Matt, and Kaalen working in the library

Sound Designer: Matthew Turnshek

As for recording, we used a variety of natural and “artificial” techniques to obtain the sounds of our three environments.

Some sounds were recorded by finding a real-life equivalent of the environment we wished to create. In our cave scene, most of the noises were produced by doing various activities in an echoing underground area we found. Our water noises also all came from actual recordings of water. For these sounds, the most modification we did was amplification and reverb, to make the sounds flow naturally with one another.

Other sounds were simulated by slowing down something we recorded. For example, in our 'extreme' scene, we slowed down the breaking of ice cubes and got a low rumble, like an earthquake or a crumbling boulder. Another example from the 'extreme' environment is our fire-crackling sound, which was made from a pitched-down recording of breaking up potato chips. We used this website to learn some techniques.
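A minimal sketch of that slow-down trick (the file names are hypothetical; in practice we did this by ear in the editor):

```python
# Sketch: write the same samples at half the sample rate, so
# playback is twice as slow and one octave lower. Crunching ice
# or chips starts to sound like rumbling rock or crackling fire.
import soundfile as sf

audio, rate = sf.read("ice_cubes_breaking.wav")  # hypothetical file
sf.write("low_rumble.wav", audio, rate // 2)
```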

Our piano piece was recorded in the studio in CFA. We added three different automated reverbs and faded each in and back out at the beginning and end of its environment. For the cave scene, we used a built-in IR from Pro Tools. For the water scene, we used an underwater IR we found online. For the extreme scene, we used a normal concert hall reverb, applied a bit more heavily than usual.
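For reference, this is roughly what an IR reverb does under the hood: the dry recording is convolved with the room's impulse response. A sketch assuming mono files and invented names (the real reverbs were Pro Tools plugins):

```python
# Sketch: convolution reverb = dry signal convolved with an IR.
import numpy as np
import soundfile as sf
from scipy.signal import fftconvolve

piano, rate = sf.read("chopin_dry.wav")  # mono, hypothetical
ir, _ = sf.read("cave_ir.wav")           # impulse response, same rate

wet = fftconvolve(piano, ir)[: len(piano)]
wet /= np.max(np.abs(wet))               # normalize the wet signal

# Assumed 60/40 dry/wet blend; ours was automated to fade in and out.
sf.write("chopin_cave.wav", 0.6 * piano + 0.4 * wet, rate)
```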

List of sounds used

Sound Editor: Tamao Cmiral

Once we had the sounds we needed, we set them up in the ambisonic-spatializing Max patch. We created 18 presets for our 25 sound sources, including the Chopin piece. Next, we wanted to divide our presets among the environments in which they would play. Before dragging the sounds into Max, we assigned a number to each sound so we could keep track of which was which when organizing them into their respective environments.

After having a clear list of the sounds for each environment, we used the total length of the Chopin piece as our project length and spread our 18 presets over its 315 seconds, roughly one preset every 17.5 seconds on average. We needed to work out how the piece divided into sections so we could switch groups of sounds according to the section.

We started the patch with only the Chopin playing, then progressively added cave sounds with each preset. We decided each sound would either shift across the grid or stay put, depending on what the sound was. For example, the bat sounds and flapping wings were programmed to fly across the room, and the same goes for the swimming fish and the airplane noises. Throughout the piece, the piano remained in the same central area.
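Our patch used hoa.map-style encoders for this, but the underlying idea can be sketched as first-order ambisonic encoding with a moving azimuth. Everything here (file name, sweep range) is an invented example, not our patch's actual code:

```python
# Sketch: encode a mono sample into first-order B-format (W, X, Y)
# while its azimuth sweeps across the room, like the flying bat.
import numpy as np
import soundfile as sf

bat, rate = sf.read("bat_wings.wav")   # mono, hypothetical
n = len(bat)

# Sweep from hard left (+90 deg) to hard right (-90 deg).
theta = np.linspace(np.pi / 2, -np.pi / 2, n)

w = bat / np.sqrt(2)        # omnidirectional channel
x = bat * np.cos(theta)     # front-back channel
y = bat * np.sin(theta)     # left-right channel

sf.write("bat_bformat.wav", np.stack([w, x, y], axis=1), rate)
```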

We had technical issues with our Max patch while setting up on the day of our presentation, as well as the night before, when running it on the IDeATe desktop. We had two main problems: getting our sound objects to play, and having our presets load correctly. Luckily, Matt plugged in his laptop, and the patch worked correctly for our presentation.


Ambisonic-spatializing Max patch after making our modifications

Sounds (or well, methods) I Find Interesting

The first two artists are similar in that there is one individual creating most of the sounds and layering them onto one another.  When performed live, these pieces become compiled field recordings, so they are always unique to the performance location.

The first piece is by Andrew Bird.  He is one of my favorite artists for "peaceful" or "relaxing" music compositions.  His talent with myriad instruments is instantly apparent, but what makes his music (and this piece in particular) beautiful is the delicate way in which he blends the different sounds together.  By always beginning with the basic beat and building up, his music becomes very natural to listen to.  Notice that the vocals (whistling not included) in this piece do not actually come into play until about a minute and a half in, allowing the listener to become fully immersed in the soundscape before being reminded that a human is creating these sounds.

The second piece is by James Blake.  Similar to Andrew Bird, he also creates his music through live layering; however, unlike Andrew Bird, most of his sounds are created with a keyboard and electronic effects in addition to the vocals.  This creates a completely different atmospheric quality than that created by Andrew Bird.  Yet the overall effect of James Blake's music, slowly building the piece from a repetitive initial sound or two followed by dramatic beats and accents, also allows the listener to become extremely comfortable with it before being confronted with a variety of more abrupt, intense, or semi-unsettling sounds that simply make the pieces more interesting.

I also love this song by James Blake and Bon Iver, because it takes advantage of the skills of both artists by setting them in clear opposition while simultaneously joining them seamlessly.  The song uses various instruments and tools to create overlapping beats and rhythms that fluctuate in a melodic manner.

On another note, I wanted to share an interesting method of sound creation that I recently discovered.  ArcAttack is an experimental group that harnesses the power of Tesla coils and robotics to create varying frequencies and sound qualities.  In addition, through strategically dramatic lighting and acting, they accentuate the futuristic sounds with complementary visuals.  By mainly doing covers of well-known songs, ArcAttack lets listeners easily piece the sounds together while being mesmerized by the technique.

And finally, because we are going to be working with machines, and I’m sure I’m not the only one who has created music using household appliances, I thought I would share this fun compilation.

The Speech

By Patrick Miller Gamble, Samir Gangwani, Melanie Kim, and Cleo Miao.

From the beginning, we wanted a live performance for the project, as well as an overall feeling of anxiety, crowds, and being on stage. Then a simple idea occurred to us: what if you're on the stage because you're giving a speech, and the audience is also part of the performance? The mindset of a speaker on a stage became our subject.

The Intro Segment

We first wanted to transport the audience through sound, conveying the speaker's movement through space: streets → dry space → confined space → auditorium space, and so on. We made various recordings, ranging from field recordings to ones made through the computer. A notable one is the recording of us quoting Gene Ray's "top bottom front back two sides," which became the chant that grows monstrously near the end of the speech.


The recordings were pieced together in Logic, and most of the noise consisted of normal sounds with reverb, preverb, echo, or distortion applied. The ambient noise was largely reversed piano. The stitched piece resulted in five different channels, each exported separately for ambisonic animation in Max. We set some keyframes and let them loop automatically during the performance.
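A sketch of the keyframe idea: each exported channel follows a looping path through the room, interpolated between a few hand-set positions. The coordinates, timing, and loop period below are invented examples, not values from our patch:

```python
# Sketch: loop a channel's position through hand-set keyframes.
import numpy as np

keyframes = np.array([   # (x, y) positions for one channel
    [0.0, 1.0],          # front
    [1.0, 0.0],          # right
    [0.0, -1.0],         # back
    [-1.0, 0.0],         # left
    [0.0, 1.0],          # back to front, so the loop closes
])

def position(t, period=20.0):
    """Interpolated (x, y) position at time t seconds, looping."""
    phase = (t % period) / period * (len(keyframes) - 1)
    i = int(phase)
    frac = phase - i
    return (1 - frac) * keyframes[i] + frac * keyframes[i + 1]

print(position(7.5))     # where this channel sits 7.5 s into a loop
```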


The lighting during this part was done only through the stage lights native to the sound design room, controlled through sliders. We used two of them, and signified the transition to the next part of the piece with the entrance of the speaker (going on the podium) and the light illuminating them from behind.


The Speech

The full speech can be found here. We also made a PowerPoint that played during the speech (advanced manually) for the audience to follow. We compiled the speech from stand-up comedy routines, conspiracy theorists, and political speeches, trying to capture the wide range of tones of voice and rhythms people use when speaking publicly. For instance, the tone of voice used when greeting the audience is very different from the tone used when confronting a heckler, telling a joke, or saying something serious. A goal of the experiment was to divorce speech patterns from their literal meaning in order to appreciate them musically and/or sonically.

Once the speaker started the speech, we manipulated their voice as well as various sound files to insert crowd reactions such as boos. The live sound-file and voice manipulation was controlled with TouchOSC on an iPad mini.


The first screen contains one large square with a dot that can be dragged around the room, which corresponds to an ambisonic encoder (hoa.map) in Max. The voice can be altered by two main sources: a feedback engine and a reverb engine. The feedback engine was controlled with the top two vertical sliders (fb output and feedback) and the two knobs (delay and transpose). The reverb engine was built around a third-party patch called yafr2, whose four parameters (high-frequency damping, grain size, diffusion, and decay time) correspond to the four horizontal sliders.
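The delay-plus-feedback part of that engine can be sketched offline like this; the real patch runs in real time in Max and also transposes the feedback path, which is omitted here, and the parameter values are made up:

```python
# Sketch: a feedback delay line. Each sample picks up an attenuated
# copy of the output from `delay` samples earlier, so echoes of
# echoes accumulate over time.
import numpy as np
import soundfile as sf

voice, rate = sf.read("speech.wav")   # mono, hypothetical

delay = int(0.25 * rate)              # "delay" knob: assumed 250 ms
feedback = 0.5                        # "feedback" slider: assumed 0.5

out = np.copy(voice)
for i in range(delay, len(out)):
    out[i] += feedback * out[i - delay]

out /= np.max(np.abs(out))            # keep the result from clipping
sf.write("speech_feedback.wav", out, rate)
```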


The second screen controls eight solo sound samples that can be triggered simply by tapping the on/off buttons. Each sample has its own box, just like the large box on the first screen, which allows it to be spatialized however the performer would like.
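Under the hood, each TouchOSC control just sends an OSC message. A sketch of how those trigger messages could be received, using the python-osc package (the real messages went to Max; the addresses and port here are assumptions):

```python
# Sketch: listen for TouchOSC button messages and start/stop samples.
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

def trigger_sample(address, value):
    # e.g. "/sample/3 1.0" -> start sample 3, "/sample/3 0.0" -> stop
    index = int(address.rsplit("/", 1)[-1])
    print(("start" if value > 0 else "stop"), "sample", index)

dispatcher = Dispatcher()
for i in range(8):                    # one address per on/off button
    dispatcher.map(f"/sample/{i}", trigger_sample)

# Assumed port 8000 (a common TouchOSC default).
BlockingOSCUDPServer(("0.0.0.0", 8000), dispatcher).serve_forever()
```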

The Ending

The chants were actually a continuation of the intro segment from earlier, which had been playing this entire time. The stage lights went off, and the grid lights in the room went haywire during the chanting to reinforce the frantic feeling. This was triggered manually to match the timing, but accomplished through Max as usual with a looping element.
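A guess at what that looping element does, sketched in Python: on each tick, every grid light is set to a random state. (The real version was built from Max timing and random objects driving the room's grid lights; the light count and the send_light function below are made up.)

```python
# Sketch: a "haywire" flicker loop over the grid lights.
import random
import time

def send_light(index, on):
    # Stand-in for whatever actually drives the grid lights.
    print(f"light {index}: {'on' if on else 'off'}")

while True:
    for i in range(12):                       # assumed 12 grid lights
        send_light(i, random.random() < 0.5)  # random on/off state
    time.sleep(random.uniform(0.05, 0.2))     # fast, uneven flicker
```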
