Category Archives: Uncategorized

Nu Sigma Alpha

The Politics

This piece was intended to address Carnegie Mellon’s role in the increasingly expansive culture of surveillance both in the US and internationally. We saw the format of a fraternity as a humorous way to compare the NSA’s recruitment techniques to the much more casual commitment of joining a frat. This comparison satirizes the casual nature in which CMU students decide to work for the NSA and other branches of the military industrial complex. With the speaker mounted atop CFA we wanted to demonstrate how the seemingly private or personal act of taking a photo is uploaded to a wider database and is actually in no way a personal act.

The Sound

Camera Shutter:

Drone:

The speakers on the roof of CFA command a place of power and authority on campus due to their central location and height. By distancing, amplifying, and broadcasting the shutter clicks from this vantage point, we were attempting to emphasize how each innocuous shutter click, each innocent piece of information about ourselves that we give away, actually has resounding and far-reaching effects. It is not just an image file stored in your phone. Rather, it has been inducted into a vast distributed network of information flow in which the meanings of property and privacy are far looser than we think. Secondly, the sound was meant to emphasize the magnitude of the situation. Every photo was taken with the explicit consent of the subject, probably under the assumption that it was no big deal. It’s just a photo, after all, right? But each photo, each email address, each piece of your life that is taken from you is another nail in the coffin of your freedom, another rung in the ladder to a police state, and it will have resounding effects, echoing through campus, this nation, and the rest of your life.

The Process

Installing speakers on the roof: HIGH SECURITY CONTENT

The Instagram

Click here to see Nu Sigma Alpha’s Instagram!


Instagram is a social media database where pictures are linked and grouped according to hashtags. Tagging CMU students’ faces with tags like #facesoftheNSA, #surveillance, and #dronestrike not only creates an association between our school and the NSA, but puts our faces into a greater database, grouped alongside pictures of drones, weapons, and topics of national security. Our process of social media categorization mimics the NSA’s own ability to extract, evaluate, and categorize our personal information into unreachable databases.

#dronestrike

thank you to our followers


Our Friends from the NSA!


DJ Scratch Table ish


For my personal project I wanted to make a DJ scratch table using Max and Arduino. The idea was to have a series of buttons, a couple of potentiometers, and a motor. The box containing all of these pieces was laser cut from acrylic, and the disk was also cut from acrylic. The motor was attached to the red disk, and the idea was that it would track which way the disk was turning so that I could use Max to apply a scratching effect based on that. The buttons would control different samples and the potentiometers would control things like EQ and reverb. However, when I hooked up the Arduino to Max, I could not get any stable readings from the motor. The values bounced around randomly and did not give any reliable data. What I learned upon further research was that an encoder attached to the motor would do exactly what I wanted: it tracks the rotational motion of the motor, so it would tell me how far the disk had spun.
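
What an encoder library actually computes can be sketched in a few lines. This is an off-hardware illustration of quadrature decoding in plain C++, not the code from my project; on an Arduino you would read the encoder's two channels from digital pins (typically in an interrupt) and feed the states to something like this. The (A << 1) | B state encoding is an assumption of the sketch.

```cpp
#include <cstdint>

// Quadrature decoding: a 2-bit state is formed from the encoder's two
// channels as (A << 1) | B. Each (previous, current) state pair maps
// to a count of -1, 0, or +1; illegal jumps (both bits changing at
// once) count as 0. Summing the counts tells you how far, and in
// which direction, the disk has spun.
int quadratureStep(uint8_t prevState, uint8_t currState) {
    static const int8_t table[16] = {
         0, +1, -1,  0,
        -1,  0,  0, +1,
        +1,  0,  0, -1,
         0, -1, +1,  0
    };
    return table[(prevState << 2) | currState];
}

// Accumulate a position from a sequence of raw encoder states.
int decodePosition(const uint8_t* states, int n) {
    int pos = 0;
    for (int i = 1; i < n; ++i)
        pos += quadratureStep(states[i - 1], states[i]);
    return pos;
}
```

This is exactly why the bare motor readings bounced around: without the two phase-shifted channels there is no way to tell direction or reject noise.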

So I ignored the motor and just used the potentiometer and buttons. In the end the final design had 3 buttons and 1 potentiometer. Now came what I thought was the easy part. I spent hours googling methods of applying scratch effects with Max, finally gave up, and found methods to do it with Ableton instead. I settled on this method: https://www.youtube.com/watch?v=HH3ryAEP308. From there my scratch table essentially became a MIDI controller, which I accomplished by sending note values with noteout in Max whenever I pressed a button. I got the Max patch and Arduino code from http://playground.arduino.cc/Interfacing/MaxMSP and then modified it to suit my needs.
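
On the wire, what noteout sends for each button press is a three-byte MIDI note-on message. A minimal sketch of those bytes (the particular channel, note, and velocity numbers here are made up for illustration, not the ones my patch used):

```cpp
#include <cstdint>

struct MidiMessage { uint8_t status, data1, data2; };

// A MIDI note-on is: status byte 0x90 ORed with the channel (0-15),
// then the note number (0-127), then the velocity (0-127).
MidiMessage noteOn(uint8_t channel, uint8_t note, uint8_t velocity) {
    return { static_cast<uint8_t>(0x90 | (channel & 0x0F)),
             static_cast<uint8_t>(note & 0x7F),
             static_cast<uint8_t>(velocity & 0x7F) };
}

// Velocity 0 is conventionally treated as a note-off.
MidiMessage noteOff(uint8_t channel, uint8_t note) {
    return noteOn(channel, note, 0);
}
```

Each button maps to one such note number, which Ableton then interprets as a trigger.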


However, getting the dial in Max to control a value in Ableton was very difficult. I ended up using Multimap, after running around trying to find access to the software (which is why my project is so late).

Then there came the issue of getting the buttons and potentiometer to work together; also, the tutorial I watched used a different version of Ableton, which made it hard to follow. But I finally managed to produce a semi-scratchy sound, shown in the video above. A problem I noticed while scratching was that I couldn’t turn the potentiometer fast enough, so it lacked the punch of a usual scratch sound.


Guy – Research Project – Dominico

For my research project, I wanted to try making a composed piece of music, something I had never done before. Unfortunately, I didn’t have access to a DAW more advanced than Audacity due to my own poor planning and Carnival. That being said, I think the result turned out pretty well.

I was interested in using the human voice as an instrument, and in seeing how far I could push it to create interesting sound. However, I wanted to use languages I was unfamiliar with, since using English would have, for me, shifted the focus from the sound of the voice itself to its meaning. In the end I found a recording of a man speaking in a Dominican language called Lindala. I liked the sound of it so much that I decided to limit myself to using exclusively this recording and try to build a piece of music out of it.

Goodnight Sweet Prince

Group Members: Steven MacDonald, Tamao Cmiral, Coby Rangel, Samir Gangwani
The Building Process:
All of us had the same responsibility of building the instrument and we all participated in the process. None of us had very much experience with Arduino, and it was a long process trying to put the piece together. At the very start, Steven obtained the wooden plank which became our base for the instrument. We knew we wanted to attach bits and pieces to the plank and use solenoids, motors, and/or servos to create an instrument, but we weren’t too sure how. We tried out a lot of things, and to put it simply, our biggest struggle was getting them to do what we wanted: not only programming the Arduino so the motors behaved correctly, but then applying those motors to our instrument in a way that would produce a significant sound. This was a challenge for us.
We went to Home Depot and found parts to add to the plank. This included various items, but the metal sheets eventually became our main source of sound for the instrument. We created our piece by attaching two metal sheets to the plank. These were connected to servos which twisted and turned them in place. In between the sheets, we attached another servo with a metal rod connected to it. We dangled string from this rod with little pieces of metal at the bottom. The idea was to have these swing at the metal sheets to create a unique “wobbly” sound effect. We then connected a contact mic onto each sheet, which were hooked up to Max. As the strings swung, the sound would be picked up by the contact microphones and processed through Max for ambisonics. On the other end of the plank, we drilled a hole and inserted a motor. The idea was to have something descend from the plank and then rotate to hit the sheets as well, but this became a problem as it was too difficult to keep the object in place. Eventually we detached this item and gave it a different purpose: Moosh moosh.
Pictures of the process:
The Performance:
Our group was aware that our project/instrument did not have much value on its own and would not be able to carry out much of a performance by itself. Our group’s lack of experience with Arduino made the whole process very slow, and therefore we knew we wanted to add more of our own. That’s when the idea of creating a live performance was introduced. Our instrument reminded us of a baby mobile, and so we decided to play out a sketch on the day of the performance. Tamao was able to get the air mattress he used for his previous project, and we were able to form a short skit. Tamao and Steven played the roles of father and son. The performance went very well.

Glass Solenoid Structure

Group: Breeanna Ebert, Erik Fredriksen,  Kaalen Kirrene, and Sean Yang

Objective:

For our piece, we wanted to create a robotic instrument that generated sound using bottles. The final design was composed of eight bottles arranged in a circle. Each bottle was paired with a solenoid that would strike it. Each bottle also had a tube that delivered water into it through a device that we 3-d printed; adding water altered the pitch of the composition.

Instrument Building Process:

During brainstorming, we decided to build an instrument that was focused around having sound derived from bottles. First, we thought of using wind to blow over the bottles, but we decided to design it around having something strike the bottles instead. Our initial idea was to have a motor spin in the middle, with an arm attached to it that would strike the bottles. However, after a period of consideration, we decided that using solenoids would allow us to have more precise control of when the bottles would be struck.

After deciding on the idea, we created a platform to hold the bottles. Then we 3-d printed a device to route water to the different bottles. This device had a stepper motor attached to it, which rotated the water tube over various holes, each of which delivered water to one bottle. We cut acrylic pieces to put together a structure holding the stepper motor and the 3-d printed device. This structure had two levels, with the top level holding the basin of water, which was a milk jug, and the middle one holding the 3-d printed water delivery system.
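
Aiming the stepper at a given bottle is just a fraction of a full revolution. As a rough sketch (the 200 steps-per-revolution figure below is a common stepper spec assumed for illustration, not necessarily our motor's):

```cpp
// With numBottles holes spaced evenly around a circle, the stepper
// position for bottle k is k/numBottles of a full revolution.
int stepsToBottle(int bottle, int stepsPerRev, int numBottles) {
    return bottle * stepsPerRev / numBottles;
}
```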

The 3d Printed Water Delivery System

Wooden Platform for holding bottles

Acrylic Pieces for Structure

Final Design

Close up Of Water Delivery System

Composition and Presentation:

We focused on creating different loops of polyrhythms using Arduino, Ableton, and Max. Most of it was hard-coded in Arduino. However, a day before we were due to present, we ran into an issue where the stepper motor stopped working, and we couldn’t get it resolved. The solenoids also stopped functioning properly. On the day of the performance, the code for the solenoids spontaneously began working again, but we were unable to get the stepper motor to work properly.


Fish Fugue

Ticha Sethapakdi: Concept, Software and Hardware Design
Kyoko Inagawa: Sound Design, Performance
Melanie Kim: Sound/Set Design, Experience Design

Fish Fugue employs computer vision to enable a soloist to perform with a goldfish-controlled toy piano accompaniment. A webcam mounted above the goldfish’s tank tracks the fish with Processing code, while an Arduino dictates the notes played on a toy piano. As the goldfish moves to a different quadrant of the tank, the melody changes to reflect the fish’s position.

GitHub repository here.

The circuit diagram:


Arduino code (click on each to view in detail):

Processing code (click on each to view in detail):


There are eleven solenoids connected to eleven keys on the toy piano (D, high D, E, high E, G, high G, A, high A, B, high B, high C). The Processing code divided the webcam feed into four sections, and the Arduino would “play” the notes based on which section the fish was in. Therefore, we composed the four accompaniment parts using only these eleven notes, and made sure they would flow from one to another in case the fish moved erratically between quadrants. The performer would see on the monitor which quadrant the fish was in and improvise her solo to best fit the accompaniment. The monitor effectively becomes her musical score. The performance lasts three minutes, after which the code is set to stop playing the piano.
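
The quadrant lookup itself is a two-comparison affair. The actual Processing code is in the linked repository; this plain C++ version, with an assumed left-to-right, top-to-bottom numbering, is only an illustration:

```cpp
// Map a tracked (x, y) position in a w-by-h webcam frame to one of
// four quadrants, numbered 0..3 left-to-right, top-to-bottom.
int quadrant(int x, int y, int w, int h) {
    int col = (x < w / 2) ? 0 : 1;
    int row = (y < h / 2) ? 0 : 1;
    return row * 2 + col;
}
```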



The set evokes playfulness with its childlike, minimalist colors. We strove to hide most of the “scary” guts of the circuits and electronics, as they would distract from the performance. One of the feelings we wanted to impart was that of a “miniature concert.”


Special thanks to Fred the Fish for being such a hardworking swimmer and Jesse for providing us the solenoids.

The Performance

Kaalen Kirrene, Kabir Manthra, Mark Mendell

Story

The main concept for the story was to portray the frustrations that come with learning a new instrument. Kaalen and Kabir were both playing “Heart and Soul” on instruments they had no experience with apart from learning how to make basic sound. We also wanted to play with the audience’s expectations and make it ambiguous whether missteps in the performance were intentional or not. For this, we took inspiration from a Mr. Show sketch called The Audition. The only unplanned misstep was the buffering of the videos, which interfered with the pacing of the ending. Our general timeline was that Kaalen and Kabir would get an awkward start interrupted by feedback; this would make Kaalen and then Kabir look up a YouTube tutorial while Mark fixed the feedback; they would then seem to give a good performance of “Heart and Soul” until it was revealed to be a lipsync, at which point their true sour notes would cut through; from there towards the end, things got more chaotic: the sour note looped, more and more effects were applied to the instruments, and a malevolent YouTube tutorial started criticizing their playing.

Audio Processing

For the live processing, Mark used a Max patch with some basic granular synthesis, reverb, and a looper. The gain on the French horn was turned up at the beginning to cause feedback. The looper was turned on right when the lipsync finished so that the true sour note would play over and over. Granular synthesis and reverb made the instruments sound strange as things went crazy.
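
The looper’s core behavior is just modular playback of a captured buffer. The real patch was built from Max objects rather than code; a minimal sketch of the idea:

```cpp
#include <cstddef>
#include <vector>

// Once a loop of L samples has been captured, the looper's output at
// global sample index n is the buffer read modulo the loop length,
// so the captured material (here, the sour note) repeats forever.
float looperSample(const std::vector<float>& loop, std::size_t n) {
    return loop[n % loop.size()];
}
```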

Kaalen created the “haywire insults” part, i.e., he did all of the effects on the audio where the YouTube tutorials were yelling at them. He did all of it in Audacity. We started by ordering the insults to increase in intensity as time went on. We wanted them to shift from insults about their playing to insults focused more on their character. After we had them organized, Kaalen started playing around with different effects by copying the original tracks and adding effects to the copies. He wanted the intensity of the effects to mirror the intensity of the insults, so in the beginning he just used pitch shifting, as it sounded creepy without being overwhelming. From then on he mostly used different combinations of pitch shifting and sliding pitch shifting. For the parts with an echo, he would copy the sound, pitch shift it, and play it slightly later than the original, giving it that creepy ascending echo. He added some preverb and reverb for a ghostly sound. At the end he pitch shifted Brooke’s (the girl’s) voice up really high and then took smaller and smaller cuts of her sound until it became that annoying ringing sound. For Christian at the end, Kaalen wanted a lower sound to contrast with the high-pitched ringing, so he didn’t pitch shift him; he just took cuts of his voice. The final effect he added was a sinusoid at 250 Hz, which (according to the internet) is a pitch that causes nausea and headaches, increasing in volume to add to the tension of the insults. For that he took inspiration from Gaspar Noé’s Irreversible, where the entire opening sequence has a similar tone to cause discomfort in the audience.
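
The copy-shift-delay trick amounts to mixing a processed copy back in behind the dry track. In the piece this was all done by hand in Audacity; the arithmetic it performs is roughly the following, where `copy` stands in for the pitch-shifted duplicate track:

```cpp
#include <cstddef>
#include <vector>

// out[n] = dry[n] + gain * copy[n - delaySamples]
// i.e., the dry track plus a quieter, delayed, processed copy.
std::vector<float> delayedMix(const std::vector<float>& dry,
                              const std::vector<float>& copy,
                              std::size_t delaySamples, float gain) {
    std::vector<float> out(dry);
    for (std::size_t n = delaySamples; n < out.size(); ++n) {
        std::size_t k = n - delaySamples;
        if (k < copy.size())
            out[n] += gain * copy[k];
    }
    return out;
}
```

Because the copy is pitch-shifted upward before being delayed, each echo arrives both later and higher, which is what produces the ascending effect.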

Video

The video component of our project was the part closest to the heart of our concept. Since the rise of YouTube, videos have become one of the most ubiquitous forms of media. One could also say they are one of the most democratic, since almost everyone (in the US) has at least one video camera and anyone can upload to YouTube. This perceived democratization leads to a greater trust in the authenticity of the content, which is why tutorial videos work in the first place: “if they can do it, so can I.” But this isn’t necessarily the case, as we’ve seen countless times. Many everyday, ‘authentic’ vloggers turn out to be professional setups designed to trap audiences (this has become a problem on Instagram as well). Our videos explore the almost surreal extent to which video media, and YouTube in particular, can control our self-perception as well as the standards we compare ourselves to.

The actual process of making the videos happened in two stages. The first was shooting the videos with our actors in front of a green screen. We did this in four different takes: one for each of the instructional videos at the start, one for the second instructional video, and one containing just criticism. The script was fairly loose. We planned out the start and end of each video as well as major cues, such as when Brooke reacts to Kabir playing a note. The rest was improvised.
The second stage was the video editing. The instructional videos were the easiest; Kabir just needed to find believable locations for them to be set in. The aesthetic he was going for was believable but slightly fake, foreshadowing the chaos about to ensue. For the second video, the effect we were going for was more uncontrolled and surreal; the video takes on a life of its own and our tools end up controlling us.

Performance

Recording