Nu Sigma Alpha

The Politics

This piece was intended to address Carnegie Mellon’s role in the increasingly expansive culture of surveillance, both in the US and internationally. We saw the format of a fraternity as a humorous way to compare the NSA’s recruitment techniques to the much more casual commitment of joining a frat; the comparison satirizes how casually CMU students decide to work for the NSA and other branches of the military-industrial complex. With the speaker mounted atop CFA, we wanted to demonstrate how the seemingly private or personal act of taking a photo is uploaded to a wider database and is in fact not a personal act at all.

The Sound

Camera Shutter:

Drone:

The speakers on top of the roof of CFA command a place of power and authority on campus due to the building’s central location and height. By distancing, amplifying, and broadcasting the shutter clicks from this vantage point, we were attempting to emphasize that each innocuous shutter click, each innocent piece of information about ourselves that we give away, actually has resounding and far-reaching effects. It is not just an image file stored in your phone. Rather, it has been inducted into a vast distributed network of information flow in which the meanings of property and privacy are far looser than we think. The sound was also meant to emphasize the magnitude of the situation. Every photo was taken with the explicit consent of the subject, probably under the assumption that it was no big deal. It’s just a photo, after all, right? But each photo, each email address, each piece of your life that is taken from you is another nail in the coffin of your freedom, another rung on the ladder to a police state, and it will have resounding effects, echoing through campus, this nation, and the rest of your life.

The Process

Installing speakers on the roof: HIGH SECURITY CONTENT

The Instagram

Click here to see Nu Sigma Alpha’s Instagram!


Instagram is a social media database in which pictures are linked and grouped according to hashtags. Tagging CMU students’ faces with hashtags like #facesoftheNSA, #surveillance, and #dronestrike not only creates an association between our school and the NSA, but also puts our faces into a greater database, grouped alongside pictures of drones, weapons, and topics of national security. Our process of social media categorization mimics the NSA’s own ability to extract, evaluate, and categorize our personal information into unreachable databases.

#dronestrike

Thank you to our followers


Our Friends from the NSA!


Group 5 Final Project: “Creation and Sustenance”

Visuals: Raphaël mentioned the Montreal-based visual artist Sabrina Ratté when we were first thinking about visuals. We looked into her working methods, and the concept of visual synthesizers was alluring to us. However, we could not get access to any visual synthesizers at the time, though we still liked the idea of using electronic hardware for the visual component of our project. We started talking about doing a collage, both visually and sonically.

The original idea was to have each of us record ourselves singing or playing instruments, associate the sound pieces with video clips, and have both work like raindrops, randomly triggered. As we started working on it, we began having doubts about this idea and its merit sound-wise. Raphaël had the idea of using an oscilloscope for visuals, which branched from our interest in visual synthesizers. We were able to feed input from the microphone into the oscilloscope and have it show up in a visually interesting way. Since the oscilloscope resets every time it is turned off, we decided it was better to record its reaction to sound ahead of time so we would not have to tweak the settings during setup for the performance. Raphaël then edited the visuals together to make a beautiful video. We kept the oscilloscope in front of the audience during our performance to hint at its use in our visuals.

Sound: For the sound component of our project, we decided to go with the collage style. Cleo, Sean, and Jordan each worked on a short collage-inspired composition, making use of Kyoko’s violin clips and Arnelle’s vocals. Arnelle then pieced all three individual parts together and, during our performance, read poems by Tao Lin to transition between the pieces and to add to the collage theme we were going for. The sound and visuals were meant to match in theme rather than note for note.


Super Mario Soundmaker

Super Mario Soundmaker is a project by Breeanna Ebert, Steven MacDonald, Coby Rangel, and Elliot Yokum. 

We wanted to create a project that recognized the sounds of a pre-existing video game and transformed them into something much more haunting and grotesque, turning the familiar into the unfamiliar through soundscape and audience interaction. So we created a patch in Max for Live that recognized specific sounds from the original Super Mario Bros. game, and used Ableton Live to edit and transform those sounds. We then had audience members play an online emulator of the game, which featured the new sounds, challenging them to accept the unfamiliar sounds they were generating by playing a once-familiar video game.

Our original ideas were a bit beyond the scope of the time we had: we had hoped to connect a WiiU to M4L and to edit the video along with the audio. When we could find very little information about WiiU-Max connections, we chose to use an online emulator instead. We used Soundflower to send the sound from the browser into a Max patch. The patch had samples of sounds from the game loaded into it and analyzed the audio coming in through Soundflower to match it against those preloaded sounds; when it recognized a sound, it sent it to Ableton Live, which added effects and played the result through the speakers. Super Mario Soundmaker ended up being a wonderful technical challenge for all of us.

BLASTULA

  • art/animation & sound design: Adrienne Cassel
  • music & sound design: Seth Glickman
  • programming: Melanie Kim
  • hardware: Kaalen Kirrene

Our early discussions of the piece quickly coalesced around a few specific technological concepts and media devices: interactivity, audio visualization, and custom controller construction.  We began with the idea that two guests would interact in a space where they had at least partial control over both the audio and visual elements in a collaborative experience.

Projection mapping came up in our talks, and this medium fit well within our original motivations while challenging us to integrate it within an interactive space.  We pulled inspiration from old-school tabletop video game units where players sat across from one another, and converted an earlier idea of arcade-style interaction into gameplay positioned around a shared screen.

Initial draft of the board and gameplay.

The final iteration of the game featured controller button shapes projected into the common space between the players when pressed.  Once “launched,” colliding pieces from the two players would link and remain positioned within the center “membrane.”  As more combined shapes collected, the environment would vibrate more and more until a critical mass was achieved, concluding the experience.

Designing and building the board


We decided to use acrylic because it is easy to laser cut and has a surface that is easy to project onto. We wanted our board to have buttons and controls that we could projection-map onto. We used a Makey Makey for the buttons and an Arduino to add an additional source of control that was more dynamic than a button press. The buttons were different geometric shapes so that the projection mapping onto each would be more distinctive.


We used conductive tape on the bottom of each button and on the top of the board, with wires connected to both; when a button was pressed, the two pieces of tape would meet, completing the circuit. With the help of our mechanical design consultant Kelsey Scott, Kaalen designed the board in SolidWorks and then laser cut it in the fabrication lab. All that was left was to wire up the circuit and attach the conductive tape to the buttons. We used hot glue to attach the springs to the buttons and the board so that each button would return to its un-pressed position.


Musical score and sound design

In tandem with working on the visual elements, we began to establish a sound design for the pieces and their interactions.  We wanted both to create a signature sonic style for the game and to enable a certain amount of user variability, fitting for the piece’s focus on interactivity.  We were aiming for short sounds with crisp attacks that could exist in a consistent digital sound space while still uniquely identifying each of the game piece shapes.

The sounds of the game piece shapes were to be “played” by the guests as they interacted with the controllers and engaged with the gameplay.  Could we establish a sort of instrument that could be performed?  Was there a way of being good at playing the game in terms of the overall sound experience?

The score was also composed with this in mind: it had to fit within the sonic space first, but then provide a sense of progression over which the gameplay sound design could supply a version of melody and counterpoint.

Length of play was established at a general baseline of between 2 and 5 minutes of user engagement per game session.  For a game of this length, customized linear background music can be used in place of a shorter repeating loop structure, fostering the feeling of forward progression through the game experience.  The final background music was 8.5 minutes of vertically-remixed digital tracks produced in Ableton Live; the music would seamlessly loop if a game lasted longer than estimated.

Bringing it together in Unity

What the game looks like in the Unity editor.

The game was built in Unity2D and coded in C#. It has two sources of input: w/a/s/d (player 1) and the arrow keys (player 2) from the Makey Makey for the individual shapes, and the Arduino for the two dials (potentiometers) with which the players aim their shapes.

Arduino code.
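The original code screenshot is not reproduced here; below is a minimal sketch of what this stage plausibly looked like, assuming the two dials sit on analog pins A0 and A1 and stream over a 9600-baud serial link (the pin choices and message format are assumptions, not necessarily the team’s exact code):

```cpp
// Read the two aiming dials and stream their values over serial
// as "p1,p2\n" lines for Unity to parse. Pins are assumed.
const int DIAL_1 = A0;  // player 1 potentiometer
const int DIAL_2 = A1;  // player 2 potentiometer

void setup() {
  Serial.begin(9600);
}

void loop() {
  // analogRead returns 0-1023 on a 10-bit ADC.
  Serial.print(analogRead(DIAL_1));
  Serial.print(",");
  Serial.println(analogRead(DIAL_2));
  delay(20);  // roughly 50 updates per second
}
```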
C# code to set up the serial input from the Arduino. Make sure to include “using System.IO.Ports” at the top to import the proper library, and set the Api Compatibility Level to .NET 2.0.
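A minimal sketch of that setup, as a hypothetical ArduinoInput MonoBehaviour (the component name, port name, and baud rate are illustrative assumptions):

```csharp
using UnityEngine;
using System.IO.Ports;  // requires Api Compatibility Level = .NET 2.0

public class ArduinoInput : MonoBehaviour
{
    // "COM3" on Windows, something like "/dev/cu.usbmodem1421" on macOS;
    // the baud rate must match the Arduino sketch.
    SerialPort port = new SerialPort("COM3", 9600);

    void Start()
    {
        port.ReadTimeout = 50;  // don't stall a frame waiting on the port
        port.Open();
    }

    void OnApplicationQuit()
    {
        if (port.IsOpen) port.Close();
    }
}
```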
C# code to parse values from the Arduino in the Update() function.
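Continuing the hypothetical component above, the per-frame parsing could look like this, assuming the Arduino streams “p1,p2” lines and the dials drive each player’s aim angle (the field names and angle range are invented for illustration):

```csharp
float player1Aim, player2Aim;                 // hypothetical aim fields
const float minAngle = -60f, maxAngle = 60f;  // assumed aim range

void Update()
{
    try
    {
        // One "p1,p2" line per read, as streamed by the Arduino sketch.
        string[] values = port.ReadLine().Split(',');
        float dial1 = float.Parse(values[0]) / 1023f;  // normalize to 0..1
        float dial2 = float.Parse(values[1]) / 1023f;
        player1Aim = Mathf.Lerp(minAngle, maxAngle, dial1);
        player2Aim = Mathf.Lerp(minAngle, maxAngle, dial2);
    }
    catch (System.TimeoutException)
    {
        // No fresh data this frame; keep the previous aim values.
    }
}
```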

This is the basic logic that was programmed:

  1. Pressing a button initiates a launch animation and sound, unique to each shape.
  2. The launched shapes each have 3 frames of hand-drawn animation. They are unable to interact with each other until they are within the big circular waveform, which is also hand-animated.
  3. When player 1’s shape hits player 2’s shape within the big circle, the two combine and produce a unique sound.
  4. Every time two shapes combine in the middle, they vibrate in increasing amounts.
  5. The ending triggers once a certain number of shape combinations is reached; the background music then slows and fades.
C# code for the audio manipulation at the end.
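One plausible way to implement that slow-and-fade in Unity, sketched as a coroutine (the component and its fields are invented for illustration, not the actual project code):

```csharp
using UnityEngine;
using System.Collections;

public class EndingAudio : MonoBehaviour
{
    public AudioSource music;  // the looping background track

    // Gradually lower the track's pitch and fade its volume to silence.
    public IEnumerator SlowAndFade(float duration)
    {
        float startPitch = music.pitch;
        for (float t = 0f; t < duration; t += Time.deltaTime)
        {
            float k = t / duration;
            music.pitch = Mathf.Lerp(startPitch, 0.5f, k);  // slow down
            music.volume = Mathf.Lerp(1f, 0f, k);           // fade out
            yield return null;  // wait one frame
        }
        music.Stop();
    }
}
```

A caller would kick this off with something like StartCoroutine(SlowAndFade(3f)) when the final shape combination is reached.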

The name “Blastula” was coined by team member Adrienne Cassel because the gameplay pieces forming and collecting in the center reminded her of the hollow ball of cells that appears in early-stage embryonic development.

Split Walk – Final Project

Matt Turnshek: Piano

Amelia Rosen: Visual Design, Live Video Manipulation

Guy de Bree: Composition, Live Mixing

For our final project, we were interested in exploring the mental space of a person with anxiety. We knew we were interested in a more conventional piece of music performance, and we were working off the back of Matt and Guy’s research projects (two extremely different pieces of music we were trying to resolve into one) when the idea of exploring anxious psychology came up; we felt it matched the direction we were going in well.

Structurally, the piece works as follows: Guy live-mixed an Ableton project containing a variety of recorded and synthesized sounds, and also controlled the lights in the room. Amy used a Max patch to warp a piece of video to match the mood Guy was setting, and Matt improvised on piano in response to what he was seeing from both Amy and Guy.

The piece contains a number of ‘phases’ that are switched between, meant to represent a gradient from normal to highly anxious. The more anxious the phase, the more aggressive the sounds Guy was playing, and the more erratic Matt’s and Amy’s parts became as well.

The Max patch we used was based on adrewb@cycling74’s DirtySignal patch. We modified it to our tastes and added controls for Amy to use.

Tinkering with Tinko: Episode 1

Tamao Cmiral:  “Tinko”, Costume Design
Erik Fredriksen: “Honky Tonk”, Sound Design, Script
Mark Mendell:  Max Programmer, Guy Who Cues the Lights and Sounds
Ticha Sethapakdi:  Lighting Design, Arduino Programmer, Sign Holder

For our project we were interested in making a performance that played out like an episode from a children’s television show.  The performance involves one actor and a “puppeteer” that controls the robotic toy piano using a MIDI keyboard.

Content-wise, the episode has the host (Tinko) teaching his toy piano (Honky Tonk) how to play itself and contains motifs such as disappointment, external validation, and happiness.  And of course, feelin’ the music.

Our group’s diverse set of skills is what allowed us to bring this show to life.  Erik wrote most of the script and recorded the very professional-quality show tunes; Mark made a Max patch that converted note on/off messages received from a MIDI keyboard into bytecodes sent to the Arduino over serial, as well as a patch that let him control the light/sound cues from TouchOSC; I wrote Arduino code that reads bytes from the serial port and pushes/pulls the solenoids on the piano keys depending on which bytes were read (sketched below), and made the lighting animations; and Tamao put together a Tinko-esque costume and spoke in a weird voice throughout the skit while maintaining a straight face.
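The actual firmware is on the Github page linked below; the sketch here just illustrates the receiving side, assuming a hypothetical one-byte protocol (high bit = push or pull, low bits = key index) and arbitrary pin numbers:

```cpp
// Serial -> solenoid logic. The byte protocol and pin mapping here
// are assumptions for illustration; see the Github page for the real code.
const int NUM_KEYS = 8;                                       // assumed key count
const int solenoidPins[NUM_KEYS] = {2, 3, 4, 5, 6, 7, 8, 9};  // assumed pins

void setup() {
  Serial.begin(9600);
  for (int i = 0; i < NUM_KEYS; i++) pinMode(solenoidPins[i], OUTPUT);
}

void loop() {
  if (Serial.available() > 0) {
    byte b = Serial.read();
    int key = b & 0x7F;    // low bits: which piano key
    bool push = b & 0x80;  // high bit: push (note on) or pull (note off)
    if (key < NUM_KEYS) digitalWrite(solenoidPins[key], push ? HIGH : LOW);
  }
}
```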

Overall we had a lot of fun developing this piece and are very satisfied with the outcome.

 

Github page.

DJ Scratch Table ish


For my personal project I wanted to make a DJ scratch table using Max and an Arduino. The idea was to have a series of buttons, a couple of potentiometers, and a motor. The box containing all of these pieces was laser cut from acrylic, and the disk was also cut from acrylic. The motor was attached to the red disk; the idea was that it would track which way the disk was turning, and I would use Max to apply a scratching effect based on that. The buttons would control different samples, and the potentiometers would control things like EQ and reverb. However, when I hooked the Arduino up to Max, I could not get any stable readings from the motor: the values bounced around randomly and gave no reliable data. What I learned upon further research was that an encoder attached to the motor would do exactly what I wanted, since it tracks the motor’s rotational motion and would tell me how far it had spun.

So I ignored the motor and just used the potentiometer and buttons; the final design had 3 buttons and 1 potentiometer. Then came what I thought was the easy part. I spent hours googling methods of applying scratch effects with Max, finally gave up, and found methods to do it with Ableton instead. I settled on this method: https://www.youtube.com/watch?v=HH3ryAEP308. From there my scratch table essentially became a MIDI controller, which I accomplished by sending note values with noteout in Max whenever I pressed a button. I got the Max patch and Arduino code from http://playground.arduino.cc/Interfacing/MaxMSP and then modified it to suit my needs.
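The Arduino playground code linked above streams sensor values to Max over serial; a minimal sketch of that idea (pin assignments and message format assumed, not the actual modified code) looks like this:

```cpp
// Report 3 buttons and 1 potentiometer to Max over serial,
// one "b1 b2 b3 pot" line per loop. Pins are assumed.
const int buttonPins[3] = {2, 3, 4};  // assumed digital pins
const int potPin = A0;                // assumed analog pin

void setup() {
  Serial.begin(9600);
  for (int i = 0; i < 3; i++) pinMode(buttonPins[i], INPUT_PULLUP);
}

void loop() {
  for (int i = 0; i < 3; i++) {
    // INPUT_PULLUP reads LOW when pressed, so invert to print 1 = pressed.
    Serial.print(digitalRead(buttonPins[i]) == LOW ? 1 : 0);
    Serial.print(" ");
  }
  Serial.println(analogRead(potPin));  // 0-1023
  delay(10);
}
```

On the Max side, a [serial] object can read these lines; button presses then become noteout messages, and the pot value maps onto a dial.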


However, getting the dial in Max to control a value in Ableton was very difficult. I ended up using Multimap, after running around trying to find access to the software (which is why my project is so late).

Then there was the issue of getting the buttons and potentiometer to work together; the tutorial I watched also used a different version of Ableton, which made it hard to follow. But I finally managed to produce a semi-scratchy sound, shown in the video above. A problem I noticed while scratching was that I couldn’t turn the potentiometer fast enough, so it lacked the punch of a usual scratch sound.


Creation

There is a video, which is not yet complete (two cluster computers have crashed in the making of it).

Creation is a performance piece featuring robotics by Arnelle Etienne, Cleo Miao, Anna Rosati, and Elliot Yokum. It is inspired by the Chinese story of the creation of humans, in which the goddess Nuwa created the first humans out of clay.

In Creation, Arnelle plays the goddess; with her singing, she brings two creatures to life: one a minimalist puppet resembling wings, controlled by Anna, and the other a human, played by Elliot. Over time, the goddess begins to play with a metallic percussion instrument and, after growing bored, gives the instrument to the human. The human plays the instrument by themself at first, only to soon discover technology, which they then use to play it instead. Through live mixing done by Cleo, the sounds grow throughout the room, as machine replaces man.

The goddess perch, with the heat sink instrument.

For this project, we chose to build an atmosphere in the room through sculptural elements, including a depiction of the goddess atop a podium on which Arnelle sat, and a large floating orb hung from the ceiling. The puppet Anna controlled was our final major structural component.

Anna and her wings puppet.

We used two robotic elements in the piece, both driven by an Arduino with a motor shield. A motor rhythmically plucked a string attached to the floating orb, which Cleo then live-mixed. We also had a solenoid on the ground near where Elliot sat, which was used to strike the heat sink percussion instrument.
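As a rough sketch of what that firmware involves, assuming the Adafruit Motor Shield v1 library, an arbitrary free pin for the solenoid driver, and invented timings (none of this is the project’s actual code):

```cpp
#include <AFMotor.h>  // Adafruit Motor Shield v1 library (assumed)

AF_DCMotor pluckMotor(1);    // plucking motor on port M1 (assumed)
const int solenoidPin = 13;  // solenoid driver transistor pin (assumed)

void setup() {
  pinMode(solenoidPin, OUTPUT);
  pluckMotor.setSpeed(180);  // PWM speed, 0-255
}

void loop() {
  pluckMotor.run(FORWARD);   // swing the plucking arm past the string
  delay(400);
  pluckMotor.run(RELEASE);   // coast, letting the string ring
  delay(600);

  digitalWrite(solenoidPin, HIGH);  // extend: strike the heat sink
  delay(60);
  digitalWrite(solenoidPin, LOW);   // retract
  delay(940);
}
```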

The final setup. The block Arnelle is sitting on is where the string for the motor was placed, and the solenoid was placed on the floor nearby.

Our project went through several iterations and concepts: early ideas included making a color theremin, rolling marbles down pipes, and making a robotic music box. However, upon finding two heat sinks and hearing the interesting noises they made when struck, we decided to do something with those. Eventually, the topic of the Chinese creation myth came up in a discussion about Cleo’s heritage, and Anna, a sculptor and puppeteer, came up with several pieces that could be used as sculptural elements. The combination of these two threads somehow turned into the performance piece we’ve created.

An early brainstorming session.

We had several technical difficulties with this project. None of us were very comfortable with the Arduino software, our motor would randomly refuse to work, and our motor shield would occasionally start smoking. Many of the motor shields available for use were burnt out or broken, leaving us with very limited working hardware. We had two failed performances due to issues with the Arduino, but we finally succeeded on the third attempt, creating a performance that integrated storytelling, puppets, song, and robots.