Category Archives: Projects

Nu Sigma Alpha


The Politics

This piece was intended to address Carnegie Mellon’s role in the increasingly expansive culture of surveillance, both in the US and internationally. We saw the format of a fraternity as a humorous way to liken the NSA’s recruitment techniques to the much more casual commitment of joining a frat. The comparison satirizes the casual way in which CMU students decide to work for the NSA and other branches of the military-industrial complex. With the speaker mounted atop CFA, we wanted to demonstrate that the seemingly private or personal act of taking a photo, which uploads the image to a wider database, is actually in no way a personal act.

The Sound

Camera Shutter:

Drone:

The speakers on the roof of CFA command a place of power and authority on campus due to their central location and height. By distancing, amplifying, and broadcasting the shutter clicks from this vantage point, we were attempting to emphasize how each innocuous shutter click, each innocent piece of information we give about ourselves, actually has resounding and far-reaching effects. A photo is not just an image file stored in your phone. Rather, it has been inducted into a vast distributed network of information flow in which property and privacy mean something far looser than we think. Second, the sound was meant to emphasize the magnitude of the situation. Every photo was taken with the explicit consent of the subject, probably under the assumption that it was no big deal. It’s just a photo, after all, right? Each photo, each email address, each piece of your life that is taken from you is another nail in the coffin of your freedom, another rung on the ladder to a police state, and its effects will resound, echoing all through campus, this nation, and the rest of your life.

The Process

Installing speakers on the roof: HIGH SECURITY CONTENT

The Instagram

Click here to see Nu Sigma Alpha’s Instagram!


Instagram is a social media database where pictures are linked and grouped according to hashtags. Tagging CMU students’ faces with things like #facesoftheNSA, #surveillance, and #dronestrike not only creates the association between our school and the NSA, but puts our faces into a greater database, grouped alongside pictures of drones, weapons, and topics of national security. Our process of social media categorization mimics the NSA’s own ability to extract, evaluate, and categorize our personal information into unreachable databases.

#dronestrike

Thank you to our followers


Our Friends from the NSA!


Group 5 Final Project: “Creation and Sustenance”

Visuals: Raphaël mentioned the Montreal-based visual artist Sabrina Ratté when we were first thinking about visuals. We looked into her working methods, and the concept of visual synthesizers appealed to us. However, we could not get access to a visual synthesizer at the time, but we still liked the idea of using electronic hardware for the visual component of our project, and we started talking about making a collage, both visually and sonically.

The original idea was to have each of us record ourselves singing or playing instruments, associate the sound pieces with video clips, and have both be triggered randomly, like raindrops. As we started working on it, we began having doubts about this idea and its merit sound-wise. Raphaël had the idea of using an oscilloscope for the visuals, which branched from our interest in visual synthesizers. We were able to feed input from the microphone into the oscilloscope and have it show up in a visually interesting way. Since the oscilloscope resets every time it is turned off, we decided it was better to record its reaction to sound ahead of time so we would not have to tweak the settings during setup for the performance. Raphaël then edited the visuals together to make a beautiful video. We kept the oscilloscope in front of the audience during our performance to hint at its role in the visuals.

Sound: For the sound component of our project, we decided to go with the collage style. Cleo, Sean, and Jordan each worked on a short collage-inspired composition, making use of Kyoko’s violin clips and Arnelle’s vocals. Arnelle then pieced the three individual parts together and read poems by Tao Lin during our performance to transition between the pieces and to add to the collage theme we were going for. The sound and visuals were meant to match in theme rather than note for note.


Super Mario Soundmaker

Super Mario Soundmaker is a project by Breeanna Ebert, Steven MacDonald, Coby Rangel, and Elliot Yokum. 

We wanted to create a project that recognized the sounds of a pre-existing video game and transformed them into something much more haunting and grotesque; we wanted to turn the familiar into the unfamiliar through soundscape and audience interaction. So we created a patch in Max for Live that recognized specific sounds from the original Super Mario Bros. game, and used Ableton Live to edit and transform those sounds. We then had audience members play an online emulator of the game, which featured the new sounds, thus challenging the audience to accept the unknown sounds they were generating by playing a once-familiar video game.

Our original ideas were a bit beyond the scope of the time we had: we had hoped to connect a WiiU to M4L and to edit the video along with the audio. When we found very little information about WiiU-Max connections, we chose to use an online emulator instead. We used Soundflower to send the sound from the browser into a Max patch. The patch had samples of sounds from the game loaded into it and analyzed the audio coming in from Soundflower to match it against those preloaded sounds; when it recognized a sound, it sent it on to Ableton Live, which added effects and played the result from the speakers. Super Mario Soundmaker ended up being a wonderful technical challenge for all of us.
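The matching itself happened inside the Max patch, so there is no text code to quote, but the core idea is easy to sketch. Below is a rough, hypothetical illustration in C++ of one way such matching can work, comparing a preloaded game-sound template against the tail of a live audio buffer with normalized cross-correlation; the actual patch may well have used a different analysis entirely.

```cpp
// Illustration only: the project did this inside a Max patch. The idea is to
// score how closely the most recent live samples resemble a stored template.
#include <cmath>
#include <vector>

// Returns a similarity score in [0, 1] between a template and the most
// recent samples of the live buffer (both mono, same sample rate).
double matchScore(const std::vector<float>& tmpl, const std::vector<float>& live) {
    if (live.size() < tmpl.size()) return 0.0;
    double dot = 0.0, normT = 0.0, normL = 0.0;
    size_t off = live.size() - tmpl.size();   // align template to buffer end
    for (size_t i = 0; i < tmpl.size(); i++) {
        double a = tmpl[i], b = live[off + i];
        dot += a * b;
        normT += a * a;
        normL += b * b;
    }
    if (normT == 0.0 || normL == 0.0) return 0.0;
    return std::fabs(dot) / (std::sqrt(normT) * std::sqrt(normL));
}
// If matchScore(...) exceeds a tuned threshold for some template, trigger the
// corresponding replacement sound in Ableton Live.
```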

BLASTULA

  • art/animation & sound design: Adrienne Cassel
  • music & sound design: Seth Glickman
  • programming: Melanie Kim
  • hardware: Kaalen Kirrene

Our early discussions of the piece quickly coalesced around a few specific technological concepts and media devices: Interactivity, Audio Visualization, Custom Controller Construction.  We began with the idea that two guests would interact in a space where they had at least partial control over both the audio and visual elements in a collaborative experience.

Projection mapping came up in our talks; the medium fit well within our original motivations and challenged us to integrate it within an interactive space.  We pulled inspiration from old-school tabletop video game units where players sat across from one another, and converted an earlier idea of an arcade-style interaction into gameplay positioned around a shared screen.

Initial draft of the board and gameplay.

The final iteration of the game featured controller button shapes projected into the common space between the players when pressed.  Once “launched,” colliding pieces from both players would link and remain positioned within the center “membrane.”  As more combined shapes collected, the environment would vibrate more and more until a critical mass was achieved, concluding the experience.

Designing and building the board


We decided to use acrylic because it is easy to laser-cut and has a surface that is easy to project onto. We wanted our board to have buttons and controls that we could projection-map onto. We used a Makey Makey for the buttons and an Arduino to add an additional source of control that was more dynamic than a button press. The buttons were different geometric shapes so that the projection mapping would be more distinctive.


We used conductive tape on the bottom of each button and on the top of the board, with wires connected to both. That way, when a button was pressed, the two pieces of tape would meet, completing the circuit. With the help of our mechanical design consultant Kelsey Scott, Kaalen designed the board in SolidWorks and laser-cut it in the fabrication lab. Then all that was left to do was wire up the circuit and attach the conductive tape to the buttons. We used hot glue to attach a spring between each button and the board so that the button would return to its un-pressed position.


Musical score and sound design

In tandem with working on the visual elements, we began to establish a sound design for the pieces and their interactions.  We wanted both to create a signature sonic style for the game and to enable a certain amount of user variability, fitting for the piece’s focus on interactivity.  We were aiming for short sounds with crisp attacks that could exist in a consistent digital sound space while uniquely identifying each of the game piece shapes.

The sounds of the game piece shapes were to be “played” by the guests as they interacted with the controllers and engaged with the gameplay.  Could we establish a sort of instrument that could be performed?  Was there a way of being good at playing the game in terms of the overall sound experience?

The score was composed with this in mind: first to fit within the sonic space, and then to provide a sense of progression over which the gameplay sound design could supply a version of melody and counterpoint.

We established a baseline play length of two to five minutes of user engagement per game session.  For a game of this length, customized linear background music can be used in place of a shorter repeating loop structure, fostering a feeling of forward progression through the game experience.  The final background music was 8.5 minutes of vertically remixed digital tracks produced in Ableton Live; it would seamlessly loop if a game lasted longer than projected.

Bringing it together in Unity

What the game looks like in the Unity editor.

The game was built in Unity2D and coded in C#. It has two sources of input: w/a/s/d (player 1) and the arrow keys (player 2), sent by the Makey Makey to launch the individual shapes, and an Arduino reading the two dials (potentiometers) with which the players aim the shapes.

Arduino code.
C# code to set up the Serial input from Arduino. Make sure to include “using System.IO.Ports” at the top to import the proper library, and set the Api Compatibility Level to .NET 2.0.
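A minimal sketch of that setup, reconstructed from the caption (the port name and baud rate are assumptions for illustration):

```csharp
using UnityEngine;
using System.IO.Ports;   // requires Api Compatibility Level = .NET 2.0

public class ArduinoInput : MonoBehaviour
{
    // Port name is machine-specific; 9600 must match the Arduino sketch.
    SerialPort port = new SerialPort("/dev/tty.usbmodem1421", 9600);

    void Start()
    {
        port.ReadTimeout = 10;   // don't block the frame waiting for data
        port.Open();
    }

    void OnDestroy()
    {
        if (port.IsOpen) port.Close();
    }
}
```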
C# code to parse values from Arduino in the Update() function.
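Again reconstructed rather than copied, the parsing step might look like this, continuing the ArduinoInput class above (the field names and the 180-degree aiming arc are assumptions):

```csharp
public float player1Aim;   // hypothetical fields the shape launchers read
public float player2Aim;

void Update()
{
    try
    {
        string line = port.ReadLine();        // e.g. "512,873"
        string[] parts = line.Split(',');
        if (parts.Length == 2)
        {
            // Map the 0-1023 readings onto a 0-180 degree aiming arc.
            player1Aim = int.Parse(parts[0]) / 1023f * 180f;
            player2Aim = int.Parse(parts[1]) / 1023f * 180f;
        }
    }
    catch (System.TimeoutException)
    {
        // No new data this frame; keep the previous aim values.
    }
}
```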

This is the basic logic that was programmed:

  1. Pressing a button initiates a launch animation and sound, unique to each shape.
  2. The launched shapes all have 3 frames of hand-drawn animation. They are unable to interact with each other until they are within the big circular waveform, which is also hand-animated.
  3. When player 1’s shape hits player 2’s shape within the big circle, the two combine and produce a unique sound.
  4. Every time two shapes combine in the middle, they vibrate in increasing amounts.
  5. Once a certain number of shape combinations is reached, the ending triggers: the background music slows and fades.
C# code for the audio manipulation at the end.
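A minimal sketch of that ending behavior, assuming the music sits on a standard Unity AudioSource (the coroutine and the 3-second ramp are illustrative, not the original code):

```csharp
using UnityEngine;
using System.Collections;

public class EndingAudio : MonoBehaviour
{
    public AudioSource music;

    // Ramp the background music's pitch and volume down, then stop it.
    public IEnumerator SlowAndFade(float duration = 3f)
    {
        float t = 0f;
        while (t < duration)
        {
            t += Time.deltaTime;
            float k = 1f - t / duration;              // 1 -> 0 over `duration`
            music.pitch = Mathf.Lerp(0.5f, 1f, k);    // slow toward half speed
            music.volume = k;                         // fade out
            yield return null;
        }
        music.Stop();
    }
}
```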

The name “Blastula” was coined by team member Adrienne Cassel: the gameplay pieces forming and collecting in the center reminded her of the hollow ball of cells that appears in early-stage embryonic development.

Split Walk – Final Project

Matt Turnshek: Piano

Amelia Rosen: Visual Design, Live Video Manipulation

Guy de Bree: Composition, Live Mixing

For our final project, we were interested in exploring the mental space of a person with anxiety. We knew we were interested in a more conventional piece of music performance, and we were working off the back of Matt and Guy’s research projects (two extremely different pieces of music we were trying to resolve into one) when the idea of exploring anxious psychology came up; we felt it matched the direction we were going in well.

Structurally, the piece works as follows: Guy live-mixed an Ableton project containing a variety of recorded and synthesized sounds, and also controlled the lights in the room. Amy used a Max patch to warp a piece of video to match the mood Guy was setting, and Matt improvised on piano in response to what he saw and heard from both of them.

The piece contains a number of ‘phases’ that are switched between, meant to represent a gradient from normal to highly anxious. The more anxious the phase, the more aggressive the sounds Guy was playing, and the more erratic Matt’s and Amy’s parts became as well.

The Max patch we used was based on adrewb@cycling74’s DirtySignal patch. We modified it to our tastes and added controls for Amy to use.

Tinkering with Tinko: Episode 1

Tamao Cmiral:  “Tinko”, Costume Design
Erik Fredriksen: “Honky Tonk”, Sound Design, Script
Mark Mendell:  Max Programmer, Guy Who Cues the Lights and Sounds
Ticha Sethapakdi:  Lighting Design, Arduino Programmer, Sign Holder

For our project we were interested in making a performance that played out like an episode of a children’s television show.  The performance involves one actor and a “puppeteer” who controls the robotic toy piano using a MIDI keyboard.

Content-wise, the episode has the host (Tinko) teaching his toy piano (Honky Tonk) how to play itself and contains motifs such as disappointment, external validation, and happiness.  And of course, feelin’ the music.

Our diverse set of skills was what allowed us to bring this show to life.  Erik wrote most of the script and recorded the very professional-quality show tunes; Mark made a Max patch that converted note on/off messages received from a MIDI keyboard into bytecodes sent to the Arduino over serial, as well as a patch that let him control the light and sound cues from TouchOSC; I wrote Arduino code that reads bytes from the serial port and pushes/pulls the solenoids on the piano keys depending on which bytes were read, and made the lighting animations; and Tamao put together a Tinko-esque costume and spoke in a weird voice throughout the skit while maintaining a straight face.
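The full code lives on the Github page linked below; a stripped-down sketch of the serial-to-solenoid idea might look like the following (the byte scheme and pin mapping here are assumptions, not the actual protocol):

```cpp
// Stripped-down sketch: byte value N pushes the solenoid on key N, and
// N + 64 pulls it back. Pins and the byte scheme are assumed for illustration.
const int NUM_KEYS = 8;
const int solenoidPins[NUM_KEYS] = {2, 3, 4, 5, 6, 7, 8, 9};

void setup() {
  Serial.begin(9600);
  for (int i = 0; i < NUM_KEYS; i++) {
    pinMode(solenoidPins[i], OUTPUT);
  }
}

void loop() {
  if (Serial.available() > 0) {
    int b = Serial.read();
    if (b < NUM_KEYS) {
      digitalWrite(solenoidPins[b], HIGH);       // push: press the key
    } else if (b >= 64 && b < 64 + NUM_KEYS) {
      digitalWrite(solenoidPins[b - 64], LOW);   // pull: release the key
    }
  }
}
```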

Overall we had a lot of fun developing this piece and are very satisfied with the outcome.

 

Github page.

Creation

There is a video, which is not completed yet (two cluster computers have crashed in the making of it).

Creation is a performance piece featuring robotics by Arnelle Etienne, Cleo Miao, Anna Rosati, and Elliot Yokum. It is inspired by the Chinese story of the creation of humans, in which the goddess Nuwa created the first humans out of clay.

In Creation, Arnelle plays the goddess, and with her singing she brings two creatures to life: one a minimalist puppet resembling wings, controlled by Anna; the other a human, played by Elliot. Over time, the goddess begins to play with a metallic percussion instrument and, after growing bored, gives the instrument to the human. The human plays the instrument by themself at first, only to soon discover technology, which they then use to play it. Through live mixing done by Cleo, the sounds grow throughout the room as machine replaces man’s job.

The goddess perch, with the heat sink instrument.

For this project, we chose to build an atmosphere in the room through sculptural elements, including a depiction of the goddess atop a podium on which Arnelle sat, and a large floating orb hung from the ceiling. The puppet Anna controlled was our final major sculptural component.

Anna and her wings puppet.

We used two robotic elements in the piece, driven by an Arduino with a motor shield. One was a motor that rhythmically plucked a string attached to the floating orb, which Cleo then live-mixed; the other was a solenoid on the ground near where Elliot sat, used to strike the heat sink percussion instrument. A rough sketch of the control loop follows the photo below.

The final setup. The block Arnelle is sitting on is where the string for the motor was placed, and the floor nearby is where the solenoid was placed.
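This is not the original code, but a minimal Arduino sketch of the two elements as described (all pins and timings are assumptions; the real motor ran through the motor shield):

```cpp
// Rhythmically pluck the orb's string with a motor and strike the heat sink
// with a solenoid. Pins and delays below are illustrative assumptions.
const int MOTOR_PIN = 9;       // PWM pin into the motor driver circuit
const int SOLENOID_PIN = 7;    // transistor driving the solenoid

void setup() {
  pinMode(MOTOR_PIN, OUTPUT);
  pinMode(SOLENOID_PIN, OUTPUT);
}

void loop() {
  analogWrite(MOTOR_PIN, 180);        // spin to pluck the string
  delay(300);
  analogWrite(MOTOR_PIN, 0);          // rest between plucks
  delay(700);

  digitalWrite(SOLENOID_PIN, HIGH);   // strike the heat sink
  delay(50);
  digitalWrite(SOLENOID_PIN, LOW);    // retract
  delay(950);
}
```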

Our project went through several iterations and concepts: early ideas included making a color theremin, rolling marbles down pipes, and making a robotic music box. However, upon finding two heat sinks and hearing the interesting noises they made when struck, we decided to do something with them. Eventually, the topic of the Chinese creation myth came up in a discussion about Cleo’s heritage, and Anna, a sculptor and puppeteer, came up with several pieces that could be used as sculptural elements. The combination of these two somehow turned into the performance piece we created.

An early brainstorming session.

We had several technical difficulties with this project. None of us were very comfortable with the Arduino software, our motor would randomly refuse to work, and our motor shield would occasionally start smoking. Many of the motor shields available for use were burnt or broken, leaving us with very limited working hardware. We had two failed performances due to issues with the Arduino, but finally succeeded on the third attempt, creating a performance that integrated storytelling, puppets, song, and robots.

Chaos | Order: a robotic musical compilation

Robot Sound Project | Arduino Theremin

Group Members | Adrienne Cassel, Amy Rosen, Patrick Miller-Gamble, Seth Glickman

Initial Brainstorming


Our project began with no shortage of creative, raw design ideas.  Flexing sheets of aluminum, shaking tambourines, playing an assortment of drums and percussion instruments, spinning and striking metal cylinders, throwing objects into operating blenders, motoring air pumps into buckets of water (of various sizes), and constructing a Rube Goldberg machine were all part of spirited brainstorming sessions.  Conjuring grandiose robotic visions, it would seem, was well within our collective skill set coming into the project.  Any experience or innate sense of how to build the components of these visions was unfortunately not.

Table of Initial Collected/Tested Tools


Use of Saw + Foot Cymbal Video

We began with a “golden spike”: a proof of concept that the four team members could together build a simple robotic musical device.  Starting from a “motor-test” patch, we removed the multi-directional code so the Arduino would spin an external motor in a single direction at a desired speed.  To the end of the motor we attached a liquid dropper, itself modified to hold a cut-off piece of a standard pencil at a perpendicular angle.  The motor and its attachments were placed inside a metal cylinder, which rang loudly as the motor spun the makeshift contraption.

“Pencil Metal Thing”


From there we aimed a degree larger: the air pump.  Removing the motor shield, we connected the Arduino to a more robust external power supply and programmed it with instructions adapted from a modified “blink” patch.  We connected an air pump found in the shop to the circuit and successfully got a degree of air pressure out of the pump.  Unfortunately, the pressure was not powerful enough to blow out a candle, let alone bubble through a bucket of water.  Our second attempt, though, was successful: we replaced the existing pump with a powersync and connected that bottleneck to a pump capable of more significant air power.

“Air Pump”


Pump Video

“Air Pump as Sound Activator” – Movement Hitting Other Instruments


Amidst other trials, we began constructing a narrative to guide the preparation for our eventual performance.  We listened closely to each prototype and began to appreciate various aspects of the sounds they created.  To us, they were robots in a given space: interacting, conversing, even fighting with one another.  We wrote Arduino code to operate servos at various speeds and delays, and combined these with the growing collection of other orphaned robot musicians.

“Robot Arguments”


Robot Argument Video

Meanwhile, one of the prototype developments exceeded our expectations.  Using a breadboard, a light sensor, and an external speaker, Adrienne constructed a system that translated and scaled light input into a variable audible frequency.  She had essentially created a performable Arduino-driven theremin, which quickly became the narrative denouement of the project.
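Adrienne’s code isn’t reproduced in the post, but the classic version of this circuit reads a photoresistor divider and maps it to a pitch; a minimal sketch along those lines (the pins and frequency range are assumptions) is:

```cpp
// Arduino "theremin" sketch: a photoresistor voltage divider on A0 scales
// light level to an audible pitch on a piezo/speaker. Pins and the frequency
// range are illustrative assumptions, not Adrienne's original values.
const int SENSOR_PIN = A0;   // photoresistor voltage divider
const int SPEAKER_PIN = 8;   // piezo or small speaker

void setup() {
  pinMode(SPEAKER_PIN, OUTPUT);
}

void loop() {
  int light = analogRead(SENSOR_PIN);          // 0-1023
  int freq = map(light, 0, 1023, 120, 1500);   // scale to audible Hz
  tone(SPEAKER_PIN, freq);
  delay(10);                                   // smooth the pitch changes
}
```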

Theremin Video 1

Theremin Video 2

Amy designed the staging such that the Arduinos and instruments were placed on “pedestals” and highlighted as sculptural entities.  Originally, the four group members were each going to play one of the instruments; however, after parsing and pruning a variety of performance configurations with the organic and robotic instruments, we curated the setup to highlight the theremin and use our various prototypes as accompaniment.  The cables and cords were carefully strung through the “pedestal” boxes to create a clean and composed performance.

“Robot Sculptures”


“Robot Band”


Test of Theremin with Vocals

Final Staging + Display of Tools


The experimental piece was performed at the Hunt Library’s Media Lab on Wednesday, April 6, 2016.

Robot Folk

 

Guy De Bree: Instrument Design and Creation

Mark Mendell: Instrument Design, Software/Hardware

Matthew Turnshek: Sound Design, Software

Making the instruments themselves was largely Guy’s responsibility. The design for the string portion of our instrument is based heavily on primitive folk instruments such as the diddley bow, albeit with a tuning peg. Matt obtained the wooden frame used for the string instrument, and it happened to provide a lot of structural convenience for us. Essentially, we attached a taut string to our instrument, along with a strumming and tuning mechanism that would be controlled by the Arduino.

The glass tube used as a percussion instrument was Guy’s. We decided to use it since it is a visually interesting object that produces a decent sound. Initially, we wanted to use servos, since they seemed like the best option for swinging some kind of mallet. Upon testing them, we found that the sound of the servos themselves also added a lot of body to the overall sound of the robot. We ended up using hot glue sticks for the mallets, since they could bend, making them safer to use when hitting glass.

Some of the craftsmanship on the string portion of the instrument could have been better. Unfortunately, Guy had to fall back on hot glue and tape, which aren’t generally good for heavy mechanical use in a machine like this. Fortunately, they held up this time, and it wouldn’t take much more work to upgrade the construction and make it more reliable.


Next, we needed to synchronize the movements of five servos. To do this, we connected them all to an Arduino Uno and a separate 5V power supply with the help of a breadboard. The Arduino was connected via USB to a computer running Max, which output serial data. The code on the Arduino interpreted the numbers 0-180 as degrees for the tuning-peg servo, while 181-184 were commands to strike or pluck with the corresponding servo. For plucking, the Arduino would alternate between the left and right sides of the string each time it got the signal. For striking, the servo would move toward the object and back a fraction of a second later.
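A rough sketch of that byte protocol on the Arduino side (pin numbers, servo angles, and which actuator is the plucker are assumptions; the real code differed in detail):

```cpp
#include <Servo.h>

Servo tuningPeg;
Servo actuators[4];                        // pluck and mallet servos
const int actuatorPins[4] = {3, 5, 6, 10}; // assumed pins
bool pluckLeft = true;                     // alternate pluck direction

void setup() {
  Serial.begin(9600);
  tuningPeg.attach(9);
  for (int i = 0; i < 4; i++) {
    actuators[i].attach(actuatorPins[i]);
  }
}

void loop() {
  if (Serial.available() > 0) {
    int b = Serial.read();
    if (b <= 180) {
      tuningPeg.write(b);                  // 0-180: set tuning-peg angle
    } else if (b <= 184) {
      int i = b - 181;                     // 181-184: strike/pluck servo i
      if (i == 0) {                        // assume servo 0 is the plucker
        actuators[i].write(pluckLeft ? 60 : 120);  // alternate string sides
        pluckLeft = !pluckLeft;
      } else {                             // mallets: swing and return
        actuators[i].write(120);
        delay(80);
        actuators[i].write(60);
      }
    }
  }
}
```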


Matt designed the algorithm ultimately used to control the string and mallets. We wanted to create a sound that changed over time and showcased the interesting sounds our instrument was capable of producing, so we focused on frequent tempo changes, alternating heavy and light sections, and twangy string sounds.

The algorithm switched between three phases in Markov chain fashion, with equal likelihood of entering each phase from each other phase. Each phase had a different weighting and tempo range, and, at each beat, a different likelihood that a mallet would swing or the string would be strummed or tuned.
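The control logic lived on the Max side, but the idea is compact enough to sketch; here is an illustrative C++ version (every tempo range and probability below is made up, not Matt’s actual weights):

```cpp
#include <cstdlib>
#include <ctime>

// Illustrative values only; the real weights and tempo ranges were Matt's.
struct Phase {
    int minBpm, maxBpm;    // tempo range while in this phase
    double malletChance;   // per-beat chance a mallet swings
    double stringChance;   // per-beat chance the string is strummed/tuned
};

int main() {
    std::srand(static_cast<unsigned>(std::time(nullptr)));
    const Phase phases[3] = {
        { 60,  80, 0.2, 0.6},   // sparse and twangy
        {100, 130, 0.5, 0.3},   // medium
        {150, 180, 0.8, 0.2},   // heavy
    };
    int current = 0;
    for (int beat = 0; beat < 128; ++beat) {
        const Phase& p = phases[current];
        if (std::rand() / (double)RAND_MAX < p.malletChance) {
            // send a strike byte (182-184) over serial
        }
        if (std::rand() / (double)RAND_MAX < p.stringChance) {
            // send a pluck byte (181) or a tuning byte (0-180)
        }
        // Occasionally jump, with equal likelihood, to one of the other phases.
        if (std::rand() % 8 == 0) {
            current = (current + 1 + std::rand() % 2) % 3;
        }
    }
    return 0;
}
```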


The strange acoustic noises our instrument produced, and the methodical yet sporadic way it delivered them, gave some of our listeners the impression of music from an exotic culture. This, along with the intimidating robotic visuals, led us to dub our project ‘Robot Folk’.