All posts by Melanie Kim

BLASTULA

  • art/animation & sound design: Adrienne Cassel
  • music & sound design: Seth Glickman
  • programming: Melanie Kim
  • hardware: Kaalen Kirrene

Our early discussions of the piece quickly coalesced around a few specific technological concepts and media devices: Interactivity, Audio Visualization, Custom Controller Construction.  We began with the idea that two guests would interact in a space where they had at least partial control over both the audio and visual elements in a collaborative experience.

Projection mapping came up in our talks, and this medium fit well within our original motivations and challenged us to integrate it within an interactive space.  We pulled inspiration from old-school tabletop video game units where players sat across from one another, and converted an earlier idea of an arcade-style interaction into gameplay activity positioned around a screen.

20160504_184748
Initial draft of the board and gameplay.

The final iteration of the game featured controller button shapes projected into the common space between the players when pressed.  Once “launched,” colliding pieces from both players would link and remain positioned within the center “membrane.”  As more combined shapes collected, the environment would increasingly vibrate until a critical mass was achieved, concluding the experience.

Designing and building the board

IMG_4863

We decided to use acrylic because it is easy to laser cut and provides a good surface to project onto. We wanted our board to have buttons and controls that we could projection map onto. We used a Makey Makey for the buttons and an Arduino to add an additional source of control that was more dynamic than a button press. The buttons were different geometric shapes so that each one’s projection mapping would be visually distinct.

IMG_4872

We used conductive tape on the bottom of each button and on the top of the board, with wires connected to both. That way, when a button was pressed, the two pieces of conductive tape would meet, completing the circuit. With the help of our mechanical design consultant Kelsey Scott, Kaalen designed the board in SolidWorks and then laser cut it in the fabrication lab. Then all that was left to do was wire up the circuit and attach the conductive tape to the buttons. We used hot glue to attach the springs to the buttons and the board so that each button would return to its unpressed position.

IMG_4866

Musical score and sound design

In tandem with working on the visual elements, we began to establish a sound design for the pieces and their interactions.  We wanted to create a signature sonic style for the game while also enabling a certain amount of user variability, fitting for the piece’s focus on interactivity.  We were aiming for short sounds with crisp attacks that could exist in a consistent digital sound space but would also uniquely identify each of the game piece shapes.

The sounds of the game piece shapes were to be “played” by the guests as they interacted with the controllers and engaged with the gameplay.  Could we establish a sort of instrument that could be performed?  Was there a way of being good at playing the game in terms of the overall sound experience?

The score was also composed with this in mind: first, to fit within the sonic space, and then to provide a sense of progression over which the gameplay sound design would supply a version of melody and counterpoint.

We established a general baseline of two to five minutes of user engagement per game session.  For a game of this length, customized linear background music can be used in place of a shorter repeating loop structure, fostering a feeling of forward progression through the game experience.  The final background music was 8.5 minutes of vertically-remixed digital tracks produced in Ableton Live.  If a game lasted longer than projected, the music would seamlessly loop.

Bringing it together in Unity

0
What the game looks like in the Unity editor.

The game was built in Unity2D and coded in C#. It has two sources of input: w/a/s/d (player 1) and the arrow keys (player 2) from the Makey Makey for the individual shapes, and the Arduino for the two dials (potentiometers) that the players use to aim the shapes.
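Since the Makey Makey simply shows up as keyboard input, the launch triggers reduce to key checks. As a rough sketch only (the prefab names, spawn positions, and Launch helper below are placeholders, not the actual project code):

    using UnityEngine;

    // Hypothetical sketch of the Makey Makey key mapping: each key launches one shape.
    // The prefab names, spawn positions, and Launch() helper are placeholders.
    public class PlayerInput : MonoBehaviour
    {
        public GameObject trianglePrefab;
        public GameObject squarePrefab;
        public GameObject circlePrefab;

        void Update()
        {
            // Player 1 (w/a/s/d through the Makey Makey)
            if (Input.GetKeyDown(KeyCode.W)) Launch(trianglePrefab, playerOne: true);
            if (Input.GetKeyDown(KeyCode.A)) Launch(squarePrefab, playerOne: true);
            if (Input.GetKeyDown(KeyCode.S)) Launch(circlePrefab, playerOne: true);

            // Player 2 (arrow keys through the Makey Makey)
            if (Input.GetKeyDown(KeyCode.UpArrow)) Launch(trianglePrefab, playerOne: false);
            if (Input.GetKeyDown(KeyCode.LeftArrow)) Launch(squarePrefab, playerOne: false);
            if (Input.GetKeyDown(KeyCode.DownArrow)) Launch(circlePrefab, playerOne: false);
        }

        void Launch(GameObject prefab, bool playerOne)
        {
            // Spawn the shape on the owning player's side of the board; the shape's own
            // script would handle its launch animation and sound.
            Vector3 origin = playerOne ? new Vector3(-6f, 0f, 0f) : new Vector3(6f, 0f, 0f);
            Instantiate(prefab, origin, Quaternion.identity);
        }
    }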

1
Arduino code.
2
C# code to set up the Serial input from Arduino. Make sure to include “using System.IO.Ports” at the top to import the proper library, and set the Api Compatibility Level to .NET 2.0.
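The screenshot itself isn’t reproduced here, but a minimal version of that setup might look something like this; the port name and baud rate are assumptions and would need to match the actual Arduino connection:

    using UnityEngine;
    using System.IO.Ports;  // requires Api Compatibility Level set to .NET 2.0

    public class ArduinoSerial : MonoBehaviour
    {
        // Port name and baud rate are assumptions; match them to the actual Arduino.
        SerialPort port = new SerialPort("COM3", 9600);

        void Start()
        {
            port.ReadTimeout = 50;  // keep reads from stalling a frame for too long
            port.Open();
        }

        void OnApplicationQuit()
        {
            if (port.IsOpen) port.Close();
        }
    }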
3
C# code to parse values from Arduino in the Update() function.
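Again as a sketch rather than the original code, and continuing the MonoBehaviour above: assuming the Arduino prints the two potentiometer readings as one comma-separated line per loop, parsing them in Update() could look roughly like this (the aim-angle mapping is a placeholder):

    // Continuing the ArduinoSerial sketch above. These fields would be read by
    // whatever script rotates each player's launcher.
    float player1AimAngle;
    float player2AimAngle;

    void Update()
    {
        try
        {
            // Assumes the Arduino prints something like "512,238\n" every loop().
            string[] parts = port.ReadLine().Split(',');
            if (parts.Length == 2)
            {
                int dial1 = int.Parse(parts[0]);
                int dial2 = int.Parse(parts[1]);
                // Map the 0-1023 potentiometer range to an aiming angle for each player.
                player1AimAngle = Mathf.Lerp(-60f, 60f, dial1 / 1023f);
                player2AimAngle = Mathf.Lerp(-60f, 60f, dial2 / 1023f);
            }
        }
        catch (System.TimeoutException)
        {
            // No new data this frame; keep the previous aim angles.
        }
    }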

This is the basic logic that was programmed:

  1. Pressing a button initiates a launch animation and sound unique to each shape.
  2. The launched shapes all have 3 frames of hand-drawn animation. They are unable to interact with each other until they are within the big circular waveform, which is also hand-animated.
  3. When player 1’s shape hits player 2’s shape within the big circle, the two combine to produce a unique sound.
  4. Every time two shapes combine in the middle, they vibrate in increasing amounts (a rough sketch of this combination logic appears below).
  5. The ending is reached after a certain number of shape combinations, at which point the background music slows and fades.
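
A loose sketch of steps 3 and 4 (not the project’s actual code) might look like the following; the membrane flag, the per-combination vibration increment, and the single-sided counting are all assumptions for illustration:

    using UnityEngine;

    // Rough sketch of steps 3 and 4: opposing shapes that meet inside the membrane
    // combine, play a sound, and vibrate harder as more combinations collect.
    public class Shape : MonoBehaviour
    {
        public bool isPlayerOne;
        public bool insideMembrane;       // set true once the shape enters the big circular waveform
        public AudioClip combineSound;

        static int combinationCount = 0;  // shared tally, also used to trigger the ending
        float vibration = 0f;
        Vector3 basePosition;

        void OnTriggerEnter2D(Collider2D other)
        {
            Shape otherShape = other.GetComponent<Shape>();
            if (otherShape == null || otherShape.isPlayerOne == isPlayerOne) return;
            if (!insideMembrane || !otherShape.insideMembrane) return;

            // Let only the player 1 shape register the pair so each combination counts once.
            if (isPlayerOne)
            {
                combinationCount++;
                AudioSource.PlayClipAtPoint(combineSound, transform.position);
            }

            // Both shapes begin vibrating, more strongly with each combination (step 4).
            basePosition = transform.position;
            vibration = combinationCount * 0.02f;
        }

        void Update()
        {
            if (vibration > 0f)
                transform.position = basePosition + (Vector3)(Random.insideUnitCircle * vibration);
        }
    }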
4
C# code for the audio manipulation at the end.
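That code isn’t shown here either, but slowing and fading an AudioSource in Unity can be sketched roughly like this; the three-second duration and the coroutine structure are assumptions:

    using System.Collections;
    using UnityEngine;

    // Sketch of the ending: ramp the background music's pitch and volume down to zero.
    // The three-second duration is an assumption, not the project's actual value.
    public class EndingAudio : MonoBehaviour
    {
        public AudioSource backgroundMusic;

        // Started with StartCoroutine(SlowAndFade()) once the critical mass of
        // combinations is reached.
        public IEnumerator SlowAndFade(float duration = 3f)
        {
            float startPitch = backgroundMusic.pitch;
            float startVolume = backgroundMusic.volume;

            for (float t = 0f; t < duration; t += Time.deltaTime)
            {
                float k = 1f - t / duration;
                backgroundMusic.pitch = startPitch * k;    // slows the music down
                backgroundMusic.volume = startVolume * k;  // fades it out
                yield return null;
            }
            backgroundMusic.Stop();
        }
    }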

The name, “Blastula,” was coined by team member Adrienne Cassel, as the gameplay pieces forming and collecting in the center reminded her of the hollow ball of cells that forms in early-stage embryonic development.

fish fugue

Ticha Sethapakdi: Concept, Software and Hardware Design
Kyoko Inagawa: Sound Design, Performance
Melanie Kim: Sound/Set Design, Experience Design

Fish Fugue employs computer vision to enable a soloist to perform with a goldfish-controlled toy piano accompaniment. A webcam mounted above the goldfish’s tank tracks the fish in Processing, while an Arduino dictates the notes played on a toy piano. As the goldfish moves to a different quadrant of the tank, the melody changes to reflect the fish’s position.

GitHub repository here.

The circuit diagram:

Fish_Fugue_Circuit_bb_revised

Arduino code (click on each to view in detail):

Processing code (click on each to view in detail):

IMG_0344

There are eleven solenoids connected to eleven keys on the toy piano (D, high D, E, high E, G, high G, A, high A, B, high B, high C). The Processing code divides the webcam feed into four sections, and the Arduino “plays” notes based on which section the fish is in. We therefore composed the four accompaniment parts using only these eleven notes, and made sure they would flow into one another in case the fish moved erratically between quadrants. The performer sees on the monitor which quadrant the fish is in and improvises her solo to fit the accompaniment; the monitor effectively becomes her musical score. The performance lasts three minutes, after which the code is set to stop playing the piano.
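The actual implementation is the Processing and Arduino code in the repository linked above; purely as an illustration (written in C# to match the sketches earlier in this post), the quadrant decision reduces to something like:

    // Illustrative only; the real code is the Processing + Arduino in the repository above.
    // Returns which of the four accompaniment parts to play based on the fish's position
    // in the webcam feed.
    static int QuadrantOf(float fishX, float fishY, float feedWidth, float feedHeight)
    {
        bool right  = fishX > feedWidth / 2f;
        bool bottom = fishY > feedHeight / 2f;

        if (!right && !bottom) return 0;  // top-left     -> accompaniment part 1
        if ( right && !bottom) return 1;  // top-right    -> accompaniment part 2
        if (!right &&  bottom) return 2;  // bottom-left  -> accompaniment part 3
        return 3;                         // bottom-right -> accompaniment part 4
    }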

musescore

IMG_0333

The set evokes playfulness with its childlike, minimalist colors. We strove to hide most of the “scary” guts of the circuits and electronics, as they would distract from the performance. One of the feelings we wanted to impart was that of a “miniature concert.”

IMG_0343

Special thanks to Fred the Fish for being such a hardworking swimmer and Jesse for providing us the solenoids.

temple

By Seth Glickman, Melanie Kim, and Elliot Yokum

A closer look (without a million people):

The accompanying music and the appropriate Max patch were composed by Seth, the individual notes on the pentatonic scale were produced by Elliot, and most of the visual elements were designed by Melanie. All of us did the wiring and circuitry.

Our controller was a Makey Makey. We used a combination of alligator clips and (lots and lots of) conductive tape to connect the Makey Makey to each of the five “altars.” The symbols on the altars were drawn on construction paper with an invisible blacklight ink pen. Whenever a person touched the two pieces of tape on either side of a symbol at the same time, they would complete the circuit, triggering the unique musical note and the light attached to it through Max. The light would fluctuate between a color and blacklight, which would reveal the symbol on the paper.

IMG_7618-convertedsmall

IMG_7614-converted

IMG_7617-converted

IMG_7615-convertedsmall

IMG_7623-convertedsmall

IMG_7626-convertedsmall

IMG_7627-convertedsmall

Early concepts and tests for Makey Makey, control scheme, and the symbols:

20160227_130854

The Speech

By Patrick Miller Gamble, Samir Gangwani, Melanie Kim, and Cleo Miao.

From the beginning, we wanted live performance for the project, as well as an overall feeling of anxiety, crowds, and being on stage. Then a simple idea occurred to us: what if you’re on the stage because you’re giving a speech, and the audience is also part of the performance? The mindset of a speaker on a stage became our subject.

The Intro Segment

We wanted to first transport the audience through sound, conveying the speaker’s movement through space: streets → dry space → confined space → auditorium space, etc. We made various recordings, ranging from field recordings to ones made directly through the computer. A notable one is our recording of Gene Ray’s “top bottom front back two sides,” which became the chant that grew monstrously near the end of the speech.

sounds

The recordings were pieced together in Logic; most of the noise was ordinary sound treated with reverb/preverb/echo/distortion, and the ambient noise was largely reversed piano. The stitched piece resulted in five different channels, each exported separately for ambisonic animation in Max. We set some keyframes and let them loop automatically during the performance.

1

2

The lighting during this part was done only with the stage lights native to the sound design room, controlled through sliders. We used two of them, and signified the transition to the next part of the piece with the entrance of the speaker (stepping onto the podium) and the light illuminating them from behind.

sliders

The Speech

The full speech can be found here. We also made a PowerPoint that played during the speech (advanced manually) for the audience to follow. We compiled the speech from standup comedy routines, conspiracy theorists, and political speeches, trying to capture a wide range of the different tones of voice and rhythms people use when speaking publicly. For instance, the tone of voice used when greeting the audience is very different from the tone when confronting a heckler, telling a joke, or saying something serious. A goal of the experiment was to divorce speech patterns from their literal meaning in order to appreciate them musically and/or sonically.

Once the speaker started the speech, we manipulated their voice as well as various sound files to insert crowd reactions such as boos. The live sound file and voice manipulation were programmed to be controlled with TouchOSC on an iPad Mini.

screen 1

This first screen contains one large square with a dot that can be dragged around the room, which corresponds to an ambisonic encoder (hoa.map) in Max. The voice can be altered by two main sources: a feedback engine and a reverb engine. The feedback engine was controlled with the top two vertical sliders (fb output and feedback) and the two knobs (delay and transpose). The reverb engine was created using a third-party patch called yafr2, which controls four parameters (high-frequency dampening, grain size, diffusion, and decay time) that correspond to the four horizontal sliders.

screen 2

The second screen controls eight solo sound samples that can be triggered simply by clicking the on/off buttons. Each sample has its own box, just like the large box on the first screen, which allows it to be spatialized however the performer would like.

The Ending

The chants were actually a continuation of the intro segment from earlier, which had been playing this entire time. The stage lights went off, and the grid lights in the room went haywire during the chanting to reinforce the frantic feeling. This was activated manually to match the timing, but accomplished through Max as usual with a looping element:

3

4

melanie – sound pieces

Breaking Bad! No spoilers (I think). A remix made from various parts of the show. What I love about this one is that the composer made the characters react to each other actively through the sound clips, with the video contributing immensely.

O Cara Mia, Addio, played on floppy drives. This is a song sung at the end of Portal 2 by the sentry turrets, which have been shooting at the player on sight the entire game.  I found this one interesting because it’s a song sung by machines in the game, and here it is literally being sung by machines (floppy drives).