All posts by Ticha

Tinkering with Tinko: Episode 1

Tamao Cmiral:  “Tinko”, Costume Design
Erik Fredriksen: “Honky Tonk”, Sound Design, Script
Mark Mendell:  Max Programmer, Guy Who Cues the Lights and Sounds
Ticha Sethapakdi:  Lighting Design, Arduino Programmer, Sign Holder

For our project we were interested in making a performance that played out like an episode of a children’s television show.  The performance involves one actor and a “puppeteer” who controls the robotic toy piano using a MIDI keyboard.

Content-wise, the episode has the host (Tinko) teaching his toy piano (Honky Tonk) how to play itself and contains motifs such as disappointment, external validation, and happiness.  And of course, feelin’ the music.

Our diverse set of skills is what allowed us to bring this show to life.  Erik wrote most of the script and recorded the very professional-quality show tunes; Mark made a Max patch that converts note on/off messages from a MIDI keyboard into byte codes sent to the Arduino over serial, as well as a patch that lets him control the light/sound cues from TouchOSC; I wrote Arduino code that reads bytes from the serial port and pushes/pulls the solenoids on the piano keys depending on which bytes were read, and I made the lighting animations; and Tamao put together a Tinko-esque costume and spoke in a weird voice throughout the skit while maintaining a straight face.
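
For the curious, here is a minimal sketch of what the Arduino side of that serial link could look like. The pin numbers, baud rate, and one-byte-per-event protocol below are illustrative assumptions, not our exact code (which lives on the Github page linked below).

```
// Illustrative sketch only -- the real code is on our Github page.
// Assumed protocol: bit 7 = note on (1) / note off (0), bits 0-5 = key index.
// Assumed wiring: each solenoid driver sits on a consecutive digital pin.

const int NUM_KEYS  = 8;   // hypothetical number of solenoid-driven keys
const int FIRST_PIN = 2;   // hypothetical first driver pin

void setup() {
  Serial.begin(9600);      // baud rate assumed; must match the Max patch
  for (int i = 0; i < NUM_KEYS; i++) {
    pinMode(FIRST_PIN + i, OUTPUT);
  }
}

void loop() {
  while (Serial.available() > 0) {
    byte b = Serial.read();
    int key     = b & 0x3F;   // lower bits select the key
    bool noteOn = b & 0x80;   // high bit: push (on) or release (off)
    if (key < NUM_KEYS) {
      digitalWrite(FIRST_PIN + key, noteOn ? HIGH : LOW);
    }
  }
}
```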

Overall we had a lot of fun developing this piece and are very satisfied with the outcome.

 

Github page.

Ticha – Listening Examples

Since we’ll apparently be doing stuff with Arduinos later on, I thought these projects would be a nice segue into that.

This is a musical robot created by a guy named Eric Singer, who is a member of the League of Electronic Musical Urban Robots (LEMUR).  In his practice, Singer makes a lot of weird musical instruments and incorporates the Arduino in his projects pretty frequently.

This project uses capacitive touch sensors and an accelerometer/gyroscope (I think?) to create sounds and process audio effects in real time. It’s really great to see musicians integrating their instruments with fashion, as it expands the possibilities of live performance and wearable technology.

 

MEDIATOR

Note: The video has a lot of low-frequency sounds, which may not be very audible through certain speakers.

 

Our development process for this piece was collaborative, iterative, and open to change as the work developed.

Adrienne working with Logic, the DAW used to edit the sounds

We began by identifying and recording “machine-like” sounds from the natural environment.  The raw recordings were edited down, processed, and collaged in a Digital Audio Workstation (DAW), then played through an ambisonic Max/MSP patch that distributed the mix into an 8.1 surround system.  We listened and made several revisions to form a cohesive composition with a narrative arc and elements of rhythm, melody, and counterpoint.

A GIF of our Max patch, because GIFs are cool

Over the course of many playthroughs, we recognized that the sound space would be well supported by a design for the room’s lighting grid.  We accomplished this with a 6×5-pixel color animation in Adobe After Effects, mapping each pixel to a corresponding fixture in the 30-light grid and adding new functionality to the Max patch.  As our piece was ultimately influenced by the film Metropolis, we worked to create an ambient experience that incorporated the film’s motifs of chaos and order.  This process was again refined through several iterations and repeated listens.
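
As a rough illustration of the pixel-to-fixture idea (the actual mapping happens inside the Max patch, so the row-major ordering and single 0–255 brightness channel here are assumptions made for clarity):

```
// Illustrative only: map a 6x5 animation frame onto 30 light levels.
// The real mapping lives in the Max patch that drives the grid.
#include <array>
#include <cstdint>

constexpr int COLS = 6;
constexpr int ROWS = 5;

// frame[row][col] holds a brightness value sampled from the After Effects render.
using Frame = std::array<std::array<uint8_t, COLS>, ROWS>;

// lights[i] is the level sent to fixture i of the 30-light grid.
std::array<uint8_t, COLS * ROWS> frameToLights(const Frame& frame) {
  std::array<uint8_t, COLS * ROWS> lights{};
  for (int r = 0; r < ROWS; r++) {
    for (int c = 0; c < COLS; c++) {
      lights[r * COLS + c] = frame[r][c];   // pixel (r, c) -> fixture r*6 + c
    }
  }
  return lights;
}
```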

Further adjustments were made to the patch to move critical sonic elements through the space by animating their locations in the surround environment.
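
The spatial movement itself was handled by the ambisonic Max/MSP patch, but as a simplified illustration of the idea, here is an equal-power amplitude pan across a ring of eight speakers; the speaker layout, angle convention, and panning law are assumptions for the sketch, not the patch’s actual method.

```
// Simplified illustration of moving a source around a ring of 8 speakers.
// Our patch used ambisonic encoding/decoding; this substitutes a basic
// equal-power crossfade between the two nearest speakers for clarity.
#include <array>
#include <cmath>

constexpr int NUM_SPEAKERS = 8;
constexpr double PI = 3.14159265358979323846;

// Returns one gain per speaker for a source at the given azimuth (radians).
std::array<double, NUM_SPEAKERS> panGains(double azimuth) {
  std::array<double, NUM_SPEAKERS> gains{};
  const double spacing = 2.0 * PI / NUM_SPEAKERS;
  // Wrap the azimuth into [0, 2*pi) and express it in units of speaker spacing.
  double wrapped = std::fmod(std::fmod(azimuth, 2.0 * PI) + 2.0 * PI, 2.0 * PI);
  double pos  = wrapped / spacing;
  int lower   = static_cast<int>(pos) % NUM_SPEAKERS;
  int upper   = (lower + 1) % NUM_SPEAKERS;
  double frac = pos - std::floor(pos);
  gains[lower] = std::cos(frac * PI / 2.0);   // equal-power crossfade
  gains[upper] = std::sin(frac * PI / 2.0);
  return gains;
}

// Animating the azimuth over time (e.g. azimuth = speed * t) sweeps the
// sound around the listener.
```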

It was at this point that we noticed that light-colored objects beneath the lighting grid reflected the light well into the space.  We placed various materials under the lights to gauge their effect.  The room’s white wooden tables served this purpose best, and we proceeded to experiment with their placement.  Beginning with a contiguous rectangular shape, we experimented further until we agreed upon an angular, disjointed configuration that matched the sense of solitude expressed in the overall work.

 

The completed piece was rehearsed and performed on Monday, February 8th in the Hunt Library Media Lab at Carnegie Mellon University.


Main Responsibilities
Seth Glickman: Experience Director
Adrienne Cassel: Sound Designer/Editor
Ticha Sethapakdi: Lighting Designer

Ticha – Interesting Sounds

You’ve probably already heard these kinds of sounds in sci-fi movies or video games, but what’s interesting about these sounds in particular is that they were all achieved with dry ice! While dry ice is commonly used to create smoke effects for live performances, it can also be a surprisingly powerful instrument for sound design. The “howling” sound you hear at the beginning of the clip, for example, comes from the dry ice vibrating the metal it was in contact with.

Here’s a video of the process:

And a link to the article on it.