Interactive Sculpture

Ice-Burg


What new ways can we find to combine physical and digital interaction?

Ice-Burg is an interactive installation created by a group of Human-Computer Interaction graduate students at DePaul University. Their goal was to create a piece that encourages users to explore a digital public space across two sculptures via multiple senses: touch, sound, and visualization.

I was commissioned as a guest sound artist for both my passion for sound and electronic music and my knowledge of the tools needed to execute that vision on a low budget, using just two Raspberry Pis. My own goal was to unify my skills as an experience designer and a sound designer, and to encourage and support users through sound.

I've also long been interested in generative and procedural media of all kinds, but in this case I was particularly inspired by some of Brian Eno's work. One example is Discreet Music, which sets up a series of loops of different lengths playing against each other, effectively playing forever.

I used Max/MSP for prototyping sounds, musical algorithms, and ideas, and for creating sound study "snapshots" for the larger team to give feedback on. Once the larger team and I were aligned on the final intent, I ported my prototypes to Pure Data, which I installed on a set of Raspberry Pis. Each Raspberry Pi communicates with the larger sculpture via OSC over Ethernet.
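
As a rough illustration of that last link in the chain, here is a minimal Python sketch of an OSC receiver using the python-osc library. The real receiver was a Pure Data patch, and the address pattern and arguments shown here are hypothetical stand-ins for the project's actual sound API.

    from pythonosc import dispatcher, osc_server

    def on_facet(address, facet_id, state):
        # In the installation, an event like this would drive a sound
        # voice; here we just log it. The two-int payload is an assumption.
        print(f"{address}: facet {facet_id} -> {'touched' if state else 'released'}")

    disp = dispatcher.Dispatcher()
    disp.map("/facet/touch", on_facet)  # hypothetical address pattern

    # Each Pi listens on its Ethernet interface for OSC from the sculpture.
    server = osc_server.ThreadingOSCUDPServer(("0.0.0.0", 9000), disp)
    server.serve_forever()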

Ambient State.


When left alone by users, the sculpture reverts to an ambient state. My original brief from the team mentioned pure, constant ambient sounds and humming, but I was concerned about the actual placement of the piece (in a student center) and the potential to irritate students and staff. I developed a musical algorithm to softly play an infinite chord progression, set in E Major.

Although the algorithm plays both altered four-note chords and less "musical" chords like a diminished 7th, it is weighted to return to a tonal center frequently and to keep an uplifting tone that encourages interaction.
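
To give a flavor of how such a weighted walk can work, here is a small Python sketch. The chord voicings and weights are illustrative stand-ins, not the installation's actual tables; the point is that heavier weights on the tonic family keep pulling the progression home.

    import random

    # Chords voiced as semitone offsets above E. Heavier weights on the
    # tonic family bias the infinite walk back toward the tonal center.
    CHORDS = {
        "Imaj7":    ([0, 4, 7, 11],   5),  # E  G# B  D#
        "IVmaj7":   ([5, 9, 12, 16],  3),  # A  C# E  G#
        "vi7":      ([9, 12, 16, 19], 2),  # C# E  G# B
        "V7":       ([7, 11, 14, 17], 2),  # B  D# F# A
        "vii_dim7": ([11, 14, 17, 20], 1), # D# F# A  C, the less "musical" one
    }

    def next_chord():
        names = list(CHORDS)
        weights = [CHORDS[name][1] for name in names]
        pick = random.choices(names, weights=weights)[0]
        return pick, CHORDS[pick][0]

    for _ in range(8):  # the real patch simply never stops
        print(next_chord())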

I enjoyed the challenge of walking a line between music and sound design for this state. You wouldn't mistake the ambient state for a song, but it does have something in common with dreamy Romantic-era piano music.

Facet State.


The facet state is more directly tied to user interaction. As users touch facets of the sculpture, it reacts with sound and light. As more users touch, it becomes a brighter, louder, more immersive experience. I used additive synthesis to generate tones within the harmonic series, which join together smoothly to create a richer, louder drone.
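
A minimal Python sketch of the idea (the production version was a Pure Data patch): sum sine partials at integer multiples of a fundamental, with more active facets mapped to more partials. The E2 fundamental and 1/n amplitude rolloff are my own assumptions for illustration.

    import numpy as np

    SAMPLE_RATE = 44100

    def facet_drone(active_facets, fundamental=82.41, duration=2.0):
        """Additive drone: one harmonic partial per active facet, so the
        sound grows richer and louder as more users touch the sculpture."""
        t = np.linspace(0, duration, int(SAMPLE_RATE * duration), endpoint=False)
        out = np.zeros_like(t)
        for n in range(1, active_facets + 1):
            out += np.sin(2 * np.pi * fundamental * n * t) / n  # 1/n rolloff
        peak = np.abs(out).max() or 1.0  # avoid dividing by zero with no facets
        return out / peak  # normalize to avoid clipping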

The second half of the sound study demonstrates how the sound changes for "correspondence" events, which occur when users are touching corresponding facets on two different sculptures. I inverted and transposed the initial series of tones to create a deep, slightly unstable sound.
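
One way to read that transformation, sketched under my own assumptions: inverting the harmonic series yields a subharmonic series (f/n instead of n·f), and transposing it down deepens the result. Subharmonics don't fuse the way harmonics do, which can give that slightly unstable quality.

    def correspondence_partials(fundamental=82.41, count=6, transpose=0.5):
        # Subharmonic (inverted) series, shifted down an octave by default.
        return [fundamental * transpose / n for n in range(1, count + 1)]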

How It Works.

[Image: ice-burg-example-01.png]

I used Pure Data as my primary production tool, and Max/MSP for prototyping. Pure Data is a dataflow (visual) programming language that is great for rapidly creating multimedia work, especially sound. One of Pd's strengths is that it is very lightweight and will run on virtually anything, including embedded Linux machines like the Raspberry Pi. Max/MSP, conversely, is a much heavier-weight commercial program for macOS and Windows that often works at a higher level of abstraction, which is useful for realizing sonic ideas quickly.

When I became involved in the project, I created a minimum viable product in Pure Data with all the necessary API endpoints in place, and from that I wrote technical documentation explaining the formatting and parameters of the sound API. This allowed the development team to build against the API while I was still deep in working out the aesthetics of the piece.
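
From the development team's side, working against that API could look something like the following Python sketch, again with hypothetical address patterns and arguments standing in for the documented ones.

    from pythonosc.udp_client import SimpleUDPClient

    client = SimpleUDPClient("192.168.0.20", 9000)  # one sculpture's Pi

    client.send_message("/facet/touch", [4, 1])  # facet 4 touched
    client.send_message("/facet/touch", [4, 0])  # facet 4 released
    client.send_message("/correspondence", [4])  # matching facets on both sculptures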

I then created a series of prototypes and audio snapshots in Max/MSP, simulating the API with mock user interaction via Max's native UI elements, and ran through various scenarios to give the team more visibility into my process. Once the team had approved the sound studies, I ported those Max/MSP prototypes to Pure Data and got the performances running on embedded Linux.
