But Will It Synch to My Palm?

Here we go again, chanting the digital mantra: “New technology X will allow you to do Y faster, better, and cheaper than before. Sharper projection, more lights, clearer sound, and cleaner dishes too!” There's certainly no question that digitally controlled production equipment has opened up incredible new possibilities for design in live performance, but this mantra of extension and refinement tends to leave intact the assumptions about how control of the stage environment should work.

Emerging digital technology lets us rethink these assumptions, which were once driven by cost and reliability concerns. Underneath the front panels and behind the user interfaces that inspire our chanting, manufacturers have created thousands of precise, virtual “knobs” that control today's lighting, sound, projection, and scenery systems. These parameters can be controlled by anything that speaks their digital language; they need not be limited to complex crossfades between automation snapshots. Like many others, we at the HyperMedia Studio in the UCLA School of Theatre, Film, and Television are exploring the creative potential of adaptive stage environments created by connecting these digital “knobs” to new types of intelligent control.

For years, experimental dance and performance art have used combinations of basic sensors and automated production equipment to experiment with controlling stage lighting, sound, and video processing based on performers' actions. With the computing industry's drive towards “smart” homes and workplaces, it seems only a matter of time before more sophisticated technology for sensing and interpreting human action will be available at low cost. The theatrical and theme park experiences of the near future won't come from virtual reality, but from connecting these systems that observe movement, language, and gesture to real-time control over lighting, sound, and other media, building experiences on the fly based on what people do in physical space.

Inspired by the potential of these environments, we are working with what can be achieved in traditional productions today to understand the possibilities and creative challenges of these spaces. For the UCLA Department of Theatre's spring production of Ionesco's Macbett, directed by MFA candidate Adam Shive, students at the HyperMedia Studio began to build lighting and sound systems that could adapt dynamically to actors' performances. Shive created a set of aesthetic guidelines for the use of these “interactive systems,” focusing on certain characters' supernatural control over the world of the play. We concentrated on extending ideas from experimental pieces into stable, reliable systems for the production, so that cues could be authored rather than “hacked” together.

Martin Professional's Lighting Director (MLD) was used to track the positions of six performers and two props. Installed above the stage, the MLD's four ultrasonic speakers emit pulses at different frequencies. Modified wireless mics pick up and transmit these pulses back to the MLD, where their time of flight is used to triangulate each tracker's position. The MLD was chosen over video tracking systems popular in experimental productions because it can identify and track performers without the restrictions on design currently necessary for computer vision. It was easier to work with costume designer Ivan Marquez to conceal wireless beltpacks than to convince him to color-code the actors so video tracking might tell them apart! Maiko Nezu's set for Macbett had a 60' × 20' playing area with the audience seated on both long sides. With this configuration, the MLD could track to within a few inches, updating 13 times per second.
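
For the curious, the geometry behind this kind of tracking is straightforward: each measured flight time gives a distance to one emitter, and four distances pin down a point in space. Below is a minimal Java sketch of that trilateration; the emitter positions and constants are invented for illustration and this is not the MLD's internal algorithm.

```java
// Illustrative trilateration sketch; emitter layout and values are invented,
// and this is not the MLD's actual algorithm.
public class Trilateration {
    static final double SPEED_OF_SOUND = 343.0; // meters per second, approximate

    // Emitter positions above the stage, in meters (illustrative values only).
    static final double[][] EMITTERS = {
        {0, 0, 6}, {18, 0, 6}, {0, 6, 6}, {18, 6, 6}
    };

    // Given the pulse flight time (seconds) from each emitter to the beltpack,
    // estimate the tracker's 3D position by linearizing the sphere equations.
    static double[] locate(double[] flightTimes) {
        double[] d = new double[4];
        for (int i = 0; i < 4; i++) d[i] = SPEED_OF_SOUND * flightTimes[i];

        // Subtracting sphere 0 from spheres 1..3 yields three linear equations
        // A * [x y z]^T = b in the unknown position.
        double[][] a = new double[3][3];
        double[] b = new double[3];
        double[] e0 = EMITTERS[0];
        for (int i = 1; i < 4; i++) {
            double[] ei = EMITTERS[i];
            for (int j = 0; j < 3; j++) a[i - 1][j] = 2 * (ei[j] - e0[j]);
            b[i - 1] = d[0] * d[0] - d[i] * d[i]
                     + sq(ei[0]) - sq(e0[0]) + sq(ei[1]) - sq(e0[1]) + sq(ei[2]) - sq(e0[2]);
        }
        return solve3x3(a, b);
    }

    static double sq(double v) { return v * v; }

    // Cramer's rule for a 3x3 system; fine for a sketch, not for production numerics.
    static double[] solve3x3(double[][] a, double[] b) {
        double det = det3(a[0][0], a[0][1], a[0][2], a[1][0], a[1][1], a[1][2], a[2][0], a[2][1], a[2][2]);
        double x = det3(b[0], a[0][1], a[0][2], b[1], a[1][1], a[1][2], b[2], a[2][1], a[2][2]) / det;
        double y = det3(a[0][0], b[0], a[0][2], a[1][0], b[1], a[1][2], a[2][0], b[2], a[2][2]) / det;
        double z = det3(a[0][0], a[0][1], b[0], a[1][0], a[1][1], b[1], a[2][0], a[2][1], b[2]) / det;
        return new double[]{x, y, z};
    }

    static double det3(double a, double b, double c, double d, double e, double f, double g, double h, double i) {
        return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g);
    }
}
```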

The Lighting Director can follow performers with automated fixtures and output dynamic DMX and MIDI control, but we were interested in the raw position information — the 3D coordinates describing where performers are onstage. A Java-based network position server was created to receive continuous updates of these coordinates from the MLD and make them available over TCP/IP on an Ethernet network, allowing any other machine to get its own “subscription” to position data, in much the same way as people connect to streaming music over the Internet. Subscriptions were distributed directly to the sound and lighting controllers, to intermediate servers for more processing, and to utility stations like the laptop used to monitor performance data from the house.
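
A subscriber only needs to open a TCP connection and read the stream. The sketch below shows what such a client might look like in Java; the host name, port, and one-line-per-update format are assumptions for illustration, not the production protocol.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.Socket;

// Hypothetical subscriber to the position server; host, port, and the
// "id x y z" line format are assumptions, not the production protocol.
public class PositionSubscriber {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("position-server.local", 9000);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(socket.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                String[] f = line.trim().split("\\s+");
                String trackerId = f[0];
                double x = Double.parseDouble(f[1]);
                double y = Double.parseDouble(f[2]);
                double z = Double.parseDouble(f[3]);
                // Each subscriber decides what to do with the stream:
                // drive a cue, log for monitoring, or feed another server.
                System.out.printf("%s at (%.2f, %.2f, %.2f)%n", trackerId, x, y, z);
            }
        }
    }
}
```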

The server also continuously calculated each tracker's distance to every other tracker. This “relative distance” data later encouraged us to explore modifying design parameters, such as sound cue volume or panning, based on relationships between performers that were independent of their absolute positions onstage.
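
The calculation itself is simple: on every update, recompute the distance between each pair of trackers. A sketch of that loop, with illustrative names:

```java
// Sketch of the "relative distance" calculation: for n trackers, the server
// recomputes every pairwise distance on each update. Names are illustrative.
public class RelativeDistance {
    // positions[i] = {x, y, z} for tracker i, in the same units the MLD reports.
    static double[][] pairwiseDistances(double[][] positions) {
        int n = positions.length;
        double[][] dist = new double[n][n];
        for (int i = 0; i < n; i++) {
            for (int j = i + 1; j < n; j++) {
                double dx = positions[i][0] - positions[j][0];
                double dy = positions[i][1] - positions[j][1];
                double dz = positions[i][2] - positions[j][2];
                dist[i][j] = dist[j][i] = Math.sqrt(dx * dx + dy * dy + dz * dz);
            }
        }
        return dist;
    }
}
```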

Sound designer David Beaudry developed his cues in Cycling '74's Max/MSP software on two Macintosh computers with multichannel audio interfaces. Max is a graphical programming language with superb real-time audio processing capabilities and a network interface that required only a small amount of custom code to subscribe to the performance data servers. He ran traditional cues and the operator's interface on one machine, with the other dedicated to “dynamic” cues. Beaudry, a doctoral student in music, used Max to create interactive sound cues in which volume, panning, reverb level, and choice of sound effect were influenced by performance data streaming in over the network. The wide variety of plug-ins and objects available for Max makes it a great tool for experimentation.
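
The mappings themselves are conceptually simple, even though they were built as Max patches rather than written code. As a rough illustration only, and not a reproduction of Beaudry's patches, the kind of position-to-audio mapping involved might look like this in Java; the stage width comes from the set description, and the rest is assumed.

```java
// Not a Max patch, just the shape of one such mapping in plain Java: pan
// follows the performer across the stage width, and volume falls off with
// distance from a listening point. Everything here is an assumption.
public class PositionToAudio {
    static final double STAGE_WIDTH_FT = 60.0;

    // Map stage x-position (feet) to a stereo pan from -1 (left) to +1 (right).
    static double pan(double xFeet) {
        return Math.max(-1.0, Math.min(1.0, (xFeet / STAGE_WIDTH_FT) * 2.0 - 1.0));
    }

    // Scale cue volume (0..1) down as the performer moves away from a reference point.
    static double volume(double distanceFeet, double maxDistanceFeet) {
        double v = 1.0 - distanceFeet / maxDistanceFeet;
        return Math.max(0.0, Math.min(1.0, v));
    }
}
```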

The UCLA Freud Playhouse was equipped for Macbett with an ETC Obsession II controlling Sensor dimmers, Vari*Lite automated fixtures, and Wybron Coloram II color scrollers. We created “dynamic lighting” software for Windows NT that coexisted with the Obsession and acted as an intelligent bridge between live sensor data and the DMX world of lighting control. The console's primary DMX output passed through this NT machine before reaching the house distribution system. Normal cue editing and execution on the console was unaffected until a cue on the dynamic system was activated by a MIDI Show Control “cue go” from the Obsession. Then, the software executed its own cue that dynamically replaced or modified certain dimmer levels based on live performance data.
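
The bridging idea can be sketched in a few lines. The class below is not the NT software itself, only an illustration of the pass-through-and-override logic it performed; the names and data structures are invented.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the bridging idea only: pass the console's DMX levels through
// untouched unless a dynamic cue is active, in which case specific dimmer
// channels are replaced with values computed from live performance data.
public class DynamicDmxBridge {
    private boolean dynamicCueActive = false;
    private final Map<Integer, Integer> overrides = new HashMap<>(); // channel index -> level (0-255)

    // Called when a MIDI Show Control "go" from the console arms a dynamic cue.
    public void onCueGo() { dynamicCueActive = true; }

    // Called for each incoming DMX frame from the console (512 levels, 0-255).
    public int[] process(int[] consoleLevels) {
        int[] out = consoleLevels.clone();
        if (dynamicCueActive) {
            for (Map.Entry<Integer, Integer> e : overrides.entrySet()) {
                out[e.getKey()] = e.getValue();
            }
        }
        return out; // forwarded to the house DMX distribution
    }

    // Called as new performance data arrives, e.g. to light a path in front of a performer.
    public void setOverride(int channel, int level) {
        overrides.put(channel, Math.max(0, Math.min(255, level)));
    }
}
```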

We worked with lighting designer David Miller to author these dynamic cues in a simple text-based scripting language that defined relationships between performance data and fade, wait, intensity, and even choice of channels. For example, when the character Macol appears in the final act, a single dynamic cue lights his initial dialogue from a platform at one edge of the stage through his cross to the center to kill Macbett. The stage manager calls the cue when Macol appears on the platform and the operator's Go on the Obsession simultaneously triggers both the console and the dynamic system. The Obsession establishes the scene and platform light for his initial dialogue. When he finally leaves the platform, the dynamic system takes over, opening a path of light before him with each step, while capturing and dimming the rest of the Obsession cue as he moves inward. By the time Macol is in the center, he is alone with Macbett in downlight brought in based on his position. A multi-part dynamic cue handles this sequence based on Macol's distance from centerstage, matching his timing every night. After Macbett is killed, the operator takes the next light cue on the Obsession, simultaneously fading out dynamic control and bringing in the next traditional cue.
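
The production's script syntax isn't reproduced here, but the logic of the Macol cross can be suggested in code. In the hypothetical Java sketch below, the channel numbers and fade curve are invented; only the shape of the cue follows the description above.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical rendering of the Macol cross as code rather than the
// production's script syntax. Channel numbers and curves are invented.
public class MacolCrossCue {
    // Downlight channels spaced along Macol's path from the platform to centerstage.
    static final int[] PATH_CHANNELS = {101, 102, 103, 104, 105};

    // progress runs from 0.0 (on the platform) to 1.0 (at centerstage),
    // derived on each update from his tracked distance to center.
    static Map<Integer, Integer> levels(double progress) {
        Map<Integer, Integer> out = new LinkedHashMap<>();
        for (int i = 0; i < PATH_CHANNELS.length; i++) {
            // Each channel peaks as Macol reaches its spot, opening a path before him.
            double spot = (i + 1) / (double) (PATH_CHANNELS.length + 1);
            double level = clamp(1.0 - Math.abs(progress - spot) * 3.0);
            out.put(PATH_CHANNELS[i], (int) Math.round(level * 255));
        }
        return out; // merged over the captured console levels, which dim as progress grows
    }

    static double clamp(double v) { return Math.max(0.0, Math.min(1.0, v)); }
}
```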



Position-tracking elements concealed in witches' staffs in a production of Macbett. Photos by Jeff Burke

This cue was easy to express in a text script for the lighting system, but other cues, even some that only consider performer location, were often much simpler to express visually. We created an intermediate server to map performer position onto Adobe Photoshop images, finding the opacity of the corresponding pixel in each image layer and reporting it to the lighting and sound systems (or any other subscriber). This “region mapping server” is based on a layer-drawing feature of the MLD software, with a few twists. It ran on a separate machine, receiving network position information for all active trackers — in Macbett, this meant 160 updates every second. With each update, the software checked the tracker's coordinates against the corresponding pixel in each layer of the image, reporting an opacity value from zero (transparent) to 255 (opaque). When the director wanted the Butterfly Hunter, a classic insertion of the absurd by Ionesco, to move through a sound environment that became progressively more detailed as he chased his invisible quarry onto the stage, Beaudry used the tool to draw smooth position-based fades of overlapping audio tracks.
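
The core of the region mapping is just a coordinate conversion and a pixel lookup. The Java sketch below illustrates the idea with invented class and method names; the real server also handled the networking and all of the active trackers at once.

```java
import java.awt.image.BufferedImage;
import java.util.ArrayList;
import java.util.List;

// Sketch of the region-mapping idea: stage coordinates are scaled into pixel
// coordinates on each image layer, and the layer's alpha at that pixel (0-255)
// becomes the value reported to subscribers. Stage dimensions follow the set
// description; the layer images themselves would be exported from Photoshop.
public class RegionMapper {
    static final double STAGE_WIDTH_FT = 60.0, STAGE_DEPTH_FT = 20.0;
    private final List<BufferedImage> layers = new ArrayList<>(); // ARGB images

    public void addLayer(BufferedImage layer) { layers.add(layer); }

    // Returns one opacity value per layer for a tracker at (xFeet, yFeet).
    public int[] opacitiesAt(double xFeet, double yFeet) {
        int[] out = new int[layers.size()];
        for (int i = 0; i < layers.size(); i++) {
            BufferedImage img = layers.get(i);
            int px = (int) Math.min(img.getWidth() - 1, Math.max(0, xFeet / STAGE_WIDTH_FT * img.getWidth()));
            int py = (int) Math.min(img.getHeight() - 1, Math.max(0, yFeet / STAGE_DEPTH_FT * img.getHeight()));
            out[i] = (img.getRGB(px, py) >>> 24) & 0xFF; // alpha: 0 = transparent, 255 = opaque
        }
        return out;
    }
}
```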

Both of these sequences could be achieved without interactivity. Yet connecting elements of the stage design directly to performers — mapping position or movement to the digital knobs of certain cues — eliminates a layer of mediation between action and design. In cases where the “risk” of allowing performers to affect cues is acceptable, the designer can establish relationships directly rather than approximating them with pretimed crossfades. This has amazing implications for theatre and location-based entertainment: new sensing technologies will soon follow human gesture and voice as easily as we tracked position and movement. These will sense how performers move, not just their location or distance from one another.

With the rapidly updated position data from the MLD, we could experiment with this potential even in Macbett. An “interpreted position server” maintained several short-term snapshots of each tracker's movement and used them to calculate statistics providing some insight into how the performer or prop was moving. These statistics ranged from the familiar — average speed, distance traveled, average direction — to more interesting quantities like the “roundness,” “directness,” and “stillness” of movement. These were continuously calculated for each performer and then made available for subscription on the network.
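
The production's exact formulas aren't reproduced here, but a sliding window of recent positions is enough to suggest how such statistics can be computed. In the Java sketch below, the window size and the definitions of “directness” and “stillness” are stand-ins rather than the originals.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of movement statistics computed over a short sliding window of
// recent positions; the specific formulas here are stand-ins, not the
// definitions used in the production.
public class MovementStats {
    private static final int WINDOW = 26; // about two seconds at 13 updates per second
    private final Deque<double[]> window = new ArrayDeque<>(); // recent {x, y} samples

    public void addSample(double x, double y) {
        window.addLast(new double[]{x, y});
        if (window.size() > WINDOW) window.removeFirst();
    }

    // Total distance traveled over the window.
    public double pathLength() {
        double total = 0;
        double[] prev = null;
        for (double[] p : window) {
            if (prev != null) total += Math.hypot(p[0] - prev[0], p[1] - prev[1]);
            prev = p;
        }
        return total;
    }

    // "Directness": straight-line displacement divided by path length.
    // Near 1.0 means a beeline; near 0 means wandering or circling.
    public double directness() {
        if (window.size() < 2) return 1.0;
        double[] first = window.peekFirst(), last = window.peekLast();
        double net = Math.hypot(last[0] - first[0], last[1] - first[1]);
        double path = pathLength();
        return path < 1e-6 ? 1.0 : net / path;
    }

    // "Stillness": high when almost no ground is covered over the window.
    public double stillness(double quietThreshold) {
        return pathLength() < quietThreshold ? 1.0 : 0.0;
    }
}
```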

At first it was hard to imagine what to do with this. Our early tests — generating more stares than usual from anyone who glanced in the theatre — featured a cooperative actor walking manically in circles until the stage lights faded in, signaling consistently high “roundness.” But later on, this “interpreted position” server made possible some of the most interesting and challenging cues that we would attempt. The primary agents of the supernatural in Macbett are the two witches, who also appear as Lady Macbett and her lady-in-waiting. Each witch was to have her own type of control over the environment through her staff. The first conjured thunder and lightning by raising the staff quickly in the air — the quicker and stronger the thrust, the more powerful the lightning strike — while the second witch swirled her staff to create ripples of darkness, color shifts, and the sound of whirling wind proportional to the speed of her staff.

These cues were the most difficult to achieve because, in addition to being based on how the props moved, they had to remain dormant when the witches moved normally with their staffs. A tracker was embedded in the foam head Marquez designed for each staff, and a combination of interpreted position information for both the staff and the witch herself was needed to achieve the effects. Lightning was created by controlling Vari*Lite strobe and intensity attributes while triggering a proportionally faded thunder sound in Max. Color scroller shifts combined with slightly randomized intensity control achieved the lighting effect for the second witch. At the same time, the whirling of her staff also faded in and panned the sound of the wind. These relationships were activated at the beginning of each scene in which the witches appeared, allowing the actresses to conjure them up at any point. The cues required the performers to be aware of their new capabilities onstage and to work with the director to explore how they could be used most effectively. We also had to carefully tune the dynamic cues to respond properly to the witches' movements in their costumes.
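
Stripped of the specifics, the dormancy condition amounts to gating the effect on both the staff's motion and the witch's own movement statistics. The sketch below illustrates one way to express that for the first witch's lightning; the thresholds and scaling are assumptions, not the values tuned for the production.

```java
// Sketch of a staff-gesture trigger under assumed thresholds: a fast upward
// motion of the staff tracker fires a "lightning" event whose strength scales
// with vertical speed, but only when the witch herself is relatively still,
// so ordinary walking with the staff leaves the cue dormant.
public class StaffGesture {
    static final double THRUST_SPEED = 1.5;      // m/s upward; illustrative threshold
    static final double MAX_SPEED = 4.0;         // speed mapped to a full-power strike
    static final double CARRIER_STILLNESS = 0.8; // from the interpreted-position stats

    // Returns strike strength 0..1, or 0 if no gesture should fire.
    static double lightningStrength(double staffVerticalSpeed, double witchStillness) {
        if (witchStillness < CARRIER_STILLNESS) return 0.0; // she's just walking
        if (staffVerticalSpeed < THRUST_SPEED) return 0.0;  // not a deliberate thrust
        double t = (staffVerticalSpeed - THRUST_SPEED) / (MAX_SPEED - THRUST_SPEED);
        return Math.max(0.0, Math.min(1.0, t));
    }
}
```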

As the novelty of the technology wore off, occasionally someone could be overheard saying, “We could have done that interactively.” It became what we had hoped for: an unpolished new tool for creation suitable for large-scale performances — another reason to chant the digital mantra. But at the borders of this tool-oriented approach is the hint of a primary research focus of the HyperMedia Studio — how these present and future capabilities challenge our ideas about performance itself and suggest completely new types of art, ones that will provoke us to find fresh approaches to direction and design.

Jeff Burke conceived of the interactive systems for Macbett and coordinated their development while a graduate student in electrical engineering at UCLA. He continues to work with the HyperMedia Studio to explore the creative implications of new technology in live performance. He can be reached at [email protected]