How I Did That: Converting REMNANT into a Digital Experience

Photo by Ada Westfall

I’ve been a company member with Theater Mitu, a Brooklyn-based arts collective, since 2006, and my fellow company members are my longest and most fruitful collaborators. 

For the first six years of my work with the company, we operated in a very standard manner, wherein different people had different roles within the production: I was the sound designer, someone else was the set designer, a different group of people were the performers, and so on. However, around 2012 or 2013, the company began to shift away from the traditional model into an anti-disciplinary practice. For me, this manifests as being not only a sound designer, but also a lighting designer, a scenic designer, a performer, a stage manager, a social media liaison...and working alongside other company members also operating in the realms of design, performance, management, and so on.

Photo by Ada Westfall

REMNANT is the result of this practice, and it is such a seamless integration of the entire company’s research and articulation that it is nearly impossible to separate it back into its component parts.

The original, in-person production of REMNANT premiered in 2018 after three years of interviews, research, and development; it was designed for, and inaugurated, our arts space in Gowanus, Brooklyn: MITU580. From its initial conception, the piece was to be a headphone piece: all sound for the show would be transmitted to the audience over headphones.

Photo by Ada Westfall

This was as much a practical decision as it was an aesthetic one: the piece consisted of three performance “boxes” in a single theater, all running simultaneously, almost as a single entity, with each performer (myself included) receiving all of their cues and dialogue via in-ear monitors. This enabled us as performers to work in perfect synchronicity with each other and with the technical elements. Without headphones, an audience member would have been inundated with three simultaneous performances.

The aesthetic decision to use headphones, however, came first. A piece about loss, death, and war requires an intimacy that a traditional theater piece with loudspeakers cannot achieve. We needed the ability to whisper into your ear and bring you into the emotional center of these painful, emotionally charged topics.

Photo by Ada Westfall

We are a company that engages with technology in a way that is fundamental to the process. There is no sense of “we’ll flesh it out later.” The elements all need to be considered and addressed by company members when presenting work. As such, we feel ideally placed and ready to explore the new possibilities that are contained within the limitations created by COVID-19. 

When our planned fall touring of REMNANT was cancelled due to the pandemic, we realized it was the perfect project to reconceive in the digital realm. The aim for </remnant> was never to encapsulate a “live” performance, but to take the base material—the research, the aesthetic, the physicality—and re-envision it for the medium in which it would be presented: a fully digital experience. It was important to us that this not manifest as a filming of a live performance, but instead as a piece of interactive digital art.

Photo by Ada Westfall

The first task for me was simply to recreate the original piece in a format I could work on more smoothly. I began by recreating the QLab file (a massive 2,300-cue file of MIDI, OSC, network, and audio cues, which ran all three boxes asynchronously and simultaneously) in Logic, pulling all the audio content I needed. Once I had reconstructed all three performance spaces, I reached out to the other company members and asked them to record the text they would normally have spoken live in the piece.
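The actual reconstruction happened by hand in Logic, but the underlying task can be sketched in code: flattening three independent per-box cue lists into a single master timeline sorted by show time. A minimal sketch (all cue names and labels here are hypothetical, not from the actual show file):

```python
from dataclasses import dataclass

@dataclass
class Cue:
    time: float   # seconds from show start
    kind: str     # e.g. "audio", "midi", "osc", "network"
    target: str   # hypothetical label, e.g. "box1/vo_03"

def merge_timelines(*boxes):
    """Flatten per-box cue lists into one master timeline, ordered by time."""
    return sorted((cue for box in boxes for cue in box), key=lambda c: c.time)

# Hypothetical excerpts from two of the three boxes:
box1 = [Cue(0.0, "audio", "box1/underscore"), Cue(12.5, "osc", "box1/light_up")]
box2 = [Cue(3.2, "midi", "box2/vo_01")]
timeline = merge_timelines(box1, box2)
```

Once the cues live on one timeline like this, "keeping all edits locked in time," as described below, amounts to never changing any cue's `time` value.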

Usually, I would ask everyone to come to me and I would record each voiceover myself so that quality and tonality remained constant, but obviously that wasn’t an option. Instead, I sent everyone a recording of their in-ear feed to listen to while recording, so that their delivery would have identical timing to the original production. Thankfully, almost everyone had access to a real microphone, so I was able to get good-quality recordings.

Photo by Ada Westfall

Once I’d made an initial pass piecing together the new recordings with the original audio from the three performance boxes, I sent MP3s off to Ada Westfall, a fellow company member, who was taking the lead on creating the visuals for the piece from the Isadora show files as well as from the archive of previous research and creation sessions. From that early moment onwards, I made sure to keep all my edits locked in time so that I wasn’t moving events around on her while she worked. I think, ultimately, there was only one voiceover we decided to shift slightly for pacing purposes; otherwise, the timing remains identical to the original in-person production.

From that point on, it was about augmenting, layering, and heightening. Since neither Ada nor I could rely on live performers to keep the audience engaged, we had to carefully consider every moment to make sure both the aural and visual elements were working together. I made sure that each voice and each element the audience hears had a decision behind it—a tonality, a placement in aural space, an aural “hook” to pull your ear to that information. I relied a lot on the AudioEase “Speakerphone” plugin to age different voiceovers to different degrees and in different ways.
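Speakerphone itself is a convolution-based speaker/environment simulator, and its internals aren't public. Purely to illustrate what "aging" a voice means at the signal level, here is a crude band-limiting sketch (a one-pole low-pass then high-pass, pushing a full-range voice toward a small, distant speaker); this is an assumption-laden stand-in, not the plugin's algorithm:

```python
import math

def bandlimit(samples, sr, lo=300.0, hi=3400.0):
    """Crude telephone-style band-limiting: one-pole low-pass, then
    one-pole high-pass. A conceptual stand-in for "aging" a voice,
    NOT the Speakerphone algorithm."""
    a = math.exp(-2.0 * math.pi * hi / sr)   # low-pass coefficient
    b = math.exp(-2.0 * math.pi * lo / sr)   # high-pass coefficient
    lp, y = [], 0.0
    for x in samples:                        # low-pass stage
        y = (1.0 - a) * x + a * y
        lp.append(y)
    out, z, prev = [], 0.0, 0.0
    for x in lp:                             # high-pass stage
        z = b * (z + x - prev)
        prev = x
        out.append(z)
    return out

def rms(xs):
    return math.sqrt(sum(v * v for v in xs) / len(xs))

def tone(f, sr):
    return [math.sin(2.0 * math.pi * f * i / sr) for i in range(sr)]

sr = 8000
# A 50 Hz rumble is attenuated far more than a 1 kHz voice-band tone.
low_out, mid_out = bandlimit(tone(50, sr), sr), bandlimit(tone(1000, sr), sr)
```

Narrowing the band differently per voice is one simple way to make each voiceover read as coming from a different device or era.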

Photo by Ada Westfall

There isn’t necessarily an idea of character or narrative storyline the audience has to follow, but I wanted to differentiate voices from each other: not just “these are different people,” but “these are different people, in different places, in different moments of their lives, experiencing radically different things.” Additionally, I made extensive use of Waves compressors and the Abbey Road Saturator plugin when working with the voiceovers, and the full complement of Soundtoys plugins and a number of Valhalla reverbs on both vocal and textural elements.
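The actual mixes were built in Logic with the plugins named above, not in code. But to illustrate the simplest form of "a placement in aural space," here is a constant-power pan sketch (the function name and mapping are mine, for illustration only):

```python
import math

def pan(sample, position):
    """Constant-power pan: position -1.0 (hard left) .. +1.0 (hard right).
    Returns (left, right); total power stays constant across the arc."""
    theta = (position + 1.0) * math.pi / 4.0   # map position to 0 .. pi/2
    return sample * math.cos(theta), sample * math.sin(theta)

center = pan(1.0, 0.0)      # equal level in both ears
hard_left = pan(1.0, -1.0)  # all signal to the left channel
```

Constant-power panning matters especially on headphones, where placing each voice at a distinct, stable point is part of what keeps "different people, in different places" legible to the ear.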

Ultimately, I believe the piece is as good as the parent piece from which it was created. It might even be slightly better: we have had a couple more years to think about the material, and the added resonance of contemplating loss in this particularly difficult moment in American history can’t be denied.

I don’t think we’ve single-handedly solved the problem of how to present theatrical works digitally, but I do think we’ve created a compelling piece of art that represents a huge shift in how we make work as a company. Access is always an issue in theater, and this moment offers an opportunity for the artform to expand its boundaries beyond the city or region where it happens to be located. I hope this piece has a long life, and I am so excited to share it with audiences around the world.

For ticket information, visit

Alex Hawthorn is an Artist, Composer, and Technologist based in Los Angeles and Brooklyn.
