Daniel Fish’s ambitious revival of Oklahoma!, now playing at New York’s Circle in the Square Theatre, is anything but traditional. This radical re-interpretation of the 1943 Rodgers and Hammerstein classic strips away preconceived notions about the show by taking a naturalistic approach meant to expose darker themes and moral ambiguities lingering beneath its sunny surface.
The production, which is nominated for eight Tony Awards, makes its most dramatic departure from the original in its treatment of songs and sound design. Rather than the familiar lush orchestrations, the music has been reworked for a seven-piece ensemble that accompanies the cast onstage, in rootsy arrangements performed on the instruments of turn-of-the-century Oklahoma. Yet this “unplugged” aesthetic is only possible through complex technologies and techniques, from object-based mixing to real-time localization.
When did each of you first get involved with the show?
Kluger: We developed the piece over three productions, starting in 2015, when Daniel Fish, the director, first staged it at Bard SummerScape. Then we did it at St. Ann's Warehouse in Brooklyn before the Broadway production. Drew and I collaborated on it from the beginning; we sought out Drew because the piece needed a natural aesthetic, and there are so few people who do that style of amplification really well.
Did that production aesthetic evolve over time?
Kluger: The director has an ethic for the production, which is that we're all in the same room—the audience, the band, the actors, and singers. The band is on stage in front of you, and nothing is hidden. So there's an intimacy to it, where everything has to have a natural source, acoustically, and an honesty about the way it's presented. We've had to negotiate what that means in different ways and in different moments through the development. But that's a core principle of how the event is set up.
Levy: One of the big things was removing the wall that a sound system can create between the performer and the audience. We don't necessarily hear it filtered through a P.A., that unnatural experience where you can just lean back and have it broadcast to you. We're all in the same room, and the audience does have to pay attention a little bit.
Drew, how did that inform your sound design?
Levy: I was very focused on control and coverage—it's a highly distributed system, so that we never actually have to turn anything up very high. It's a fairly quiet, fairly subtle reinforcement that we refined over the course of the three productions. But it's really about the resolution at which we can localize to an acoustic source on stage. There's quite a lot of technology that goes into making it as transparent and seamless as possible. There are a lot of speakers and a very large matrix system; we're manually tracking the position of all the actors in space, so that the sound system is always time-aligned to the acoustic source of whomever is speaking.
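The time-alignment Levy describes rests on a simple acoustic fact: if a nearby loudspeaker fires before the performer's direct sound arrives, the ear localizes to the speaker, not the actor. A minimal sketch of the underlying arithmetic (this is an illustration of the principle only, not the production software, and the distances are invented):

```python
# Illustrative sketch: delaying a distributed loudspeaker so its output
# never arrives ahead of the direct acoustic sound from the performer,
# preserving localization to the actor (the precedence effect).

SPEED_OF_SOUND_M_S = 343.0  # approximate, at room temperature


def alignment_delay_ms(source_to_listener_m: float,
                       speaker_to_listener_m: float) -> float:
    """Delay (ms) to add to a speaker so it doesn't beat the acoustic source."""
    direct_time = source_to_listener_m / SPEED_OF_SOUND_M_S
    speaker_time = speaker_to_listener_m / SPEED_OF_SOUND_M_S
    # If the speaker is closer than the actor, hold its output back by
    # the travel-time difference; otherwise no extra delay is needed.
    return max(0.0, (direct_time - speaker_time) * 1000.0)


# Hypothetical seat: a fill speaker 3 m away, the actor 12 m away.
print(round(alignment_delay_ms(12.0, 3.0), 2))  # → 26.24
```

In a matrix system tracking moving actors, a figure like this is effectively recomputed per speaker every time a performer's position changes — which is what makes the reinforcement read as coming from the stage.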
Kluger: This technology is more and more prevalent in theater, and you experience spatialization and localization in sound design quite a bit. This feels like the epitome of the capabilities of a system like this. Our theater's in the round, so we need to amplify in all directions at once. And it's so successful that people have come up to me and said, oh, it’s so great that the actors weren't miked.
Always the best compliment, right?
Levy: It's the best compliment and also so frustrating. It seems so easy.
Kluger: I'm just proud of what we did. Because I think anybody who is even remotely discerning about amplification knows how difficult that is.
How did the opportunities presented by that technology inform your arranging process?
Kluger: When you're working in an acoustic style of music, the dynamic range has to be built into the orchestration and the performances. What's so great about a really well-done, transparent sound design is that it allows you to feel like the music is coming from the musicians themselves. That allowed us to feature individual musicians on musical ideas that we're used to hearing an entire section of players play. So you take a line usually heard from a string section, and you hear it played on a banjo, and you see the banjo player, and you feel like the sound is coming from the banjo player. It gives the whole thing this fragility and intimacy and specificity in the performance.
Levy: There is a lot of detail in the orchestration, so those individual instruments become so important. Because we can localize to that source, we're actually doing a lot less. We're really letting the orchestrations that Dan made speak for themselves in the room, and letting the band do their thing.
Can you give me a top-level view of the sound system?
Levy: We're using the d&b Soundscape DS100 with En-Scene software. It's actually two of them, because we needed somewhere around 100 outputs to make it all work. The En-Scene software allows us to do this object-based mixing where every actor and every band member is represented by an object on the screen. We're moving the objects around, and the DS100 is dynamically calculating the time and level distribution to all of the loudspeakers individually. My associate, Sam Lerner, and programmer, Sam Schloegel, worked a lot on getting that tracking done. Sam Schloegel did a lot of scripting and writing macros in QLab so that we could integrate that into our workflow and make it pretty highly automated. Basically, every time somebody moves on stage, John Sibley, the mixer, takes a cue to move their object to match the blocking.
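The object-based approach Levy outlines — every performer represented as an object whose position drives a level-and-time distribution across many speakers — can be illustrated with a toy model. The DS100's actual algorithm is proprietary, and the speaker layout below is invented; this only sketches the idea of deriving per-speaker gains from an object's position:

```python
import math

# Toy illustration of object-based mixing: given an object's (x, y)
# position, favor the loudspeakers nearest to it. This is NOT the d&b
# En-Scene algorithm, just a distance-weighted sketch of the concept.

SPEAKERS = {  # hypothetical positions (meters) in a small in-the-round rig
    "north": (0.0, 5.0),
    "south": (0.0, -5.0),
    "east": (5.0, 0.0),
    "west": (-5.0, 0.0),
}


def speaker_gains(obj_x: float, obj_y: float, rolloff: float = 1.0) -> dict:
    """Constant-power-normalized per-speaker gains for one object position."""
    raw = {}
    for name, (sx, sy) in SPEAKERS.items():
        d = math.hypot(obj_x - sx, obj_y - sy)
        raw[name] = 1.0 / (1.0 + d) ** rolloff  # closer speaker -> more level
    total = math.sqrt(sum(g * g for g in raw.values()))
    return {name: g / total for name, g in raw.items()}


gains = speaker_gains(0.0, 4.0)  # an object (actor) near the "north" edge
```

In the production workflow described above, the mixer's cue doesn't set these gains directly — it just moves the object, and the processor recalculates the distribution (including the time alignment) for every output.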
Most of the music is meant to sound acoustic, except the electronic soundtrack of the dream ballet. Can you tell me about that?
Kluger: When we get to the dream ballet, the music is a totally different aesthetic. It's loud, and the first part of it is played back as a recording. Drew was able to use the same spatialization system to do some really psychedelic surround mixing.
We brought a system in for natural reinforcement. But then when we did the ballet, we were able to achieve a level of sonic experimentation with panning and surround effects that I think are light years beyond what we could have done without it.
Levy: We had a lot of fun doing that. Instead of treating the music like stereo playback, we treat each instrument or each group of instruments as objects, depending on the moment in the piece. Thinking about moving those objects around, and exploring how we can change the audience experience and integrate that with the action on stage was an exciting part of the process.
Do you find that these new spatial opportunities allow you to collaborate in different ways?
Levy: For sure. When we're talking about the dream ballet, the first two thirds of it are not anchored in an acoustic source on stage; it's entirely playback. Then the whole mix of the piece is totally dependent on the programming and the movement and how it's experienced in different parts of the theater. That was a lot of fun for Dan and me, when we got the time we needed to play around with it and figure out which movements and spatial scenarios change the perception of the music and the reaction to the music.
Kluger: Yeah, it was a surprisingly involved collaboration. We had first produced the recording in stereo, and then we broke it apart and rebuilt a surround mix. It's far beyond the complexity of even say, a custom 20-channel surround mix, because the opportunities within the d&b system to treat musical elements in both natural and experimental ways meant that we could reconsider the architecture of the mix in the theater.
How did you go about that?
Kluger: We built the whole thing in Logic; we demoed it out, then replaced the demos with real musicians. Then we mixed it in Pro Tools, where we created a stem busing system that fed the Soundscape system, so that our individual stems from Pro Tools could move around the room. And then we used QLab to manage playback of both the panning and timecode for the whole sequence.
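The stem-and-cue structure Kluger describes — each Pro Tools stem feeding one Soundscape object, with timecoded cues repositioning those objects during playback — can be sketched in miniature. The stem names, object IDs, and cue times below are all invented for illustration:

```python
# Hypothetical sketch of a stem-to-object routing with timecoded
# position cues, in the spirit of the dream-ballet workflow described
# above. Real control would go over a network to the processor; here we
# just resolve which cues have fired by a given timecode.

STEM_TO_OBJECT = {  # invented stem names -> invented object IDs
    "drums": 1,
    "bass": 2,
    "guitars": 3,
    "synths": 4,
}

# (timecode_seconds, stem, (x, y)) — a miniature cue list
CUES = [
    (0.0, "synths", (0.0, 0.0)),    # start centered
    (12.5, "synths", (-4.0, 4.0)),  # sweep to a rear corner
    (30.0, "drums", (4.0, -4.0)),
]


def cues_due(cues: list, now: float) -> dict:
    """Return {object_id: position} for all cues at or before `now`."""
    out = {}
    for t, stem, pos in sorted(cues):
        if t <= now:
            out[STEM_TO_OBJECT[stem]] = pos  # later cues override earlier
    return out


print(cues_due(CUES, 15.0))
```

The appeal of this shape is that the music playback and the spatial choreography share one clock, which is what lets a timecode-chasing cue system keep panning moves locked to the recording night after night.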
Levy: It was fairly involved just to devise a logical workflow. Our experience at St. Ann's, working with the system on a slightly smaller scale, taught us that we needed a way to work more efficiently, because we weren't going to get time to sit down and program for hours on end in a Broadway tech. We eventually developed a way to work efficiently in line, in the moments we could snag throughout the tech process. We had a Pro Tools system that mimicked what would eventually be played back in QLab. QLab did all the DS100 control, all timecode-based, while we mixed in Pro Tools using a control surface and could make edits on the fly if we needed to.
Kluger: And it all synced up perfectly and never crashed.
It seems like such an exciting prospect to take such a traditional musical and apply these new creative concepts and technologies.
Levy: Aesthetically, I thought it was interesting and exciting and right up my alley in that regard. Of course, when it came to the technology and how to accomplish it, I had a lot of ideas but we were exploring some new territory in a lot of ways.
Kluger: There's sometimes confusion about what sound designers do and why sound designers and music departments are separate or integrated. I'm really proud of how this piece features the possibility for amplification as an art that Drew has created. At the same time, it's featuring the music by putting the musicians right in front of the audience. It's a real celebration of our different crafts and how they support each other.
Sarah Jones is a writer, editor, and content producer with more than 20 years' experience in pro audio, including as editor-in-chief of three leading audio magazines: Mix, EQ, and Electronic Musician. She is a lifelong musician and committed to arts advocacy and learning, including acting as education chair of the San Francisco chapter of the Recording Academy, where she helps develop event programming that cultivates the careers of Bay Area music makers.