Those of us of a certain age can credit Alanis Morissette’s seminal 1995 album Jagged Little Pill for legitimizing our collective insecurity, heartbreak, and rage. Songs like “You Oughta Know” and “Hand in My Pocket” were intensely personal, yet somehow we could all relate, and they became anthems for a disaffected generation.
Nearly a quarter-century later, those songs take on new context in the #MeToo era in the form of Jagged Little Pill, the musical, which opened at Broadway’s Broadhurst Theatre on December 5. Directed by Diane Paulus (Hair, Waitress, Pippin) with orchestrations by Pulitzer-winning composer Tom Kitt, the show is based on an original story by Oscar winner Diablo Cody (Juno, Tully, United States of Tara) that explores contemporary issues surrounding race, gender identity, addiction, sexual assault, and gun violence.
I sat down with Tony-nominated sound designer Jonathan Deans to dig into the inspiration behind his immersive soundscapes that help reimagine the songs of Jagged Little Pill for a new generation. Check out Part Two of our interview below. (To read Part One of this conversation, click HERE.)
Sarah Jones: I want to ask you a couple questions about the technology supporting your sound design. There’s a rock band onstage; what kind of production tools are you using to shape that sound?
Jonathan Deans: Bryan Perri is our musical director; there’s an eight-piece band onstage. It’s just a fantastic band. The vocals, the ensemble, everything is live. I use a TC Electronic 6000 reverb system; I'm using Waves plug-ins, which are connected to a DiGiCo SD7T “theater version” console.
Generally speaking, you can't set plug-ins to do something when you don't know what's coming at you. You have to be very careful, and you have to know why you're using them and what you're using them for. We're using the plug-ins quite a lot, but to create a sonic illusion as opposed to doing the job of the person who's mixing. That person still has to mix. As soon as you start over-compressing and limiting the voices coming in, it's no longer a live show. It sounds like it's being recorded.
You have none of the live dynamics of that individual on stage who's worked for years and years to develop their voice. I don't pretend to guess at what that person should sound like and how they're singing. It's between them and the person mixing to learn how that works in a live theater.
Sometimes, as we get into technology, we can have devices that can take care of something so we don't have to worry about it anymore; those things can be used on performers and then their performance suffers. I would rather take each show at a time. Each show to me, each performance, is unique. That's why we go to live theater.
SJ: Tell me about the PA.
JD: We're using line arrays; I'm using Meyer Sound loudspeakers. We use LEOPARDs and LINAs for our main array systems, with three different subwoofers: 750-LFCs, 1100-LFCs, and the new USW-210Ps for my surround subs. The immersive system, or my “cozy blanket” system, is made from USW-210Ps and UP-4slims, which are very small, very unimpressive looking, but when you turn them on, they are astounding. We have them all around the audience, in the front of the balcony for the front section and upstairs in the mezzanine. Jagged Little Pill doesn't require us to do any mapping of sounds that fly around, because it's not that kind of show.
SJ: How do you keep the focus on the dialogue?
JD: The nature of a speaker array is to push sound forward and hand it to the audience; to place it right on their lap. You want that tsunami effect for rock music and for this idea. Then, you go to dialogue; Diablo Cody, who wrote the book, is known for Juno and Tully and many stories where the character and their journey is so critical. I want the audience to listen to the characters as much as they can.
I built a different system for dialogue, using the Meyer UP-4slims. When you look at the speaker array, you see these little speakers next to it; that’s what the dialogue is coming out of. It makes the audience pay attention to the dialogue because it's not being thrown at them through some horn-driven speaker. It's a much more open sound. It's a very wide, conical dispersion.
So, I have a dialogue system and I have a music system. When the characters start singing the songs, we move them from the dialogue system and put them into the music system, the array system. The ensemble drifts into the surrounds, the principals drift into the array system. It's how we are able to go back to dialogue and have that very intimate sound of listening to people talk on the stage.
If you go see a play, you know that you have to sit down, and the audience is asked to work, to listen. It's not being handed to you on a silver platter. You have that connection with an audience and that commitment from an audience. I want the same for musicals, and I think it's a responsibility for sound designers to be aware that we're, shall we say, training the audience’s expectations.
SJ: It sounds like you're using technology to lead people to listen more actively. Would you say that's accurate?
JD: Yeah. It's funny, because when you say “asking the audience to listen actively,” that's what an audience is supposed to do. You can't be inactive when you're listening. What you're saying is absolutely right; it's just a little jarring to hear that we're asking them to listen actively. It's like, no, I just want them to be engaged.
SJ: With people in general listening more passively than they did generations ago—being inundated with sound everywhere, listening through earbuds, instead of sitting on the couch in front of a record player—how does that extend to the theater experience?
JD: We used to listen to records; you'd pick up an album, and you would get a story. Now, you can load up songs, and they can be completely out of context from anything else. On top of that, they're usually delivered as MP3s, really terrible, crappy-quality sound, and you're probably listening on your phone, on your computer, et cetera. The sound has been crushed, and all the dynamics and details of the song are lost.
Therefore, a lot of songs are now being produced without that detail. They're being produced for the idea of what it will sound like when you're listening on headphones, on your earbuds, your AirPods. It's a complete dumbing down of the hearing of the human race. I just can't listen to music in that way.
If you've gone to the Royal Albert Hall or the Festival Hall or the Met, and you've heard something that’s just beautiful, it resonates. Or, you've gone to see a rock concert and it was fantastic because of how it was mixed and the whole environment. You can then go and download your MP3 and stick AirPods in and sit on a subway and listen to that music. That's triggering your memory, a sonic snapshot of what you experienced. Your brain is filling in all of the things that are missing.
It's my job to make sure there’s a reason why you can't just sit at home and listen to a cast album and say, "I don't need to see this. I've heard it." No, you need to come and witness and have a really special experience that goes beyond the visual content to include the sonic content as well.
In order for a sound designer to do that, you have to work with creatives and designers who are collaborative. A sound team that is awesome. The Jagged Little Pill performance sound team and production sound are Mike Tracey, Chris Devany, Brett Daigle, Brian Hsieh, and Daniel Lundberg. One would hate to do all this just for “utility sound,” wouldn’t one?
Sarah Jones is a writer, editor, and content producer with more than 20 years' experience in pro audio, including as editor-in-chief of three leading audio magazines: Mix, EQ, and Electronic Musician. She is a lifelong musician and committed to arts advocacy and learning, including acting as education chair of the San Francisco chapter of the Recording Academy, where she helps develop event programming that cultivates the careers of Bay Area music makers.