Breaking Sound Barriers

Last week, the entertainment design world reverberated with the announcement of the Madison Square Garden Company’s new Sphere, a globe-shaped, 18,000-seat entertainment venue set to open on the Las Vegas Strip in 2020. Even in a city saturated with dazzling technology, Sphere is unprecedented, with an exterior wrapped in 190,000 linear feet of programmable LED lighting; a 4-acre, 250-million-pixel video screen spanning its cavernous arena roof; haptic floors; and a revolutionary new sound system capable of delivering unique, ultra-precise “beams” of sound to each seat.

Holoplot, the developer of this “beamforming” sound technology, combines wave-field synthesis and object-based audio to render planar waves that propagate over long distances at consistent SPLs. The ability to deliver multiple simultaneous, localized sound fields, complete with virtual acoustic environments, has far-reaching implications: In commercial applications, for example, speech can be broadcast in different languages to listeners just feet apart, with no aural interference. On the creative side, there are endless opportunities for designing more immersive, participatory audience experiences—from mixes that evolve with the movements of performers to personalized content for hearing-impaired audience members.
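
For a feel for the core idea, here is a minimal delay-and-sum beamforming sketch in Python: applying a per-driver delay gradient across a line of drivers tilts the emitted wavefront toward a chosen angle. This is the textbook technique under idealized free-field assumptions, not Holoplot’s actual renderer, which layers wave-field synthesis and object-based processing on top; all function names and parameters below are illustrative.

```python
# Minimal delay-and-sum sketch: steering a planar wavefront from a
# uniform line of point drivers in a free field. Illustrative only;
# Holoplot's renderer (wave-field synthesis + object-based audio) is
# far more sophisticated.
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s at roughly 20 degrees C

def steering_delays(n_drivers: int, spacing_m: float, angle_deg: float) -> np.ndarray:
    """Per-driver delays (seconds) that tilt the emitted wavefront by angle_deg."""
    positions = np.arange(n_drivers) * spacing_m              # driver x-positions
    delays = positions * np.sin(np.radians(angle_deg)) / SPEED_OF_SOUND
    return delays - delays.min()                              # keep delays non-negative

def render(signal: np.ndarray, sr: int, delays: np.ndarray) -> np.ndarray:
    """Produce one delayed copy of the source signal per driver (nearest-sample)."""
    out = np.zeros((len(delays), len(signal)))
    for i, d in enumerate(delays):
        shift = int(round(d * sr))
        out[i, shift:] = signal[: len(signal) - shift]
    return out

# Steer a 1 kHz tone 20 degrees off-axis across 16 drivers spaced 5 cm apart.
sr = 48_000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 1000 * t)
driver_feeds = render(tone, sr, steering_delays(16, 0.05, 20.0))
```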

I sat down with Holoplot CEO Roman Sick to learn about the company’s technology and vision.


Sarah Jones: Tell me about the genesis of Holoplot.

Roman Sick: The company started in 2011; the idea comes from Helmut Oellers, who is our head of R&D. He has worked all his life in wave-field synthesis and acoustics, and he always thought that existing wave-field synthesis applications weren’t very practical, because you always had to install a lot of speakers, at certain levels, around the listener on all sides. Beyond the fact that this type of installation is often not possible, you always have the problem that room acoustics disturb the wave-field synthesis effect, because you create unintentional reflections. So ideally you would have to acoustically damp the whole room, which is often not possible.

His wave-field synthesis approach was to include the playback room in the reproduction synthesis—rather than trying to ignore it or work against it, work with it, and use certain reflections to create the overall sound field.

SJ: Is this technology available now?

RS: We started shipping this year, and we have our first customers. We primarily target applications with challenging acoustics, especially when it comes to speech transmission, as well as creating immersive installations. We are not yet targeting concert applications for venues like clubs; the system hasn’t been designed to provide the power or high SPLs those applications require. We are looking more at applications like conferences, trade shows, corporate events with a mix of music and speech, museums, and art installations. Retail, as well.

We are testing applications with a lot of different customers, and we can see that the product is very flexible for all kinds of use cases, as it’s fully modular. Some customers start off with very small sample systems and distributed setups, while others are thinking about bigger, centralized systems.

SJ: Can you give me a sense of scale?

RS: The smallest application is one module, one unit, which is about 90 centimeters wide, 50 centimeters high, and around 26 centimeters deep. The more modules you add—and you can scale to basically any size, in a more vertical or more horizontal form—the better the overall system performance and directivity.

By design, there is no limit to how big you can go. One processor, though, can currently control 64 modules, which can be either in one system or distributed across several systems.
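
Putting those figures together, a quick back-of-the-envelope calculation shows how wall size maps to module and processor counts. The grid layout and helper function here are illustrative, not from Holoplot; only the module dimensions and the 64-modules-per-processor limit come from the interview.

```python
# Back-of-the-envelope sizing using the figures Roman Sick cites:
# one module is ~0.90 m wide x 0.50 m high x 0.26 m deep, and one
# processor currently drives up to 64 modules.

MODULE_W, MODULE_H = 0.90, 0.50   # meters
MODULES_PER_PROCESSOR = 64

def array_footprint(cols: int, rows: int) -> tuple[float, float, int]:
    """Return (width_m, height_m, processors_needed) for a cols x rows wall."""
    modules = cols * rows
    processors = -(-modules // MODULES_PER_PROCESSOR)  # ceiling division
    return cols * MODULE_W, rows * MODULE_H, processors

# An 8 x 8 wall exactly saturates a single processor:
w, h, procs = array_footprint(8, 8)
print(f"{w:.1f} m x {h:.1f} m, {procs} processor(s)")  # 7.2 m x 4.0 m, 1 processor(s)
```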

SJ: When you talk about control, what does that look like for someone managing, say, an airport system versus designing concert sound?

RS: Our software interface, Orbit, allows you to move sources with very high precision throughout your space. This works for, as far as we’ve seen, any application. We have done several installations in train stations in Germany where you just install the system, and then, through the software, you can very precisely define where you want your beam, or multiple beams, to be projected, and then decide whether those beams, or focus points, carry the same content or different content. This lets you determine precisely where you want to project sound and where you do not, because you might have some reflections, or you’re disturbing neighbors, and so on.

On the creative side, the software allows you to redefine the room acoustics of a given space and give it a completely different acoustic impression. You can design a new acoustics profile, so the space you’re in suddenly sounds like an open forest or a church. You can start building immersive environments that are very impressive, because you can’t identify our system as the source, and a movement in the playback space sounds exactly as a movement in the recording space would. You can combine this with a focus point that gives the listener a very localized impression. We currently have 16 channels available that can be configured and defined throughout the room in direction, amplitude, EQ, and content. You can also use, for example, OSC commands to change presets or move some of these things around.
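
As a concrete illustration of that last point, the snippet below sends OSC messages from Python using the python-osc library. The IP address, port, and the /holoplot/... address paths and parameter layouts are hypothetical stand-ins; the interview only establishes that the system responds to OSC commands, not what its namespace looks like.

```python
# A minimal sketch of driving presets and channels over OSC.
# Address paths, port, and parameter layout are hypothetical
# stand-ins, not Holoplot's documented namespace.
# Requires: pip install python-osc
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("192.168.1.50", 9000)  # processor IP/port (assumed)

# Recall a stored preset...
client.send_message("/holoplot/preset/recall", 3)

# ...or steer channel 1 of the 16 available channels: azimuth and
# elevation in degrees, plus a gain trim in dB (names are illustrative).
client.send_message("/holoplot/channel/1/direction", [30.0, -5.0])
client.send_message("/holoplot/channel/1/gain", -6.0)
```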

SJ: You must be doing a lot of educating.

RS: Well, there’s some education, of course, on all sides. Many people in the live industry know that stereo is not necessarily the way to go for the future, and that there are possibilities to be more creative and to create new experiences using object-based content instead of a simple stereo signal. This general education is happening in the market, which we really appreciate, because our system is fully object-based. It can play any format, of course, but that underuses its full potential.

SJ: In entertainment venues where a different mixing engineer comes in every week, what will that experience be like for them?

RS: Everyone will be able to use the system the way they’re used to, to play their content. If there are certain applications where they want to play stereo sound, that’s completely fine, and we will be able to provide that. If their content goes beyond that, we’re working on software to make it easier to do pre-production and then come into the space and replicate the content.

Of course, it takes some time in the beginning to understand, but we will build software tools that make it much easier to use this technology in conjunction with other technologies found in these kinds of venues. It’s still a bit far off, but this is the vision of where we are going. We will be working with sound engineers and artists to find out, okay, how can we make this as easy as possible? There will be opportunities for them to experience the system in a smaller setup and then replicate this setup in a larger venue.

SJ: If live sound engineers are interested in experiencing this technology, what are their options?

RS: Well, right now, a good option is to come to Berlin to experience it here. But we are just going to market in 2018; we already have systems deployed in the U.S. for certain key customers, and we will work throughout the year to make those accessible to a broader crowd.