Tomorrow's Pre-Viz Software, Today!

WYSIWYG Automation. The phrase sounds like the punch line to one of those slightly lame techie in-jokes that make any hapless non-technician in the vicinity look politely perplexed (or, if they happen to be married to you, sweetly tolerant). The thing is, though, it's happening more and more these days, and it bears talking about. WYSIWYG, which for the acronymically challenged is programmer shorthand for “What You See Is What You Get,” is something we hear a lot about in other disciplines, but it is not generally applied to moving scenery around.

Here's the basic concept. Your show involves multiple axes of automation: wagons, turntables, flying gags, elevators, whatever. During the conceptual, design, pre-production, and rehearsal phases, exactly how this will all be utilized changes more often than, well, something that changes a lot. Storyboards, move-lists, shift-plots, and cute little flipbooks of photos of the scale model in different configurations are generated, noted up, redone, and so forth. On a more high-end level, a 3D model in Studio MAX or Maya is re-animated and re-rendered through half a dozen iterations, and emailed, FTP-ed, or FedEx-ed on DVD from the designer in London to the director in Paris to the producers in NYC to the shop in Sheboygan, etc. Then, when the whole schmear is built, loaded in, tuned, and, by the grace of God, actually working, then — and only then — is the current set of rough cues entered into the console, usually shortly before tech starts. And as we all know, nothing, I mean nothing, can slow a tech rehearsal to a snail's pace faster than cueing automation.

Now, let's consider an alternative: all through the process, someone in the rehearsal room is actually writing the cues on his or her laptop. Not just typing numbers into a spreadsheet, but actually seeing a 3D model on the screen and, with the director and designer, watching the cues play in real time. Best of all, at the end of the day, they shoot off an email to the automation shop saying, “attached are the cues for Act II, Sc. 4. Please upload into console.” Whoa!

I'm not being a total idiot/Pollyanna here. The show is still going to change a few dozen times between first tech and opening night. We all know this and — let's be blunt — if it didn't, most of us would be out of a job. But the fact remains that we'd be coming into a difficult process that much ahead of the game. We'd also — and I think this may be the most useful thing of all — be thinking about the moves earlier and earlier in the process. And by “we” I don't mean just the technicians, but the creative staff as well. More people thinking earlier is usually a good thing.

Now that you understand my utopian vision for the future of automation, let's get down to the nitty-gritty. In the last two years, I have personally worked on two projects where I ended up creating something very similar to what I just described. I'll talk a little about both projects later in this article. First, there are proprietary systems out there that are starting to do this, and I think it's only the beginning.

What's Out There

Stage Technologies in Britain has a proprietary system called Nomad that includes the ability to upload a 3D model and hook it up to the control system, so you can preview cues while off-line. Their system also has the model driven directly from actual position data while online, so the 3D model becomes an additional monitoring method during run-time. The imaging is fairly simple wireframe, but operators are finding it very useful. They also have another very cool piece of functionality, which is path following. For multi-axis flying effects, you can draw a path in space, using Cartesian coordinates, and the software will do the math to sequence your actuators to put the target point where you want it. This system was hugely useful in the London production of Chitty Chitty Bang Bang.
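The geometry behind path following is easy to sketch. The following is strictly an illustration of the underlying math, not Nomad's actual code: given a target point in stage coordinates and the fixed anchor positions of each winch, each cable length is simply the distance from its anchor to the target, so following a drawn path reduces to sampling points along the path and solving those distances at each step.

```python
import math

def line_lengths(x, y, anchors):
    """Given a target point (x, y) in stage coordinates and a list of
    winch anchor points, return the cable length each winch must pay
    out to place the load at the target (simple 2D distance geometry)."""
    return [math.hypot(x - ax, y - ay) for ax, ay in anchors]

# Hypothetical rig: two winches mounted 0 m and 12 m along a grid 10 m up.
anchors = [(0.0, 10.0), (12.0, 10.0)]

# Sample a straight horizontal path and sequence the actuator targets.
path = [(2.0 + t * 0.8, 4.0) for t in range(11)]  # fly across at 4 m high
targets = [line_lengths(x, y, anchors) for x, y in path]
```

A real system would layer velocity limits and synchronization on top, but the core trick is just this coordinate conversion, done once per path sample.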

Scenic Technology's Stage Command System, which is something of a Broadway staple, has also started providing pre-viz capabilities, along with off-line editing. I have not had a chance to really explore this, but I have heard some pretty cool things about it.

My first foray into this particular realm actually started from an engineering problem. Some context to start with: although I come from a tech theatre background myself, I currently work for the entertainment engineering division of McLaren Engineering Group. It's a strange and fun niche to find oneself in. Our department provides a variety of services ranging from inspection and review to full-up technical designs for various entertainment events. We work for shops, producers, concert tours, venues, and designers, depending on the nature of the job. We have representative work in most shows currently running on Broadway, a good chunk of Las Vegas, and various venues around the country.

Project #1: Sshh!

One of our larger gigs in recent times was designing the mechanical stage equipment for a new show. It will not yet have opened by publication of this article, so in respect to our client's wishes, I am being intentionally vague about specifics. Our principal, Mal McLaren, has a strong professional relationship with the designer, and we were brought in early on the project. It was a large piece of mechanized scenery with multiple axes of motion, high velocities, and a very large payload. Like any sane person, the designer's first question to Mal was, “Is this possible?” And like any responsible consultant, his first response was, “Let me think about it.” The next question, of course, was “What do you think it will cost?” The mind boggled.

Estimating the cost of the structure was not too hard. X tons of steel costs Y dollars, even folding in labor and so forth. It's simplistic, but it's usually pretty accurate. It became apparent that actuation would be the crux of this. If we knew what sort of power was needed to make this behemoth dance, then we could start guessing at a price. We could also start providing some parameters for the inevitable next question: “How do we make this cost less?”

The designer had done quite a good early animation study of the structure moving in 3D Studio MAX, so we had a good idea of the moves and timing required. Engineers have all manner of ways of calculating forces on various parts of a mechanism, but when dynamic forces get complicated, hand calculations, even with much-beloved Hewlett-Packard scientific calculators, get downright Byzantine. That's when they drag out the big guns: Finite Element Analysis (FEA) software and its animated cousin, the Mechanical Event Simulator. FEA applies forces to a mathematically modeled object and measures the stresses that object sees by dividing it up into smaller blocks of matter, or finite elements. Mechanical event simulators take virtual machines, run them through their paces, and measure the forces required and the stresses seen at the various parts of the system.

There are several software packages out in the world, and discussing their relative merits would be an article unto itself. We use VisualNastran4D, distributed by the MSC Corporation. It is used mostly for mechanical virtual prototyping, which is exactly what we wanted it for, sort of. Outside of our silly little business, machines tend to do things that are, well, mechanical: predictable, repetitive. Strange and constantly changing sequences driven by largely abstract criteria, like aesthetics, don't often show up in most disciplines of industrial and mechanical engineering. Therefore mechanical event simulators know nothing from cues. To get the information we needed from our model, we had to teach it to dance. We had to set up a virtual automation console for our Nastran model, and we would just as soon not code it from scratch.

I'm about to do something potentially subversive and sing the praises of a Microsoft product. I refer to the world's smartest graph paper: MS Excel. For what it is, it is the most perfect piece of software around. What it does, it does really well, and it can do darn near anything in the realm of storing and manipulating numbers. We love it to death. In this context, it became our automation desk. Nastran, like many software packages of this type, is already set up to import and export data to and from Excel, so we could keep our coding down to some low-level scripting. Nastran will go look in a particular cell of a particular worksheet and bring the data into its simulation as a velocity or a position target. It will also take information like torque, horsepower, or stress and output it to another spreadsheet.

We set up an Excel workbook with multiple worksheets: one worksheet per cue plus an output sheet. This output sheet looked up cue values depending on which cue was active and where the clock was in the sequence. We could run one cue at a time with a “go” type command, or preprogram a multi-cue sequence over time. To change a cue, you just changed the cue sheet and reran the simulation. We learned a great deal from all of this. We learned that one cue in particular, a multi-axis move that should be utterly breathtaking when performed, was the make-or-break move. If the system could do this, it could do everything else, no problem. We were able to tell our client that horsepower requirements (and therefore both cost and impossibility) swept sharply upward if that killer move sped up past a certain velocity. We were also able to determine that during that same killer move, a performer in the right place could pull up to 3 Gs of acceleration, which is a little much to ask of even the most dedicated performer.
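The output-sheet logic is worth a quick sketch. Our real workbook lived entirely in Excel formulas; the Python below is a hedged stand-in, with invented cue names, axes, and breakpoints, that shows the same idea: each cue sheet holds per-axis breakpoints, and the "output sheet" interpolates a position target from the active cue and the current clock time.

```python
# Each "worksheet" is a cue: per-axis lists of (time, position) breakpoints.
# All names and values here are invented for illustration.
cue_sheets = {
    "Q1": {"wagon": [(0.0, 0.0), (5.0, 3.0)],            # travel 3 m in 5 s
           "lift":  [(0.0, 0.0), (2.0, 0.0), (6.0, 1.5)]},  # delayed rise
}

def lookup(cue, axis, t):
    """Mimic the output sheet: interpolate the position target for an
    axis at clock time t within the active cue."""
    points = cue_sheets[cue][axis]
    if t <= points[0][0]:
        return points[0][1]
    for (t0, p0), (t1, p1) in zip(points, points[1:]):
        if t0 <= t <= t1:
            return p0 + (p1 - p0) * (t - t0) / (t1 - t0)
    return points[-1][1]   # past the last breakpoint: hold final position
```

Changing a cue is then exactly as described above: edit the breakpoints in the cue sheet and rerun, and the simulator pulls fresh targets on every tick.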

Armed with all this data (the output spreadsheet for one cue covers an entire wall of our conference room in 14 pt. type), we were able to suggest a redesign that kept costs reasonable (relatively speaking) while providing the moves the design needed with a far less monstrous (again relatively) power plant. Then we went on to design it.

Although the goal in this case was not visualization but analysis, an added benefit of the process was metered animations. This was a high-overhead program using mathematically modeled real-world physics, so solving the simulation was not real time by any stretch of the imagination. But once the simulation was run, you could output the sequence as a real-time AVI movie, with meters running in the corner showing forces, velocities, horsepower, etc. These animations were not as sexy as a CG package, but they showed a very clear relationship between an action and its associated data, which helped hugely with communicating our findings. It also got me thinking that a set designer or director could find something like this a really useful rehearsal tool. Another aspect of mechanical event simulators, one that we did not utilize for our purposes, was collision detection. Once you mathematically model and locate an object in space, you can tell if it intersects with another mathematically modeled object in space and react accordingly. Having been present at a few ugly rehearsal-stopping automation train wrecks in my day, this also seemed to be a handy feature that bore further investigation.

Project #2: Dracula

The new musical Dracula, which hit Broadway this season, defines its changes in looks with, among other things, a series of irising portals that are, not surprisingly, automated. What the stage manager asked Chuck Adomanis at Hudson Scenic for was a way to configure these panels virtually during the rehearsal process. Knowing how much the portals were used and how critical to the rhythm of the piece their cueing was, he wanted a head start on setting those looks. He wanted on-the-fly pre-viz that could be brought into the rehearsal room, without having to hire a CG animator for six weeks. And if he was going to the trouble, then he wanted this work done in a recordable format that could be uploaded to the automation console. I had done some pre-viz animation for Hudson in the past, so I got tapped for this drollery.

We had two basic choices in approach: one was to take Hudson's existing automation software and get it to control a virtual model instead of a physical machine. The other was to take an animation software package and fake up an automation controller for it.

What we came up with worked quite nicely. We modeled the set fairly simply, and mapped the designer's paint elevation onto the panels. With MAX's exhaustive texturing capabilities, we were even able to give them adjustable translucency, to mimic the scrim-like qualities of the plastic laminate panels. We set up a basic interface that allowed the user to manipulate the panels to a given configuration and save it as a cue with timing. The cues were saved as simple text files, which could be massaged into an appropriate format for the actual automation desk.

The coding was “slash and burn” to say the least, but it did what it needed to do, and you could operate it without having to cross-train as an animator. The trickiest part, such as it was, was key-framing a motion profile into the animation to get realistic accel and decel curves on the motion. In the course of setting this up, we discovered an interesting logic shift.

We started out driving our model much like we would drive real automation; i.e., set a goal of either position or velocity, with a desired motion profile, and go. As in real automation, we stored timing and targets, but not starting points. This doesn't really work when your end result is an animation over time. Key frame animation, which is how CG generally works, requires all the key positions (hence the term), including the start position. The software can then work out the 'tween frames using mathematical interpolation. We had to rebuild our logic to keep track of where the starting position was for each cue, and then strip this info out when we saved the cue for actual automation. We could also then establish a key frame at the top of the accel and decel curves, using some basic motor math, and leave the software to do the rest.
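That “basic motor math” can be sketched for a standard trapezoidal move. This is a simplified illustration of the idea, not Hudson's code, and it assumes equal accel and decel times and a move long enough to actually reach cruise velocity: the cruise speed falls out of the total travel distance and times, and the two ramp key frames sit half a ramp's distance in from each end.

```python
def cue_keyframes(start, target, travel_time, accel_time):
    """Return the four key frames (time, position) for a trapezoidal
    motion profile: start, top of accel, top of decel, and end.
    Assumes accel and decel times are equal and shorter than half
    the total travel time."""
    d = target - start
    # Area under a trapezoidal velocity curve: d = v * (T - t_accel)
    cruise_v = d / (travel_time - accel_time)
    ramp = 0.5 * cruise_v * accel_time   # distance covered on each ramp
    return [
        (0.0, start),                              # start position
        (accel_time, start + ramp),                # top of accel curve
        (travel_time - accel_time, target - ramp), # top of decel curve
        (travel_time, target),                     # end position
    ]

frames = cue_keyframes(0.0, 10.0, 5.0, 1.0)  # 10 m in 5 s, 1 s ramps
```

The animation package interpolates the 'tween frames from these four keys; when exporting for the real console, the start frame is simply stripped back out, exactly the bookkeeping described above.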

To aid with visualization, we provided a cyc we could color mix on and a scale human figure that we could move around on the stage to get a sense of proportion during an animation. We could also view the scene from three house positions, as well as orthogonal views at the touch of a button. Finally, we disabled the standard MAX interface of menus and toolbars, so an operator would not be distracted by extraneous stuff or accidentally change a setting and make a hash of things.

“Happy” Accidents

Which brings me, via a slightly hamfisted segue, to the subject of accidents. All the usual cool-new-toy techie-mentality arguments that we all know and love aside, I think this type of technology, properly and responsibly implemented, might just save a few lives, both literally and professionally. Steel is hard, flesh is soft, and sometimes pushing that Go button can feel a lot like pulling the trigger of a loaded weapon. Moving scenery is getting more and more ambitious, complicated, heavy, and ubiquitous in our industry.

The simple fact is, we are trying to do more and more, in less time, with a lot less margin for error. Something like this is needed. If we aren't going to start rehearsing for three months on the actual set, like state-funded opera in Europe sometimes does (although not nearly as often these days), then we are going to need ways to pre-plan automation moves without risking horrific smash-ups or cripplingly expensive, drawn-out tech rehearsals. Because we will keep trying to push the envelope: our market demands it, and besides, it's what we do and why we do it.

Each of the aforementioned projects got me a little closer to what I really want to see, which is pretty much what I described at the beginning of the article. The Nastran version yielded a lot of useful engineering information but was a little dense for day-to-day use, whereas the Dracula solution was eminently usable but limited to one specific application.

So we're not there yet, but boy do I want to be. I want WYSIWYG Automation! I want a readily available piece of software with a built-in 3D engine that you can upload your model into, wire up to a console, write your cues on, and output to a usable and configurable dataset that most motion controllers can make use of. Of course I'm trying to write one, but don't let that stop anyone else out there, because I think this is a worthwhile goal and the more minds, the merrier. Start thinking about this if you haven't already, because I think it's worth doing and by no means impossible.

Besides, you have to admit it's kind of cool.

Murphy Gigliotti works for the Entertainment Division of McLaren Engineering. He also teaches at the Entertainment Technology Department of The New York College of Technology in Brooklyn. He can be reached at [email protected].