Mixed Media

Sorting Out the New Digital Media Servers

If you follow new technology, you're probably aware of the trend toward merging lighting hardware, control, and video projection technology. But you may not know how these systems really work — how they interface to the lighting console, what products are available, and what they do best. This article attempts to answer some of those questions. It's a very broad topic, and a separate article could be written on video projection technology alone. We're going to focus on everything but the projector — software features, image resolution and processor load, integration, and the necessary hardware. We'll break it down into three parts — media servers, rendering engines, and, in the case of High End Systems' Catalyst, the orbital image movement system.

What is a media server and when do you need one?

As video has been integrated into various projects, including trade shows, concerts, industrials, theme parks, and retail applications, precise cueing of video content, lighting, and special effects has become important. Achieving that precision is difficult, however. Often the lighting designer calls cues to the video director, along with followspots, lasers, etc. The video director deals with several live video feeds from cameras and pre-recorded material from tape, DVD, or hard disk, and directs this content to the playback media. These may be video projectors, LED video walls, plasma screens — maybe all three. In this day of tight budgets, a full-time video person isn't always possible. Thus digital media servers have sparked interest, especially on shows with limited live video content.

The fundamental difference in the systems under discussion here is the method of cueing and playback, which, in our case, is handled by a DMX lighting console. See the diagram below:

This, supplied by Radical Lighting, is a very basic system. The lighting console outputs DMX to the interface unit, which tells the computer/media server what content (still images and MPEG or QuickTime movies) to output to the video projector and plasma screen. A system like this (with no additional features) requires that all of the content used in the production be ready to go, with no additional editing or changes needed. Many lighting designers and video people say, “I'm already doing this with three DVD players, a switcher, and a video guy; what do I need a digital media server for?” The answer: very consistent, repeatable playback of the content. Once the lighting cues are written, the recall of the video content can be programmed with repeatable, editable timing, along with the lighting cues, all from a single control point (the lighting console).
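
To make the control path concrete, here is a minimal sketch, in Python, of how an interface unit might translate an incoming DMX frame into a playback command for the media server. The three-channel layout and file naming are entirely hypothetical; each real product defines its own, much larger channel profile.

```python
# Hypothetical three-channel DMX profile for a single playback layer:
#   channel 1: content folder (0-255)
#   channel 2: file index within that folder
#   channel 3: layer intensity (0 = blacked out, 255 = full)

def decode_dmx_frame(frame: bytes, base: int = 0) -> dict:
    """Turn a raw 512-byte DMX frame into a playback command."""
    folder = frame[base]
    file_index = frame[base + 1]
    intensity = frame[base + 2] / 255.0
    return {
        "clip": f"library/{folder:03d}/{file_index:03d}.mov",
        "intensity": intensity,
    }

# The console bumps channel 3 to full to reveal clip 5 in folder 2:
frame = bytearray(512)
frame[0], frame[1], frame[2] = 2, 5, 255
print(decode_dmx_frame(bytes(frame)))
# {'clip': 'library/002/005.mov', 'intensity': 1.0}
```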

Once the content is created and plays back in the desired manner, you don't need an additional person to run it. This has been a big issue for LDs, some of whom don't want to be responsible for eliminating a position. In fact, the video person need not go away; his or her creative input regarding the video system's artistic content is highly important. A media server gives the video director more, not less, precise control over the reproduction of the content. If there's a healthy respect between the LD and video director, digital media servers won't cause problems. On tours using digital media servers where a video person directs the live IMAG video, I've been told, “It's another channel of content that I don't have to be concerned about cueing — I just flip to it when I need it. And I miss it when I'm on shows where it's not there.”

On-the-spot modification

But what happens when the content isn't what the artist or video director envisioned — it's too bright or too dark, too fast, too long? In the old days, you went back to your video editing/rendering package (products like Apple's Final Cut Pro® and Adobe Premiere®) and started over. This took hours or days, depending on the complexity of the content and the speed of your machine. Now you can non-destructively modify the content on the spot with the real-time rendering engines included with many of the digital media servers on the market today.

All of the systems discussed here offer more than simple playback of images on a DMX cue. This is where the image rendering/manipulation ability (and associated creative opportunities) of the computer and software package come into play. Speed and direction of playback, looping or one-shot playback, multiple layers of still or moving images, color mixing/correction on all or individual layers, fade in/out, resizing, tiling, zooming, shaking, morphing, blending, X/Y/Z positioning, character generation, gobo overlays, 3D objects, keystone correction — all are potential functions of the rendering software and hardware in the computer. In the case of Catalyst, greater attention is paid to parameters that are useful when images are repositioned (like keystone correction); the other products, none of which move the projected image, focus instead on features like 3D objects.

Whatever the application, these rendering engines allow the user to modify content on the spot. Example: a corporate show utilizing an LED video wall or DLP video projector. Let's say you import a corporate logo into the digital media server, and, at the appropriate time, the logo zooms in and fades up while doing a 3D tumbling effect, then stops. The creative director comes in an hour before the show and says, “I wish it didn't tumble so fast; can you change it?” You move a few faders, write a few cues, play them back, and ask, “How about that?” For traditional production applications where video is used, like corporate and rock shows, this feature alone justifies the cost of a system.

Warning: time delays are possible

Be careful, however. Much of the image rendering occurs on the computer's video card, along with the host computer's microprocessor. Next to “conventional” software rendering engines like Final Cut Pro, Premiere, and Adobe After Effects®, which can take hours, even days, to render images, the milliseconds these video card-based systems need to manipulate an image qualify as “real-time.” However, if a live video camera is routed to the input of the rendering engine, you may see a slight processing delay, especially if the output is viewed alongside live dialogue or music and the processor is adding a number of effects. It's an important distinction. The broadcast industry's accepted processing delay is 33.3 milliseconds, or one frame at 30 frames per second (fps). In practice, most systems can operate with up to three frames of processing delay before most people notice. At five frames, the delay is clearly noticeable and unacceptable for most professional applications.
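
The frame arithmetic is simple enough to verify yourself; this short Python sketch restates the numbers above (the three- and five-frame thresholds are rules of thumb from the text, not a formal standard):

```python
# One frame at 30fps lasts 1/30 of a second, or about 33.3ms.
FPS = 30
FRAME_MS = 1000 / FPS

for frames in (1, 3, 5):
    delay_ms = frames * FRAME_MS
    verdict = "generally unnoticed" if frames <= 3 else "clearly noticeable"
    print(f"{frames} frame(s) = {delay_ms:.1f}ms: {verdict}")
```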

The basic architecture of the digital media server's rendering software/hardware typically consists of one or more layers of still or moving images or 3D objects (see diagram below).

This drawing illustrates the architecture for Catalyst. Some systems may offer more or fewer layers, but additional full-motion effects on multiple layers can load down the host and video processors too much. The result is frame-rate slowdowns, image “stuttering,” and aliasing artifacts. Depending on the resolution of the files and the processing power on hand, this situation can be avoided most of the time. As processing power increases, it becomes less of an issue, but developers are quick to spend every ounce of power their host computers offer. From what I've seen of the systems available today, if you're calling up video clips, stills, and masks with a reasonable amount of rendered effects, you'll be fine.
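
The layer model behind that diagram is essentially back-to-front compositing, which is also where the processor load comes from: every active layer adds a full blend pass over the output frame, 30 times a second. A toy Python sketch, with a single grayscale value standing in for a whole frame:

```python
# Composite a stack of layers from bottom to top. Each layer carries
# a content value and an opacity; in a real renderer each one costs
# a blend pass over the entire frame.

def composite(layers):
    """layers: list of (value, opacity) tuples, bottom first."""
    out = 0.0  # start from black
    for value, opacity in layers:
        out = out * (1.0 - opacity) + value * opacity
    return out

stack = [
    (0.8, 1.0),   # base layer: full-motion clip, fully opaque
    (0.2, 0.5),   # gobo overlay at half opacity
    (0.0, 0.25),  # mask layer fading in
]
print(f"output pixel: {composite(stack):.2f}")  # output pixel: 0.38
```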

What can you do?

Features that can potentially be applied to a layer (or layers) include the following (a sketch of how these might map to per-layer parameters appears after the list):

  • Playback of live motion video clips
  • Video start and stop points
  • Speed and direction of motion playback
  • Looping or one-shot motion playback
  • Color-mixing/correction
  • Fade in/out and transitions
  • Resizing
  • Tiling
  • Zooming
  • Shaking horizontally and/or vertically
  • Morphing and blending
  • X/Y/Z positioning and/or rotation of the image
  • Character generation, allowing addition of text to image
  • Gobo overlays
  • Masks
  • 3D objects: instead of a layer, a 3D shape (sphere, rectangle, triangle) is offered that can act as a layer. A user could apply a corporate logo to it, spin it around, resize it, etc.
  • Keystone correction: a Catalyst feature that is very useful with the orbital mirror head. It allows the user to correct for projector positioning to render a correct image no matter what the angle of incidence is.
  • Live video input: apply live video to layers or 3D objects and then apply effects in real time.
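
As a rough mental model, the list above amounts to a per-layer parameter block that the rendering engine re-evaluates every frame as incoming DMX values change. The Python sketch below uses hypothetical field names; real products group and scale these parameters differently.

```python
from dataclasses import dataclass

# Hypothetical per-layer parameter block. Each field corresponds to
# one of the feature groups above and would map to one or more DMX
# channels in a real product's profile.

@dataclass
class Layer:
    clip: str = ""                     # still, movie, or 3D object
    in_frame: int = 0                  # playback start point
    out_frame: int = -1                # playback stop point (-1 = end)
    speed: float = 1.0                 # negative values play in reverse
    loop: bool = True                  # looping vs. one-shot playback
    rgb: tuple = (1.0, 1.0, 1.0)       # color mixing/correction
    opacity: float = 1.0               # fade in/out
    scale: tuple = (1.0, 1.0)          # resizing/zooming
    position: tuple = (0.0, 0.0, 0.0)  # X/Y/Z positioning
    rotation: tuple = (0.0, 0.0, 0.0)  # X/Y/Z rotation
    keystone: tuple = (0, 0, 0, 0)     # per-corner offsets

# Half-speed, one-shot playback of a logo animation:
print(Layer(clip="logo.mov", speed=0.5, loop=False))
```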

Obviously, not all systems offer all features on all layers, nor do they need to, in most cases. Each manufacturer appears to be optimizing its combination of features, effects, frame rates, and output resolution to suit the processing power of the host computer and video card being utilized — a very practical approach. But as computer processing power continues to increase, expect to see more layers and more effects on those layers.

Here's a brief look at products on the market today and what they offer:

High End Systems Catalyst

The Catalyst system consists of the Catalyst digital media server/real-time rendering engine software, an Apple Macintosh G4 computer, a Catalyst interface unit that connects the DMX data stream to the media server and provides the composite video output, and the optional orbital mirror head and its drive electronics. All components are packaged in road cases. The system combines technology from three Englishmen: Richard Bleasdale, of SAM control software fame, conceptualized the Catalyst software; Peter Wynne Wilson and Tony Gottelier invented and patented the orbital mirror head concept. This is the most fully realized version of the “virtual automated light” available today.

The server offers one full-motion video base layer that supports the Apple QuickTime movie standard, along with a gobo layer that features the entire High End and DHA gobo catalog, and one mask layer for applying aspect masks — circles, squares, cinema, etc. The system will accept Final Cut Pro, After Effects, and Adobe Photoshop® files directly along with QuickTime movies, and ships with Artbeats footage and Digital Juice ambient video loops as standard.

The base layer supports a constant 30fps frame rate and 720×480 pixel resolution for NTSC Standard Definition Digital Video with a 4:3 aspect ratio, regardless of rendering effects. Tim Grivas of High End Systems says, “This resolution is considered broadcast quality, and is maintained regardless of what effects are applied to the base layer. This is one of the key points that has compelled professional A/V people to view Catalyst as a serious video product. Additionally, Catalyst allows remote control of the projector's zoom and focus systems on projector models that support the feature. And there's the movement aspect — people are still not used to seeing video images move around like Catalyst.” New features in the latest software release include keystone correction and multiple media server sync with frame accuracy.

The base layer offers the widest selection of image manipulation parameters, like start and stop frame, play mode, playback speed, X/Y image positioning, scaling, X/Y axis image panning, aspect ratio, and keystone correction. The gobo layer allows selection of the pattern, X/Y/Z axis positioning, sizing, and visual effects. High End Systems is the worldwide distributor for Catalyst.

Martin Eureka 3D

Eureka 3D is a PC-based, DMX-controlled digital media server and real-time rendering engine that was created by Case Console, the Danish developer of the new Martin Maxxyz lighting console. Eureka 3D is aptly named; its focus is on 3D video effects. The operating system is Windows 98 SE, but an XP-embedded version is supposedly in the works. The system includes a rack containing the PC, DMX/video interface unit, and a keyboard. The unit will output SVGA, S-Video, or RGB component video. Multiple units can be synched via Ethernet. Carl Wake at Martin says, “There are actually two processors running concurrently in the system; one handles the database and DMX interfacing functions, and the other handles everything else. Ethernet connects the processors. The system can store 800 still images and 400 MPEG-1 movies. Also, there are different DMX protocols offered, with full-blown 16-bit profiles along with leaner channel count profiles.”
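
The 16-bit profiles Wake mentions rely on the standard DMX trick of pairing a coarse and a fine channel to get 65,536 steps instead of 256, which matters for smooth speed or position fades. The arithmetic, sketched in Python:

```python
# Combine a coarse (MSB) and fine (LSB) DMX channel pair into one
# 16-bit value, then normalize it to a 0.0-1.0 parameter range.

def dmx16(coarse: int, fine: int) -> int:
    return (coarse << 8) | fine

def as_fraction(value16: int) -> float:
    return value16 / 65535.0

value = dmx16(128, 0)
print(value, f"{as_fraction(value):.4f}")  # 32768 0.5000
```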

The unit offers two base layers that can be full-motion video or still images; these layers play MPEG-1 content at 320×240 pixels and 30fps. There is also an RGB “fog” layer, along with a filter layer that includes B&W gradients and other overlays. 3D objects, such as spheres and pyramids, can also be chosen as a layer, and images and effects can be applied to that 3D object as if it were a layer. This feature would be great for trade shows and corporate events; you can take a customer's logo, apply it to the 3D object, and then add effects with another background running. Those effects include camera position, pan and tilt, zoom, image advance rate, and fade-in and -out. Autodesk 3D Studio VIZ® files can be directly imported and used as 3D objects in the Eureka 3D system.

IRAD RADlite

RADlite, developed by Simon Carter at UK-based IRAD, is a digital media server and video manipulation system that can be controlled by any DMX lighting desk. It combines images and vector shapes with digital video, color backgrounds, and text messages to create visual effects. The system consists of a 4U rack-mountable, high-powered PC, a DMX-to-Ethernet converter, and the RADlite software. RADlite comes with a standard set of libraries for graphics, palettes, shapes, and videos, but adding your own is easy enough. It can drive any video display that accepts SVGA, S-Video, or composite signals, including most LCD and DLP projectors, video walls, and plasma screens, at a variety of resolutions and aspect ratios.

The developers of the product say that RADlite behaves as much like a light as possible, so it has been divided into five separate fixtures, as follows (a patching sketch appears after the list):

  • RLcanvas is concerned with the background layer. Video, trails, and background color are all controlled from this fixture.
  • RLgraphics is the fixture concerned with images, shapes, and palettes. Several layers of graphics can be patched into one show. This works by giving precedence to the latest layer.
  • RLmask can also be used more than once in each show. This fixture works by creating a mask and allowing rotation and movement of that shape as well as several other features.
  • RLtext is yet another layerable fixture. Create text in the control panel and add effects.
  • RLswitch enables the user to switch between live video sources. For example, if there are six different camera feeds in the show, then the RADlite can be used to switch between these using the DMX controller. Additional hardware is required for this feature.
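
To see how the five-fixture split plays out at the console, here is a hedged sketch of patching one canvas, two graphics layers, and a mask as ordinary DMX fixtures. The channel footprints are invented for illustration; RADlite's real channel counts will differ.

```python
# Patch RADlite-style fixtures at consecutive DMX addresses. The
# console sees each engine function as a separate fixture, and the
# most recently patched graphics layer takes precedence.

FOOTPRINT = {  # hypothetical channels per fixture type
    "RLcanvas": 12, "RLgraphics": 16, "RLmask": 8,
    "RLtext": 10, "RLswitch": 6,
}

def patch(fixtures, start=1):
    """Assign a start address to each fixture, in patch order."""
    plan, address = [], start
    for kind in fixtures:
        plan.append((kind, address))
        address += FOOTPRINT[kind]
    return plan

show = ["RLcanvas", "RLgraphics", "RLgraphics", "RLmask"]
for kind, address in patch(show):
    print(f"{kind:<11} @ DMX {address}")
```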

Luminosity Sales Ltd. handles worldwide distribution. TMB is an authorized master distributor for RADlite.

Delicate Productions NEV Series 6

NEV Series 6 is a DMX-controlled digital media server and effects system developed by Breck Haggerty and Steve Gilbard in California. It uses a multiple-component approach rather than doing everything on a single host computer. It also takes a different marketing approach: systems are available only for rental through Delicate Productions in Camarillo, CA.

NEV Series 6 allows DMX console programmers to generate multiple streams of broadcast video in real time and is designed to allow a high degree of control over recorded footage. Each NEV channel has two layers: a foreground and a background. You can play recorded footage or show live cameras on each layer with a one-frame delay. The two layers are then mixed with an assortment of transfer effects, including mixing, keying, and a growing number of digital effects. Playback is totally random access, and dedicated digital hardware on all layers ensures independent control and keeps the frame rate steady at 30fps. Multiple systems are easily synchronized for large multichannel installations. Outputs on the NEV Series 6 are component video. The NEV 6 served as the media server for the LSD Icon M luminaires on the recent Korn tour. Gilbard claims, “The systems are highly reliable and have toured for 18 months without a showtime failure. NEV is built for the long haul.”

NEV Series 7 prototypes are expected to be complete within the month, with the full release slated for early 2003. New features will include support for SDI Digital and RGB output, along with Y/C and component video, as well as transcoding to all formats. The system will be supplied with over 500 transitions and digital effects to choose from. An additional processor will allow another 1,000 digital video effects, including 3D mapping to objects, turns, clips, etc., all without any visible artifacts. A DMX-controlled router/switcher will also be included.

Look for all of these products to become increasingly visible in different applications over the next year.

Robert Mokry is an 18-year veteran of the lighting industry. For more information please visit www.robertmokry.com.