Video Terminology For The Rest Of Us

Are you thinking about venturing out into the converging world of video and lighting? Are you intimidated by the preponderance of video geek terminology? Do the instruction manuals for video editing software appear to be written in a foreign language (I guess we could call it “Vidish”)? If that is or has ever been the case for you, I highly recommend reading the following glossary of video terminology, which I've tweaked for the live event media producer. It covers all the basics and, if memorized, will make you the envy of all your friends.

Analog Video: Analog video is the old-fashioned method of recording or transmitting a video signal that uses a continuously varying waveform (changes in amplitude over time) to represent the video or audio signal. Still very much in use, analog video is slowly being replaced by digital, which is more accurate and less susceptible to degradation.

Anamorphic: The anamorphic filming technique was developed to make widescreen movies using 4:3 film negatives. An anamorphic lens distorts the image picked up by the camera before it reaches the film, squeezing a wide image horizontally toward the center. By using a lens that reverses the original distortion when projecting the film back on screen, the correct, intended aspect ratio is restored. A similar technique is used to store 16:9 video material at 4:3 aspect ratios, for example, when a widescreen movie is recorded on a 4:3 format such as DVD.
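
To make the squeeze concrete, here is a minimal Python sketch of the arithmetic (the function name is illustrative, not from any real tool), computing the horizontal squeeze factor needed to fit a 16:9 image into a 4:3 frame:

    def squeeze_factor(source_ratio: float, storage_ratio: float) -> float:
        # How much the lens must squeeze the image horizontally
        # so the wide picture fits the narrower storage shape.
        return source_ratio / storage_ratio

    wide = 16 / 9     # widescreen source, ~1.78:1
    narrow = 4 / 3    # 4:3 storage format, ~1.33:1

    factor = squeeze_factor(wide, narrow)
    print(f"squeeze horizontally by {factor:.3f}x to store")    # 1.333x
    print(f"stretch by {factor:.3f}x on playback to restore")   # reverses the distortion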

Artifacts: A visible alteration to the original image caused by compression or other image processing. Artifacts typically appear attached to an object in motion and may show up as jagged lines bordering an object within an image. While often insignificant on small screens, artifacts can be problematic in large-screen display systems.

Artifacts, Compression: The boxy patterns visible in compressed images. Generally, compression artifacts are more visible as the level of compression increases.

Aspect ratio: Aspect ratio pertains to the shape of the video image: the ratio between the width of the picture and the height of the picture. It can be expressed as width divided by height (1.33:1, 1.78:1, etc.) or as the numerical relationship of width to height (4:3, 16:9, etc.). Normal TV aspect ratio is 4:3 (1.33:1); HDTV's aspect ratio is 16:9 (1.78:1); and CinemaScope movie aspect ratio is 2.35:1.
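
Since the ratio is just width divided by height, a couple of lines of Python make the equivalence between the two notations plain (a toy illustration, not production code):

    def aspect_ratio(width: float, height: float) -> float:
        # The ratio is simply picture width divided by picture height.
        return width / height

    print(f"4:3 TV: {aspect_ratio(4, 3):.2f}:1")           # 1.33:1
    print(f"16:9 HDTV: {aspect_ratio(16, 9):.2f}:1")       # 1.78:1
    print(f"1920x1080: {aspect_ratio(1920, 1080):.2f}:1")  # also 1.78:1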

Aspect Ratio, 4:3: The standard TV aspect ratio is four units wide by three units high, nearly square. This fast-fading standard is quickly being superseded by widescreen formats.

Audio-Video Interleaved (AVI): A video-audio file format for Windows. Often called “video for Windows,” it was developed by Microsoft to play full-motion interleaved video and audio in Windows using Windows Media Player. AVI movies can utilize a number of different codecs or compression schemes. AVI and QuickTime are the two most widely used video file formats in multimedia.

Bandwidth: The capacity of a network to transmit data is called bandwidth, and it is expressed in bits per second (BPS). Sending data through a phone wire is like sending water down a pipe: the wider the pipe, the more water you can move at once. In telecommunications, transport capacity (the size of the pipe) is called bandwidth. The narrower the bandwidth, the less information can be squeezed through it at any one time, and the longer a transfer takes.
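
As a back-of-the-envelope illustration (the file size and connection speeds below are made-up example values), transfer time is just the number of bits to send divided by the bits per second the pipe can carry:

    def transfer_seconds(file_bytes: int, bandwidth_bps: int) -> float:
        # 8 bits per byte; time = bits to send / pipe capacity.
        return (file_bytes * 8) / bandwidth_bps

    file_size = 700 * 1024 * 1024                     # a 700MB video file
    print(transfer_seconds(file_size, 1_500_000))     # ~3,915 sec over 1.5 Mbps DSL
    print(transfer_seconds(file_size, 100_000_000))   # ~59 sec over 100 Mbps Ethernet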

Bit-rate: Often expressed as BPS, the rate at which bits (single pieces of information, each a 1 or a 0) are streamed through a digital video system such as a DVD player. Generally, the higher the bit-rate, the better the video quality.
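
Bit-rate also determines how much storage a recording consumes; here is a quick Python sketch using a made-up, DVD-range rate:

    def megabytes_for(duration_seconds: int, bitrate_bps: int) -> float:
        # At a constant bit-rate, file size grows linearly with running time.
        return duration_seconds * bitrate_bps / 8 / 1_000_000

    # Two hours of video at 6 Mbps, a rate in the typical DVD range:
    print(megabytes_for(2 * 3600, 6_000_000))   # 5400.0 MB, about 5.4 GB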

Capture: Generally used to describe the process of digitizing images. For example, capture hardware and software can take a digital still picture from a video camera, or can digitize the analog signal from a moving video source (such as a VCR) so that the moving video can be stored on a computer hard disk.

Codec: Codec stands for compressor-decompressor (or coder-decoder) and is a concept used in the creation of digital video files. Digital video files and formats contain a lot of information and are, therefore, very large. To make files and formats less bulky and easier to store and transmit, the information is compressed using a particular compression scheme and then decompressed during viewing. Each compression scheme has a different way of handling compression and decompression. Some codecs include: MPEG-2, H.264, Windows Media 9, Apple uncompressed, and Digital Video (DV). Compression and decompression can be done with either software or hardware.

Codec, Animation: As the name implies, the Animation codec is best used on animation, although it does all right on video as well. It primarily compresses long runs of identical horizontal pixels (which is why it's good for animation). At 100%, the codec is lossless. Another plus is that the Animation codec includes alpha channel support.
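
The "long runs of identical pixels" idea is, at heart, run-length encoding. The toy Python below is not the actual Animation codec, just a sketch of the principle: each run of identical values is stored once with a count, and decoding restores the original exactly (hence lossless):

    def rle_encode(pixels):
        # Collapse each run of identical values into a [value, count] pair.
        runs = []
        for p in pixels:
            if runs and runs[-1][0] == p:
                runs[-1][1] += 1
            else:
                runs.append([p, 1])
        return runs

    def rle_decode(runs):
        # Expand each [value, count] pair back into the original run.
        return [value for value, count in runs for _ in range(count)]

    scanline = ["red"] * 12 + ["blue"] * 4     # flat animation colors compress well
    encoded = rle_encode(scanline)
    print(encoded)                             # [['red', 12], ['blue', 4]]
    assert rle_decode(encoded) == scanline     # lossless round trip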

Codec, Authoring: The main intent of authoring codecs is to preserve image quality, with little concern for file size. Authoring codecs are used as intermediary steps during the production process and are not suitable for delivery because of their often immense file sizes. Authoring codecs would be employed to export video from one program to another without adding compression artifacts (for example, the various DV codecs are considered authoring codecs).

Codec, Delivery: Delivery codecs are used to distribute movies to an audience, usually over the web. These codecs provide small file sizes but at the expense of image quality. The type of delivery codec used is usually based on the installed base of users of that codec.

Codec, Digital Video (DV): DV is a video format launched in 1996 and — in its smaller tape form-factor, MiniDV — has since become one of the standards for consumer and semiprofessional video production. The DV specification (originally known as the Blue Book, current official name IEC 61834) defines both the codec and the tape format. Features include intraframe compression for uncomplicated editing, a standard interface for transfer to nonlinear editing systems (FireWire, also known as IEEE 1394), and good video quality, especially compared to earlier consumer analog formats such as 8mm, Hi-8, and VHS-C. DV now enables filmmakers to produce movies inexpensively and has become closely associated with no-budget cinema. There have been some variations on the DV standard, most notably the more professional DVCAM and DVCPRO standards from Sony and Panasonic, respectively. Also, there is a more recent high-definition version called HDV that uses MPEG-2 compression to create high-definition video streams recorded to DV or MiniDV tape formats.

Codec, Hardware: Using specialized modules or cards inserted in the workstation, hardware compression is able to compress and decompress full-motion, full-screen video in real time. In general, this is a more expensive but better solution.

Codec, Software: Software codecs, where the compression and decompression are executed through software, are slower and generally aren't capable of handling full-screen, full-motion (30 frames per second), high-quality video.

Codec, Uncompressed: The codec that best represents the original source material, without compression, in its digital form. Typically 8-bit or 10-bit, it was designed to be used with high-end video capture cards, but it can be used on its own as well. Its high quality makes it good for creating videos out of animations and stills where you want to preserve smooth gradients that might otherwise be lost using a compressed codec, such as DV. The only disadvantage is extremely large file sizes, which can be hard to manage.

Compositing: The process of combining different layers into a single image so that all the elements blend together seamlessly and appear to have been photographed at the same time.

Compression: A mathematical method used to reduce the amount of digital information stored in a particular file. Graphics files and moving video files are good candidates for compression because they are generally very large. Compressing these files can greatly reduce the amount of information required. However, compression alters the file, which may degrade image quality.

Digital Video: The video signal is converted to a series of binary numbers that create a numerical representation of the image.

Digitizing: The process of converting standard analog waveform information into digital form (binary code), as used by a computer. This is done with a special card that plugs into a slot in a computer, allowing either audio or video to be sent into the computer. The signal from the audio or video is then converted into digital information (data) that can be stored and manipulated.
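
Conceptually, the converter samples the waveform at regular intervals and rounds each sample to the nearest binary level. A toy Python sketch of that idea (real converter hardware works very differently, of course):

    import math

    def digitize(signal, sample_count, levels=256):
        # Sample a continuous waveform and quantize each sample
        # to one of 256 levels, i.e. an 8-bit binary code.
        samples = []
        for n in range(sample_count):
            t = n / sample_count
            value = signal(t)                             # continuous value in -1..1
            samples.append(round((value + 1) / 2 * (levels - 1)))
        return samples

    sine = lambda t: math.sin(2 * math.pi * t)            # stand-in analog signal
    print(digitize(sine, 8))   # [128, 218, 255, 218, 128, 37, 0, 37]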

GOP Structure: GOP is an MPEG term meaning Group of Pictures. It is a collection of consecutive frames of video. Usually between 0.5 and 1 second of video will be held in one GOP. There are many possible structures, but a common one is 15 frames long. GOP structure is determined by the nature of the video stream and the bandwidth constraints on the output stream.
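
For instance, a common 15-frame GOP opens with an I frame and then repeats B-B-P groups. A small Python sketch of that layout (the exact pattern varies by encoder and settings):

    def gop_pattern(length=15):
        # Start with an I frame, then repeat B-B-P groups to fill the GOP.
        frames = ["I"]
        while len(frames) < length:
            frames.extend(["B", "B", "P"])
        return "".join(frames[:length])

    print(gop_pattern())   # IBBPBBPBBPBBPBB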

High-Definition Television (HDTV): High-definition video formats are differentiated from standard-definition television formats by an increased number of TV lines (resolution) and the widescreen shape of the resulting image. The most common formats are 1080i and 720p. 1080i stands for a resolution of 1920×1080 pixels, and the “i” means that the video is interlaced. 720p stands for a resolution of 1280×720 pixels, and the “p” means that the video is progressive.
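
Counting pixels shows just how large the jump in detail is, as in this quick Python comparison:

    # Pixel counts show why HD carries so much more detail than SD.
    formats = {"SD (720x480)": (720, 480), "720p": (1280, 720), "1080i": (1920, 1080)}
    for name, (w, h) in formats.items():
        print(f"{name}: {w * h:,} pixels")
    # SD (720x480): 345,600 pixels
    # 720p: 921,600 pixels
    # 1080i: 2,073,600 pixels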

I (Intra) Frame: An MPEG-2 compressed video frame that is a complete picture containing most of the original information. It is similar to a JPEG still image and is used as a reference to build subsequent B and P frames in the GOP structure within the MPEG compression scheme.

Interlaced/De-Interlacing Video: Interlaced video has two fields that make up one frame for playback on a standard video monitor. One field is made up of the odd horizontal lines in a frame. This is called the odd field or the top field, since it contains the top line of the image. The other field is made up of the even horizontal lines in a frame. This is called the even field or bottom field. On a progressive monitor, such as a computer display, these fields create artifacts and scan lines, so they need to be made progressive by the process of de-interlacing. The reason for interlacing: PAL and NTSC are old technologies that were invented before modern digital compression methods were devised, and interlacing was a way to compress the video signal to save bandwidth, reducing the information being transmitted by half.
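
A minimal Python sketch of the concept, splitting a frame into its two fields and then crudely "bobbing" a field back to full height by line doubling (real de-interlacers are far more sophisticated):

    def split_fields(frame):
        # A frame is a list of scan lines; alternate lines form the two fields.
        top = frame[0::2]      # 1st, 3rd, 5th... lines: the top (odd) field
        bottom = frame[1::2]   # 2nd, 4th, 6th... lines: the bottom (even) field
        return top, bottom

    def bob_deinterlace(field):
        # Crude "bob": double every field line to rebuild a full-height frame.
        return [line for line in field for _ in range(2)]

    frame = [f"line{i}" for i in range(1, 7)]
    top, bottom = split_fields(frame)
    print(top)                     # ['line1', 'line3', 'line5']
    print(bob_deinterlace(top))    # each field line repeated: full height again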

Lossy Compression: Some compression schemes reduce the amount of data in an image file, and therefore the file size, by selectively deleting data. This is referred to as lossy compression. This can be done by finding repetitive pixels in an image (or unchanging screen areas in moving video), throwing out the repetition, and then recreating the original image as closely as possible when it is uncompressed. While the recreated image looks very close to the original, some loss of data is inevitable and, in some cases, can result in visible artifacts.
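
A tiny Python illustration of the lossy idea using coarse quantization: rounding pixel values to fewer levels leaves less data to store, but decompression can only approximate the originals:

    def lossy_quantize(pixels, step=16):
        # Keep only which 16-wide band each 8-bit value falls in.
        return [p // step for p in pixels]

    def dequantize(codes, step=16):
        # Reconstruct by guessing the middle of each band.
        return [c * step + step // 2 for c in codes]

    original = [200, 203, 198, 64, 66, 70]
    restored = dequantize(lossy_quantize(original))
    print(restored)   # [200, 200, 200, 72, 72, 72] - close, but not identical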

MPEG-2: A mathematical algorithm for compressing moving pictures and associated audio that creates a video stream out of three types of frame data (intraframes, forward predictive frames, and bidirectional predicted frames) that can be arranged in a specified order called the GOP structure. The MPEG-2 codec is used in DVD players and other digital playback devices, which often utilize built-in hardware decoding for smoother playback.

Nonlinear Editing (NLE): Refers to the fact that digital video can be retrieved from a storage hard drive that doesn't need to be physically rewound or fast-forwarded (as with conventional videotape) to access particular sections of video. Access is almost instantaneous. Also, nonlinear editing allows for insertion of material in the middle of a video segment, whereas conventional videotape editing does not. Videotape is not physically spliced but edited electronically by dubbing (copying) from one tape to another. Therefore, the insertion of material in the middle cannot be done without re-editing from that point on (or by dubbing onto yet another tape, resulting in generational loss).

NTSC: The color TV standard developed in the US in 1953 by the National Television System Committee. NTSC is used in the US, Canada, Japan, most countries in the Americas, and various Asian countries. The rest of the world uses either some variety of the PAL or SECAM standards. NTSC runs on 525 lines/frame, and its vertical frequency is 60Hz (59.94Hz in practice). NTSC's frame rate is 29.97 frames/sec (exactly 30000/1001), with a resolution of 720×480 in digital video.

PAL (Phase Alternating Line): This TV standard was introduced in the early 1960s in Europe. It has better resolution than NTSC, with 625 lines/frame, but the frame rate is slightly lower, at 25 frames/second, with a resolution of 720×576 for digital video. PAL is used in most western European countries (except France, where SECAM is used), Australia, and in parts of Africa, South America, and Asia.

QuickTime: Apple Computer's video file format, QuickTime, is a format for utilizing digitized video or animation (moving images in general). Computers running Windows can use QuickTime for Windows to play back QuickTime movies. These movies can utilize a number of different codecs or compression schemes. QuickTime and AVI are the two most popular video file types in multimedia applications. Adobe Premiere can utilize either QuickTime or AVI video file formats.

Josh Nissim is the general manager of Scharff Weisberg Media Resource Center. He has over 20 years' experience in technical theatre, video production, live events, entertainment, and post-production.