Digital Intermediate

March 16, 2011

Avatar has raised the bar for digitizing motion pictures, manipulating color, and adding dimensions and image characteristics. This has created new bottlenecks in storage access and in the final adjustments required to deliver finished movies to theaters.

Digital Intermediate (DI) traditionally comprises three steps:
• Image capture and ingest
• Intermediate processing (accepting shot material and producing finished ‘film’ deliverables)
• Mastering for distribution, projection, and transmission

The output from DI is expected to match, or exceed, the quality of a film intermediary. DI work is performed at High Definition (HD), 2K, and 4K resolutions. From a business perspective, image size costs money. An uncompressed HD image requires about 8 MB of data per 10-bit log RGB frame, while a 2K image requires about 12 MB. A 4K image requires about 48 MB, quadrupling storage and networking bandwidth requirements relative to 2K.
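The arithmetic behind these figures is straightforward. The sketch below reproduces the approximate per-frame sizes quoted above; it assumes DPX-style packing of three 10-bit RGB samples into a 32-bit word and full-aperture 2K/4K scan dimensions, which are assumptions for illustration rather than a statement about any particular facility's file format.

```python
# Sketch: approximate per-frame size of a 10-bit log RGB frame at common DI
# resolutions. Assumes three 10-bit samples packed into one 32-bit word
# (DPX-style); the 2K/4K dimensions are full-aperture scan sizes (assumed).

RESOLUTIONS = {
    "HD": (1920, 1080),
    "2K": (2048, 1556),
    "4K": (4096, 3112),
}

BYTES_PER_PIXEL = 4  # 3 x 10-bit RGB samples packed into a 32-bit word

for name, (width, height) in RESOLUTIONS.items():
    frame_mib = width * height * BYTES_PER_PIXEL / 2**20
    print(f"{name}: {width}x{height} -> ~{frame_mib:.1f} MB per frame")
```

Running this prints roughly 8, 12, and 49 MB per frame, in line with the figures above.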

The main task of a DI infrastructure is to move digital film images between the various pieces of equipment in a DI facility. Because high-resolution image files predominate, film sequences demand extremely high data rates, from 200 to 1200 MB for every second of footage (24 film frames). A DI facility is typically forced to use several types of data networking technology, applied to different areas, to achieve an efficient workflow and avoid bottlenecks. To maintain this performance level, applications and storage systems, in addition to sophisticated networking technology, must continuously sustain data at the required rate while absorbing the demands that other users place on the network. Choosing the correct infrastructure hardware and software components, and using networking technology to best advantage, is therefore imperative.
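A rough sizing exercise shows how quickly these rates add up. The sketch below uses the approximate per-frame sizes quoted earlier; the mix of concurrent suites at the end is a hypothetical facility load chosen for illustration, not a recommendation.

```python
# Sketch: sustained throughput for real-time playback of uncompressed image
# sequences, based on the approximate per-frame sizes quoted above.

FPS = 24
FRAME_MB = {"HD": 8, "2K": 12, "4K": 48}  # approximate MB per frame

def stream_rate(fmt: str) -> int:
    """Data rate in MB/s for one uncompressed real-time stream at 24 fps."""
    return FRAME_MB[fmt] * FPS

for fmt in FRAME_MB:
    print(f"{fmt}: {stream_rate(fmt)} MB/s per real-time stream")

# Hypothetical aggregate demand: three 2K suites and one 4K suite reading
# from the same shared storage at the same time.
total = 3 * stream_rate("2K") + stream_rate("4K")
print(f"Example facility load: {total} MB/s sustained")
```

A single HD stream works out to roughly 192 MB/s and a single 4K stream to roughly 1152 MB/s, which is the 200 to 1200 MB per second range cited above; the hypothetical four-suite load already exceeds 2 GB/s of sustained throughput.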

Storage Area Networks (SAN) with dedicated Fibre Channel networking are the primary method for providing high-performance shared storage in DI environments. SANs give applications direct access to files and faster access to large files. A shared file system is a critical component of a DI SAN infrastructure. Shared file systems are cross-platform software packages that allow clients and applications on different operating systems (e.g., Mac® OS, Windows®, UNIX, etc.) to access and share the same storage.

Shared file systems also provide a single, centralized point of control for managing DI files and databases, which can help lower total costs by simplifying administration. Shared file systems typically allow administrators to manage volumes, content replication, and point-in-time copies from the network. This capability provides a single point of control and management across multiple storage subsystems.

Shared file systems can accommodate both SAN and Gigabit Ethernet-based Network Attached Storage (NAS) clients side-by-side to offer a wide scope for sharing and transferring content. Although NAS does not perform as well as SAN, it is easier to scale and manage, and is often used for lower resolution projects.

Shared file systems require metadata servers to support the real-time demands of media applications. In large post-production facilities with many concurrent users, each application can generate thousands of requests for video and audio files. In DI applications, requests can number as many as 24 file requests per second per user. Metadata servers and the networks that support shared file systems must be able to sustain these access demands. Out-of-band metadata networks offer a significant advantage over in-band servers that share the same network link as the media content, because metadata and content are not competing for the same bandwidth.
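A back-of-the-envelope calculation illustrates the scale of the metadata load. In the sketch below, the 24 requests per second follow from frame-per-file playback at 24 fps; the operations-per-request count and the user counts are assumptions chosen for illustration, not measurements of any particular file system.

```python
# Sketch: metadata traffic generated by frame-per-file playback (e.g., DPX
# sequences) at 24 fps on a shared file system.

FPS = 24                   # one file request per frame during real-time playback
OPS_PER_FILE_REQUEST = 3   # assumed lookup + open + close round-trips per file

def metadata_ops_per_second(users: int) -> int:
    """Approximate metadata operations per second for N concurrent users."""
    return users * FPS * OPS_PER_FILE_REQUEST

for users in (1, 10, 50):
    print(f"{users:>2} concurrent users -> ~{metadata_ops_per_second(users)} metadata ops/s")

# With an out-of-band design, this traffic rides a separate network link,
# so it never competes with frame data for SAN bandwidth.
```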

In a hardware-based RAID, as the number of concurrent users increases, the stripe group must be enlarged to meet the total bandwidth demand without dropping frames. High-resolution files require significant additional bandwidth for each added user, forcing RAID expansion. As stripe groups grow, it becomes increasingly difficult to maintain data synchronization, calculate parity, drive the ports, and preserve data integrity.

When concurrent high-resolution content users must rely on large file-based RAIDs and large network switches, performance is difficult to maintain and infrastructure problems arise. Spindle contention becomes an issue when multiple users request the same content within a stripe group: available bandwidth is reduced, variable latencies are introduced, and the file system cannot deliver frame content accurately. If a RAID storage system becomes more than 50% full, content fragments over time, storage performance drops, and users lose bandwidth. These infrastructure issues must be resolved before users can take full advantage of shared file systems in a high-resolution digital environment.
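The sketch below illustrates why stripe groups must grow with the user count. The per-drive streaming rate and the 50%-full performance derating are rule-of-thumb assumptions for illustration only; real figures depend on the drives, the RAID controller, and how fragmented the array has become.

```python
import math

# Sketch: spindles needed in a stripe group to feed concurrent 4K streams.
PER_DRIVE_MB_S = 80        # assumed sustained streaming rate per spindle
FRAGMENTED_FACTOR = 0.5    # assumed throughput loss once the array passes ~50% full
STREAM_MB_S = {"2K": 288, "4K": 1152}  # real-time rates from the figures above

def drives_needed(fmt: str, users: int, fragmented: bool = False) -> int:
    """Minimum spindle count to sustain N real-time streams of the given format."""
    per_drive = PER_DRIVE_MB_S * (FRAGMENTED_FACTOR if fragmented else 1.0)
    return math.ceil(users * STREAM_MB_S[fmt] / per_drive)

for users in (1, 2, 4):
    clean = drives_needed("4K", users)
    worn = drives_needed("4K", users, fragmented=True)
    print(f"{users} x 4K streams: {clean} drives ({worn} once the array is fragmented)")
```

Under these assumptions, each additional 4K user adds roughly fifteen spindles of demand, and a fragmented array needs about twice as many to hold the same rate, which is why stripe-group growth and fill level become infrastructure problems rather than tuning details.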
