- This article focuses on digital image capture for movies. For digital projection and distribution, see digital cinema.
Digital cinematography is the process of capturing motion pictures as digital video images, as opposed to the historical use of motion picture film. Digital capture may occur on video tape, hard disks, flash memory, or other media which can record digital data, through the use of a digital movie camera or other digital video camera. As digital technology has improved, this practice has become increasingly common. Many mainstream Hollywood movies are now shot partly or fully digitally.
Many vendors have brought products to market, including traditional film camera vendors like Arri and Panavision, as well as new vendors like RED, Silicon Imaging, Vision Research and companies which have traditionally focused on consumer and broadcast video equipment, like Sony, GoPro, and Panasonic.
In the late 1980s, Sony began marketing the concept of “electronic cinematography,” utilizing its analog Sony HDVS professional video cameras. The effort met with very little success, but it led to one of the earliest digitally shot feature films, Julia and Julia (1987). In 1998, with the introduction of HDCAM recorders and 1920 × 1080 pixel digital professional video cameras based on CCD technology, the idea, now re-branded as “digital cinematography,” began to gain traction in the market. Shot and released in 1998, The Last Broadcast is believed by some to be the first feature-length video shot and edited entirely on consumer-level digital equipment.
In May 1999 George Lucas challenged the supremacy of film as the movie-making medium for the first time by including footage shot with high-definition digital cameras in Star Wars Episode I: The Phantom Menace. The digital footage blended seamlessly with the footage shot on film, and he announced later that year that he would shoot its sequels entirely on digital video. Also in 1999, digital projectors were installed in four theaters for the showing of The Phantom Menace. In May 2001 Once Upon a Time in Mexico became the first well-known movie to be shot in 24-frame-per-second high-definition digital video, a format partially developed by George Lucas, using a Sony HDW-F900 camera, after Robert Rodriguez was introduced to the camera at Lucas’ ranch while editing the sound for Spy Kids. In May 2002 Star Wars Episode II: Attack of the Clones was released, having also been shot with a Sony HDW-F900 camera. Two lesser-known movies, Vidocq (2001) and Russian Ark (2002), had previously been shot with the same camera, the latter notably consisting of a single long take.
Today, cameras from companies like Sony, Panasonic, JVC and Canon offer a variety of choices for shooting high-definition video. At the high-end of the market, there has been an emergence of cameras aimed specifically at the digital cinema market. These cameras from Sony, Vision Research, Arri, Silicon Imaging, Panavision, Grass Valley and Red offer resolution and dynamic range that exceeds that of traditional video cameras, which are designed for the limited needs of broadcast television.
In 2009, Slumdog Millionaire became the first movie shot mainly in digital to be awarded the Academy Award for Best Cinematography. That same year Avatar, the highest-grossing movie in the history of cinema, was not only shot on digital cameras but also earned the majority of its box-office revenue from digital rather than film projection.
In late 2013, Paramount became the first major studio to distribute movies to theaters in digital format, eliminating 35mm film entirely. Anchorman 2 was the last Paramount production to include a 35mm film version, while The Wolf of Wall Street was the first major movie distributed entirely digitally.
Digital cinematography captures motion pictures digitally in a process analogous to digital photography. While there is no clear technical distinction that separates the images captured in digital cinematography from video, the term “digital cinematography” is usually applied only in cases where digital acquisition is substituted for film acquisition, such as when shooting a feature film. The term is seldom applied when digital acquisition is substituted for analog video acquisition, as with live broadcast television programs.
Professional cameras include the Sony CineAlta(F) Series, Blackmagic Cinema Camera, RED ONE, Arriflex D-20, D-21 and Alexa, Panavision's Genesis, Silicon Imaging SI-2K, Thomson Viper, Vision Research Phantom, IMAX 3D camera based on two Vision Research Phantom cores, Weisscam HS-1 and HS-2, GS Vitec noX, and the Fusion Camera System. Independent filmmakers have also pressed low-cost consumer and prosumer cameras into service for digital filmmaking.
Single chip cameras designed specifically for the digital cinematography market often use a single sensor (much like digital photo cameras), with dimensions similar in size to a 16 or 35 mm film frame or even (as with the Vision 65) a 65 mm film frame. An image can be projected onto a single large sensor exactly the same way it can be projected onto a film frame, so cameras with this design can be made with PL, PV and similar mounts, in order to use the wide range of existing high-end cinematography lenses available. Their large sensors also let these cameras achieve the same shallow depth of field as 35 or 65 mm motion picture film cameras, which many cinematographers consider an essential visual tool.
Unlike other video formats, which are specified in terms of vertical resolution (for example, 1080p, which is 1920×1080 pixels), digital cinema formats are usually specified in terms of horizontal resolution. As a shorthand, these resolutions are often given in “nK” notation, where n is the multiplier of 1024, such that the horizontal resolution of a corresponding full-aperture, digitized film frame is exactly 1024n pixels. Here the “K” has a customary meaning corresponding to the binary prefix “kibi” (Ki).
For instance, a 2K image is 2048 pixels wide, and a 4K image is 4096 pixels wide. Vertical resolutions vary with aspect ratios, though: a 2K image with an HDTV (16:9) aspect ratio is 2048×1152 pixels, a 2K image with an SDTV or Academy (4:3) ratio is 2048×1536 pixels, one with a Panavision (2.39:1) ratio is 2048×856 pixels, and so on. Because the “nK” notation does not pin down a single horizontal resolution for every format, a 2K image cropped to exclude, for example, the typical 35mm film soundtrack space is only 1828 pixels wide, with vertical resolutions rescaling accordingly. This has led to a plethora of motion-picture-related video resolutions, which is confusing and often redundant given the few projection standards in use today.
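The arithmetic above can be sketched in a few lines: “nK” fixes the width at 1024 × n pixels, and the aspect ratio determines the height (heights here are truncated to whole pixels, matching the figures in the text).

```python
def frame_size(n_k, ratio_w, ratio_h):
    """(width, height) for an nK image at a ratio_w:ratio_h aspect ratio."""
    width = 1024 * n_k                   # "K" means 1024 (the binary prefix kibi)
    height = width * ratio_h // ratio_w  # integer math, truncated to whole pixels
    return width, height

for name, rw, rh in [("HDTV 16:9", 16, 9), ("Academy 4:3", 4, 3),
                     ("Panavision 2.39:1", 239, 100)]:
    w, h = frame_size(2, rw, rh)
    print(f"2K {name}: {w}x{h}")
```

Running this reproduces the 2048×1152, 2048×1536 and 2048×856 figures given above.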
All formats designed for digital cinematography are progressive scan, and capture usually occurs at the same 24 frame-per-second rate established as the standard for 35mm film. Some films, such as The Hobbit: An Unexpected Journey, have been shot at a high frame rate of 48 fps.
The DCI standard for cinema usually relies on a 1.89:1 aspect ratio, thus defining the maximum container size for 4K as 4096×2160 pixels and for 2K as 2048×1080 pixels. When distributed in the form of a Digital Cinema Package (DCP), content is letterboxed or pillarboxed as appropriate to fit within one of these container formats.
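The letterbox/pillarbox logic above can be sketched as follows. This is a simplified illustration, not part of the DCI specification: content wider than the 1.89:1 container is scaled to full width, narrower content to full height (real DCPs typically round active-picture dimensions slightly differently, e.g. to even values).

```python
def fit_in_container(content_ratio, container=(2048, 1080)):
    """Active picture size for content of a given aspect ratio in a 2K DCP."""
    cw, ch = container
    if content_ratio >= cw / ch:          # wider than 1.89:1 -> letterbox
        return cw, round(cw / content_ratio)
    return round(ch * content_ratio), ch  # narrower -> pillarbox

print(fit_in_container(2.39))  # "Scope" content: full width, bars top and bottom
print(fit_in_container(1.85))  # "Flat" content: full height, bars at the sides
```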
In recent years, 2K has been the most common format for digitally acquired major motion pictures; however, as new camera systems gain acceptance, 4K is becoming more prominent (as the 1080p format was before it). During 2009 at least two major Hollywood films, Knowing and District 9, were shot in 4K on the RED ONE camera, followed by The Social Network in 2010. The Arri Alexa captures a 2.8K image.
Broadly, two workflow paradigms are used for data acquisition and storage in digital cinematography.
With video-tape-based workflow, video is recorded to tape on set. This video is then ingested into a computer running non-linear editing software, using a deck. Upon ingestion, a digital video stream from tape is converted to computer files. These files can be edited directly or converted to an intermediate format for editing. Then video is output in its final format, possibly to a film recorder for theatrical exhibition, or back to video tape for broadcast use. Original video tapes are kept as an archival medium. The files generated by the non-linear editing application contain the information necessary to retrieve footage from the proper tapes, should the footage stored on the computer’s hard disk be lost.
Digital cinematography is gradually shifting towards “tapeless” or “file-based” workflows. This trend has accelerated with the increased capacity and reduced cost of non-linear storage solutions such as hard disk drives, optical discs, and solid-state memory. With tapeless workflows digital video is recorded as digital files onto random-access media like optical discs, hard disk drives or flash memory-based digital “magazines”. These files can be easily copied to another storage device, typically to a large RAID (array of computer disks) connected to an editing system. Once the data has been copied from the on-set media to the storage array, the media are erased and returned to the set for more shooting.
Such RAID arrays, whether “managed” (for example, SANs and NASes) or “unmanaged” (for example, JBODs on a single computer workstation), are necessary due to the enormous throughput required for real-time (320 MB/s for 2K at 24 fps) or near-real-time playback in post-production, compared to the throughput available from a single, albeit fast, hard disk drive. Such requirements are often termed “online” storage. Post-production that does not require real-time playback performance (typically lettering, subtitling, versioning and similar visual effects) can be migrated to slightly slower RAID stores.
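The ~320 MB/s figure quoted above is consistent with uncompressed 2K frames carrying three color channels at 2 bytes (16 bits) per channel, an assumed storage layout used here purely as a back-of-envelope check:

```python
def playback_rate_mb_s(width, height, channels, bytes_per_channel, fps):
    """Sustained throughput in MB/s needed for real-time uncompressed playback."""
    return width * height * channels * bytes_per_channel * fps / 1e6

# 2K DCI frame, 3 channels at 2 bytes each, 24 frames per second
rate = playback_rate_mb_s(2048, 1080, 3, 2, 24)
print(f"2K @ 24 fps: about {rate:.0f} MB/s")  # close to the 320 MB/s cited above
```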
Short-term archiving, when done at all, is accomplished by moving the digital files into “slower” RAID arrays (still of either managed or unmanaged type, but with lower performance), where playback capability is poor to nonexistent (except via proxy images), but minimal editing and metadata harvesting remain feasible. Such intermediate requirements easily fall into the “mid-line” storage category.
Most digital cinematography systems reduce the data rate by subsampling color information. Because the human visual system is much more sensitive to luminance than to color, lower resolution color information can be overlaid with higher resolution luma (brightness) information to create an image that looks very similar to one in which both color and luma information are sampled at full resolution. This scheme may cause pixelation or color bleeding under some circumstances. High quality digital cinematography systems are capable of recording full resolution color data (4:4:4) or raw sensor data.
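The savings from chroma subsampling can be quantified with the standard J:a:b notation, which describes sampling over a J-pixel-wide, two-row reference block:

```python
def samples_per_block(j, a, b):
    """Samples in a J-wide, 2-row reference block for J:a:b subsampling."""
    luma = j * 2            # one luma (Y) sample per pixel, full resolution
    chroma = (a + b) * 2    # two chroma channels (Cb and Cr), subsampled
    return luma + chroma

full = samples_per_block(4, 4, 4)
for scheme in [(4, 4, 4), (4, 2, 2), (4, 2, 0)]:
    s = samples_per_block(*scheme)
    print(f"{scheme[0]}:{scheme[1]}:{scheme[2]} -> {s / full:.0%} of 4:4:4 data")
```

This shows 4:2:2 carrying two thirds, and 4:2:0 only half, of the data of full-resolution 4:4:4 color.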
Intra- vs. Inter-frame compression
Most compression systems used for acquisition in the digital cinematography world compress footage one frame at a time, as if a video stream were a series of still images. This is called intra-frame compression. Inter-frame compression systems can further compress data by examining and eliminating redundancy between frames. This leads to higher compression ratios, but displaying a single frame will usually require the playback system to decompress a number of frames from before and after it. In normal playback this is not a problem, as each successive frame is played in order, so the preceding frames have already been decompressed. In editing, however, it is common to jump around to specific frames and to play footage backwards or at different speeds. Because of the need to decompress extra frames in these situations, inter-frame compression can cause performance problems for editing systems. Inter-frame compression is also disadvantageous because the loss of a single frame (say, due to a flaw writing data to a tape) will typically ruin all the frames until the next keyframe occurs. In the case of the HDV format, for instance, this may result in as many as 6 frames being lost with 720p recording, or 15 with 1080i. An inter-frame compressed video stream consists of groups of pictures (GOPs), each of which has only one full frame, and a handful of other frames referring to this frame. If the full frame, called the I-frame, is lost due to transmission or media error, none of the P-frames or B-frames (the referencing images) can be displayed. In this case, the whole GOP is lost.
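The GOP failure mode described above can be sketched with a deliberately simplified dependency model: P-frames reference the previous I- or P-frame, and B-frames reference the nearest I/P frames on either side. The GOP layout used here is illustrative, not taken from any particular codec.

```python
def decodable_frames(gop, lost):
    """Indices of frames still displayable after some frames are lost.
    Simplified model: P-frames reference the previous I/P frame, and
    B-frames reference the nearest I/P frames on either side."""
    # Resolve the I/P "anchor" chain first: a P-frame is usable only if
    # every anchor before it in the GOP survived.
    anchors_ok, chain_ok = {}, False
    for i, t in enumerate(gop):
        if t == "I":
            chain_ok = i not in lost
            anchors_ok[i] = chain_ok
        elif t == "P":
            chain_ok = chain_ok and i not in lost
            anchors_ok[i] = chain_ok
    anchor_ids = sorted(anchors_ok)
    ok = []
    for i, t in enumerate(gop):
        if i in lost:
            continue
        if t in "IP":
            if anchors_ok[i]:
                ok.append(i)
        else:  # B-frame: needs the neighbouring anchors on both sides
            prev = [a for a in anchor_ids if a < i]
            nxt = [a for a in anchor_ids if a > i]
            if prev and anchors_ok[prev[-1]] and all(anchors_ok[a] for a in nxt[:1]):
                ok.append(i)
    return ok

gop = ["I", "B", "B", "P", "B", "B", "P"]
print(decodable_frames(gop, lost={0}))    # I-frame lost: whole GOP lost -> []
print(decodable_frames(gop, lost=set()))  # nothing lost: all 7 frames decodable
```

Losing the single I-frame renders every other frame in the group undecodable, whereas intra-frame compression would lose only the damaged frame itself.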
Digital theatrical distribution
For theaters with digital projectors, digital films may be distributed digitally, either shipped to theaters on hard drives or sent via the Internet or satellite networks. Digital Cinema Initiatives, LLC, a joint venture of Disney, Fox, MGM, Paramount, Sony Pictures Entertainment, Universal and Warner Bros. Studios, has established standards for digital cinema projection. In July 2005, they released the first version of the Digital Cinema System Specification, which encompasses 2K and 4K theatrical projection. They also offer compliance testing for exhibitors and equipment suppliers.
Theater owners initially balked at installing digital projection systems because of the high cost and concern over increased technical complexity. However, new funding models, in which distributors pay a “digital print” fee to theater owners, have helped to alleviate these concerns. Digital projection also offers increased flexibility in showing trailers and pre-show advertisements, and allows theater owners to more easily move films between screens or change how many screens a film is playing on. In addition, the higher quality of digital projection provides a better experience to help attract consumers who can now access high-definition content at home. These factors have made digital projection an increasingly attractive prospect for theater owners, and the pace of adoption has increased.
Since not all theaters currently have digital projection systems, even if a movie is shot and post-produced digitally, it must be transferred to film if a large theatrical release is planned. Typically, a film recorder will be used to print digital image data to film, to create a 35 mm internegative. After that the duplication process is identical to that of a traditional negative from a film camera.
Comparison with film cinematography
Dynamic range and latitude
Digital sensors lack the extended dynamic range of film. In particular, they tend to ‘blow out’ highlights, losing detail in very bright parts of the image. If highlight detail is lost, it is nearly impossible to recapture in post-production.
In general, film can be underexposed and overexposed, retaining detail and information in the camera negative.
Resolution
Unlike a digital sensor, a film frame does not have a regular grid of discrete pixels. Instead, it has an irregular pattern of differently sized grains. Scientific studies have concluded that film holds an extremely high amount of resolution and information in the original negative, which digital cameras will be hard pressed to match, especially in formats such as IMAX, which uses 70mm film.
Determining resolution in digital acquisition seems straightforward, but it is significantly complicated by the way digital camera sensors work in the real world. This is particularly true of high-end digital cinematography cameras that use a single large Bayer-pattern CMOS sensor. A Bayer-pattern sensor does not sample full RGB data at every point; instead, each pixel is biased toward red, green or blue, and a full color image is assembled from this checkerboard of color by processing the image through a demosaicking algorithm. Generally, with a Bayer-pattern sensor, actual resolution will fall somewhere between the “native” value and half that figure, with different demosaicking algorithms producing different results. Additionally, most digital cameras (both Bayer and three-chip designs) employ optical low-pass filters to avoid aliasing, and such filters reduce resolution.
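The checkerboard layout described above can be made concrete. This sketch assumes the common RGGB arrangement (one of several Bayer variants) and simply reports which channel each photosite samples:

```python
def bayer_channel(row, col):
    """Channel sampled at (row, col) in an RGGB Bayer mosaic."""
    if row % 2 == 0:
        return "R" if col % 2 == 0 else "G"
    return "G" if col % 2 == 0 else "B"

for r in range(4):
    print(" ".join(bayer_channel(r, c) for c in range(4)))
# Green is sampled at half the photosites, red and blue at a quarter each,
# which is one reason effective resolution falls below the sensor's pixel count.
```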
Grain and noise
Film has a characteristic grain structure, and different film stocks have different grain. Digitally acquired footage lacks this grain structure; instead, it exhibits electronic noise.
Digital Intermediate Workflow and Archiving
In order to utilize digital intermediate workflow with film, the camera negative must first be processed and then scanned to a digital format. Some filmmakers have years of experience achieving their artistic vision using the techniques available in a traditional photochemical workflow, and prefer that finishing/editing process.
Digitally shot movies can be printed, transferred or archived on film. Large scale digital productions are often archived on film, as it provides a safer medium for storage, benefiting insurance and storage costs. As long as the negative does not completely degrade, it will always be possible to recover the images from it in the future, regardless of changes in technology, since all that will be involved is simple photographic reproduction.
In contrast, even if digital data is stored on a medium that will preserve its integrity, highly specialized digital equipment will always be required to reproduce it. Changes in technology may thus render the format unreadable or expensive to recover over time. For this reason, film studios distributing digitally-originated films often make film-based separation masters of them for archival purposes.
Film proponents have argued that digital cameras lack the reliability of film, particularly when filming sequences at high speed or in chaotic environments, due to technical glitches in digital cameras. Cinematographer Wally Pfister noted that for his shoot on the film Inception, “Out of six times that we shot on the digital format, we only had one useable piece and it didn’t end up in the film. Out of the six times we shot with the Photo-Sonics camera and 35mm running through it, every single shot was in the movie.” Michael Bay stated that when filming Transformers: Dark of the Moon, 35mm cameras had to be used when filming in slow-motion and in sequences where the digital cameras were subject to strobing or electrical damage from dust.
Criticism and concerns
High-profile film directors such as Christopher Nolan, Paul Thomas Anderson and Quentin Tarantino have all publicly criticized digital cinema and advocated the use of film and film prints. Most famously, Tarantino has suggested he may retire because he will no longer be able to have his films projected in 35mm in most American cinemas. Tarantino considers digital cinema to be simply “television in public.” Christopher Nolan has speculated that the film industry's adoption of digital formats has been driven purely by economic factors, as opposed to digital being a superior medium to film: “I think, truthfully, it boils down to the economic interest of manufacturers and [a production] industry that makes more money through change rather than through maintaining the status quo.”
Another concern with digital image capture is how to archive all the digital material. Archiving digital material is turning out to be extremely costly, and it creates issues in terms of long-term preservation. In a 2007 study, the Academy of Motion Picture Arts and Sciences found that the cost of storing 4K digital masters is “enormously higher – 1100% higher – than the cost of storing film masters.” Furthermore, digital archiving faces challenges due to the insufficient longevity of today’s digital storage: no current medium, be it magnetic hard drive or digital tape, can reliably store a film for a hundred years, something that properly stored and handled film can do. Although this also used to be the case with optical discs, in 2012 Millenniata, Inc., a digital storage company based in Utah, released M-DISC, an optical storage solution designed to last up to 1,000 years, thus offering one possible avenue toward viable long-term digital storage.