A History Of Color Imaging
Back in the 1800s, black and white imaging was just being perfected, but the desire to fix images on some medium in full color was at the forefront of every inventor’s mind.
Initially, dye baths were used to tint paper photographs purple, orange or brown. These so-called “sepia” images were quite the rage for a while.
Eventually artists were employed to hand-color black and white photos using watercolor paints.
It was only a few years after the advent of motion pictures, around 1908, that the first color movie processes were used commercially. Scientists widely knew that if you shot three black and white photos through the three primary color filters and then projected those images back through the same filters, a full color projected image was possible. This full color image was quite accurate, but it required a camera that shot three simultaneous images. The problem is that if you put three strips of film behind three lenses, a small spatial difference occurs between the views. This difference is known as parallax, and it gave birth to the binocular concept of stereo 3-D photography.
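The additive principle described above can be sketched in a few lines of Python: three monochrome records, each shot through one primary filter, are stacked back into a full color image. The tiny 2x2 “frames” below are hypothetical sample data, not from any real film.

```python
# A minimal sketch of additive three-color synthesis: three black and white
# records, each exposed through one primary filter, are recombined per pixel.
# The 2x2 "frames" are made-up example intensities (0..255).

red_record   = [[200, 10], [128, 0]]    # intensities seen through the red filter
green_record = [[50, 220], [128, 0]]    # ...through the green filter
blue_record  = [[30, 10],  [128, 255]]  # ...through the blue filter

def recombine(r, g, b):
    """Stack three monochrome records into one full color RGB image."""
    return [[(r[y][x], g[y][x], b[y][x]) for x in range(len(r[0]))]
            for y in range(len(r))]

color_image = recombine(red_record, green_record, blue_record)
print(color_image[0][0])  # (200, 50, 30) -> a reddish-brown pixel
```

Projecting the three records through their matching filters onto one screen performs the same per-pixel addition optically.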
To photograph the same image required that three strips of film be exposed to the exact same image generated from a single lens and split into three identical images by a mirror or prism. This meant shooting three rolls of film instead of one and projecting all three of them back. It also meant the projectionist had to start all three rolls at the exact same spot: one frame of error and the whole image looks a mess.
Back in 1908 the shooting speeds for motion picture film were erratic because many cameras were hand-cranked, so speed was up to the adrenaline of the cameraman.
Some companies put a governor on the cameras, and eventually an electric motor was implemented, with a running speed for silent movies of 16 frames per second.
One camera and film company experimented with color movies by rotating a filter in front of the camera and projector lens and taking more pictures (frames) per second through alternating red and cyan filters. The result was a very pastel-looking color movie. The company abandoned the process after a few years.
Pathe studios tried out the hand-coloring method using cut stencils to mass-produce the final prints. A black and white projection print was passed through a machine in which color inks were applied through the stencil openings. This created a very true, lifelike color image, but it was expensive and hard to produce on a regular basis.
The Technicolor corporation revived the two color process in the 1920s. They made a camera with two strips of film that used a prism to split the image, exposing one strip through a cyan filter and the other through a red filter. They then used a process akin to the offset ink printing done by newspapers and magazines: an oil and water process in which oil-based color inks stick only to a specially prepared surface on clear film stock. A chemical mixes with that surface in varied amounts; the less chemical, the more ink sticks. That surface (often called a platen in offset printing work) then transfers varied amounts of red and blue colored ink to clear film. The result is a single strip of film carrying a two color image. Again, this was somewhat pastel and lacked a variety of colors. Using only two colors renders far less than the eye can see: we rate the eye as seeing nearly 17 million total shades of color, while a two color process delivers only about 65,000.
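Those shade counts follow from simple exponent math, assuming roughly 256 distinguishable levels per color record (an assumption consistent with the 65,000 and 17 million figures above):

```python
# Shade counts for two-color vs. three-color processes, assuming about
# 256 distinguishable intensity levels per color record.

levels_per_channel = 256

two_color_shades = levels_per_channel ** 2    # e.g. red + cyan records
three_color_shades = levels_per_channel ** 3  # red + green + blue records

print(two_color_shades)    # 65536 (~65,000 shades)
print(three_color_shades)  # 16777216 (~17 million shades)
```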
By 1930 Technicolor had replaced this two strip process with a three strip process using the primary colors red, green and blue. The results were astonishing, with nearly 17 million hues possible in this combination. The cost to movie studios, however, was three times higher because of the special cameras and lots of extra film. On the up side, a black and white negative never fades with age, so if you take the original Technicolor materials for a movie like Gone With The Wind or The Wizard of Oz, you can strike a full color print that looks as sharp, clear and colorful as it did in the 1930s when it was first made.
This same three color additive process was used when color television was created. Originally the CBS TV network in the United States worked with a three color filter process similar to the two color process tried by the movies in 1908. A black and white video camera had a spinning filter of red, blue and green. A special synchronization signal was sent so that the color wheel on your home TV set started with the correct color and stayed in perfect sync throughout the entire broadcast. The process was quite spectacular and gave true color with virtually no aberrations or errors.
Another process, developed by RCA and the NTSC (National Television System Committee), was aimed at making a color signal that was compatible with already existing black and white TV sets. The RCA/NTSC concept also worked with the primary colors red, blue and green, but used a special color camera that split a single image into three images, which were sent to three black and white sensors, each behind one of the primary color filters.
Simply sending out this signal would have created an almost perfect color image on a color TV set with three colors of phosphors, but it would have been incompatible with black and white sets. Instead, they created two weak color-difference signals from the full color image, plus a black and white signal.
The black and white signal is called luminance, and it is simply an intensity running from nothing (black) to full strength (white), with shades of gray in between. That’s how a black and white image is made: by varying light.
This luminance signal comes from all three color sensors in the taking camera. It simply specifies an intensity of the light signal, and it is sampled continuously.
To get the color, they subtracted red from the luminance and blue from the luminance, then cut each result in half. They converted these into two very weak signals representing a greenish tinge and a purple tinge, cut the intensity down even further, and composited the two colors into the same broadcast wave as the luminance, because they are still part of the luminance signal. This pair is called Y-R (or Yr) and Y-B (or Yb).
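A rough sketch of this encoding in Python: form a luminance value from R, G and B, then two halved difference signals. The luminance weights are the standard NTSC ones; the 0.5 factor stands in for the “cut in half” attenuation described above (the exact broadcast scaling and modulation differed).

```python
# Sketch of luminance plus color-difference encoding for one pixel.
# Channels are in the range 0..1. The 0.5 attenuation is an illustrative
# stand-in for the real broadcast scaling.

def encode(r, g, b):
    """Return (Y, Y-R scaled, Y-B scaled) for one pixel."""
    y = 0.299 * r + 0.587 * g + 0.114 * b  # black and white (luminance) signal
    yr = 0.5 * (y - r)                     # weak red-difference signal
    yb = 0.5 * (y - b)                     # weak blue-difference signal
    return y, yr, yb

y, yr, yb = encode(1.0, 0.0, 0.0)  # pure red
print(y, yr, yb)
```

Note that a gray pixel (r = g = b) produces zero in both difference signals, which is exactly why a black and white scene leaves the color sub-carriers quiet.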
To let the television set extract this information, they added two color frequencies, or sub-carriers, near the end of the luminance wave signal. When the color set sees the first frequency it extracts the greenish wave; the next frequency extracts an even smaller purplish wave.
In a black and white TV set there is no circuit to detect these color sub-carriers, so they go unnoticed. The embedded greenish and purplish waves only corrupt the end of the luminance signal a little by boosting some of the light intensity, but since this comes as the luminance wave is dying down, most of the work of generating the picture intensity is already done.
They probably chose the level of the green and purple waves at the point where the eye can’t see the difference in a black and white image. Add more of these waves and the gray tones might be altered a bit; reduce them too much and a building might block them or cause color ghosts. So you want enough wave to generate a solid signal without interfering with the black and white tones. Now, remember: the broadcast image is not in color, but a monochrome representation of the extracted greenish and purple elements, plus one black and white element.
Now, the color TV set uses the luminance signal as part of the process of generating the final image. The set also knows how much to amplify the green and purple waves to approximately restore what was used to create the signal back at the studio. Then a formula, or algorithm, generates the missing color from the two known color strengths plus the black and white strength. This process is called YUV or YIQ.
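The set-side reconstruction can be sketched the same way: given the luminance and the two re-amplified difference signals, the third color falls out by algebra from the luminance formula. The weights are the standard NTSC luminance weights, and the factor of 2 undoes the illustrative halving used in the encoding description above.

```python
# Sketch of recovering (r, g, b) from Y plus two halved difference signals.

def decode(y, yr, yb):
    """Invert the luminance / color-difference encoding for one pixel."""
    r = y - 2.0 * yr                         # invert yr = 0.5 * (y - r)
    b = y - 2.0 * yb                         # invert yb = 0.5 * (y - b)
    g = (y - 0.299 * r - 0.114 * b) / 0.587  # solve Y = .299R + .587G + .114B for G
    return r, g, b

# Example signal for a pixel that was originally (r, g, b) = (0.2, 0.6, 0.4):
r, g, b = decode(0.4576, 0.1288, 0.0288)
print(round(r, 3), round(g, 3), round(b, 3))  # 0.2 0.6 0.4
```

The green channel never travels at all; it is always reconstructed mathematically, which is why a slightly mis-amplified signal shifts every color and the Tint control exists to nudge it back.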
Meanwhile, in the realm of color still photography, Kodak had invented a process by which black and white silver grains were made to bind with color dyes. The film itself was black and white, but with these special silvers. In the processing lab, color dyes were added which cling to particular silvers; then the silvers were washed away, leaving only the color dyes. This was known as the Kodachrome process, and it was also made available for home color movies in 8mm and 16mm.
Around the time color television was starting, Kodak also invented the first single strip color movie film process, which used dye-coupled silvers and had an orange base that made printing easy: the base acted as an 85A filter, converting tungsten light to daylight automatically. This process is still used today in both color movies and home still photos, which is why you see amber colored negatives; they are made this way to filter light. It is known as Kodacolor or Eastmancolor in most circles. Because the colors reproduce very faithfully and only one strip of film is used, it eventually replaced the Technicolor three strip process for most commercially made color motion pictures. Some films, however, were still made in the three strip process, or preserved from Eastmancolor prints into the three strip process, because as the years go by Eastmancolor film fades and turns pink or blue as the dyes react to light and air.
In this process the dyes are already in the film on the silver grains, and during processing the amount of silver (and dye) is developed, with the excess washed away. This renders a negative image, from which a positive print is made by sandwiching the negative with unexposed film and shining light through to make a new exposure on the virgin film. That film is then processed, and the result is a positive image with a deep blue base (the opposite, or positive, of the amber shade).
Kodak also made a film that was capable of being “reversed” halfway through processing, using the same dye-coupled silvers. In this type of film, used for slides and home movies under the Ektachrome name, the negative image is developed and then washed away, leaving unexposed silver that represents the positive image. This is exposed to light, then processed a second time, giving a reversed, or direct reading, image.
Both Kodacolor/Eastmancolor and Ektachrome are still in use as of this writing, with slide and movie film Ektachrome now almost extinct in this world of color prints, video tape and digital technologies.
In the 1980s, with developments by Sony and JVC in helical scan video technology, home video taping of television and moving images became possible. The first cameras and recorders were separate units, with the cameras splitting the image using red and blue filters, plus a clear area, to cast the image in three distinct locations on one video sensing tube. Green was created by math: subtracting red and blue from the clear image (or pure luminance).
To make home video work in color, they cut the color frequency samples in half compared to professional standards, so only a quarter of the color signal (1/8th red and 1/8th blue) was recorded. They also had to reduce the luminance signal to fit the tape size and slow recording speed. This introduced a lot of color impurities, or artifacts, into the recorded image, but the picture quality was still far superior to 8mm home movies, and you got sound along with the images!
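The kind of artifact this subsampling causes is easy to demonstrate: keep every brightness sample but only one color sample in four, repeating each stored color value across its neighbors on playback. The numbers are hypothetical, and the 4:1 ratio simply mirrors the “1/4th the color signal” figure above.

```python
# Toy sketch of chroma subsampling: full-rate luminance, quarter-rate color.
# A color edge that falls inside a 4-pixel group gets smeared on playback.

luma   = [10, 12, 11, 13, 90, 92, 91, 93]            # full-rate brightness
chroma = [0.1, 0.1, 0.1, 0.8, 0.8, 0.8, 0.8, 0.8]    # true per-pixel color

recorded_chroma = chroma[::4]                         # keep 1 sample in 4
played_back = [recorded_chroma[i // 4] for i in range(len(chroma))]

print(recorded_chroma)  # [0.1, 0.8]
print(played_back)      # pixel 3 plays back as 0.1 though its true color is 0.8
```

That wrong pixel at the boundary is exactly the color bleeding you see along sharp edges on home video tape.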
In the 1990s, as silicon chips replaced tubes as camera sensors, and LCD screens were included for viewing the TV image on something larger than a little tube inside the camera, the sensing technologies changed. Makers started using mosaic filters, applying the lessons learned from generating TV images over forty years. Remember, the TV signal is black and white, with a small wave of greenish and purple; this, plus a lot of math, reconstitutes those two weak tint signals back into full color.
This technology borrows, in part, from the Eastman one strip color technology using dye couplers. Recall that Kodak put color sensitive silver grains on the film: one layer of grains reacts to red, another to blue and a third to green. Sometimes these layers overlap and sometimes they don’t. But when you hold a picture back a few feet, or look at a slide or movie projected on a wall 20 feet away, your eye blends those little grains that don’t quite line up and fuses them into a full color image.
Now, with Technicolor three strip and the three sensor color TV processes, you get an almost perfect overlap of red, blue and green on every square inch of the image. Not so with the Eastman film or the mosaic color filter processes, where only some grains may overlap. It’s like your color TV set, which has groups of three color dots in the red, blue and green spectrum: stand back a few feet and the dots fuse together and fool your eye.
[Figure: Top row: a full color original image (and the reconstructed final image) next to the green and purple tints that are extracted mathematically. Bottom row: an actual YIQ/YUV image, which is what is broadcast over the air for color television around the world. These three black and white representations of the light and color tints are converted back into the top left image on your TV screen after signal processing.]
The difference between the U.S. NTSC system and European PAL is how the color pixels are sampled: the U.S. does it evenly, while Europe alternates the phase on each line. Because of extra artifacts in the U.S. system, you may need to adjust the picture slightly with the Color and Tint controls on your TV set (European sets never had these until recently).
If you take the YIQ/YUV signal of monochrome light plus the signals representing green and purple tints, amplify those colors back to their original strengths, and add the missing color mathematically, you get a very close approximation of true color. If it’s off a little, you simply turn the Color and Tint knobs on your TV set (that’s why they are there: to make up for the confusion inside the set, which may not get all the colors just right).
[Figure: How 4 pixel sensors using the Bayer method see light intensities (the numbers inside the color squares); math then amplifies the red and blue levels and determines the actual color for the 4 pixel screen area, in this case a shade of purple.]
Some mosaic filters alternate r-g-b filters; some have clear areas as well. Some have more (or larger) green filters, half as many (or smaller) red filters, and fewer or tinier blue filters. This method approximates how your eye works, much the way color TV was made with math and a little color tint. It’s called the Bayer filter pattern (named after the Kodak engineer who invented it), and it exploits the fact that your eye is most sensitive to green: half the filters are green, with red and blue a quarter each. This, when viewed from a few feet back, really tricks your eye and brain into seeing full color. It’s much like the two color processes used in 1908 and later in the 1920s by Technicolor, except that math and electronic circuits add the extra color into all the pixels at viewing time. This kind of real time color enhancement wasn’t possible back in 1908. It wasn’t even possible in 1985, when it took massive memory and color dithering to get 65,000 shades onto a PC, Amiga or Atari ST.
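The standard Bayer layout is a repeating 2x2 tile with green on one diagonal. A quick sketch shows the filter ratios this produces across a sensor:

```python
# Build a Bayer mosaic from its repeating 2x2 tile and count the filters.

from collections import Counter

TILE = [["G", "R"],
        ["B", "G"]]  # the repeating Bayer unit: two greens per four sites

def bayer_mosaic(width, height):
    """Return the filter color in front of each photosite."""
    return [[TILE[y % 2][x % 2] for x in range(width)] for y in range(height)]

mosaic = bayer_mosaic(8, 8)
counts = Counter(c for row in mosaic for c in row)
print(counts)  # half the 64 sites are green, a quarter each red and blue
```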
JVC uses a green, yellow, cyan and clear filter matrix in their new HD camcorder designed for semi-professional use. The lighter cyan and yellow probably won’t make as noticeable a color artifact as the more prominent red and blue used by everyone else, but this technology is new and we have yet to see how well it is accepted. Nikon, among other still camera makers, uses a green, yellow, cyan and magenta filter array in their higher end Coolpix and digital SLR cameras.
Finally there is the new Foveon technology, which uses the strata, or layers, inside the sensing chip to extract the colors at different depths. This allows each sensor area (or pixel) on the chip to see red, green and blue, for total overlap. In theory this concept should make an image that rivals the old Technicolor three strip process.
This same mosaic color process is also used in still photography. The number of pixels in the sensor determines how much color sampling is done. In a camcorder this can be as little as 330,000 pixels, of which only about 82,500 see blue under the Bayer filtering concept (with about 82,500 seeing red and 165,000 seeing green). In an alternating r-g-b pattern, about 110,000 pixels would carry each filter.
[Figure: How blonde hair and skin tones are really made up of shades of reddish and greenish pixels that are blended together by the eye and brain into full color tones.]
Artifacts and aberrations come from the fact that each pixel of the image gets only one color sample. The assumption is that the next pixel is so close it must show the same thing, but in practice this is not the case. One pixel may be skin and the next may be pure white for several pixels. If the skin reading calls for so much blue, so much green and so much red, the white pixel may end up with more of one color than it should, turning pure white pink or blue. That is why color fringing happens.
In any still or video camera the “raw image” is actually ugly, because one pixel reads blue, three pixels next to it read red, and six pixels to the side, top and bottom read green. A little computer chip inside the camera then averages this 10 pixel zone (and in a close-up photo 10 pixels can be a person’s eye, which can be green, blue or brown depending on eye color) and adds the missing colors it “thinks” should be there based on the samples made. So when viewed as a JPEG image, or on a TV set or LCD screen, each individual pixel of a 1 megapixel image has been adjusted by a computer algorithm to read a certain hue made from a combination of RGB intensities. On the sensor that made the image, however, only one color was read per pixel; the computer in the camera did all the dithering and color mixing based on the adjacent pixels’ color readings.
The on-board computer first has to determine whether the light falling on a pixel came through a red, blue or green filter. Then it reads the intensity of that color. Next it looks at nearby pixels to find values for the missing colors. The computer then averages these out among various groups across the image. If one cluster of pixels varies widely from another, the averages along the join lines are adjusted to compensate. This, too, is where color artifacts start to become visible.
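A one-dimensional toy model shows why borrowing neighbors goes wrong at a sharp edge. Here the filters simply alternate red and blue along a row (a simplification of a real mosaic), and the scene values are hypothetical: a skin tone on the left, pure white on the right.

```python
# Toy 1-D demosaic: each photosite records one channel; the missing channel
# is estimated by averaging the nearest neighbors of the other filter.
# This estimate is wrong exactly at a sharp color edge, producing a fringe.

true_r = [0.9, 0.9, 0.9, 0.9, 1.0, 1.0, 1.0, 1.0]  # skin, then pure white
true_b = [0.5, 0.5, 0.5, 0.5, 1.0, 1.0, 1.0, 1.0]

filters = ["R", "B"] * 4
sampled = [true_r[i] if f == "R" else true_b[i] for i, f in enumerate(filters)]

def estimate_missing(i):
    """Average the nearest neighbors that carry the channel pixel i lacks."""
    neighbors = [sampled[j] for j in (i - 1, i + 1) if 0 <= j < len(sampled)]
    return sum(neighbors) / len(neighbors)

# Pixel 4 is on the white side of the edge (true blue = 1.0), but its blue
# is borrowed from one skin neighbor and one white neighbor:
blue_at_4 = estimate_missing(4)
print(blue_at_4)  # 0.75 instead of 1.0 -> the white pixel comes out pinkish
```

Real demosaic algorithms detect edges and weight neighbors accordingly, which is why the join-line adjustments described above exist at all.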
Last year Sigma started selling the SD9 SLR camera using the new Foveon 3.5 megapixel CMOS chip (not a CCD, as used by most companies), which reads all three primary colors inside each pixel in layers (see the above graphic for a look at how other cameras work, top, and the new SD9 with the Foveon CMOS chip, bottom). This new process may eventually negate the need for three separate CCD chips with filters. The Foveon CMOS chip claims to deliver an image better than a 5 or 6 megapixel single CCD camera, one that rivals professional systems like the Kodak 12 megapixel back for professional cameras.
As of this writing the Sigma/Foveon has not yet displaced the competition and may not even survive, but if it does, and professionals embrace this concept, we may see all the other technology concepts fall by the wayside like hand painted pictures...