
Sensor technology has some intriguing implications for the way images are captured, exposed and edited. We demystify the RAW process, from capture to output

A RAW file is not an image format at all. It's a set of unprocessed data recorded from the camera's image sensor at the time of capture, along with a record of the camera settings. This unprocessed sensor data is encoded differently into a RAW file by each camera manufacturer, so 'RAW file' is in fact a general term describing a number of different proprietary formats. They each share common characteristics, but they differ between models as well as between manufacturers. When discussing RAW data and the process of RAW capture, it helps to consider the various technologies involved in sensor design and what it is that the sensor records during capture.

The sensor itself is a wafer, made up of an array of equal-sized photosensitive detectors. These are usually aligned in two-dimensional rows and columns, though there are some exceptions. Each photosensitive detector produces an electrical charge when struck by photons (particles that transmit light) during exposure. The electrical charge produced is directly proportional, or linear, to the number of photons that strike the detector. As a result, the sensor can measure the intensity of light falling on it at each individual photo-detector, and from that data an image can be formed.

At this stage, the image would be displayed in shades of grey. To create a colour image, the sensor requires an array of colour filters. These are known as Colour Filter Array (CFA) sensors, and are found in most digital cameras. Most sensors arrange the filters in a Bayer pattern: a repeating 2x2 set of RGB filters consisting of two green filters for every one red and one blue filter. Twice as many green filters are used because they transmit more light to the photo-detector below (boosting sensitivity) and also because our eyes are more receptive to the colour green.
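To make that layout concrete, here is a minimal sketch in Python. The RGGB ordering used below is just one common variant; the exact phase of the 2x2 cell differs between camera models:

```python
import numpy as np

def bayer_pattern(height, width):
    """Label each photosite in an RGGB Bayer mosaic as 'R', 'G' or 'B'."""
    tile = np.array([['R', 'G'],
                     ['G', 'B']])  # one repeating 2x2 cell
    return np.tile(tile, (height // 2, width // 2))

# In any even-sized crop, half the sites are green,
# a quarter are red and a quarter are blue.
pattern = bayer_pattern(4, 4)
```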

A single-colour filter covers each photodetector in the array, so each site can capture only one colour. This means that roughly two-thirds of the light is blocked, affecting sensitivity and signal-to-noise ratio, and it also means colour information is missing at every pixel and has to be reconstructed from neighbouring sites.

There are some variations on this type of filter arrangement. Some older Sony CFAs adopted a fourth emerald-green colour filter, while Fujifilm's intriguing X-Trans CMOS sensor in the X-Pro1 has a more random 6x6 arrangement of RGB filters. Fujifilm claims this minimises moiré and false colour generation, while eliminating the need for an anti-aliasing filter, enhancing detail.

These so-called CFA sensors, using either CCD (Charge-Coupled Device) or CMOS (Complementary Metal Oxide Semiconductor) technology, still only deliver greyscale output, but the intensity values measured at each photodetector, or site, correspond to the amount of red, green or blue light falling on each pixel.

It's important to realise that this is still not a colour image at this stage, and the data has to undergo decoding.

As well as reading the camera's settings, which are recorded as EXIF (Exchangeable Image File Format) data, the RAW utility converts the greyscale image data into an RGB image. Powerful mathematical algorithms are employed by the converter to interpolate the missing colour information from the adjacent pixels. This is known as demosaicing, a rather grand-sounding term for approximating the full colour information at each pixel. This is fine for large areas of colour, such as the sky or an expanse of grass, but Bayer-pattern sensors can have difficulty interpreting highly detailed subjects where colours meet or merge, such as chequered fabrics or distant multicoloured subjects. This incomplete, or inaccurate, data can lead to a number of potential pitfalls, such as false-colour generation, edge artefacts or even troublesome moiré when fine colour detail approaches the maximum resolution of the sensor. Most RAW converters perform some anti-aliasing, noise reduction and sharpening to improve the demosaicing process, but it's an inherent issue with conventional sensor architecture.
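As a rough illustration of what the converter does, here is a toy bilinear demosaic for an RGGB mosaic, written in plain Python/NumPy. Real converters use far more sophisticated, edge-aware algorithms; this sketch simply averages each missing colour from its nearest neighbours:

```python
import numpy as np

def conv2(img, kernel):
    """Naive 3x3 correlation with zero padding (our kernels are symmetric)."""
    h, w = img.shape
    padded = np.pad(img, 1)
    out = np.zeros((h, w), dtype=float)
    for dy in range(3):
        for dx in range(3):
            out += kernel[dy, dx] * padded[dy:dy + h, dx:dx + w]
    return out

def demosaic_rggb(raw):
    """Toy bilinear demosaic of an RGGB Bayer mosaic into an RGB image."""
    h, w = raw.shape
    r_mask = np.zeros((h, w))
    r_mask[0::2, 0::2] = 1.0           # red sites: even rows, even columns
    b_mask = np.zeros((h, w))
    b_mask[1::2, 1::2] = 1.0           # blue sites: odd rows, odd columns
    g_mask = 1.0 - r_mask - b_mask     # green everywhere else
    # Bilinear interpolation kernels: green from 4 orthogonal neighbours,
    # red/blue from orthogonal and diagonal neighbours.
    k_g = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0
    r = conv2(raw * r_mask, k_rb)
    g = conv2(raw * g_mask, k_g)
    b = conv2(raw * b_mask, k_rb)
    return np.dstack([r, g, b])
```

On a flat, uniform subject this reconstruction is exact; it is on fine detail, where neighbouring sites disagree, that the averaging produces the false colour and artefacts described above.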


In order to combat this, the image from a conventional Bayer-pattern sensor is often softened using an optical blur filter (better known as an anti-aliasing filter), sometimes clever software, or both. That's why you'll nearly always find RAW images are inherently soft-looking, especially when compared with the in-camera default JPEG (which will likely have been sharpened), and usually why more sharpening is required with higher-resolution files. That's not always the case though, as some popular RAW converters, such as Apple's Aperture, add some sharpening along with noise reduction upon rendering the preview image. This is taken into account when additional sharpening is applied during the conversion process, but it can be misleading when comparing RAW and JPEG previews side by side in RAW conversion software.

A RAW converter does more than convert greyscale data and interpolate the missing colours. There are many more processes involving various algorithms, which accounts for the different performance of RAW converters.

Many photographers adjust the white balance in-camera when shooting RAW, but it has no bearing on the RAW capture data itself. However, it is still worth using a white balance card or similar tool during capture for improved accuracy and to save valuable time in the RAW conversion utility. RAW files embed the camera's white balance settings in metadata, but a RAW converter may choose to ignore them completely or apply them to each image.

There's also the RAW converter's interpretation of the camera model's specific colour filter array to consider. The RAW converter must accurately assign those red, green and blue-dyed filter colours in a specific, colourimetrically defined colour space. Any of the popular colour spaces such as sRGB and Adobe RGB can be easily derived from this initial colour space, and converted to in the RAW software utility. Camera makers don't claim a specific output colour space for their various models, but in-camera JPEGs can be converted to either sRGB or Adobe RGB.

Choosing either sRGB or Adobe RGB in-camera only affects JPEGs, and choosing Adobe RGB can be troublesome for a number of reasons.

First, the majority of cameras' rear LCD panels can just about display sRGB at best and won't be able to display all of the colours of the larger Adobe RGB colour space. It's also doubtful that the camera would be able to emulate that colour space, so the image colours will be clipped or compressed to fit, resulting in flat-looking, desaturated colours. Another issue is that unless you're sending the JPEG to a magazine, which might expect to see an Adobe RGB JPEG, practically everyone else will expect sRGB. Unless it's converted, there's a risk again that it will be displayed incorrectly and look both dull and flat.

RAW conversion software must also apply a gamma tone curve to the RAW capture data, and that's in addition to any corrections you may make afterwards. It's all down to the fact that the sensor captures light intensity in a linear fashion, as we mentioned earlier. Even with correct exposure at the time of capture, without a gamma-correction tone curve applied, RAW images look very dark. If you could view the RAW histogram before this gamma adjustment, you would see the bulk of the data bunched up at the left, darker end, and so the tonal values must be redistributed to make the image look acceptable. It's just one of the numerous tweaks a RAW converter makes without the user knowing.
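At its simplest, that redistribution is a power-law curve. Here is a minimal sketch assuming a plain gamma of 2.2; real converters, and the sRGB standard itself, use more elaborate and often piecewise curves:

```python
import numpy as np

def apply_gamma(linear, gamma=2.2):
    """Map linear sensor values (normalised 0..1) through a power-law
    tone curve, lifting the dark values where linear data bunches up."""
    return np.clip(linear, 0.0, 1.0) ** (1.0 / gamma)

# A mid-grey of around 0.18 in linear terms ends up close to
# mid-scale once gamma-corrected, which is why uncorrected
# RAW data looks so dark.
```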


Indeed, the digital sensor's linear capture, unlike film's logarithmic response, has several shortcomings when considering exposure and both dynamic and tonal range (especially in the shadows). In a typical 12-bit sensor, the capture is encoded into 4,096 levels per channel, but linear capture means that half of those 4,096 levels are attributed to the brightest stop, half of what's left to the next-brightest stop, and so on over the dynamic range.

In a camera with a 12-stop dynamic range, the stop that represents the deepest shadows is described by just one level, and the darkest six stops by only 63 levels between them. That's why care is needed with exposure. If you underexpose to keep the highlights from clipping, you'll lose a disproportionate amount of the bit-data you've paid for. If you then have to open up the shadows, you risk stretching those scant 63 levels, introducing noise and posterisation. That's why cameras that encode at a higher bit-depth have a better tonal response in the shadows than a camera with a lower bit-depth (all other characteristics being equal). It's also the reason why the histogram displayed on the camera screen is inaccurate, as the image and the tonal data have been converted to just 8-bit (256 levels) per channel.
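The arithmetic behind those figures is easy to verify: with linear encoding, each successively darker stop gets half the levels of the stop above it.

```python
def levels_per_stop(bit_depth=12, stops=12):
    """Levels available in each stop of a linearly encoded capture,
    brightest stop first: each stop holds half the levels of the last."""
    total = 2 ** bit_depth
    return [total >> (s + 1) for s in range(stops)]

# For a 12-bit, 12-stop capture: 2,048 levels in the brightest stop,
# a single level in the deepest shadows, and just 63 levels spread
# across the darkest six stops.
```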

We've talked about how a RAW converter utility carries out the process of conversion, but the same process occurs in-camera when producing a JPEG, after which the RAW capture data is discarded (unless the user has opted to save both RAW and JPEG files). The authors of RAW conversion utilities have to deconstruct the camera manufacturers' proprietary processes, which means you can end up with very different-looking results compared with the in-camera JPEGs. If you need the same look as those produced in-camera, using the camera manufacturer's RAW software is usually the only option.

However, the real advantage of selecting RAW in-camera as opposed to JPEG is that you get unmatched control over the interpretation of the final image. In-camera JPEGs often have a strong S-shaped gamma-correction curve applied, as well as all the camera's settings, white balance, exposure, colour space, sharpening and noise reduction, all baked in to form a colour image. Then the image is compressed. This discards both luminance and colour data, as the JPEG compression routine includes encoding the image at just 8-bits (256 levels) per channel. This process of throwing away data leaves very limited scope for editing later on, and accounts for poor colour rendition and harsh jumps in tone (revealed as jagged histograms).

With a RAW file, all of those steps remain under your control during post-production, before the image is compressed as a JPEG. Even that step can be avoided if you elect to output the image as a 16-bit TIFF or Photoshop PSD file, thereby retaining most if not all of the original data captured by the sensor. There's one further potential advantage of selecting RAW over JPEG: improvements in RAW software, even over relatively short periods, mean that reprocessed RAW files may benefit from the latest converters. That's something you're unlikely to see with a JPEG image file.

About bytes and bit-depth

The camera's ability to quantify both the tonal value and colour of a scene is determined by the bit-depth.

Cameras use a series of 0s and 1s, or 'bits', to describe the shades or colours. In an RGB image, each colour is created using the three primary colours, but the bit-depth determines how many colours can be individually represented.

In a typical 8-bit greyscale or RGB image, there are two possible values (0 and 1) to the power of 8 (2^8), giving 256 combinations or shades per colour, or channel. That's 256 x 256 x 256, or 16.7 million colours, that can be described at a single (interpolated) pixel. This is often called 24-bit colour (24 bits per pixel, or 24bpp).

Most 35mm-based DSLR sensors encode at 12-bit (4,096 shades) per channel (36-bit per pixel), though some recent models can do so at 14-bit (16,384 shades). Medium-format cameras can capture 16-bit per channel (65,536 shades or levels) or 48-bit per pixel, which equates to the potential to map 281 trillion colour tones.
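All of the figures above follow directly from powers of two, as a quick sketch shows:

```python
def shades(bits):
    """Number of distinct values a channel of the given bit-depth can hold."""
    return 2 ** bits

# 8-bit: 256 shades per channel; 256**3 = 16.7 million colours per pixel.
# 12-bit: 4,096 shades; 14-bit: 16,384; 16-bit: 65,536 per channel,
# and 65,536**3 is roughly 281 trillion colour combinations per pixel.
```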
