Light, colour and underwater photography. Fixing colours using Adobe Photoshop (and Elements)

Background

All photography is dependent on one thing: light.

The aim of taking a photo is to get enough light into the camera for the film/sensor to detect, producing a properly exposed image.

This means shadows are not too dark, highlights are not too bright and there’s a good distribution of light between those two extremes. Underexposing an image will discard shadows in favour of black. Overexposing an image will discard highlights in favour of bright white. For colour photography, in addition to receiving the right quantity of light, we also need to capture light of the right colour.

Cameras have three factors which control the quantity of light and how it’s detected:

  • ISO: the film or sensor sensitivity.
  • Shutter speed: the amount of time the film or sensor is exposed to light.
  • Aperture: the size of the hole the light travels through towards the sensor.

When there’s lots of light a camera can be set with:

  • a low ISO rating (to decrease grain in the image)
  • a high shutter speed (to freeze the action)
  • a small aperture size (to increase the depth of field, which keeps more of the scene in focus).

When there’s less light, the opposite needs to be set:

  • a higher ISO rating (increasing grain in the image)
  • a lower shutter speed (which can make the photos blurry due to camera shake)
  • a wider aperture (which reduces depth of field, making items in the extreme foreground or far background out of focus).
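
These three settings all trade light in powers of two, known as “stops”: doubling the ISO, doubling the exposure time, or opening the aperture by one f-stop each double the amount of light recorded. Here’s a minimal sketch of the arithmetic in Python (the function is my own, not from any camera API):

    import math

    def stops_changed(old, new, is_aperture=False):
        # Doubling ISO or exposure time adds one stop of light.
        # F-numbers work on a square law, so each stop is a factor
        # of sqrt(2) in the f-number (f/2.8 -> f/2 adds one stop).
        ratio = math.log2(new / old)
        return -2 * ratio if is_aperture else ratio

    # Halving the exposure time costs one stop...
    print(stops_changed(1/125, 1/250))              # -1.0
    # ...which can be recovered by doubling the ISO...
    print(stops_changed(200, 400))                  # +1.0
    # ...or opening the aperture from f/8 to f/5.6
    print(stops_changed(8, 5.6, is_aperture=True))  # ~ +1.03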

During the day above water when there’s lots of light, a camera can be left on automatic and you’ll generally get a properly exposed image.

Not all photographers use automatic settings; many manually set their ISO, shutter speed and aperture to get more creative control over their images.

What is light?

Electromagnetic Spectrum

Light is energy that has a wavelength between 400nm (nanometres) and 700nm. The source of this energy could be a light bulb, the sun, a candle, a hot piece of metal, some uranium etc.

These energy sources will invariably be radiating energy at lots of other wavelengths too (including gamma, X-ray, ultraviolet, infrared, microwave etc.), but it’s just the 400nm to 700nm wavelengths that our eyes can detect.

Colour

Retina Rods and Cones

We have red, green and blue receptors (cones) on our retinas that detect these different wavelengths of light. Different proportions of different wavelengths are interpreted by our eyes as different colours. It’s the same principle for film or a sensor in a camera.

Every light source emits its energy with a different mix of wavelengths, so the colour of the light differs from source to source. This overall colour is described by the source’s colour temperature.

Whereas paint colours are created by mixing the primary pigment colours: red, yellow and blue (more accurately: cyan, magenta and yellow), light colours are created by mixing the primary light colours: red, green and blue.
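
A quick illustration of additive mixing, using the 0 to 255 per-channel convention digital images use (covered later in this article):

    # Additive mixing of light, with each channel in the 0-255 range
    red   = (255, 0, 0)
    green = (0, 255, 0)
    blue  = (0, 0, 255)

    yellow = (255, 255, 0)    # red + green
    cyan   = (0, 255, 255)    # green + blue
    white  = (255, 255, 255)  # red + green + blue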

The light emitted from the sun has roughly equal quantities of red, green and blue, which results in white light with a colour temperature of between 5,000 and 6,000 Kelvin. Typical fluorescent tube lights produce light with a colour temperature of 4,000 to 5,000 Kelvin, often with a slightly green tint. Incandescent filament bulbs produce warmer, more yellow light with a colour temperature of around 2,700 to 3,000 Kelvin.

Transmission

Light energy travels in a straight line until it’s reflected, refracted or absorbed (or any combination of the three).

  • When light is reflected, it bounces and travels in a straight line in a different direction.
  • When it’s absorbed, the light energy is converted to a different sort of energy (i.e. heat).
  • When it’s refracted, the light bends as it passes from one medium into another, then continues in a straight line in its new direction.

There is no known material that reflects, refracts or absorbs 100% of light energy; there’s always some combination of reflection, refraction and absorption.

Every time light bounces off something, the material of that “something” absorbs some of the energy of the light, affecting its colour and intensity.

Consider white light hitting a red snooker ball. The material of the ball absorbs blue and green light and reflects the remaining red light. The red light hits our retina and we see a red snooker ball.

The same white light hits the green baize under the snooker ball. The baize absorbs the red and blue light and reflects the remaining green light to our retina and we see the green baize.

The red light bouncing off the snooker ball also hits the green baize; the baize absorbs the red (and any traces of blue) light and nothing perceivable is reflected. Similarly, the green light bouncing off the baize hits the red snooker ball, which absorbs the green light and any traces of blue, and again nothing perceivable is reflected.
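
In RGB terms, each bounce can be modelled as a per-channel multiplication of the incoming light by the surface’s reflectance. A simplified Python sketch with made-up reflectance values:

    # Each bounce multiplies the light, channel by channel,
    # by the surface's reflectance
    def reflect(light, reflectance):
        return tuple(l * r for l, r in zip(light, reflectance))

    white_light = (1.0, 1.0, 1.0)
    red_ball    = (0.90, 0.05, 0.05)  # absorbs most green and blue
    green_baize = (0.05, 0.80, 0.05)  # absorbs most red and blue

    off_ball  = reflect(white_light, red_ball)     # (0.9, 0.05, 0.05): red
    off_baize = reflect(white_light, green_baize)  # (0.05, 0.8, 0.05): green

    # Red light from the ball then hitting the baize: almost nothing left
    print(reflect(off_ball, green_baize))  # (0.045, 0.04, 0.0025)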

Light is bouncing around all the time constantly having its colour, direction and intensity changed by anything it interacts with.

Light and water

When we try to take photos underwater, we have an additional medium to deal with, which affects the colour and intensity of light: water.

In order for our retinas (and camera film/sensors) to see something, here’s what happens:

  1. White light is emitted by the sun and hits the surface of the water above you.
    • Some of the light is reflected back up into the air.
    • Some of the light energy is absorbed and increases the temperature of the water.
    • Some of the light is refracted by the water surface and travels down into the water at a slightly different angle.
  2. As the light travels through the water, it starts losing energy. Its red energy is absorbed first, followed by green, then blue. The more water it passes through, the more energy is absorbed (modelled in the sketch after this list).
  3. The reduced energy light hits an orange fish in front of you.
  4. The orange fish absorbs some of the blue parts of the light (orange = red + green) and reflects the remaining red and green light to your retina.
  5. Unfortunately there wasn’t much red and green light left after it travelled through the water above you, so the fish appears a dull blue instead of its vibrant orange.
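
This wavelength-dependent fall-off follows the Beer–Lambert law: light decays exponentially with the distance travelled through the water, at a different rate for each colour. A sketch with illustrative absorption coefficients (the actual values depend heavily on water clarity):

    import math

    # Rough per-metre absorption coefficients for clear water.
    # Illustrative values only: red is absorbed far faster than blue.
    ABSORPTION = {"red": 0.35, "green": 0.07, "blue": 0.02}

    def remaining(channel, metres):
        # Beer-Lambert law: fraction of light left after `metres` of water
        return math.exp(-ABSORPTION[channel] * metres)

    # Sunlight that has travelled through 10m of water:
    for colour in ABSORPTION:
        print(colour, round(remaining(colour, 10), 2))
    # red ~0.03, green ~0.5, blue ~0.82: the red is essentially gone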

If you had a dive torch and shone it at the orange fish, here’s what would happen:

  1. You shine your dive torch (which emits white light) at the orange fish.
  2. There’s not much water between the torch and the fish or between the fish and your retina, so very little of the red part of the spectrum is absorbed by the water.
  3. The orange fish absorbs some of the blue parts of the light and reflects the remaining red and green light to your retina (orange = red + green).
  4. There’s now lots of reflected red and green light hitting your retina, so you now see the fish as a vivid orange.

Underwater photography

If you’re near the surface, then the water won’t absorb too much red light and your photos will usually look OK. They might have a very subtle blue cast.

If you’re deeper underwater and not using a light source such as a dive torch or underwater flash gun (called a strobe) then your photos will definitely have a blue or green cast to them.

Although the absorbed red data can never be recovered, the remaining few scraps can be combined with an approximation of what might have been, generated using photo manipulation software such as Adobe Photoshop and Photoshop Elements.

I’ll be focusing on Adobe Photoshop CC in this article.

What is a digital image?

A digital image is a grid with X and Y axes. These are the horizontal and vertical dimensions of the image expressed in pixels.

Each X and Y location on the grid holds three values: red, green and blue (RGB), each typically expressed as a value between 0 and 255. When these three values are combined, they are interpreted as a single point of colour of varying brightness and saturation (the strength of the colour).

When the grid is filled with all this RGB data, a digital representation of the image is visible.
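
For example, here’s how you could read one pixel’s RGB values with the Pillow library in Python (the file name is hypothetical):

    from PIL import Image

    img = Image.open("reef.jpg").convert("RGB")
    print(img.size)                    # (width, height) in pixels
    r, g, b = img.getpixel((100, 50))  # the RGB triplet at x=100, y=50
    print(r, g, b)                     # each value is between 0 and 255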

A properly exposed image has a good spread of red, green and blue data at varying intensities. This can be seen by analysing an image’s RGB data on a set of charts known as histograms. In Photoshop, these appear in the Levels dialog.

Here’s a photo taken at depth without additional light along with its RGB histograms.

The histograms show the quantity of red, green and blue data at varying levels of brightness in the image. The X axis is pixel brightness left to right, black to white. The Y axis is the quantity of pixels starting with zero at the bottom.

The red histogram shows there’s a lot of dark red data, but very little mid-level or bright red data in the image.

The green and blue histograms show there’s a good distribution of green and blue data across the image with the majority being at mid-level without too much dark or very bright data.

The blue and green data is good; the red is lacking. This is why the image has a blue/cyan cast (blue + green = cyan).
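
You can compute these per-channel histograms yourself. Here’s a sketch using Pillow, whose histogram() method returns 256 counts for each channel, concatenated:

    from PIL import Image

    img = Image.open("underwater.jpg").convert("RGB")
    counts = img.histogram()  # 768 values: 256 red, 256 green, 256 blue
    red, green, blue = counts[0:256], counts[256:512], counts[512:768]

    # Crude cast check: the mean brightness of each channel
    def mean_level(channel):
        return sum(i * c for i, c in enumerate(channel)) / sum(channel)

    print(mean_level(red), mean_level(green), mean_level(blue))
    # A red mean far below the green/blue means suggests a blue/cyan cast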

Here’s an example of a well-balanced image:

In this instance, the red data has a good spread of intensities from dark to light. Although there’s more dark red in the image, the image is balanced as the rest of the red data is in the mid tones and highlights.

Its blue and green data is similar to that of the previous image: a good spread, lots of mid-range data and not too much very dark or very light data.

The effect of all three channels having a good spread of data is a pleasing image with no discernible colour cast.

We need to do something about the red data in the first image so it looks as good as the second image.

The missing red data

There are three methods for adding red data back into your underwater photos:

Method 1: Spread out the red data

We need to brighten the red highlights and mid-tones while leaving the shadow data untouched.

This will have the effect of stretching the existing red data out across the histogram to look more like the green and blue histograms.

Here’s how to do it:

  1. Open your image in Photoshop or Photoshop Elements
  2. Select Image -> Adjustments -> Levels
  3. Change the Channel dropdown to Red
  4. There are three sliders under the histogram: black, grey and white. Drag the white slider left to where the red data ends in the histogram. As you drag the slider, your image will be adjusted in real time. Also try dragging the middle (grey) slider left or right to change the distribution of the mid-level data.
  5. Click OK when you’re happy.

You’ve now adjusted the distribution of the image’s red data. You can verify the change by viewing the red data histogram again.
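
Under the hood, the white slider linearly rescales the channel so the chosen level becomes pure white, and the grey slider applies a gamma curve to the mid-tones. Here’s a rough NumPy equivalent, not Photoshop’s exact implementation (the white point of 120 is just an assumed example):

    import numpy as np
    from PIL import Image

    img = np.asarray(Image.open("underwater.jpg").convert("RGB")).astype(float)

    white_point = 120  # where the red data ends in the histogram (assumed)
    gamma = 1.0        # the grey slider; exponents below 1 brighten mid-tones

    red = img[:, :, 0] / white_point  # rescale so white_point becomes full white
    img[:, :, 0] = np.clip(red, 0.0, 1.0) ** gamma * 255

    Image.fromarray(img.astype(np.uint8)).save("red_stretched.jpg")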

The image looks a bit better but the updated histogram shows what happened. The small amount of red data was stretched out across the dynamic range which has resulted in some nasty fringing and obvious red pixels. It doesn’t look very natural.

Photoshop can’t fill in the gaps in the histogram because it can’t know where the red data should have been in the image. The histogram only shows the distribution of light and dark data in an image, not its exact X and Y placement.

Method 2: Generate new red data

A new red channel can be generated by doing the following:

  1. Create a grey channel based on the average luminosity of the pixels in the original image.
  2. Remove the blue and green data from this new grey channel, leaving only a red channel.
  3. Replace the red channel in the original image with the new luminosity-based red channel.
  4. Auto-balance the new composite image.
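
If you’d rather script these four steps than use the Photoshop action below, here’s an approximate Python/NumPy version. The luminosity is taken as the simple per-pixel RGB mean, and the auto-balance step is approximated with a per-channel percentile stretch, since Photoshop’s algorithm isn’t published:

    import numpy as np
    from PIL import Image

    img = np.asarray(Image.open("underwater.jpg").convert("RGB")).astype(float)

    # Steps 1 and 2: a grey channel from the average luminosity of each
    # pixel, destined to be used as red only
    luminosity = img.mean(axis=2)

    # Step 3: replace the original red channel with it
    img[:, :, 0] = luminosity

    # Step 4: approximate auto-balance by stretching each channel to the
    # full 0-255 range, ignoring the extreme 0.5% outliers at each end
    for c in range(3):
        lo, hi = np.percentile(img[:, :, c], [0.5, 99.5])
        img[:, :, c] = np.clip((img[:, :, c] - lo) / (hi - lo), 0.0, 1.0) * 255

    Image.fromarray(img.astype(np.uint8)).save("corrected.jpg")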

[Before/after comparison image]

This is quite an involved process, so I’ve created a Photoshop action which will work with a flattened image (i.e. a Background layer) and apply all these processes in one click.

You can download the action here

Unzip the file and drag and drop the contained .atn file into the Actions panel in Photoshop.

The action will come up as colourcorrect_red in the Underwater folder.

Double click the action name (colourcorrect_red) to run it or alternatively, select the action and hit the play triangle ► in the Actions panel.

Method 3: Use Photoshop’s new Match Color -> Neutralize function

There’s a feature in Photoshop and Elements that approximates the effect of manually generating red data based on the luminosity of the other channels: the Match Color function.

Here’s how to use it:

  1. Open your image in Photoshop or Elements
  2. Select Image -> Adjustments -> Match Color
  3. Click Neutralize. The colour cast will be reduced.
  4. Click OK

The filter can be controlled by using the Luminance, Colour Intensity and Fade sliders.
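
Adobe doesn’t document exactly what Neutralize does, but a common approximation is a grey-world white balance: assume the scene should average out to neutral grey and scale each channel to match. A minimal sketch under that assumption:

    import numpy as np
    from PIL import Image

    img = np.asarray(Image.open("underwater.jpg").convert("RGB")).astype(float)

    # Grey-world assumption: a neutral scene averages out to grey, so
    # scale each channel until its mean matches the overall mean
    channel_means = img.reshape(-1, 3).mean(axis=0)
    balanced = np.clip(img * (channel_means.mean() / channel_means), 0, 255)

    Image.fromarray(balanced.astype(np.uint8)).save("neutralized.jpg")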

Colour correcting underwater video using Adobe Premiere CC

Taken from Gustav Ovier’s site (offline since Dec 2017).
Retrieved using the Wayback Machine, then translated from Portuguese to English using Google Translate.

Filters for Adobe Premiere CS6 and CC

I’ve made some filters available to download that I created for Adobe Premiere CS6 and CC (I don’t know if they work in other versions) for colour correction of underwater footage.

These filters enhance the colour of videos made during dives. You should test multiple filters to see which is best for your video. Sometimes a filter looks good at the start of a shot but deteriorates as the light changes, so it’s worth reviewing each clip with the filter applied before finalising it.

Underwater at depth, the colours fade due to light being absorbed by the water. Many people use a physical filter in front of the camera lens, usually red or magenta. I don’t really like this technique because it makes the video too red when there is plenty of light.

The best results come from continuous artificial light, so that even at depth the colours are captured accurately. The colours of fish, crustaceans and corals from the bottom of the sea really are amazing.

To install the filters, go to the Effects tab in Adobe Premiere CS6/CC and import them one by one.

Here’s an example of the filters in action:

The filters can be downloaded here:

UnderWater_SAC_Adobe_Premiere_CS6_CC.zip