
Sunday, 6 November 2011

Adobe Photoshop Lightroom Tutorial - Part XII - Presence Controls

Part I - Introduction
Part II - Lightroom Workflow
Part III - Organizing the Library
Part IV - Keywording and Metadata
Part V - Using Filters
Part VI - Importing Your Images
Part VII - Basic Editing Tools
Part VIII - Developing Your Images, The Basics
Part IX - Reading and Interpreting the Histogram - Basic Adjustments
Part X - White Balance
Part XI - The Tonal Scale
Part XII - Presence Controls
Part XIII - Coming Soon!

Presence Controls

Last but not least, the Basic panel offers a group of controls labelled Presence, made up of:
  • Clarity.
  • Vibrance.
  • Saturation.
For the sake of simplicity, let's start from the last one: saturation.

Saturation

To make a long story short, saturation controls the intensity of a color. The Saturation slider in Lightroom lets you uniformly modify the saturation of the colors in an image in a [-100, 100] range:
  • A -100 saturation adjustment corresponds to no saturation at all and the result will be a monochrome image.
  • A 100 saturation adjustment corresponds to doubling the saturation of the colors.
Beware that when you increase the saturation of an image, color channels may be clipped and, as a result, color shifts may occur.
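
Lightroom's internal algorithm isn't public, but a uniform saturation adjustment of this kind can be sketched with Pillow's ImageEnhance.Color, mapping the [-100, 100] slider onto an enhancement factor (the file name is a placeholder):

from PIL import Image, ImageEnhance

def adjust_saturation(image, amount):
    # Map the [-100, 100] slider to Pillow's enhancement factor:
    # -100 -> 0.0 (monochrome), 0 -> 1.0 (unchanged), 100 -> 2.0.
    return ImageEnhance.Color(image).enhance(1.0 + amount / 100.0)

img = Image.open("flower.jpg")            # placeholder file name
monochrome = adjust_saturation(img, -100)
punchy = adjust_saturation(img, 75)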

I don't use the saturation tool very often, partly because I'm not so fond of overly "punchy" images, and partly because of its non-selective nature: more often than not, I need a more selective kind of saturation enhancement:
  • I need to boost the saturation of a selected set of colors.
  • I don't want skin tones to saturate and have that orange cast.
Truth be told, I use the saturation slider almost exclusively to completely desaturate an image and produce a monochrome one. Why not simply tell Lightroom to convert the image to black and white, then? Because Lightroom, by default, uses a black and white mix that's not uniform across the spectrum (as we'll see in a future post). Sometimes it's a good starting point, sometimes it's not. Only trying with a specific image can tell.

Lightroom, as we're going to see in the following sections, provides good solutions to many of the aforementioned problems, which is why I don't use the saturation adjustment much.

Vibrance

The Vibrance adjustment is a selective, non-uniform saturation adjustment with the following characteristics, sketched in code after the list:
  • It tries to avoid channel clipping.
  • When raising saturation, it has more effect on less saturated colors than on more saturated ones.
  • It tries to preserve skin tones.
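
Adobe doesn't publish the Vibrance curve, but the idea of weighting the boost by how unsaturated a pixel already is can be sketched roughly in HSV space (this is an illustration of the principle, not Lightroom's actual algorithm):

import numpy as np

def vibrance(hsv, amount):
    # hsv: float array in [0, 1] with shape (..., 3); amount in [-1, 1].
    # The push is weighted by (1 - s): barely saturated colors move the
    # most, already saturated ones barely move. Real Vibrance also
    # protects skin tones, which this sketch does not attempt.
    h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]
    s = np.clip(s * (1.0 + amount * (1.0 - s)), 0.0, 1.0)
    return np.stack([h, s, v], axis=-1)
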
As a quick example, have a look at the following series of images:

Original Image
Saturation -75
Saturation +75
You can see how Saturation uniformly raises or lowers the saturation of all the colors in the image.

Compare it with the effect of the same adjustment performed with the Vibrance tool:

Vibrance -75
Vibrance +75
The Vibrance tool reduces or raises the saturation of the least saturated colors, in this case the background greens, leaving the more saturated colors, in this case the yellow petals and their orange shades towards the center, almost unmodified. In this specific case, you could also use Lightroom to selectively raise the saturation of the green channel (as we'll see in a future post).

The Vibrance tool is a handy way to apply saturation adjustments really quickly, especially in portrait photography, where things can get tricky when trying to preserve the skin tones of our models. Had we applied such a saturation adjustment to a portrait, orange shades would probably have popped up in our subject's skin, pretty much as they popped up in the petals of this flower. In those cases, Photoshop used to be your only friend: you'd use layers and layer masks to preserve the tones in selected image regions. Fortunately, Lightroom steps in and provides a quick solution to such a common problem.

Clarity (And Local Contrast)


The Clarity adjustment lets you add depth to an image by modifying its local contrast. But what is local contrast, first of all? Here's a really quick introduction.

The physiology of the human eye is such that, in certain circumstances, a photo cannot be a faithful representation of what we perceive. This is especially true when dynamic range is taken into account. You have surely noticed how well your eyes can see a scene with a high dynamic range (such as a landscape with both strong lights and deep shadows) and how bad the same scene looks when you shoot it, no matter the effort you put into it. When we move our eyes around, they quickly adapt to the light conditions of the part of the scene we're focusing on, although we have the impression of perceiving the entire scene as if no adaptation had occurred. On the other hand, when you take a shot you have to choose one exposure, and if the dynamic range of the scene is too high, you will simply miss the shot: either you burn the highlights or you lose details in the shadows.

If you've heard about HDR (High Dynamic Range) imaging, you'll probably recognize the central idea of that technique: you take multiple shots of the same scene, bracketing the exposure, and blend them into a single image, preserving the intensities you need across the dynamic range of the picture. This way, you artificially build a visual representation of what our eyes perceive when they move, and compensate, across regions of the image with big luminance differences.

You may think of local contrast as a technique to achieve the opposite goal: you selectively raise the contrast of light-shadow transitions to give more apparent dynamic range to your image. However, instead of raising the overall contrast of the image (where both the white and black points move and clipping may occur), the adjustment is performed only in smaller regions of the picture, leaving the overall black-white difference unmodified.
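
Adobe's Clarity implementation is proprietary, but local contrast enhancement is commonly approximated with an unsharp mask using a large radius and a low amount. A minimal sketch with Pillow (the radius and percent values are just plausible starting points, and the file name is a placeholder):

from PIL import Image, ImageFilter

# A large radius and a low amount boost light-dark transitions locally
# without moving the global black and white points much.
img = Image.open("flower.jpg")
clarity_like = img.filter(ImageFilter.UnsharpMask(radius=60, percent=30, threshold=0))
clarity_like.save("flower-clarity.jpg")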

Lightroom lets you adjust the local contrast by both positive and negative values, in the [-100, 100] range.  Let's see some examples of how Clarity works.

Original
In this picture, the majority of pixels find themselves on opposite sides of the range: the background is nearly black and the flower is white. Neither whites nor blacks are clipped. Since the maximum luminance difference among the flower's pixels is less than 1 f-stop, you may want to apply some adjustment to give the flower some more depth. Raising the overall contrast, as we've seen in a previous part, will only make things worse:

Contrast +100

Raising the overall contrast increases the difference between black and white, but doing so compresses the whites even further, and the result is a flower even flatter than before. You may be tempted to apply the opposite adjustment and bring the overall contrast down:

Contrast -50
In fact, reducing the overall contrast expands the histogram in the recovery zone and in the rightmost part of the exposure zone, pushing them towards the center of the histogram. The flower does indeed gain depth. Unfortunately, the overall contrast reduction expands the blacks and the shadows as well, pushing them towards the center of the histogram. The final result is probably not what we were looking for, since the image is now flatter and duller.

Once again, there's no need to leave Lightroom and open Photoshop. The Clarity adjustment is just what we were looking for:

Clarity +50
Increasing the local contrast has added depth to both the flower and the leaves, leaving the overall contrast unmodified. The white and black points do not move, and we introduce no clipping at all.

Clarity adjustments are often subtle, and I suggest you zoom out when you apply them. Nevertheless, it's a very handy adjustment that many pictures can benefit from. Look at the following examples (to appreciate the difference you may need to open the linked images):

Original Image
Clarity +60
Original Image
Clarity +75

Negative Clarity

The Clarity tool also lets you apply negative clarity adjustments. Why would you want to reduce the local contrast of an image? Reducing local contrast has a softening effect that may be pleasing in some kinds of photos. Some portraits and some nature images, for instance, may benefit from a slight reduction of clarity if you want to achieve a "dreamy" or "soft" mood without spending much time applying more complex effects such as the Orton effect. In the following example you can see how negative clarity affects the image of the two purple flowers we've previously seen:

Clarity -50



Wednesday, 12 October 2011

Tones and Dynamic Range. Why You Should Shoot RAW

I'm writing this post to provide a small introduction to how a digital camera sensor reads and stores data, how the human eye perceives light, and the impact all of this has on your decisions as a photographer. There's some mathematics in it that I'm not willing to avoid: although photographers shouldn't need to worry about mathematics and such "technicalities", it's important to understand what's going on inside your camera first, and in post-processing later, if you want to get the best out of your images.

How The Sensor Reads and Stores Data

The first thing that's important to understand is that your camera sensor is linear. That's the easiest part of all: the sensor reads light intensity, samples it and stores its value. It's plain old sampling. Cameras typically use three channels of a fixed bit depth to store the light intensities of the three primary colors. The bit depth of the channel depends on the camera: the wider the channel, the more information it can potentially store.

Photographers usually think in terms of f-stops or zones. Changing the exposure by an f-stop means doubling the light that reaches the sensor, if the compensation is positive, or halving it, if the compensation is negative. Photographers use f-stops because the eye is a logarithmic sensor: in its working range, it is sensitive to relative differences in light intensity, regardless of the absolute value.

Since sensors are linear, how is a specific zone's data distributed in the RAW file? If we think in binary representation, doubling the light intensity (stepping up 1 f-stop) means shifting each pixel's sensor reading to the left (using 8 bits as an example):

7 6 5 4 3 2 1 0    7 6 5 4 3 2 1 0
--------------- -> ---------------
0 0 0 0 0 a b c    0 0 0 0 a b c 0

On the other hand, halving the light intensity (stepping down 1 f-stop) means shifting it to the right:


7 6 5 4 3 2 1 0    7 6 5 4 3 2 1 0
--------------- -> ---------------
0 0 0 0 0 a b c    0 0 0 0 0 0 a b
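
You can verify the arithmetic directly; in Python, a 1-stop exposure change on a linear reading is a 1-bit shift:

reading = 0b00000101           # a linear 8-bit sensor value

one_stop_up = reading << 1     # doubling the light: 0b00001010
one_stop_down = reading >> 1   # halving it: 0b00000010 (bit c is lost)

print(bin(one_stop_up), bin(one_stop_down))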


Let's now think in terms of zones. Basically, an n-bit channel can store data for up to n zones, and zones don't all use the same number of bits (and thus cannot store the same level of detail). The brightest zone, in this case the 1st, is 7 bits wide, the 2nd is 6 bits wide and so on, down to the 8th, which is 0 bits wide.

If you're proficient in mathematics, this is in fact pretty obvious and derives from the very nature of binary representation: adding 1 bit to a representation doubles the range of values you can express. But stepping up 1 stop from one zone to the next means precisely doubling the light intensity. That's why the maximum number of zones you can store in an n-bit number is n, and each zone has half the bits of the previous one (from the lightest to the darkest) to store its information.

You now understand why a camera such as the Nikon D5100, which produces 14-bit RAW files, may have a dynamic range of about 13 EV.

How Many Levels Can Be Stored In Each Zone?

As we've seen, the number of bits available to store the data of each zone halves from one zone to the next (from the lightest to the darkest). Since the number of distinct unsigned integer values you can store with an n-bit representation is 2^n, it follows that each zone stores an exponentially decreasing number of levels. For an 8-bit file and a 14-bit file, you will have the following:

Zone  | 8-bit  | 14-bit |
      | Levels | Levels |
------+--------+--------+
    1 |    128 |   8192 |
    2 |     64 |   4096 |
    3 |     32 |   2048 |
    4 |     16 |   1024 |
    5 |      8 |    512 |
    6 |      4 |    256 |
    7 |      2 |    128 |
    8 |      1 |     64 |
    9 |      - |     32 |
   10 |      - |     16 |
   11 |      - |      8 |
   12 |      - |      4 |
   13 |      - |      2 |
   14 |      - |      1 |
------+--------+--------+
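
These numbers follow directly from the powers of two; a few lines of Python reproduce both columns:

def levels_per_zone(bits):
    # Zone 1, the brightest, spans the top half of the range
    # (2 ** (bits - 1) levels); each following zone has half as many.
    return [2 ** (bits - zone) for zone in range(1, bits + 1)]

print(levels_per_zone(8))    # [128, 64, 32, 16, 8, 4, 2, 1]
print(levels_per_zone(14))   # [8192, 4096, ..., 2, 1]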

Awful numbers in the 8-bit case, aren't they? If you've heard about the zone system, you're probably expecting at least 8 zones in your shots. If you're starting to worry and thinking something like "Am I saving my images in 8-bit JPEG files?", then yes: there are plenty of reasons to worry. But wait a few minutes and read on.

Human Vision

The human eye's response to light intensity is logarithmic, for all practical purposes, across its working range. We have to factor this into the equations to correct the estimates we made in the previous section.

The correction we have to apply is well known, and you've probably heard of it: it's called gamma correction. I won't go into details in this post; the linked Wikipedia article is well done and useful for our introductory purposes. However, it's important to stress that gamma correction partially compensates for the great (exponential) imbalance in the number of levels we can store for each zone.

The gamma corrected intensities are calculated with:

v_o = v_i ^ gamma

where the gamma exponent commonly takes the value 2.2. The monitor you're using is applying a gamma correction right now, as is whatever photographic software you use.

Since we're interested in the zones we perceive (v_o in the previous equation), we should apply the inverse transformation (raising the linear values to the power 1/gamma) to the values we've calculated and update our estimates accordingly.

Let's then assign some values to the zones so that we can gamma correct them. We will use the following (rounded to 2 decimal digits):

Zone  | Value  | Gamma     |
      |        | Corrected |
------+--------+-----------+
    1 |   8192 |     60.09 |
    2 |   4096 |     43.85 |
    3 |   2048 |     32.00 |
    4 |   1024 |     23.35 |
    5 |    512 |     17.04 |
    6 |    256 |     12.43 |
    7 |    128 |      9.07 |
    8 |     64 |      6.62 |
    9 |     32 |      4.83 |
   10 |     16 |      3.52 |
   11 |      8 |      2.57 |
   12 |      4 |      1.88 |
   13 |      2 |      1.37 |
   14 |      1 |      1    |
------+--------+-----------+

To determine how the corrected values distribute into an n-bit channel, we apply a linear transformation to "stretch" them into the desired interval. For an 8-bit channel the scale factor is 255/60.09, while for a 14-bit channel it is 16383/60.09. The results are:

Zone  | 8-bit   | 14-bit  |
      | Maximum | Maximum |
      | Value   | Value   |
------+---------+---------+
    1 |     255 |   16383 |
    2 |     186 |   11955 |
    3 |     136 |    8724 |
    4 |      99 |    6366 |
    5 |      72 |    4646 |
    6 |      53 |    3390 |
    7 |      39 |    2474 |
    8 |      28 |    1805 |
    9 |      21 |    1317 |
   10 |      15 |     961 |
   11 |      11 |     702 |
   12 |       8 |     512 |
   13 |       6 |     374 |
   14 |       4 |     273 |
------+---------+---------+

It's pretty evident that the widths of the zones are more balanced than in the non gamma corrected case. The last step is to calculate the number of levels per zone, subtracting the maximum value of the next zone from the maximum value of each zone:

Zone  | 8-bit  | 14-bit |
      | Levels | Levels |
------+--------+--------+
    1 |     69 |   4428 |
    2 |     50 |   3231 |
    3 |     37 |   2358 |
    4 |     27 |   1721 |
    5 |     20 |   1256 |
    6 |     14 |    916 |
    7 |     10 |    669 |
    8 |      8 |    488 |
    9 |      6 |    356 |
   10 |      4 |    260 |
   11 |      3 |    190 |
   12 |      2 |    138 |
   13 |      2 |    101 |
   14 |      1 |     74 |
------+--------+--------+
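
All three tables come from the same few formulas; this short script reproduces the levels per zone (the values match the tables above up to rounding):

GAMMA = 2.2
ZONES = 14   # zone values from the table above: 8192, 4096, ..., 1

def gamma_corrected_levels(bits):
    # Linear value at the top of each zone, plus the value one stop
    # below the darkest zone (0.5), needed for the last difference.
    linear = [2.0 ** (13 - zone) for zone in range(ZONES + 1)]
    corrected = [v ** (1.0 / GAMMA) for v in linear]
    scale = (2 ** bits - 1) / corrected[0]   # stretch into the n-bit range
    maxima = [c * scale for c in corrected]
    return [round(maxima[i] - maxima[i + 1]) for i in range(ZONES)]

print(gamma_corrected_levels(8))    # ≈ [69, 50, 37, 27, 20, 14, 10, 8, ...]
print(gamma_corrected_levels(14))   # ≈ [4428, 3231, 2358, 1721, ...]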

8-bit images aren't that bad, in fact, but they aren't that good, either.

It's clear that using RAW is a huge improvement. Also, when converting from RAW to another format, you should avoid 8-bit formats such as JPEG, unless you're willing to lose all that information. Try to stick with 16-bit image file formats, although few programs can handle them. Notably, Photoshop Elements can open them but not manipulate them. It's a good selling point for Photoshop, if you're a professional.

How Bad Is Using 8-Bit Image Files?

To fully understand how bad 8-bit images can be, we need to understand how sensitive human eyes are to light intensity. This question is answered by the Weber-Fechner law, which states that human eyes can distinguish a difference in light intensity of about 1%.

How many such levels are there in a zone? To calculate it, using the definition of a zone (an interval in which the intensity of light doubles), you must find the number x such that:

(1.01)^x = 2

That number is approximately 70: there are about 70 levels per zone that human eyes can distinguish. Let's look once more at the zone levels in the 8-bit and 14-bit cases. We notice that 8-bit images provide a good level of detail only in the brightest zones. In the darkest ones, artifacts such as banding will easily occur. A 14-bit file, on the other hand, provides good zone detail down to the darkest zones. That's one more reason why you should always use RAW when shooting and post-processing your images. If you use narrower channels, artifacts will soon pop up.
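
The 70 comes straight from the logarithm; a one-liner checks it:

import math

# Solve (1.01) ** x == 2: the number of 1%-apart steps in one zone.
x = math.log(2) / math.log(1.01)
print(x)    # ≈ 69.66, about 70 distinguishable levels per zone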

Useful Tips for Shooting

Now that we've learnt how zones are stored in our image files and the level of detail we can expect from each of them, we can draw some conclusions that may help us shoot the perfect photo.

Although gamma correction introduces some balance into the number of levels that can be stored for each zone, it's clear that the brightest zones can hold much more information than the darkest ones. In this respect, digital sensors like a bit of overexposure. To capture the highest level of detail, you should ensure that the zones you're interested in are exposed to the right. If you slightly overexpose, provided you don't blow out any channel, you can lower the exposure in post-production while retaining the maximum amount of detail.

Even with dynamic ranges as wide as 14 EV, you must be careful not to blow a channel out. When a channel fills up, you start clipping information, and the side effects will be: a partial loss of color saturation, a color drift and, finally, the saturation of all channels to pure white.

Many RAW files will let you recover about 1 f-stop of highlights, but if you clip the whites, as well as the blacks, you will lose information. That's why in-camera histograms are a good tool to check your exposure and ensure you're using the channels efficiently.

Do These Recommendations Apply Only to Pros?

No. I'm not a pro, either. However, it's pretty easy to see how artifacts quickly appear in a typical 8-bit image with relatively light post-processing. For example, bands quickly appear in the dark areas, and there isn't enough information to modify the exposure of a shot by even 1 f-stop.
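
You can simulate this yourself: push an 8-bit gradient down two stops and back up, and count how many distinct levels survive (a rough sketch with NumPy):

import numpy as np

ramp = np.arange(256, dtype=np.uint8)          # a smooth 8-bit gradient

pushed_down = ramp // 4                        # -2 stops in 8-bit integers
pushed_back = np.clip(pushed_down.astype(int) * 4, 0, 255)

print(len(np.unique(ramp)))          # 256 distinct levels before
print(len(np.unique(pushed_back)))   # 64 distinct levels after: banding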

If you can shoot RAW, do it. And if you can distribute and store 16-bit images, do that as well. Today, very few people are going to run out of storage space for their photographs and, at the very least, you should be aware that you're going to lose a great deal of information with an apparently innocuous transformation (8-bit images can use more than 16 million colors, but now you know that this is insufficient in many cases).