About HDR

Bits & Files

HDR is an abbreviation for High Dynamic Range. The dynamic range is the amount of light a camera or monitor can capture or display. The most common file types nowadays store 8 bits per color. This is the case for a standard .JPG file, resulting in 256 intervals for every color. Some file formats like TIFF and camera RAW can hold between 8 and 16 bits per color, resulting in up to 65.536 intervals per color. You will understand that these 16 bits per color give a lot more “resolution” in color depth, and thus an extended dynamic range. Formats like .HDR and .EXR go further still: they can store 32 bits per color, giving billions of intervals per color and producing a true high dynamic range.

Per color or per pixel?

A little side note here about a common confusion, for example with a 24-bit BMP. The name might suggest a wide dynamic range, but this file still holds only 8 bits per color; the three Red, Green and Blue channels simply add up to 8 x 3 = 24 bits PER PIXEL. Bits-per-color and bits-per-pixel values are often used interchangeably, without pointing out which of the two is meant. It gets even more complicated when we include the alpha channels used in PNG or GIF, where the alpha (transparency) channel can take anywhere from 1 to 8 bits depending on the file type… but that will not be of any interest here, so for clarity I will stick to the bits-per-color value.
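The per-color versus per-pixel arithmetic can be sketched in a few lines of Python (the format table below is illustrative, not an exhaustive or authoritative list):

```python
# Sketch: levels per channel vs. total bits per pixel for a few common formats.
formats = {
    "JPG":       {"bits_per_channel": 8, "channels": 3},  # R, G, B
    "BMP":       {"bits_per_channel": 8, "channels": 3},  # "24-bit" = 3 x 8
    "PNG+alpha": {"bits_per_channel": 8, "channels": 4},  # R, G, B, A
}

for name, f in formats.items():
    levels = 2 ** f["bits_per_channel"]               # intervals per color
    bits_per_pixel = f["bits_per_channel"] * f["channels"]
    print(f"{name}: {levels} levels per color, {bits_per_pixel} bits per pixel")
```

Note how the "24-bit" BMP never exceeds 256 levels in any single channel; only the per-pixel total is 24.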

Displaying HDR

Most monitors, cameras and printers nowadays unfortunately cannot display or capture all this information at once. When viewing a true HDR file on your desktop screen, you will have an exposure slider which allows you to shift the exposure to view the darker or lighter parts of the image correctly. The rest of the image will clip to black or white, because the color information simply falls outside the monitor's range.

There are some display techniques on the market right now which can produce true HDR or EDR (Extended Dynamic Range) results. OLED is one candidate to become a standard in the near future due to its extended dynamic range. These displays are already incorporated in some smartphones; at monitor or even TV sizes they are still pretty expensive, but getting within an affordable range. BrightSide has developed a monitor with an extreme dynamic range, but it still costs around 50.000 dollars at the time of writing. Dolby is also prototyping an HDR display, and Sunnybrook is another manufacturer of HDR displays. The light output should be close to looking out of a window on a sunny day: you will have to squint when it hits pure white. When displaying pure black, it is truly black as night; you won't even see the faint glow of a normal screen. It would make a nice Christmas present for sure! The technique behind this is to replace the normal backlight of an LCD screen with an array of individually controllable ultra-bright LEDs. There is enough info on the net about this, so I won't dig in too deep here. Just one image to give an impression, as far as that is possible with an 8 bit image: both screens are fully black, with a full white area in the middle. You can see that the left monitor still has visible brightness even when fully black, while the right display is pitch-black around the overexposed white area. Guess which one is the HDR…

(full article by bit-tech.net here)

Another way to view an HDR image on a standard monitor is to compress the total of 32 captured bits back down to 8 bits again. This is not the same as an original 8 bit image, because all the information of the HDR image is kept visible; it is just pushed back to fit inside the 8 bit format through contrast reduction or local operators. This process is called Tone-Mapping. I will explain this a little further down on this page.
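The contrast-reduction idea can be sketched with a simple global tone mapping curve. This uses Reinhard's well-known L/(1+L) operator as one example out of the many methods that exist; the function names are illustrative:

```python
def tone_map_reinhard(luminance, exposure=1.0):
    # Reinhard's global operator: L/(1+L) compresses [0, inf) into [0, 1)
    l = luminance * exposure
    return l / (1.0 + l)

def to_8bit(v):
    # quantize a [0, 1) value back into the 0-255 range of an 8-bit channel
    return int(v * 255)

# Toy "HDR" luminances spanning a huge range, squeezed into displayable values:
hdr_samples = [0.01, 0.5, 1.0, 10.0, 1000.0]
print([to_8bit(tone_map_reinhard(l)) for l in hdr_samples])
```

Even a luminance of 1000 stays below pure white, so no part of the image clips: that is the whole point of tone mapping.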

Capturing HDR

Capturing this High Dynamic Range is done more or less in the same way as viewing it on a standard monitor: by shifting exposures. An 8 bit image -let's call it a JPG- can be taken the traditional way, with the exposure set so dark that there is no absolute white in the image, i.e. underexposed. Now we start shifting the exposure up to make the image brighter, and keep taking pictures at regular exposure intervals until we have an overexposed image containing no absolute blacks. When we put this set of exposures together, we can generate an HDR image with the appropriate software, because all the available light ranging from dark to bright has been captured, whereas a single image only holds a fragment of all the available information.
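The merging step that the software performs can be sketched as a weighted average per pixel. This is a heavily simplified, Debevec-style sketch assuming a linear camera response; the function name and the "hat" weighting are illustrative, not the exact algorithm any particular tool uses:

```python
def merge_brackets(pixels_by_exposure):
    """Merge one pixel's values from a bracketed set into a radiance estimate.

    pixels_by_exposure: list of (value_0_to_255, exposure_time_seconds).
    Assumes a linear camera response for simplicity.
    """
    def weight(v):
        # "hat" weight: trust mid-tones, distrust clipped darks and brights
        return min(v, 255 - v)

    num = sum(weight(v) * (v / t) for v, t in pixels_by_exposure)
    den = sum(weight(v) for v, t in pixels_by_exposure)
    return num / den  # relative scene radiance

# The same scene point shot at 1/4 s, 1 s and 4 s:
print(merge_brackets([(40, 0.25), (128, 1.0), (250, 4.0)]))
```

The nearly clipped value (250) gets almost no weight, so the well-exposed shots dominate the estimate.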


The Units

Now that we are talking about the color information that is available and captured, you might be wondering how the two relate: how much light info is available, and how much of it can we capture? Before we throw in some numbers to compare, I will briefly explain the units used to measure the available or captured light: 1- photographic exposure stops, also called the Exposure Value (or EV), and 2- the contrast ratio. Both indicate the dynamic range in their own way.

1- The good old exposure stops, or exposure value: EV has been used since the dawn of photography. There are two main settings that allow exposure shifting: aperture and exposure time (we will talk about the ISO value later). The aperture is the adjustable hole in the lens that the light has to pass through. By widening or closing this hole, we let more or less light reach the imaging sensor. The exposure time works the same way: more time means more light, and vice versa. An exposure stop simply means doubling or halving the amount of light, and this is true for the aperture as well as the exposure time. So when we adjust the exposure time from 1 second to 2 seconds, we have adjusted +1 stop, or +1 EV (Exposure Value). If the aperture is doubled in area, it has exactly the same effect as doubling the exposure time.
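The doubling arithmetic of stops can be expressed directly: the stop difference between two exposures is the base-2 logarithm of their light ratio (a small illustrative helper, not a standard API):

```python
import math

def stops_between(light_a, light_b):
    # one stop = a doubling of light: the stop difference is log2 of the ratio
    return math.log2(light_b / light_a)

# Doubling the exposure time from 1 s to 2 s doubles the light: +1 stop
print(stops_between(1.0, 2.0))

# Doubling the aperture *area* has exactly the same effect,
# and four times the light is +2 stops:
print(stops_between(1.0, 4.0))
```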

2- The contrast ratio is the number of detectable brightness values. This can easily be calculated by raising the number of possible binary states (the "0" and the "1") to the power of the number of bits in the color value. Simply put, if we have an 8 bit color, these 8 bits will generate 2^8 values, read as 2x2x2x2x2x2x2x2 (the value 2 multiplied by itself 8 times). The result is 256, which tells us that the contrast ratio is 256:1. This is true for a JPG image. An HDR image with a 32 bit color produces a dynamic range of 2^32 (the value 2 multiplied by itself 32 times), which results in 4.294.967.296, giving a dynamic range of 4.294.967.296:1. When you examine this calculation closely, you will see that with every bit added, the dynamic range doubles.
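In code, this calculation is a single exponentiation (an illustrative sketch):

```python
def contrast_ratio(bits):
    # each bit doubles the number of representable brightness values
    return 2 ** bits

print(contrast_ratio(8))    # 256 -> 256:1, a standard JPG channel
print(contrast_ratio(32))   # 4294967296 -> over 4 billion : 1

# adding one bit doubles the dynamic range:
assert contrast_ratio(9) == 2 * contrast_ratio(8)
```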


The Facts

Now we know the units, let's see some facts. On a bright day the exposure (or dynamic) range is somewhere between 15 and 20 stops. The difference between the faintest, barely recognizable starlight and the extreme brightness of the sun is about 24 stops, but those two will never be present at the same time of day. Compare this to the 6-8 stops of exposure range captured by a regular DSLR camera, and you can see why images often have under- and overexposed regions. This can be tackled by shooting a set of exposure-shifted images. So when we have, say, 15 stops of available light outside and want to make a true HDR image out of it, we subtract the 6-8 stops captured by the camera, leaving some 8 stops to shift over in order to capture all the available light. This shifting of exposures is known as bracketing. Most digital cameras, even the better compact ones, offer an option to shoot a bracketed set of differently exposed images, which can then be merged into an HDR image.
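The shot count implied by this subtraction can be sketched as a small hypothetical helper; the 2 EV spacing used here is a typical bracketing step on many cameras, not a fixed rule:

```python
import math

def brackets_needed(scene_stops, camera_stops, step_ev=2.0):
    # one shot covers camera_stops; each extra shot shifts step_ev stops
    leftover = max(scene_stops - camera_stops, 0)
    return 1 + math.ceil(leftover / step_ev)

# 15 stops of daylight, a camera capturing ~7 stops, shooting at 2 EV steps:
print(brackets_needed(15, 7))   # 5 shots
```

If the camera's range already covers the scene, a single shot suffices and no bracketing is needed.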

Why HDR?

So now we know how an HDR is made, and that it is nearly impossible to view correctly unless you have 50.000 dollars to spare for a monitor. So what is the use of these information-rich images? Well, I will reveal two uses for you: photography and application in 3D software. When an HDR image shows a golden sunset over a green field of grass with some nicely detailed clouds, we can “tone-map” it back to 8 bits. This means that we take the 32 bits and squeeze them back into a normally viewable image. There are many ways to do this. The easiest one is to reduce the contrast; another is to use local operators. A local operator is a tool that calculates the differences in brightness and adapts them locally for maximum contrast. Images that have undergone local operation are extremely vivid in both contrast and color (left image), while images tone-mapped with a contrast reduction look more realistic, closer to what the human eye sees (right image).

Another application for these HDR images lies in 3D software. To be used as a virtual environment, the HDR image should cover all possible angles visible from a single point: a so-called 360 degree image, equirectangular image (left example image), or 360 panorama. This panorama is wrapped around an imaginary sphere (right example image), so when viewed from the center of this sphere, it looks just like the environment where the 360 pano was taken. In this environment we can place our 3D model. When the image is used as light source and environment, the shiny surfaces of the model will reflect the HDR environment, and the light will be projected according to the color and luminance values captured in the HDR pano. This results in extremely realistic lighting and shadows, achieved in a matter of minutes with a few mouse clicks (bottom example image). Trying to reconstruct a similar lighting condition with the standard lights available in a 3D application would take much longer to set up and would need extensive tweaking to get everything in balance. You would have to create a bright light for the sun and an array of blue lights to fill in the shadows created by the sun, just like the real blue sky does. We would also need green lights under the floor when the environment has a grass floor, because sunlight bounces back up off the grass.


Free Sample Packs