Thursday, April 9, 2015

Understanding Linear Workflow and Gamma

by Johan B. aka Seazo (www.seazo.no)


Why use linear workflow?

Many 3D artists find this concept quite confusing, so it's easy to assume it doesn't matter much. Believe me, it does.

With a linear workflow, the calculation of light and color is correct through the whole process, from render setup to post. And it needs to be, if you're aiming for a photo-realistic look in your renders.

Linear workflow is a term that describes how gamma is handled. To understand it, you need to know what gamma is, why it exists and how to take control over it. I'll get back to that. Let's look at some examples first.

[Image: living room rendered without gamma correction]

Here’s an example of an image rendered without any gamma correction. Notice how the light doesn’t bounce around and fill the room, even though the scene is rendered with vRay GI.

[Image: the same render with gamma adjusted in post]

It is possible to adjust the gamma in post after rendering, so that the light fills the room properly. The problem is that this results in a washed-out look, and the color bleeding will also be calculated wrong. The colored balls on the floor are clean shaders from max, so their colors appear correct. The colors in the textures on the floor and in the painting on the wall, however, do not. Tweaking saturation and contrast is not a good fix either.

[Image: the render with a correct linear workflow setup]

With the correct setup (linear workflow), both the colors and the light end up correct.

There are also other advantages to using a linear workflow, such as 32 bits per channel of data. Normally a picture (like a JPEG) only has 8 bits per channel (R, G and B), which is fine for the final output, but it doesn’t contain enough information to be very useful in post. Play the movie again, and notice how the 32-bit image contains information in the overbright areas. You can see how the reflections on the floor are replaced with new pixel information as the exposure decreases. The 8-bit image, however, doesn’t “know what hides” under the white pixels, so it just turns grey. Also, if you are doing DOF or glare/bloom in post, you will notice how a 32-bit image increases the quality of these effects.
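
To make that concrete, here is a tiny Python sketch (my own illustration with made-up values, not part of the original scene): a highlight stored as floating-point data survives an exposure drop in post, while the clipped 8-bit version just turns grey.

```python
# Illustration: 32-bit float vs 8-bit data under an exposure change in post.
# (Hypothetical values, chosen only to show the principle.)

linear_reflection = 5.0                      # overbright highlight, as a float render stores it
clipped_8bit = min(linear_reflection, 1.0)   # an 8-bit image clips everything above 1.0 (255)

exposure = 0.25                              # pull the exposure down two stops in post

print("32-bit result:", linear_reflection * exposure)  # 1.25 -> still overbright, detail survives
print("8-bit  result:", clipped_8bit * exposure)       # 0.25 -> flat grey, detail is gone
```
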
Now, before we go on to the term gamma, you need to know some facts about human perception of light. Our senses do not react in a linear way. In other words: if you hold 1/2 kg in your hand and add another 1/2 kg, you can easily tell that the weight has doubled. But if you hold a stone of 50 kilograms and add another 1/2 kg, you will barely be able to feel the difference.

The same pattern appears in sound and vision.

[Image: light bulbs illustrating 50w vs 51w and 150w vs 151w]


As you can see, we can easily distinguish between a 50w light bulb and a 51w light bulb. We cannot distinguish the same difference between a 150w bulb and a 151w bulb, even though the increase is the same.
Here is a curve that illustrates how humans react to light intensities.
[Image: curve of human response to light intensity]

Because we can’t distinguish differences in high light values very well, it would be a waste of bit data to encode an image with the same data density over the whole spectrum. (Encoding is, for example, saving the image as a JPEG.) In other words, we don’t need the same data density in the bright areas as we do in the dark areas. In fact, if you were to store a linearly encoded image, you would need about 14 bits per channel to achieve the same quality as a typical JPEG image, which only has 8 bits per channel. On top of that, you would also store more information in the bright areas than the human eye is able to distinguish – a waste of bits.

Yes, that means a JPEG is encoded non-linearly. In fact, JPEGs, and almost all 8-bit images, are encoded with the same curve as the one that illustrates human perception of light. This curve is what’s called gamma 2.2.
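
If you want to see the curve in numbers, here is a rough Python sketch. It uses a plain 2.2 power function; the real sRGB curve that most 8-bit images use is slightly more complicated, but gamma 2.2 is a close approximation.

```python
# Approximate gamma 2.2 encode/decode (a simplification of the sRGB curve).

def encode_gamma(linear, gamma=2.2):
    """Linear light (0..1) -> gamma-encoded value (0..1), as stored in an 8-bit image."""
    return linear ** (1.0 / gamma)

def decode_gamma(encoded, gamma=2.2):
    """Gamma-encoded value (0..1) -> linear light (0..1), as a display or renderer needs it."""
    return encoded ** gamma

# A dark grey in linear light ends up much higher on the encoded scale:
print(round(encode_gamma(0.2) * 255))   # ~123 of 255
print(round(decode_gamma(0.5), 3))      # 0.5 encoded is only ~0.218 in linear light
```
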


[Image: the 3ds max color picker with values from 0 to 255]
Have you ever noticed how the color picker values in max range from 0 to 255? These are called tristimulus values.

Here is how an 8 bit image is encoded:

[Image: the gamma encoding curve with tristimulus values marked along it]

Notice how the tristimulus values get dense in the dark regions and spread out in the light areas. This gives more information in the dark areas, so the encoding takes advantage of the human perception of light by concentrating the data where we can actually see the differences.
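
To put numbers on that density, here is a small sketch (again approximating the encoding with a plain 2.2 power curve): one tristimulus step near black covers a far smaller slice of linear light than one step near white.

```python
# How much linear light one tristimulus step covers, near black vs near white.

def decode_gamma(code, gamma=2.2):
    """8-bit tristimulus value (0..255) -> linear light (0..1)."""
    return (code / 255.0) ** gamma

dark_step = decode_gamma(21) - decode_gamma(20)      # one step in the shadows
bright_step = decode_gamma(251) - decode_gamma(250)  # one step in the highlights

print(f"step 20->21:   {dark_step:.6f} of full brightness")
print(f"step 250->251: {bright_step:.6f} of full brightness")
# The highlight step is roughly twenty times larger, so the codes are packed
# tightly in the darks, where our eyes can tell values apart.
```
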

Now, after this encoding is applied, what actually happens to the image (under the hood) is that the whole image is bleached out. Like this:


[Image: the bleached-out, gamma-encoded version of the image]

Before you get confused and ask “why don’t all 8-bit images look bleached out then?”, you need to know what happens after this encoding process.

In the old days, when we were all using CRT monitors, we were lucky with how the power input translated to pixel brightness output. This power law acted almost exactly as the opposite of the human perception of light (which we use when encoding an image).

[Image: CRT voltage-to-brightness response curve]

Notice how the brightness value climbs slowly as the voltage increases, before it suddenly gets enough power to shoot up. As you can see, this curve looks like the opposite of the curve that illustrates human perception of light, which is used to encode images.

With the gamma encoding and the CRT power-law function in mind, the overall result will look like this:

[Image: the gamma encoding curve and the CRT decoding curve combined]



Flat screens like we use today don’t have the same response to input voltage as the old CRTs did. But to achieve the same result, today’s screens have a pre-programmed gamma curve, so that image data is displayed correctly. In other words, the principle is just the same as before.
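
The encode/decode round trip is easy to verify numerically. A sketch, assuming an ideal 2.2 curve on both sides:

```python
# The camera/file side encodes with 1/2.2, the display decodes with 2.2.
# Chained together, the two curves cancel out and the original linear light comes back.

def encode(linear, gamma=2.2):
    return linear ** (1.0 / gamma)   # what gets stored in the 8-bit file

def display(encoded, gamma=2.2):
    return encoded ** gamma          # what the CRT/LCD turns it back into

for linear_light in (0.05, 0.25, 0.5, 0.8):
    shown = display(encode(linear_light))
    print(f"scene {linear_light:.2f} -> screen {shown:.2f}")   # identical, within rounding
```
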

So, what does all this have to do with your renderings?

Well, it’s quite simple. When doing a photo-realistic render in 3ds max, it is critical that all image data is treated as linear light. Linear light is easier to compute, and there is no such thing as non-linear light in the real world. That means that when your renderer deals with colors, lights and textures, everything has to be in, or converted to, linear space. It’s called linear because the gamma curve is no longer a curve but a straight line: gamma 1.0.

[Image: the gamma 1.0 (linear) curve]

Let me explain further. If the calculations are done outside a linear color space, the ratio between the tristimulus values and the actual output brightness does not match.

[Image: tristimulus value vs. output brightness under gamma 2.2]

As illustrated above, half of the maximum tristimulus value (50% of 255 ~ 128) doesn’t correspond to 50% of the output brightness, but only about 22%. This is where the mathematics ends up wrong when light is calculated in a non-linear space.
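
That 22% figure falls straight out of the 2.2 power curve:

```python
# Why "50% of 255" is only ~22% output brightness under a 2.2 gamma curve.
tristimulus = 128
brightness = (tristimulus / 255) ** 2.2
print(f"{brightness:.1%}")   # ~22.0%

# Adding two such "half" lights in gamma space would claim 100% brightness,
# while physically they only add up to about 44% - this is the mismatch
# that breaks light calculations in a non-linear space.
```
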

If the calculations are done in linear space though, the ratio matches, as illustrated below:

[Image: tristimulus value vs. output brightness in linear space]

So, now that you probably have a better understanding of how gamma works, it’s time to look at how all of this should be handled throughout the whole workflow.

Displaying a linear color space directly on your screen results in a washed-out look. So the trick is to let everything be handled in linear space “under the hood”, while you at the same time monitor your renders and passes with a gamma 2.2 correction. In other words, everything is calculated with gamma 1.0, but you view it in gamma 2.2.
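
Conceptually, it boils down to this little sketch (made-up light values, not actual renderer code):

```python
# Work in linear light, view through a 2.2 curve.

key_light = 0.30       # linear contribution from one light hitting a surface
fill_light = 0.15      # linear contribution from another

linear_pixel = key_light + fill_light          # all math happens in gamma 1.0
viewed_pixel = linear_pixel ** (1.0 / 2.2)     # only the *display* applies gamma 2.2

print(f"stored/rendered value: {linear_pixel:.3f}")  # 0.450 - stays linear in the file
print(f"value sent to monitor: {viewed_pixel:.3f}")  # 0.696 - the gamma 2.2 corrected view
```
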

Let’s take a look at how 3ds max deals with this. First of all, you need to enable Gamma/LUT Correction.

[Image: the Gamma and LUT Preferences dialog]
Customize/Preferences/Gamma and LUT is the first place to look.

Check both boxes under Materials and Colors. You will notice that the material editor and the color picker now appear washed out (sometimes the material editor needs to be refreshed), and it will feel a bit odd. But don’t worry, you get used to it. This makes sure that the colors you pick will render correctly.

The gamma parameter to the left (2.2) is the display gamma. This is the viewport, render and display gamma you monitor with (it does not affect what’s going on under the hood).

Input gamma: this tells max that the textures it loads are gamma corrected with 2.2. All your textures (in most cases) are corrected with 2.2, except HDRI textures. This will be explained later.

Output gamma is how max saves the render. You want this to be 1.0, in order to keep the linear data intact when it’s used in your compositing software. If you don’t need to keep the linear workflow after rendering, say you are saving a JPEG picture, you need to set the output gamma to 2.2.
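
As a small numeric sketch of that choice (a hypothetical pixel value, not a 3ds max script):

```python
# Output gamma: keep linear data for compositing, or bake in 2.2 for a final 8-bit file.

rendered_linear = 0.45                      # linear value straight out of the renderer

# Output gamma 1.0 -> EXR/32-bit: store the linear value untouched for compositing.
exr_value = rendered_linear

# Output gamma 2.2 -> JPEG/8-bit: bake the viewing curve in, then quantize to 0-255.
jpeg_value = round((rendered_linear ** (1.0 / 2.2)) * 255)

print("EXR stores :", exr_value)    # 0.45  (still linear)
print("JPEG stores:", jpeg_value)   # 177   (already gamma corrected)
```
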
Click the image below for a detailed description of how this works. It will probably feel a bit chaotic at first, but follow along in the numbered order, and you will get a better understanding of what’s going on.
[Image: chart of how gamma flows through the 3ds max pipeline]
If you are using the max framebuffer, the system gamma adjusts the image for you, so you can monitor how it will look in the end. This does not mean that the image is saved this way. If you are using the vRay framebuffer, you need to enable the sRGB button to see how the image will end up.

Note: if you are rendering with mental ray, you’ll need to tell it to render with 32-bit information. Mental ray is set to render with 16 bits by default.
Now, make yourself a scene and try out this method. Make sure to save your images with 32 bits per channel. I recommend using EXR files for that. Note that Photoshop doesn’t deal very well with this, but most compositing software does. I will use After Effects in this case.
[Image: After Effects project settings at 32 bpc]

Make sure to set your project to 32 bpc, so that After Effects gets all the information out of the images.

When you load a 32-bit or EXR image into After Effects, it assumes the image is in linear color space and treats it accordingly. If you have done everything correctly, the image will appear fine in AE. Even though the image is not gamma corrected yet, AE shows you how it will end up when it’s done. When you export the image or a movie, AE applies the right gamma correction on save, as long as nothing else is set.

If you are using separate passes, like a reflection pass, you should now use the blending mode Add. As long as the project is in a linear working space, the math is correct. The blending mode Screen is a “fake” mode that tries to simulate the correct math for non-linear images.
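
A quick numeric sketch shows why (made-up pass values): in linear space, light contributions simply add, while Screen, 1 - (1 - a)(1 - b), is an approximation invented because a plain Add blows out gamma-encoded images.

```python
# Add vs Screen on a beauty pass + reflection pass (hypothetical values).

beauty = 0.30        # linear light from the diffuse/GI pass
reflection = 0.20    # linear light from the reflection pass

linear_add = beauty + reflection                    # physically correct: light adds up
screen = 1 - (1 - beauty) * (1 - reflection)        # Screen: a workaround for gamma-space comps

print(f"Add in linear space: {linear_add:.2f}")     # 0.50 - the real combined light
print(f"Screen blend       : {screen:.2f}")         # 0.44 - close, but not the true sum
```
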

Now, that’s about it. Unless you are using vRay, that is. There is still a problem with this setup in vRay. vRay is very adaptive when rendering, which means it doesn’t spend many samples in dark areas. This system is the vRay DMC sampler (deterministic Monte Carlo). The problem is that when vRay tries to figure out how bright an area is, it doesn’t get the correct values, because under the hood it is judging brightness on the raw linear image rather than the gamma corrected one you will actually see.
[Image: noisy render caused by poor adaptive sampling]

Here is an example of what happens when vRay doesn’t get the right values to work with. Notice how noisy some areas look, even though they are bright enough that they should have received better sampling.


[Image: the image vRay bases its sampling decisions on]
That is because vRay is looking at this image when determining how bright a pixel is.
In order to fix this, a few settings need to be adjusted.
First, you need to tell vRay that it is dealing with a linear working space. vRay is a bit old-fashioned in its gamma settings, so this has to be done by setting the following gamma parameter to 2.2, and checking the “Don’t affect colors (adaptation only)” box.
[Image: vRay color mapping settings]
Notice that I haven’t checked the Linear workflow box. That is an old method, so don’t mind it.
Now the vRay DMC sampler gets the right values to work with.

Here you can see the results before and after:
[Image: sampling comparison before and after the fix]
And there you go!

Well, gamma issues and linear workflow have always been a big discussion out there, and I’m open to being corrected if I’m wrong on any point. This is just the information I’ve managed to find by studying and doing research myself.

I hope this explained what you were looking for, and thank you for stopping by.
Need further help? Got C&C’s? Just contact me. 

Credits to Charles Poynton, and Martin Breidt, for good information and resources.
