Quantization is critical to the final look of your rendered images. It is the last step in the rendering process, where Mantra takes the full 32 bit floating point values and crushes them down to your final image's bit depth. The various options allow you to control the quality of the output image in the same way that blur or sharpen filters do in Photoshop or in a compositing package. Quantization is therefore an important tool for controlling the final image character.
Once you realize that each and every image you render is being quantized, and that you actually have choices to make, you will be in a better position to control the final look of your images. How you quantize your image has drastic effects on the look and color space down the road, and blindly defaulting to Gaussian 2x2 or Catmull-Rom 2x2 limits your options.
There are three main controls you have during the quantization stage: output bit depth, gamma correction and dithering. Here I only look at the bit depth and the filter options you have in the quantization process.
Quick Definition of Quantization
Mantra quantizes all the 32 bit output values as the final step in generating your image, down to whatever bit depth you choose. If you are converting to 8 bit images, the amount of color information that is discarded is tremendous. Without a filter applied during this huge step down in values, there is great potential for all kinds of artifacts.
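To make the step down concrete, here is a minimal sketch of what quantizing to a lower bit depth does to a smooth range of values. This is purely illustrative of the general technique (scale, clamp, round); it is not Mantra's actual code, and the function name is my own.

```python
# Illustrative sketch of quantization: a float in [0, 1] is scaled,
# clamped and rounded to an integer code at the target bit depth.
# Not Mantra's implementation -- just the general idea.

def quantize(value, bits):
    """Map a float in [0, 1] to an integer code at the given bit depth."""
    levels = (1 << bits) - 1              # e.g. 255 for 8 bits
    clamped = min(max(value, 0.0), 1.0)   # values outside [0, 1] are crushed
    return int(round(clamped * levels))

# A smooth ramp of 1000 distinct float values...
ramp = [i / 999.0 for i in range(1000)]
# ...collapses to at most 256 distinct 8-bit codes:
codes = {quantize(v, 8) for v in ramp}
print(len(codes))   # 256 -- everything in between is discarded
```

That collapse of many nearby float values onto one integer code is exactly where banding artifacts come from, and why filtering and dithering before (or during) the step down matters.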
To appreciate the amount of information discarded, it's worth a quick look at the numbers involved. Let's look at what happens when you render to a final image with 8 bits per color channel. First, a 32 bit float dedicates 23 bits (the mantissa) to precision for values between 0 and 1; the remaining sign and exponent bits scale those values up and down. With that, 23 bits of precision per color channel (RGB) can represent pow(pow(2,23),3) = 590,295,810,358,705,651,712 distinct colors. 8 bits per color channel can only represent pow(pow(2,8),3) = 16,777,216. That is a difference of about 13 orders of magnitude!
This means that a huge amount of color information is being tossed away when rendering to 8 bits per channel. No wonder there are issues such as Mach banding and the limited ability to properly represent deep blacks and bright whites. This is why the entire film industry has moved seemingly overnight to 16 bits per channel for output images. Applying the same reasoning as above, a 16 bit half float has a 10 bit mantissa, so pow(pow(2,10),3) = 1,073,741,824 distinct colors. A huge amount of color information is still lost, but 16 bit retains 64 times as many distinct colors as 8 bit, and its exponent bits preserve far more dynamic range at the deep blacks and bright whites.
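The counts above can be verified directly. The mantissa widths (23 bits for 32 bit float, 10 bits for 16 bit half float) are the standard IEEE 754 precisions; as in the article, the per-channel counts ignore the extra range carried by the exponent bits, so treat this as a back-of-the-envelope comparison rather than an exact census.

```python
# Back-of-the-envelope distinct-color counts at various per-channel precisions.
# Mantissa widths follow IEEE 754; exponent range is deliberately ignored.

def distinct_colors(bits_per_channel, channels=3):
    """Number of representable RGB colors at a given per-channel precision."""
    return (2 ** bits_per_channel) ** channels

full_float = distinct_colors(23)   # 32-bit float mantissa
half_float = distinct_colors(10)   # 16-bit half float mantissa
eight_bit  = distinct_colors(8)    # classic 8-bit output

print(full_float)                  # 590,295,810,358,705,651,712
print(eight_bit)                   # 16,777,216
print(full_float // eight_bit)     # 2**45 -- about 13 orders of magnitude
print(half_float // eight_bit)     # 64 -- 16-bit keeps 64x the colors of 8-bit
```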
Quantization Filters And Their Effect
You need filters applied to your image to reduce or eliminate any aliasing issues in your final images. Applying filters at the quantization step gives us the ability to blur in full 32 bit space before the image is chopped down to whatever bit depth we choose. We have all the standard filters available to us in the Mantra ROP Output Driver. The scene I built to test the filters has a ground plane with a very high frequency checker pattern. This is good for checking how the various filters perform on high frequency detail. Note that the aliasing in the checker is due to the way the procedural texture is filtered in the shader. The teapot has lines and text that are good for seeing how the various filters perform on text and lines wrapped over curving surfaces. One other test you may want to try is to add lots of thin, long geometry to test the geometry anti-aliasing properties of the various filters.
The Quantization options for Mantra are found in the Specific tab of the ROP Output driver. For the render tests below, I am only adjusting the Pixel Filter. The default Quantization value is 16 bit float for all render tests. Each filter has an x and y sample width measured in pixels. The larger the sample area, the stronger the filter's effect, whether it blurs, sharpens or attempts to do both.
Effect Of Changing xy Filter Width
Starting with the default Gaussian 2x2 filter and then increasing the filter width, the final image is blurred more and more. The Gaussian filter is definitely one that has a blurring effect.
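The blur-with-width behavior follows from how pixel filtering works in general: each output pixel is a weighted average of subpixel samples, with weights taken from the kernel evaluated at each sample's distance from the pixel center. A wider filter pulls in samples from neighboring pixels, which is what softens the image. The sketch below illustrates that general technique; the function names and the sigma-to-width relationship are my own assumptions, not Mantra's internals.

```python
import math

# Sketch of pixel filtering: weight subpixel samples by a Gaussian falloff
# centered on the pixel. Widening the filter admits more distant samples,
# blurring the result. General technique only -- not Mantra's code.

def gaussian_weight(dx, dy, width):
    """Unnormalized Gaussian falloff over a filter of the given width in pixels."""
    sigma = width / 3.0   # assumed falloff; real renderers pick their own
    return math.exp(-(dx * dx + dy * dy) / (2.0 * sigma * sigma))

def filter_pixel(samples, width):
    """samples: list of (dx, dy, value), offsets measured from the pixel center."""
    total_w = total_v = 0.0
    for dx, dy, value in samples:
        if abs(dx) <= width / 2 and abs(dy) <= width / 2:   # inside the support
            w = gaussian_weight(dx, dy, width)
            total_w += w
            total_v += w * value
    return total_v / total_w if total_w else 0.0

# One bright sample at the pixel center, dark samples toward the edges:
samples = [(0.0, 0.0, 1.0), (0.8, 0.0, 0.0), (-0.8, 0.0, 0.0)]
print(filter_pixel(samples, 1.0))   # narrow filter: only the center survives
print(filter_pixel(samples, 2.0))   # wider filter mixes in the dark neighbors
```

Each step up in xy width admits more neighboring samples and weights them more evenly, which is exactly the progressive softening seen in the renders.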
Various Filters Compared
Each filter imparts its own look on the final image. I roughly sorted the images from sharpening filters to blurring filters. You may like the filters that produce a sharp, crisp output, but in motion these filters can be prone to buzzing or sparkling when you play a sequence of rendered images. The box filter somehow always comes up with the least impressive results...
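The sharpening-versus-blurring split comes down to the kernel shapes. Catmull-Rom dips negative away from its center, which crispens edges but is also the source of the buzzing in motion; Gaussian is positive everywhere, so it can only blur; Box weights everything inside its support equally. Below are the standard 1D kernel definitions so you can see the negative lobe directly; this is general filter math, not Mantra's implementation.

```python
# Standard 1D reconstruction kernels. Catmull-Rom's negative lobe between
# |x| = 1 and |x| = 2 is what gives it its sharpening (and buzzing) character.

def catmull_rom(x):
    x = abs(x)
    if x < 1.0:
        return 1.5 * x**3 - 2.5 * x**2 + 1.0
    if x < 2.0:
        return -0.5 * x**3 + 2.5 * x**2 - 4.0 * x + 2.0
    return 0.0

def box(x):
    """Equal weight inside the support, nothing outside -- no shaping at all."""
    return 1.0 if abs(x) <= 0.5 else 0.0

print(catmull_rom(0.0))   # 1.0 -- full weight at the center
print(catmull_rom(1.5))   # -0.0625 -- the negative lobe that sharpens edges
```

The box kernel's flat, shapeless weighting is also a plausible reason it keeps producing the least impressive results: it neither preserves edges nor smooths gracefully.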
No Filtering Whatsoever!
It's nice to see what the raw image looks like without any filtering. Note that the image is extremely jaggy. It is critical to apply the right filters to any layers destined for final imagery. The only time you would skip anti-aliasing filtering is when rendering P position, N normals, v velocity or other shading attributes that you don't want to anti-alias.
Filtering With Motion Blur
The first two images compare the results of using a Catmull-Rom and a Gaussian filter with low xy samples on the motion blurred teapot. The difference is stunning. Now you know why the default option is Gaussian 2x2. Download the zipped images below and compare the other xy sample sizes between Catmull-Rom and Gaussian. I'll leave it to you to try out all the other filter options with similarly dramatic results! The filters really show their character on motion blurred objects at large xy filter sizes.
The xy sample sizes are then increased to see the smoothing effect on the rendered image using the Gaussian filter.
Favorites: Catmull-Rom First Then Gaussian For Motion Blur
I personally like the Catmull-Rom filter on objects that have little to no motion blur. It has a clear and crisp way of filtering the image and it does a decent job across a sequence of images: it generally doesn't flash too much with high frequency content and minimal motion blur. For objects that have a lot of motion blur, I turn to the Gaussian filter and increase the xy samples to taste as the motion blur increases. In practice, to apply different filters to different moving objects, they should be rendered out as layers and comped on top of the plate. If the entire plate has motion blur (camera transforms), then you can get away with rendering the entire plate with high xy samples.
These are my starting preferences. You should try out the filters on your own scenes with varying xy widths. Depending on the shaders and the geometry, you may find one of the filters does a better job than the others. You need to test them in motion as well, especially if you choose Sinc or Box.
Here are the files I used to do the tests above including the actual 16 bit tiff files. The conversion to jpeg images above introduced some slight degradation.