Hi guys,
I'm currently doing a Pyro sim of a small/medium sized smoke explosion, using a division size of 0.02, which is already stressing the server storage a bit. Each frame is about 1 GB, and I am only saving vel and density.
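As a sanity check on that 1 GB figure, here's a back-of-envelope estimate of uncompressed per-frame cache size; the 8 m container size is an assumption for illustration only, not from the actual sim:

```python
# Rough per-frame cache size estimate for a dense pyro sim.
# Assumptions (not from the thread): a hypothetical 8m x 8m x 8m container,
# 32-bit float voxels, density (1 scalar) + vel (3 scalars) = 4 scalar fields.

def cache_size_bytes(container_size, division_size, scalar_fields, bytes_per_voxel=4):
    """Return estimated uncompressed bytes per frame for a dense volume."""
    voxels_per_axis = int(container_size / division_size)
    voxel_count = voxels_per_axis ** 3
    return voxel_count * scalar_fields * bytes_per_voxel

size = cache_size_bytes(container_size=8.0, division_size=0.02, scalar_fields=4)
print(f"{size / 2**30:.2f} GiB per frame")  # prints "0.95 GiB per frame"
```

With those assumed numbers, 400 voxels per axis works out to roughly the 1 GB per frame mentioned above; halving the division size would multiply this by eight.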
The issue I am having is lack of detail/definition on my final picture. It's a shame but I can't upload images of this example.
What method would you guys use to get a sharper image? Is up-resing an option nowadays? Can anything be done once the sim is already cached?
Any help would be appreciated!!
Pyro definition
- cgcris
- Member
- 67 posts
- Joined: May 2014
- Offline
- old_school
- Staff
- 2540 posts
- Joined: July 2005
- Offline
Using H15 here but same goes for H14.
Sharp volume checklist:
Mantra ROP:
- Stochastic Samples to 16 if volume is transparent
Object Containing Volumes:
> Shading > Volume Filter: Gaussian (get a nice smooth edge going into shading)
> Shading > Volume Filter Width: 1.2 (blur the density to sharpen later)
> Dicing > Shading Quality: 1.0 to 1.5 (dicing smaller than a pixel)
Pyro Shader assigned to Volume:
> Smoke Field > Volume
Then go right to the Pyro shader and remap the density field to your liking. See the two snapshots below for example settings in H15; the settings are the same in the H14 pyro shader.
Adjust the overall density to suit. I rendered somewhat transparent volumes; increasing the overall density will give you sharper-looking volumes.
I personally use Density just as a mask and would rather shade/render the temperature field instead.
Density just goes from 0 to 1 very quickly at the surface and is challenging to shade sharply. See the values I use below to etch a bit of the outermost density, then cut back.
With Temperature, you get a nice 0 to 5, 10, 20 or 30 or higher depending on the simulation. This gives you lots of wonderful detail to pull all these neat bands of density using the ramps.
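Conceptually, the ramp remap described above is just a piecewise-linear lookup over the raw field value. A minimal sketch (the control points below are invented for illustration, not Jeff's actual settings):

```python
# Sketch of what the pyro shader's field ramp does conceptually:
# remap a raw field value through user-defined control points,
# then use the result as the render density.
import bisect

def ramp(value, keys, values):
    """Piecewise-linear ramp lookup, clamping outside the key range."""
    if value <= keys[0]:
        return values[0]
    if value >= keys[-1]:
        return values[-1]
    i = bisect.bisect_right(keys, value)
    t = (value - keys[i - 1]) / (keys[i] - keys[i - 1])
    return values[i - 1] + t * (values[i] - values[i - 1])

# Temperature spans a wide range (0..30 here), so a ramp can carve
# distinct bands out of it; density saturates near 1 almost immediately.
keys   = [0.0, 2.0, 10.0, 30.0]
values = [0.0, 0.8, 0.2, 1.0]
print(ramp(5.0, keys, values))  # blends between the 0.8 and 0.2 control points
```

This is why the wide temperature range is easier to work with: the ramp has 30 units of input to place bands in, instead of density's narrow 0 to 1 band at the surface.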
If you start to over-sharpen and get aliasing on the edges, you can increase the number of primary Pixel Samples on the ROP, increase the Shading Quality to dice the volume finer (no higher than 2 please, as memory usage will increase dramatically), or increase the Volume Quality on the ROP to 0.5, 0.75 or as high as 1 to render true square voxels.
Hope this helps.
-jeff
There's at least one school like the old school!
Hi Jeff,
Thank you so much for the detailed explanation and invaluable information.
Going a little more into basic settings: I am guessing you would generally use PBR to render Pyro rather than micropolygon? Wondering if Stochastic Samples are mainly used for PBR or not.
Also, in regards to min and max rays and Pixel Samples, what is your guideline for those?
Upping Pixel Samples does seem to make a bit of a difference, but it's difficult to see what min/max are doing.
Thanks again,
Cristobal
PBR vs Raytracing vs Micropolygons when Rendering Volumes
If you want indirect light scattering in your volumes, use PBR. Most people want light scattering. In fact, this is why more VFX shots are being handed over to Lighting for single-pass renders that include hard surfaces and volumes together, to get that nice light bleed.
If you don't want indirect light scattering, then it is a crap shoot as to what method to choose: PBR, raytracing or micropolygon. In this scenario, lump PBR and raytracing together. Arguably if your volume is super-dense, the only scattering will be in the fringes and for that you can use a simple back light to highlight this region.
In many cases, when the settings are set up correctly, the PBR and raytracing engines will use less memory and render faster when you have indirect light. PBR will be much faster than raytracing (final gathering/global illumination) in all cases, so PBR is used exclusively when you want bounce light.
With direct lighting only, Micropolygon will consume less memory, as only the micro-voxels in the current bucket are loaded, their vertices pre-shaded, and then rendered via ray-casting.
And just what are the differences in choosing which render engine?
PBR, after calculating opacity (with Stochastic Transparency enabled), will fire rays into the micro-voxels and shade each ray hit. Only the micro-voxels are held in memory. This is key when rendering with PBR and indirect lighting, as the entire volume needs to be kept in memory; not pre-shading the micro-voxels really helps keep memory down.
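The stochastic transparency idea can be shown with a toy estimator: instead of compositing every semi-transparent sample along a ray, each ray is probabilistically terminated at each sample, and averaging many rays converges to the exact transmittance. The per-sample opacities below are made up for the demo:

```python
# Toy illustration of stochastic transparency: randomly absorb rays
# per-sample rather than accumulating opacity deterministically.
import random

def exact_transmittance(alphas):
    """Deterministic compositing: product of (1 - alpha) over all samples."""
    t = 1.0
    for a in alphas:
        t *= (1.0 - a)
    return t

def stochastic_transmittance(alphas, n_rays, rng):
    """Monte Carlo estimate: fraction of rays that survive every sample."""
    survived = 0
    for _ in range(n_rays):
        for a in alphas:
            if rng.random() < a:  # ray absorbed at this sample
                break
        else:
            survived += 1
    return survived / n_rays

alphas = [0.1, 0.3, 0.05, 0.2]  # invented per-sample opacities along one ray
rng = random.Random(7)
exact = exact_transmittance(alphas)
est = stochastic_transmittance(alphas, 100_000, rng)
print(exact, est)  # the estimate approaches the exact value
```

The noise in thin, transparent regions is exactly this Monte Carlo variance, which is why raising Stochastic Samples cleans up the salt-n-pepper look there.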
Micropolygon rendering is a good choice when there is no indirect light computed anywhere in the shot. In this scenario, only the geometry contained in each bucket is loaded in to memory and rendered.
The micropolygon engine, after calculating opacity, will then pre-shade all the micro-voxels. They can be considered small quads, and the four points of each quad pre-evaluate the shaders to build all the attributes on them: colour, etc.
Mantra then fires only primary rays into the pre-shaded micro-voxels; on each ray hit on a micro-voxel face, the shader values are bilinearly interpolated, and illuminance loops are run over all the lights in the light mask to shade the surface.
This means a lot more memory is consumed to hold the volume data, which is fine if you are just loading the volume data for each bucket in a non-indirect-light scenario.
With Micropolygon rendering, area lights are very expensive, so keep illumination to just point lights and, if really needed, environment lights (be sure to increase samples on the environment lights ONLY for micropolygon and raytrace rendering, NOT for PBR). With Micropolygon rendering of volumes it may be worth the effort to construct a light dome of instanced point lights that each projects a portion of an environment map. Do Google searches on OdForce and this forum for this older technique, which started the entire IBL (Image Based Lighting) approach with spherical HDR (High Dynamic Range) images.
There are techniques to pre-compute illumination in volumes to minimize the light calculation overhead at render time, but they require additional geometry to be created and cached on disk. A LOT of data, on top of the volumetric data already saved on disk. It was a viable method a few years ago, but with the advancements in PBR and 64-bit hardware busting past the 4 GB memory limit, not so much any more.
Note that with Micropolygon rendering, Stochastic Transparency is irrelevant therefore disabled.
One plus with Micropolygon rendering is that the results tend to be smoother as the micro-voxels are pre-shaded and rays interpolate the results across the microvoxel using a filter of your choice. Gaussian is inherently smooth.
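The bilinear lookup mentioned above is simple to sketch: a pre-shaded micro-voxel face holds four corner values, and a ray hit at parametric (u, v) blends them instead of re-running the shader. The corner values here are arbitrary stand-ins for pre-shaded colour/opacity:

```python
# Sketch of interpolating pre-shaded micro-voxel corner values at a ray hit.
def bilerp(c00, c10, c01, c11, u, v):
    """Bilinearly interpolate four corner values at (u, v) in [0,1]^2."""
    bottom = c00 * (1 - u) + c10 * u   # blend along the bottom edge
    top    = c01 * (1 - u) + c11 * u   # blend along the top edge
    return bottom * (1 - v) + top * v  # blend between the two edges

print(bilerp(0.0, 1.0, 0.0, 1.0, 0.25, 0.5))  # prints 0.25
```

This interpolation is also why micropolygon results look inherently smoother: every primary ray lands on a continuous blend of pre-computed values rather than an independent shading sample.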
Pixel Samples and Ray Variance Antialiasing
You asked about a guideline between min/max ray samples and primary Pixel Samples.
Pixel Samples, which I sometimes call Primary Samples, are the bundles of rays that are initially cast from the camera into the scene. These are the rays that “find” geometry to shade. The more rays you initially fire, the better chance you have at finding geometry and getting cleaner edges and finer detail.
If you have very fine, detailed geometry, you will need to fire more Pixel Samples as primary rays to resolve this detail. By detail I include fine detailed displacement or bump maps, texture maps with a lot of detail, very fine dense geometry, very detailed volumes with voxels smaller than a pixel (when Shading Quality is greater than 1 for example to resolve a lot of sub-pixel detail). You need these primary rays to find that geometry.
With volume data, you fire more primary ray Pixel Samples to resolve lots of detail. If your volume data has features that are a couple pixels or larger in size, you can leave Pixel Samples pretty low and rely on Ray Variance Antialiasing to reduce noise in the render due to insufficient lighting samples, whether directly or indirectly illuminated.
If you enable Ray Variance Antialiasing, you now have a strategy to reinforce the primary Pixel Samples and reduce noise due to lighting, either direct or indirect.
Each primary Pixel Sample ray hits a surface and then fires additional rays to find direct and indirect light sources each in turn until the noise threshold is met or it hits the max ray limit.
The min and max values drive and cap the number of rays that Mantra can fire to reduce noise until the noise threshold is met. H15 brings a simpler route: proportionately fire more or fewer rays per shading component while adjusting the noise threshold concurrently, to balance ray propagation and reduce render times by putting rays where they have the most effect on mitigating noise.
Yes, a volume lit by only a couple of lights will not see much benefit from increasing the max rays, as the lighting is simplistic. But if you are using an environment map with a lot of detail, then increasing the max rays will cause more rays to be cast into the HDR image to reduce noise caused by all that light.
To recap: Pixel Samples, as primary samples, find and shade geometry as well as compute direct lighting = find and render Geometry, and find Lights for illumination.
Ray Variance Antialiasing, using min and max rays, tries to reduce noise due to direct and indirect illumination = primarily Lighting.
This is why it is cheaper to use Ray Variance Antialiasing as a strategy to reduce noise in complex lighting scenes than firing more Pixel Samples, which also have to actually shade the geometry. Plus it uses less memory.
It also gives you a guideline as to when to fire more primary Pixel Samples (lots of volumetric detail causing aliasing issues) versus working with Ray Variance Antialiasing to reduce noise due to direct and indirect lighting.
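The min/max-ray behaviour described above can be sketched as a small adaptive loop: keep firing shading rays until a running noise estimate drops below the threshold or the max-ray cap is hit. The "light sample" here is just a noisy stand-in, and all numbers are invented for illustration:

```python
# Toy version of Ray Variance Antialiasing: adaptive sampling between
# a min and max ray count, driven by a noise threshold.
import random
import statistics

def sample_lighting(rng):
    # Stand-in for one direct/indirect light sample (true mean 0.5).
    return 0.5 + rng.uniform(-0.3, 0.3)

def adaptive_shade(min_rays, max_rays, noise_threshold, rng):
    """Fire at least min_rays, then add rays until noise is low or max is hit."""
    samples = [sample_lighting(rng) for _ in range(min_rays)]
    while len(samples) < max_rays:
        # Standard error of the mean serves as the noise estimate.
        err = statistics.stdev(samples) / len(samples) ** 0.5
        if err < noise_threshold:
            break
        samples.append(sample_lighting(rng))
    return sum(samples) / len(samples), len(samples)

rng = random.Random(1)
value, rays_fired = adaptive_shade(min_rays=4, max_rays=64,
                                   noise_threshold=0.02, rng=rng)
print(value, rays_fired)  # rays_fired lands somewhere between 4 and 64
```

The point of the sketch: smooth regions converge after a few rays and stop early, while noisy regions run up toward the cap, which is exactly why this is cheaper than uniformly raising Pixel Samples everywhere.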
One strategy I commonly use with volumes (and objects with high levels of detail) is to choose PBR, set the Diffuse Limit to 0, simplify the lighting to one or two area lights, crank the Stochastic Samples to 16 or higher to reduce the salt-n-pepper look in the transparent regions, then increase or decrease the primary Pixel Samples to resolve the volume geometry to my liking.
Then if I want light bleed through the volume, turn on all the lights, turn up Diffuse Limit to 2 and work with the Ray Variance Antialiasing parameters to reduce noise due to illumination by the scene.
-jeff
P.S.: Apologies for the long post, but you did ask very open-ended questions.
- Alejandro Echeverry
- Member
- 691 posts
- Joined: June 2006
- Offline
Thank you Jeff!! This should be part of the help system!
Feel The Knowledge, Kiss The Goat!!!
http://www.linkedin.com/in/alejandroecheverry [linkedin.com]
http://vimeo.com/lordpazuzu/videos [vimeo.com]