Position, normals and volumes…
Position from volumes is ambiguous. How do you expect a volume to generate a single P position hit? The darn volume is partially transparent and the ray can penetrate quite deeply. Whatever sorcery or manufactured magic you use to derive a P and then make it work in production is dubious at best. I have seen light shadow opacity maps used in such sorcery, but alas it is ultimately a hack and, in some cases, a suitable hack to pseudo-light volumes.
Is it the first P hit on the volume, or the last hit when the ray either exits the volume or reaches the opacity limit (0.995 by default, I believe)?
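To make the ambiguity concrete, here is a minimal Python sketch of a ray march accumulating opacity through a density field. The densities, step size and accumulation are made up for illustration (only the 0.995 default comes from above): the "first hit" depth and the "opacity limit" depth can sit far apart in the same volume.

```python
# Minimal ray-march sketch: where along the ray is "P" for a volume?
# Densities and step size here are invented for illustration.

def march(densities, step=0.1, opacity_limit=0.995):
    """Accumulate opacity through per-sample densities.
    Returns (depth of first non-empty sample, depth where the
    accumulated opacity crosses opacity_limit or the ray exits)."""
    first_hit = None
    alpha = 0.0
    depth = 0.0
    for d in densities:
        depth += step
        if d > 0.0 and first_hit is None:
            first_hit = depth
        # simple front-to-back accumulation of this sample's opacity
        alpha = alpha + (1.0 - alpha) * min(d * step, 1.0)
        if alpha >= opacity_limit:
            return first_hit, depth  # stopped at the opacity limit
    return first_hit, depth  # ray exited the volume

# a thin wisp in front of a dense core: the two candidate P depths differ
densities = [0.05] * 10 + [5.0] * 50
first, last = march(densities)
# first hit is near the front face; the opacity-limit hit is much deeper
```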
How can you fix this?
You don't. There is no answer. Hopefully you understand now.
This is where Deep Camera Maps come in. Once generated, you can composite the volume with other objects that also have a Deep Camera Map generated. Nuke has some basic tools that take advantage of Deep Camera Maps to do more than just the depth compositing that Houdini's tools can do.
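The idea behind deep compositing can be sketched like this (a toy Python model, not the actual Nuke or Houdini deep image API): each pixel stores a list of (depth, color, alpha) samples rather than one flattened value, so a semi-transparent volume and a hard surface can be merged correctly sample by sample along depth.

```python
# Toy model of deep compositing. Each pixel keeps (depth, color, alpha)
# samples instead of one flattened value. Illustrative only -- not the
# real Nuke/Houdini deep image API.

def deep_merge(samples_a, samples_b):
    """Interleave two pixels' deep sample lists by depth."""
    return sorted(samples_a + samples_b, key=lambda s: s[0])

def flatten(samples):
    """Composite depth-sorted samples front to back with the over operator."""
    color, alpha = 0.0, 0.0
    for _depth, c, a in samples:
        color += (1.0 - alpha) * c * a
        alpha += (1.0 - alpha) * a
    return color, alpha

# a soft volume (several low-alpha samples) with an opaque surface inside it
volume = [(1.0, 1.0, 0.25), (2.0, 1.0, 0.25), (3.0, 1.0, 0.25)]
surface = [(2.5, 0.5, 1.0)]  # opaque surface partway through the volume

color, alpha = flatten(deep_merge(volume, surface))
```

Because the merge happens per depth sample, the surface correctly receives the volume's partial cover in front of it and fully occludes the volume behind it, which a single flattened depth value cannot represent.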
Surface Normals on a Volume are as ambiguous as P. And then you throw out the Normal thing. Gotta love it.
Again you are manufacturing data on top of a volume to hack out lighting, in this case. You can use three lights, one red, one green and one blue, to shade the volume. Top, left and right work OK. Each light shades the volume in a single color pass. In comp, you have r, g and b representing the lighting contributions from the three lights respectively. Separating these channels in comp, you now have the light coverage from the three lights to play with. A time-honoured hack that works very well. That is why you see so many tests with non-intuitively lit volumes bearing the characteristic rgb rainbow colors.
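In comp terms, the trick boils down to this (a small numeric sketch with made-up coverage values and light colors, not actual Nuke nodes): each channel of the single render holds one light's grey-scale coverage, and relighting is just recoloring each channel and summing.

```python
# Sketch of the three-light RGB trick: the r, g and b channels of one
# render hold the coverage of the top, left and right lights respectively.
# Coverage values and light colors are invented for illustration.

def relight(pixel_rgb, light_colors):
    """pixel_rgb: per-light coverage packed into (r, g, b).
    light_colors: the color each light should actually contribute.
    Returns the relit pixel as (r, g, b)."""
    out = [0.0, 0.0, 0.0]
    for coverage, color in zip(pixel_rgb, light_colors):
        for i in range(3):
            out[i] += coverage * color[i]
    return tuple(out)

# coverage from the top/left/right lights for one pixel of the volume
pixel = (0.8, 0.3, 0.1)
# grade each light in comp: warm key, cool fill, faint rim
lights = [(1.0, 0.9, 0.7), (0.2, 0.3, 0.5), (0.1, 0.1, 0.1)]
relit = relight(pixel, lights)
```

The point of the hack is that the light colors can be changed freely in comp without re-rendering the volume.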
As with P, what normal? As the ray penetrates the volume, most lighting algorithms assume for shading purposes that the light scatters in an isotropic fashion. With more advanced volumetric shaders, you can evaluate the current gradient of the volume, the spot normal inside the volume, to factor in a bit of anisotropic lighting, and then it really gets complex.
So the normal as with P varies as the ray penetrates and traverses the volume.
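A sketch of the gradient idea (plain Python finite differences over a made-up density function; a real shader would sample the voxel field instead): the normalized negative gradient of density gives a "spot normal", and it changes at every point the ray visits.

```python
import math

# Finite-difference gradient of a density field as a per-sample "normal".
# The density function is an arbitrary smooth blob, purely for illustration.

def density(x, y, z):
    return math.exp(-(x * x + y * y + z * z))  # spherical falloff

def spot_normal(x, y, z, h=1e-4):
    """Central-difference gradient, negated and normalized, so it points
    out of the volume in the direction density falls off."""
    g = [
        (density(x + h, y, z) - density(x - h, y, z)) / (2 * h),
        (density(x, y + h, z) - density(x, y - h, z)) / (2 * h),
        (density(x, y, z + h) - density(x, y, z - h)) / (2 * h),
    ]
    length = math.sqrt(sum(c * c for c in g)) or 1.0
    return tuple(-c / length for c in g)

# the "normal" differs at each sample point along a ray through the blob
n1 = spot_normal(0.5, 0.0, 0.0)
n2 = spot_normal(0.3, 0.4, 0.0)
```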
A very complex problem that is currently still very expensive: the more physically correct the volume shader, the slower it gets. Ultimately, as computers get faster, you'll see more options as the shaders can get more complex.
The three light trick works well.
How to extract extra image planes? This is relatively straightforward for shaders coded with VOPs. Just put a Parameter VOP inside your Fog Shader, set it to export, and voila, a new image plane.
Oh I forgot, H11's pyro shader is coded…
You can hack the pyro shader, add a new parameter like:
    surface pyro(
        ...
        export float myparm = 0;
        ...
    )
Then, in the body of the function, write whatever part of the shader you want into that variable to export it. In the Output driver (ROP), add a new image plane and match the name and type of the export. Because you have access to the illuminance loop when you use Mantra in micropolygon or raytrace mode, you could write exports from inside the illuminance loop to get per-light coverage.