Passes for volumes

Member
230 posts
Joined: Oct. 2009
Hello,

Since last week I have been working with volumes inside Houdini (pyro smoke), and I want to set up a production render with passes. The only image planes that work are the beauty pass (RGB) and alpha.

Normals (N), position (P) and depth (Pz) also seem to work, but they give very pixelated results. My first question is: how can I fix this?

My second question is: how can I get more passes? What are the direct/indirect volume passes? They give no results.

I found an alternative on the forum for creating a normal pass for volumes http://forums.odforce.net/index.php?/topic/14661-normal-passes-on-volume-renders/page__hl__passes%20for%20volumes%20smoke__fromsearch__1 [forums.odforce.net] but with a complex mesh it does not look right. Also, I can't include it in the pyro shader as an extra pass; is there any way to go inside the pyro shader?

Finally, last week Double Negative gave a presentation at my university and showed us a series of extra passes they had built for volumes, which were very useful for compositing. Is there anything like that for Houdini?

Thanks in advance for any info.

Attachments:
volume_customN.jpg (72.3 KB)
volume_N.jpg (30.8 KB)
volume_depth.jpg (23.0 KB)
volume_position.jpg (15.7 KB)
volume_beauty.jpg (40.0 KB)

Staff
2540 posts
Joined: July 2005
Position, normals and volumes…

Position from volumes is ambiguous.
How do you expect a volume to generate a single P position hit? The darn volume is partially transparent and the ray can penetrate quite deeply. Whatever sorcery or manufactured magic you use to derive a P and then make it work in production is dubious at best. I have seen light shadow opacity maps used to help in such sorcery, but alas it is ultimately a hack, and in some cases a suitable hack to pseudo-light volumes.

Is it the first P hit on the volume, or the last hit when the ray either exits the volume or reaches the opacity limit (0.995 by default, I believe)?
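
To make the ambiguity concrete, here is a minimal VEX-flavoured sketch of how a single "hit" position gets manufactured: march the ray, accumulate opacity, and stop at an arbitrary threshold. This is hypothetical code, not anything mantra does internally, and sample_density() is a made-up stand-in for a field lookup; the point is that a threshold of 0.995 gives you one P and 0.5 gives you another.

// Hypothetical sketch only: derive a single P by marching until an
// arbitrary opacity threshold is crossed.
vector volume_hit(vector orig; vector dir; float threshold; float step)
{
    float opacity = 0;
    vector pos = orig;
    for (int i = 0; i < 1000; i++)
    {
        pos += dir * step;
        opacity += sample_density(pos) * step;  // made-up field lookup
        if (opacity >= threshold)
            break;  // change the threshold and you change P
    }
    return pos;
}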

How can you fix this?
You don't. There is no answer. Hopefully you understand now.
This is where Deep Camera Maps come in. Once generated, you can composite the volume with other objects that also have a Deep Camera Map generated. Nuke has some basic tools that take advantage of Deep Camera Maps to do more than just the depth compositing that Houdini's tools can do.

Surface Normals on a Volume are as ambiguous as P
And then you throw out the Normal thing. Gotta love it.
Again you are manufacturing data on top of a volume, in this case to hack out lighting. You can use three lights (one red, one green, one blue) to shade the volume; top, left and right placements work OK. Each light shades the volume into a single colour channel, so in comp you have r, g and b representing the lighting contributions of the three lights respectively. Separating these channels in comp, you now have the light coverage from the three lights to play with. A time-honoured hack that works very well. That is why you see so many tests with non-intuitively lit volumes showing the characteristic RGB rainbow colours.

As with P, which normal? As the ray penetrates the volume for shading purposes, most lighting algorithms assume that the light scatters in an isotropic fashion. With more advanced volumetric shaders, you can evaluate the current gradient of the volume (the spot normal inside the volume) to factor in a bit of anisotropic lighting, and then it really gets complex.
So the normal, as with P, varies as the ray penetrates and traverses the volume.
It is a very complex problem that is currently still very expensive: the more physically correct the volume shader, the slower it gets. Ultimately, as computers get faster, you'll see more options as the shaders can get more complex.
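
If you do want the gradient-as-normal hack, a minimal SOP-level sketch is a Volume Wrangle like the one below. It assumes a smoke object with a "density" volume and a pre-created vector volume named "gradN" for the wrangle to write into; both the volume name and the setup are my own, not part of pyro.

// Volume Wrangle bound to the density volume (input 0).
// volumegradient() returns the spatial gradient of the named field
// at the current voxel position; normalized, it acts as a crude
// pseudo-normal for the smoke.
vector g = volumegradient(0, "density", @P);
v@gradN = length(g) > 0 ? normalize(g) : {0, 0, 0};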

The three light trick works well.

How to extract extra image planes?
This is relatively straightforward for shaders coded with VOPs. Just put a Parameter VOP inside your fog shader, set it to export, and voila: a new image plane.

Oh I forgot, H11's pyro shader is coded…

You can hack the pyro shader, add a new parameter like:

surface pyro(
    // ... existing pyro parameters ...
    export float myparm = 0;
    // ...
)

Then, in the body of the function, write any part of the shader into that variable to export it. In the output ROP driver, add a new image plane whose name and type match the export. Because you have access to the illuminance loop when you use Mantra in micropolygon or raytrace mode, you could write exports from inside the illuminance loop to get per-light coverage, as sketched below.
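
A deliberately minimal sketch of that idea, with a made-up export name (light_contrib) that you would match with an image plane of the same name and type on the output driver:

// Minimal sketch only; the real pyro shader signature and body are
// much larger. light_contrib is a made-up export name.
surface pyro_hacked(export vector light_contrib = 0;)
{
    illuminance (P)
    {
        // Cl is the colour arriving from the current light in the
        // loop; summing it gives a total light-coverage pass. Write
        // to separate exports instead to split it per light.
        light_contrib += Cl;
    }
    // ... the rest of the shader body ...
}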
There's at least one school like the old school!
Member
230 posts
Joined: Oct. 2009
Hi Jeff,

thanks for the reply, it was very enlightening, but I have one question regarding deep camera maps. I did two test renders: the first saved the deep camera map in the default .rat format, and the second as .exr (I am not sure whether Houdini 11.1 supports EXR 2.0). Both of them contain the information you would expect (C, Pz) when I display them in MPlay, but neither can be imported into Nuke: .rat is not supported at all, and the .exr gives an error too.

You mentioned using deep camera maps with Nuke as a solution, so I guess there is a workflow for that (on Houdini's side).

I am using Houdini 11.1 and Nuke 6.3

cheers
Member
606 posts
Joined: May 2007
I think everybody is still eagerly waiting for OpenEXR 2.0 to be finalized.

Another thread: http://forums.odforce.net/index.php?/topic/14281-nuke-deep-compositing/ [forums.odforce.net]
Member
230 posts
Joined: Oct. 2009
eetu
I think everybody is still eagerly waiting for OpenEXR 2.0 to be finalized.

Another thread: http://forums.odforce.net/index.php?/topic/14281-nuke-deep-compositing/ [forums.odforce.net]

I was under that impression, but then I thought that maybe there is a workaround, because Jeff mentioned deep compositing as a fix to my current problem. Maybe I misunderstood.
Staff
2540 posts
Joined: July 2005
Deep Compositing and Deep Camera Maps are the same thing in practice.

Btw they can get very large on disk. Easily over 1 GB for normal scenes with lots of transparency to capture.

If the OpenEXR team can get their ducks in a row, then in theory Houdini “could” write Deep Camera Map info to exr files in version 2.0.

Wasn't OpenEXR 2.0 supposed to be released already? And where are those ducks?
There's at least one school like the old school!
Member
691 posts
Joined: June 2006
Hi Jeff!!

I notice that the conversion of Deep Camera Maps is single-threaded; will this change in H12?

Thanks.
Feel The Knowledge, Kiss The Goat!!!
http://www.linkedin.com/in/alejandroecheverry [linkedin.com]
http://vimeo.com/lordpazuzu/videos [vimeo.com]
Member
12 posts
Joined: Aug. 2012
This seems to work.
I am new to Houdini; do you think it's OK?
It's Houdini 12.

Attachments:
b.jpg (72.4 KB)
a.jpg (63.8 KB)
pyro_normal_pass_PengZhang.hipnc (1.1 MB)

Member
24 posts
Joined: Feb. 2010
quack! quack!
Member
120 posts
Joined: Feb. 2008
Well, it looks like Nuke 7 has OpenEXR 2.0 support for deep data… maybe it's time for Houdini to support it now?
Member
1104 posts
Joined: Aug. 2008
Sorry for bumping an old thread, but pengzhang's suggestion looks like world-space normals; if so, how do I convert them to tangent space for the camera?
/M

Personal Houdini test videos, http://vimeo.com/magnusl3d/ [vimeo.com]
Member
19 posts
Joined: Sept. 2015
Heya,

Yeah, I'm stuck on that issue as well. Otherwise, I guess you could rotate your simulation to match the camera view? I… dunno.