Deep camera file

User Avatar
Member
789 posts
Joined: April 2020
Offline
Hello all,

Is it possible in Mantra to save out a “deep” camera depth map (like a deep shadow map for a light, but from a camera)? When rendering volumes I would like to store this information in a file I can use later.

Any help is appreciated,

Koen
User Avatar
Member
696 posts
Joined: March 2006
Offline
if you mean z-depth, yes, you can set this as an image plane in Mantra (provided you are not using PBR)

go to the mantra node
properties
output
scroll down and add an extra image plane
set the vex variable to Pz

and now when you render, you should get the extra image plane called Pz, provided that you save to either .pic or .exr.

Or, if you save your image to disk instead of to ip, you can set this image plane to render as a separate file.
Stephen Tucker
VFXTD
User Avatar
Member
789 posts
Joined: April 2020
Offline
I would actually like a deep z-depth file. Just like a deep shadow map, it would store the opacity at several depth levels. A plain z-depth image stores one value per pixel; the deep version would store an approximation of the opacity-vs-depth function. Is this possible too?

Thanks for the help,

Koen
User Avatar
Member
606 posts
Joined: May 2007
Offline
I've been wondering about the exact same thing, for the same purpose.
I'm guessing one could parent a light to the camera, match the FOV, and
try to come up with a deep compositing operator in COPs.
That would mean two renders though, not fun.

eetu.
User Avatar
Member
12442 posts
Joined: July 2005
Offline
The most common approach is to parent a Light under the Camera, channel-reference the FOV fields, and then just set the light to render a DSM out for you.

However, for the “correct” way of doing it: you can add Rendering Parameters to your ROP or Camera called “Deep Resolver” and “DSM Filename” (and I think “Auto-Generate Shadow Map” may be necessary too) and set them appropriately. There is a magic combination that gets it to work, which I have to re-discover every time.
Jason Iversen, Technology Supervisor & FX Pipeline/R+D Lead @ Weta FX
also, http://www.odforce.net [www.odforce.net]
User Avatar
Member
789 posts
Joined: April 2020
Offline
Thanks Jason,

That certainly looks a lot like what I was looking for.

Koen
User Avatar
Member
789 posts
Joined: April 2020
Offline
Jason,

One more question: do you know of any tools in Houdini to look at or convert these deep .rat files? (For example, I would like to split them out into several flat layers to see the deep samples.)

Cheers,
Koen

ps.
(Within VEX, the shadowmap() call seems to work well. I am curious about other tools.)
User Avatar
Member
12442 posts
Joined: July 2005
Offline
MPlay will view them (just be sure to turn off any LUTs and hit the “Adapt To Pixel Range” button). I'm not aware of a standalone tool that can slice up your ranges in the manner you suggest.

Otherwise (as you've discovered) COPs is your friend: write a quick little VOP COP generator and formulate your own viewer/slicer with the features you like. FYI, at R+H we've created a preview tool in SOPs that places points in space at a density threshold, which is quite useful for unexpected reasons.

As for conversion, unfortunately the shipped standalone i3dconvert doesn't have much for you, but there is source code for a standalone called “i3ddsmgen.C”, which converts between DSMs and i3ds. If you want some other behaviour, perhaps you can write it?
Jason Iversen, Technology Supervisor & FX Pipeline/R+D Lead @ Weta FX
also, http://www.odforce.net [www.odforce.net]
User Avatar
Member
789 posts
Joined: April 2020
Offline
Thanks Jason,

The “i3ddsmgen.C” has all the info I need, and pointers to the relevant HDK files. It should be easy to write a converter.

Cheers,
Koen
User Avatar
Member
12442 posts
Joined: July 2005
Offline
Cool, good luck!
Jason Iversen, Technology Supervisor & FX Pipeline/R+D Lead @ Weta FX
also, http://www.odforce.net [www.odforce.net]
User Avatar
Member
789 posts
Joined: April 2020
Offline
Works perfectly.

For those interested, for a quick preview I used this very simple VEX SOP:

sop
DSM_Preview()
{
    // Transform the point into the camera's NDC space.
    vector pcam = toNDC("/obj/cam", P);

    // Sample the DSM at that position and show the result as point color.
    vector clr = shadowmap("~/tmp/dsm.rat", pcam, 1, 1, 1);
    Cd = clr;
}

Thanks again for your help Jason.

Koen
User Avatar
Member
12442 posts
Joined: July 2005
Offline
What would be interesting for SESI to provide would be a VEX and HOM function, similar to intersect3d(), which could intersect DSMs (at a density threshold). Then it'd be very easy to create the equivalent of the DSM Preview SOP we coded up in the HDK, but in VEX or in a Python SOP.

In fact, there are quite a few nice VEX functions that I hope end up in HOM as we move forward. Perhaps they could “procedurally” just redefine VEX functions as HOM?
Jason Iversen, Technology Supervisor & FX Pipeline/R+D Lead @ Weta FX
also, http://www.odforce.net [www.odforce.net]
User Avatar
Member
696 posts
Joined: March 2006
Offline
Sounds like it would make a decent RFE, Jason ;-)
Stephen Tucker
VFXTD
User Avatar
Member
26 posts
Joined: Nov. 2008
Offline
Has anyone rendered geometry into a deep shadow map, and then used that shadow map to “cut out” a Mantra volume render?

Imagine a furry teapot sitting in a cloud of smoke. I want to render those elements in separate passes that I can composite together.

The deep shadow map encodes the teapot's motion blur and detailed fur from the camera's point of view. I want to use that deep shadow map to “hold out” or “cut out” a Mantra volume render of smoke.

I suspect that R+H has done something like that, and am hoping that Jason might be willing to share his experience with us.
Scott Peterson, Machine Learning Graphics Engineer, Unity
User Avatar
Member
12442 posts
Joined: July 2005
Offline
Yeah, using a DSM as a holdout: we do this all the time.

For general purpose DSM holdouts, we have created a (very simple) fog shader which holds out surfaces using the DSM's opacity. Be warned, though: holding out using a fog shader does not work with Mantra's light-export functionality.

We also have built holdout support into our smoke volume shaders directly (for this reason).
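
Roughly, the shape of that fog-shader holdout looks like this (a simplified sketch, not our production code; the DSM filename and camera path are placeholders):

```vex
fog
dsm_holdout(string dsm_file = "$HIP/holdout.rat";
            string cam = "/obj/cam")
{
    // Accumulated opacity of the holdout geometry between the camera
    // and this shading point, read from the deep shadow map.
    vector holdout = shadowmap(dsm_file, toNDC(cam, P), 1, 1, 1);

    // Knock the volume's color and opacity down wherever the DSM
    // says the holdout geometry is in front.
    Cf *= 1 - holdout;
    Of *= 1 - holdout;
}
```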
Jason Iversen, Technology Supervisor & FX Pipeline/R+D Lead @ Weta FX
also, http://www.odforce.net [www.odforce.net]
User Avatar
Member
26 posts
Joined: Nov. 2008
Offline
Thanks for your tips so far. They are very helpful. I had a question regarding an earlier post:

Otherwise (as you've discovered) COPs is your friend - write a quick little VOP COP Generator and formulate your own viewer/slicer with the features you like. FYI, at R+H we've created a preview tool in SOPS to place points in space at a density threshold - which is quite useful for unexpected reasons.

I'm having trouble with the COP reading approach. I've added parameters to the camera to output the DSM. I'm pretty sure the .rat I output is a DSM, because when I adjust compression and z-depth tolerances the file size changes dramatically.

The File COP doesn't appear to read different “planes” from the DSM .rat file; it only appears to read the first depth in the file (not even opacity). I can't find a way to read the DSM through a VOP COP generator like you mentioned, either. Can you give a more specific hint? Should I try to use the shadowmap() function in the VOP COP? Do I need to add an output parameter to my shader?
Scott Peterson, Machine Learning Graphics Engineer, Unity
User Avatar
Member
789 posts
Joined: April 2020
Offline
Scott,

I don't think there is a function in VEX or Python currently to get the information per layer as it is stored in the DSM; that is only available through the HDK. As I understand it, that is part of the suggested RFE: to expose this in either VEX or Python.

You can create a VOP COP where you have a z-value as a custom parameter and sample the DSM through the shadowmap() function at that depth.

A positive side effect of this is that you get more coherent images: all the slices are proper z-depth maps. If you looked at just the stored layers instead, different pixels might show samples from different z-depths, giving hard-to-read images.
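
Something like this rough sketch (untested; the DSM path and parameter names are placeholders, and the shadowmap() arguments follow my earlier SOP snippet):

```vex
cop2
dsm_slice(string dsm_file = "$HIP/dsm.rat"; float z_slice = 0.5)
{
    // X and Y are the pixel's normalized (0..1) screen coordinates;
    // pair them with the requested depth to form an NDC position.
    vector pos = set(X, Y, z_slice);

    // Sample the DSM's accumulated opacity at that depth and write
    // it out as this slice's color.
    vector opac = shadowmap(dsm_file, pos, 1, 1, 1);
    R = opac.x;
    G = opac.y;
    B = opac.z;
}
```

Animate z_slice to sweep through the depth range.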

Cheers,
Koen
User Avatar
Staff
2591 posts
Joined: July 2005
Offline
koen wrote:
Hello all,

Is it possible in Mantra to save out a “deep” camera depth map (like a deep shadow map for a light, but from a camera)? When rendering volumes I would like to store this information in a file I can use later.

Any help is appreciated,

Koen

Sorry for taking so long to reply to this (months and months since the question was first asked I think).

1) Add the “Deep Resolver” property to the camera
2) Add the “DSM Filename” property to the camera
3) Optional: Add any other DSM properties to the camera

This will generate a DSM from the camera along with the beauty image. If you want to skip the beauty image, you can set the picture name to “null:”.

Edit: Apparently, there may be issues with shading in the beauty image.
User Avatar
Member
1533 posts
Joined: March 2020
Offline
hi guys

digging up a bit of an old thread here, but:

I'm trying to do DSMs in 9.5. My problem is: how do you filter the DSM to match your beauty pass? My edges aren't lining up in comp, especially when using motion blur and DOF.

Another thing: I tried the parent-light-to-camera trick, but it seems that my camera and light (I tried point, spot, and area) FOVs don't match up either….

thx
jason
HOD fx and lighting @ blackginger
https://vimeo.com/jasonslabber [vimeo.com]
User Avatar
Member
26 posts
Joined: Nov. 2008
Offline
I have bad news, my friend. Motion-blur and DOF alignment is a very tricky problem. One of the fundamental reasons for halos has to do with the way we render and composite layers.

In compositing, the “over” operation is not the correct operation for putting two self-occluding layers together (e.g. an object embedded in smoke). The “correct” way to composite is to mutually “matte” each layer out of the other, and then “add” the two images together.

Consider the example of a one-pixel render of a red cube and a green cube sitting next to each other. Here is some ASCII to show the pixel: ‘R’ represents sub-pixel coverage of a red object in layer 1, and ‘G’ represents sub-pixel coverage of a green object in layer 2.

RRRGGG
RRRGGG
RRRGGG

The correct composite should produce 100% opacity (i.e. the entire pixel is “covered” by geometry). The problem is that “over” compositing will give you only 75% opacity (i.e. a dark halo). You must add these two layers together, but “add” only works when the layers mutually matte each other.
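
Concretely, with each layer covering half the pixel (alpha 0.5):

```
over: alpha = 0.5 + 0.5 * (1 - 0.5) = 0.75   (25% hole, i.e. the dark halo)
add:  alpha = 0.5 + 0.5             = 1.00   (full coverage)
```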

Mutual matting is almost never practical, especially with volumetric layers, so what can you do?

I don't have many general-purpose tricks that help all of the time. Your best bet may be to use a combination of add and over when compositing, and to find clever ways to mask where you're adding and where you're overing.


> another thing, i tried the parent light to camera trick, but it seems that my camera and light(tried point,spot,area) FOV's don't match up either….

Don't parent a light to the camera to make a DSM. Simply render a DSM straight from the camera; I believe Jason explains how to do that earlier in this thread.
Scott Peterson, Machine Learning Graphics Engineer, Unity