Space transform to Cd in Sop context

User Avatar
Member
61 posts
Joined: Feb. 2006
Offline
Hi,
So I have been working on a customer project that requires delivery in Alembic format, output with additional point attributes for use in a VR system that can do deep-compositing-style work on the fly through the GPU.

Two of the attributes I output are world-space and camera-space position encoded as Cd-type values, the sort of thing you normally do for relight passes in Nuke. I've approached it pretty much as I would for normal render work: spitting out export variables for render passes in a SHOP, using the Transform VOP to give me worldP and camP AOVs, then baking them onto the geo. That works fine for them, but it doesn't feel like the "Houdini" way to do it.

I thought maybe just putting Transform VOPs in a SOP context, set to Cd to visualise, might skip a render stage. Nope: SOPs aren't camera/view-aware the way a SHOP context is. So I thought maybe doing an NDC-space conversion first, followed by a camera or world transform to Cd, might do something. Kinda, but it doesn't match the worldP or camP renders.

So… how would you do this in a SOP context? Set point colors to the same values you get from exporting space transforms (world and camera space) inside a SHOP?

Cheers!
User Avatar
Member
258 posts
Joined:
Offline
Maybe I am doing something similar, maybe for the same people… It all depends on what approach you are taking. A deep-style approach that ends up in Nuke is one thing; point clouds and light baking for a "volumetric object" is another. But essentially any attribute can be baked onto points, and usually the only one that needs to be coded into Cd is albedo, or full color for pre-baked stuff. Any other float or vector values can just stay as attributes; as long as they correspond to the correct Alembic channel, they should carry over. There is always render re-projection too.
User Avatar
Member
61 posts
Joined: Feb. 2006
Offline
Hi,
Thanks for the post, but none of that answers the question, and in fact all of those would be longer and more convoluted processes than what I'm currently using. It's no more complicated than getting the space transform correct at the SOP level and setting the result to an attribute. Setting that to Cd in a constant shader should then produce a render that matches the worldP or camP AOV exported from a shader. That's the part I'm not getting, and it's why I'm baking a Mantra shader via a render stage when I just want an OpenGL match directly in the viewport.

For example, setting Cd to P with a fit of -1,1 to 0,1 inside a VOP gives you something close to, but not exactly matching, the space-transform AOV exports from a shader. So why? What is the VOP/wrangle setup required to match the SOP and SHOP contexts?
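(For anyone following along outside Houdini, the fit mentioned above is just a linear remap with clamping. A minimal Python sketch, assuming the standard fit(value, omin, omax, nmin, nmax) behavior of Houdini's fit:)

```python
def fit(x, omin, omax, nmin, nmax):
    """Linearly remap x from [omin, omax] to [nmin, nmax], clamping to the new range."""
    t = (x - omin) / (omax - omin)
    t = max(0.0, min(1.0, t))  # Houdini's fit clamps outside the input range
    return nmin + t * (nmax - nmin)

# Remapping a P component in [-1, 1] to a displayable Cd component in [0, 1]:
print(fit(0.0, -1, 1, 0, 1))   # 0.5
print(fit(-1.0, -1, 1, 0, 1))  # 0.0
print(fit(1.0, -1, 1, 0, 1))   # 1.0
```

This makes the mismatch plausible: the remap only visualises P, it doesn't change which space P is measured in.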

The attached screenshots show the setup in SHOP context; I'm after the corresponding setup for point color/OGL in SOPs.
Edited by frankvw - Sept. 2, 2017 11:20:22

Attachments:
Screenshot from 2017-09-02 16-05-28.png (196.4 KB)
Screenshot from 2017-09-02 16-07-50.png (169.4 KB)

User Avatar
Member
7771 posts
Joined: Sept. 2011
Offline
To get camera-space P, you need to put the name of the camera object in the 'to' space. Nothing else should be needed to get the point attribute value to match the rendered value. The Mantra render of 'P' will look different in MPlay, but that is just the viewer remapping the values to be displayable. If you save the image as an EXR, the data values will be the same.
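(A rough illustration of that last point. The exact viewer LUT is an assumption here, but the principle holds: a display transform clamps and quantizes, while float data stored in an EXR is untouched.)

```python
def display_8bit(v, gamma=2.2):
    # Rough model of a viewer remap: clamp to [0, 1], apply a gamma, quantize to 8 bits.
    c = max(0.0, min(1.0, v))
    return round((c ** (1.0 / gamma)) * 255)

raw = [-0.4, 0.18, 2.5]                   # float data as stored in an EXR: unchanged
shown = [display_8bit(v) for v in raw]    # what a viewer might show
print(raw)    # negative and >1 values survive in the data
print(shown)  # but are crushed to 0 / 255 for display
```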
User Avatar
Member
61 posts
Joined: Feb. 2006
Offline
Hi,
I must be misunderstanding, as setting the camera object looks really different from the AOV, even with a constant shader. Just raw P rendered with a constant shader will match the camera-space export from a Mantra shader, since world space effectively is "camera space" in the shader/Mantra.

But the tricky bit is matching the "world" space render you export from a shader. How do you get that into a point attribute or Cd in a SOP context? (You know, the other position pass you'd usually export for Nuke relighting, the one that looks like an RGB grid projected across the surfaces.) At the moment I bake the shader exports to point colors and it works; it just doesn't seem efficient on bigger scenes when I could do it as an Attribute Wrangle/VOP at SOP level.

Looks like I'll have to grab a flask of coffee and dust off the rendering-math textbook. It's doubly complicated because Mantra sees the SOP context as camera space and vice versa (which isn't the way Arnold sees it).
Edited by frankvw - Sept. 4, 2017 15:37:08

Attachments:
world_camera_space.jpg (23.2 KB)

User Avatar
Member
61 posts
Joined: Feb. 2006
Offline
Hmm. The position data does look OK as points in Nuke's 3D viewport this way, even though it looks different. Sending the Alembic file off to see if this works on their system. Fingers crossed, as this will save me lots of time.
Cheers!
Edited by frankvw - Sept. 4, 2017 15:53:03
User Avatar
Member
7771 posts
Joined: Sept. 2011
Offline
Try this in a wrangle. Put your camera name in the camera parm after hitting the auto-parm button. I forgot that camera space has z inverted, so it needs to be negated after applying the transform matrix.

string cam = chs("camera");

// Exported point attributes.
vector @localP;
vector @worldP;
vector @camP;

matrix local, camera;
local = getspace("space:world", "space:current");  // world -> object space
camera = getspace("space:world", cam);             // world -> camera space

@localP = @P;
@worldP = @localP * invert(local);  // object -> world
@camP = @worldP * camera;           // world -> camera
@camP *= set(1, 1, -1);             // camera space has z inverted
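(For anyone wanting to check that chain outside Houdini, the same math can be sketched in Python with VEX-style row-vector 4x4 matrices. The object and camera positions below are made-up numbers, not pulled from a scene, and translation-only matrices are used for brevity.)

```python
def translate(tx, ty, tz):
    # 4x4 row-vector translation matrix (VEX convention: p * M).
    return [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [tx, ty, tz, 1]]

def xform(p, m):
    # Transform a position (row vector, implicit w = 1) by a 4x4 matrix.
    x, y, z = p
    return [x * m[0][i] + y * m[1][i] + z * m[2][i] + m[3][i] for i in range(3)]

# Object sits at (2, 0, 0) in world space; camera sits at (0, 0, 5) looking down -z.
obj_to_world = translate(2, 0, 0)
world_to_cam = translate(0, 0, -5)   # inverse of the camera's transform

localP = [1.0, 0.0, 0.0]             # @P in the wrangle
worldP = xform(localP, obj_to_world) # @worldP = @localP * invert(local)
camP = xform(worldP, world_to_cam)   # @camP = @worldP * camera
camP = [camP[0], camP[1], -camP[2]]  # z negated, as in the wrangle
print(worldP)  # [3.0, 0.0, 0.0]
print(camP)    # [3.0, 0.0, 5.0]  (positive depth in front of the camera)
```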