Querying info from LOPs in SOPs

Member
207 posts
Joined: November 2015
Hi;

I think this might be a silly question, but: I'd like to grab the world-space position of a camera LOP in a sopnet. In OBJ, I would normally object-merge the OBJ camera's camOrigin point into the sopnet and use it for this purpose. What's a good strategy for this in Solaris?

Thanks!
Edited by dhemberg - May 30, 2022 16:49:32
Member
273 posts
Joined: November 2013
Not a silly question, as I'm not sure pulling transform and camera-related data into SOPs is as easy as it could be. But if you create a sphere in LOPs parented below the camera, that should result in a SOP point with the correct world-space position and the full transform in the "usdLocalToWorldTransform" intrinsic.
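A minimal sketch of reading that intrinsic, assuming the parented sphere comes back into SOPs as a packed USD prim (e.g. via a LOP Import SOP) and this runs in an Attribute Wrangle pointed at that geometry:

// Full world transform of the imported packed USD prim (prim 0 here).
matrix xform = primintrinsic(0, "usdLocalToWorldTransform", 0);
// The translation component is the camera's world-space position.
v@camP = cracktransform(0, 0, 0, {0,0,0}, xform);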
Member
207 posts
Joined: November 2015
Hey thank you for this! I feel a little less ridiculous asking how to do this now.

In a general sense, how does one read attributes in SOPs from prims in Solaris? Like, if I want to read things like focal length from a camera, or even the bounds of a prim? The docs for the usd_* calls that one might use in a wrangle don't really explain how one "talks" to stage items when one isn't using those calls in a wrangle in LOPs. Trying "/stage" doesn't seem to work; I get a bunch of NaNs when I try this:

vector Pcam = usd_getbbox_center("/stage", chs("cam_path"), "default");

To your specific advice, how do I ask for "usdLocalToWorldTransform" in a wrangle in sops?
Member
1742 posts
Joined: May 2006
Try using an op: prefix. A lot of these calls assume you're looking for a stage from a USD file on disk, not from inside Houdini.


vector Pcam = usd_getbbox_center("op:/stage", chs("cam_path"), "default");
http://www.tokeru.com/cgwiki
https://www.patreon.com/mattestela
Member
207 posts
Joined: November 2015
Awesome, this is super helpful. Thank you! I think this gets me closer than I was before.

The thing I'm trying is:

vector Pcam = usd_attrib("op:/stage/move_my_camera", "/world/cam/render_cam", "xformOp:transform");
v@test=Pcam;

I can see in Solaris that this "xformOp:transform" attribute has nonzero values when I inspect it in the Scene Graph Details, but when I look at v@test in my Geometry Spreadsheet it's all zeroes. Better than NaNs, so that means I'm at least closer than I was...
Member
1742 posts
Joined: May 2006
Try

4@xform = usd_worldtransform("op:/stage", "/cameras/camera1");
@P *= @xform;
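A minimal follow-up sketch, if only the camera position is needed as a vector (camP is an arbitrary attribute name): transforming the origin by that matrix returns its translation, i.e. the camera's world-space position.

// World-space camera position = translation of the matrix.
v@camP = {0,0,0} * @xform;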
http://www.tokeru.com/cgwiki
https://www.patreon.com/mattestela
Member
207 posts
Joined: November 2015
*facepalm* because I'm asking for a matrix, not a vector. Ugh...thank you.

In a general sense: how does this function understand at what point to evaluate my camera position? Like, if I make a camera, then later down in my LOP graph I move it, then I fork the graph and move it again... at what point in this graph am I evaluating the camera's transform (or any attribute, for that matter)? Particularly in the case of forking the graph off to, say, prune some stuff or modify things for different render-pass purposes, etc.

(This is a slight digression from my original question, I understand... I'm just trying to understand USD/Solaris better.)

P.S. @mestela I consult Tokeru on a near-hourly basis as I muddle through this; many thanks for all the work you've invested in that, it is immensely helpful!
Member
7789 posts
Joined: September 2011
dhemberg
Like, if I make a camera, then later down in my LOP graph I move it, then I fork the graph and move it again... at what point in this graph am I evaluating the camera's transform (or any attribute, for that matter)? Particularly in the case of forking the graph off to, say, prune some stuff or modify things for different render-pass purposes, etc.

Since op:/ refers to a specific node, it evaluates to the state of the composed stage at that node.
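For instance, reusing the node and prim paths from the earlier posts purely as placeholders, pointing the stage reference at a particular LOP node reads the stage as composed at that node:

matrix xform = usd_worldtransform("op:/stage/move_my_camera", "/world/cam/render_cam");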
Member
207 posts
Joined: November 2015
Ok, so in the case of just using op:/stage, it presumably evaluates to whatever the currently-selected/viewed node might be?
Member
7789 posts
Joined: September 2011
dhemberg
Ok, so in the case of just using op:/stage, it presumably evaluates to whatever the currently-selected/viewed node might be?

Probably by display flag, but I'm not sure.
Member
273 posts
Joined: November 2013
Sorry for the bad info - I didn't realize those functions worked from sops!
Member
207 posts
Joined: November 2015
Not at all! It's quite helpful to hear different strategies; there's always more than one way to do anything in Houdini, it seems.

At the risk of noising up this thread even more, a follow-up question: am I right in understanding that there doesn't seem to be a clear way to get camera projection info from a LOP camera into SOPs? For example, I have a little snippet of nodes that creates backplate geometry that perfectly fits the camera field of view by using the "fromNDC" VOP, which doesn't seem to want to let me choose a LOP camera, and seems to behave unexpectedly when I try to feed it a USD scene tree location.
Staff
4441 posts
Joined: July 2005
The easiest way to get camera information from LOPs into SOPs might be using a LOP Import Camera object.
Member
207 posts
Joined: November 2015
Hm; I'm trying this, but it seems that something differs between the camera projection (as I understand it) in LOPs and what I'm able to pull into SOPs. To be clear, here's what I seem to have to do:

--make a camera in Solaris (I can see no "projection matrix" attribute on this camera, which becomes significant in a moment)
--create an OBJ network, then try to pull the camera from LOPs into this OBJ network
--create a SOP network, then try to pull various aspects of the camera in the OBJ network into my SOP network

But things do not behave as I expect. I'm vaguely aware that there is a scene units difference (a scale by 1000?) that might be contributing to this?

In trying to get a little closer to the metal and avoid the black box of these import nodes (which seem like they might not be entirely designed to be used the way I'm using them), I'm reading a little about how to do world-to-camera matrix transforms. But again, it seems like I'm missing something critical, or there is an invisible scale factor being applied somewhere that is not being passed through this context jump I'm doing.

It's entirely possible I'm mistaken - I'm admittedly trying to feel my way in the dark here, as this is unfamiliar territory for me.
Member
7789 posts
Joined: September 2011
dhemberg
Hm; I'm trying this, but it seems that something differs between the camera projection (as I understand it) in LOPs and what I'm able to pull into SOPs. To be clear, here's what I seem to have to do:

What's different? The imported camera should match the one from LOPs.

There's a convenience function in VEX for creating a perspective matrix from camera view parameters. You can get the view parameters from the USD camera or from the imported camera object.
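As an illustration of pulling those view parameters straight from the USD camera with usd_attrib (not the convenience function mentioned above, just explicit pinhole math; the attribute names follow the standard UsdGeomCamera schema, and the node path, prim path channel, and distance channel are placeholders), a point wrangle like this could size and place a backplate grid to fill the camera frustum at a chosen distance:

string cam = chs("cam_path");
// focalLength and the apertures share units in the UsdGeomCamera schema,
// so their ratios can be used directly.
float focal = usd_attrib("op:/stage", cam, "focalLength");
float hap   = usd_attrib("op:/stage", cam, "horizontalAperture");
float vap   = usd_attrib("op:/stage", cam, "verticalAperture");
float dist  = chf("plate_distance");
// Half-size of the frustum cross-section at that distance (similar triangles).
float halfw = dist * (hap * 0.5) / focal;
float halfh = dist * (vap * 0.5) / focal;
// Assume the input is a unit grid spanning [-0.5, 0.5] on XY; scale it to the
// frustum, push it out along -Z (the view axis), then move it into world space.
vector p = set(@P.x * 2 * halfw, @P.y * 2 * halfh, -dist);
matrix cam_xform = usd_worldtransform("op:/stage", cam);
@P = p * cam_xform;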

There might be a way to get a perspective matrix from a usd camera in Python, but I've not found one. All the methods I've found just resolve to attributes of the camera but not the computed matrix. Maybe there's something in husd.