Ah, classic. Click around for a couple of hours. Post to the forum. Find the answer myself seconds later.
Please ignore question above.
For the info of anyone reading - there's a USD Character Import SOP that does exactly what I want.
Cheers,
r.
Solaris and Karma » FBX char rigs incoming via USD
- rangi
- 306 posts
- Offline
Hi,
I'm experimenting a bit with workflow for UE5->Houdini, sending shots through as USD files. Looking super promising with some good wins so far.
It's bringing in the FBX-based char rigs (SKMs) with their animation applied, as viewed through the delegates in Solaris. It's clear this isn't a vertex cache, it's doing the bone deforms; there are SkelAnim, Skeleton and Mesh primitives in the Composed Scene Graph. Brilliant.
What I'd love to do is pull that into Houdini proper. If I use a LOP Import SOP I just get the mesh in its bind position - no anim. If I use a SOP Modify LOP I again just get the mesh in the bind position.
Is there a way to pull the SkelAnim and Skeleton primitives out of the stage and into SOPs so I can apply a bone deform there? Any access at all to any of that?
Cheers,
r.
PDG/TOPs » hqueue MQ server issue, windows py3
- rangi
- 306 posts
- Offline
I never got anywhere with this, so I deleted the 19.5.x versions of the hqueue client and server and went back to the launcher. I installed the highest version available there, which was 19.0.944 py2. Usual shenanigans with SMB mount ownership stuff (which I resolve by simply running the client bat script as my regular user) and I seem to be back in business.
I do still get a string of the same "Could not find output file for job" errors, but they resolve after 15 seconds or so - guessing that's how long it takes for the MQ server to start for each job...? Anyway, I can live with this.
Edited by rangi - April 9, 2023 01:22:41
PDG/TOPs » hqueue MQ server issue, windows py3
- rangi
- 306 posts
- Offline
Hi,
Just trying to get back to a setup I had last year, which was using the py2 version of hqueue.
I'd like to be able to process File Caches "in Background" using hqueue on my two-machine Indie setup.
Been chasing down the regular gremlins of file mounts and permissions - this seems OK - like before, I'm launching the client by calling the bat script from cmd. I can render Mantra with an HQueue Render ROP.
Now I've created and set up an HQueue Scheduler TOP, put that in a File Cache SOP, and hit the Cook in Background button.
I get the errors listed below. Can anyone tell me what file it is looking for when it says "Could not find output file for job XX"? Any other tips?
Thanks!
Windows 10
H19.5.534 - including hqueue client and server
PDG: (webb_win) ('MQ usage: 0',)
PDG: (webb_win) ('Using new MQ server',)
PDG: (webb_win) ('Using client id: 080edb964582431fb32553420e72feb4',)
PDG: (webb_win) ('Working Directory Local: E:/projects/pdg/hip',)
PDG: (webb_win) ('Working Directory Remote: $HQROOT/projects/pdg/hip',)
PDG: (webb_win) ('MQ count: 1',)
PDG: (webb_win) ('Submitting New Job',)
PDG: (webb_win) ('Waiting for MQ job 84',)
PDG: (webb_win) ("Got PDG MQ server info: Could not find output file for job '84'.",)
PDG: (webb_win) ("Got PDG MQ server info: Could not find output file for job '84'.",)
PDG: (webb_win) ("Got PDG MQ server info: Could not find output file for job '84'.",)
PDG: (webb_win) ("Got PDG MQ server info: Could not find output file for job '84'.",)
PDG: (webb_win) ("Got PDG MQ server info: Could not find output file for job '84'.",)
PDG: (webb_win) ("Got PDG MQ server info: Could not find output file for job '84'.",)
PDG: (webb_win) ("Got PDG MQ server info: Could not find output file for job '84'.",)
PDG: (webb_win) ("Got PDG MQ server info: Could not find output file for job '84'.",)
PDG: (webb_win) ("Got PDG MQ server info: Could not find output file for job '84'.",)
PDG: (webb_win) ('Stopping MQ Relay',)
08:07:21: PDGNet MQRelay stopping
PDG: (webb_win) ('MQ count: 0',)
PDG: (webb_win) ('MQ server not reachable: Invalid MQ address for stopping MQ remotely',)
Traceback (most recent call last):
File "C:\PROGRA~1/SIDEEF~1/HOUDIN~1.534/houdini/pdg/types\houdini\hqueue.py", line 580, in stopService
MQUtility.stopMQRemotely(mq_addr, relay_port, rpc_port, self._verboseLog)
File "C:\PROGRA~1/SIDEEF~1/HOUDIN~1.534/houdini/python3.9libs\pdg\utils\mq.py", line 399, in stopMQRemotely
raise RuntimeError("Invalid MQ address for stopping MQ remotely")
RuntimeError: Invalid MQ address for stopping MQ remotely
Solaris and Karma » Anamorphic workflows in Solaris
- rangi
- 306 posts
- Offline
This works if I halve the vertical aperture first.
Cheers.
Edited by rangi - June 21, 2022 23:22:22
Solaris and Karma » Anamorphic workflows in Solaris
- rangi
- 306 posts
- Offline
I've not been able to get it to work as described either.
I'm basically using different settings for the render and the viewport.
I set "Aspect Ratio Conform Policy" to "Crop Aperture".
Then for the render to disk I have "Pixel Aspect Ratio" at 2 and leave the resolution as intended. This reads into Nuke and reformats correctly to non-square pixels.
For rendering in the viewport and mplay I again have "Aspect Ratio Conform Policy" set to "Crop Aperture", leave "Pixel Aspect Ratio" at 2, and halve the vertical resolution.
I'm doing this with separate Karma LOPs. It's all a way less than ideal workflow; I'm sure there must be something simple I'm missing.
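To spell out the arithmetic of the viewport workaround (just the numbers - nothing Karma-specific, and assuming a symmetric anamorphic squeeze):

```python
def viewport_res_for_anamorphic(render_res, pixel_aspect=2.0):
    """Square-pixel preview resolution for an anamorphic render.

    Keeps the render width and divides the vertical resolution by the
    pixel aspect ratio - the same manual halving described above for
    a PAR of 2.
    """
    width, height = render_res
    return (width, int(round(height / pixel_aspect)))
```

So a PAR-2 render at 1920x1080 would preview square-pixel at 1920x540.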
Edited by rangi - June 21, 2022 04:39:38
Solaris and Karma » Expressions to prim attributes
- rangi
- 306 posts
- Offline
Thanks Mark - that's illuminating. So a rate can be defined in metadata, and that effectively gives USD TimeCodes a unit. If TimeCodesPerSecond = FPS then USD TimeCodes are frames; if TimeCodesPerSecond = 1 then USD TimeCodes are seconds. And the key info there is that Solaris syncs the USD TCPS to the Houdini FPS, making it $FF I'd be putting in function calls to reference the current place on the timeline.
Solaris and Karma » Expressions to prim attributes
- rangi
- 306 posts
- Offline
Those variables are specific to Houdini, but the units are universal ( time = (frame-1)/fps )
The docs say it's a USDTimeCode, which is unit-less. So whether it's $F or $T is, I guess, determined by the way Solaris is handling things, which sounds like it's frames. I would have expected time.
Solaris and Karma » Expressions to prim attributes
- rangi
- 306 posts
- Offline
Ah OK - so once we hit USD-defined data types we can find the methods in the Pixar docs and make a good assumption there's a matching Python method. Or that's what I'm hearing. Thanks for the tip!
That said - glancing at those docs doesn't tell me whether it would be $FF or $T I'd be passing it. You've told me (thanks), and it's a pretty quick experiment, but I think it's emblematic of the accessibility/user-friendliness of this system. It's tricky to learn from one's home studio - I imagine if I were in a bigger team sharing esoteric tips, understanding would come quicker.
Solaris and Karma » Expressions to prim attributes
- rangi
- 306 posts
- Offline
mtucker
Yep, that seems right to me, assuming the data isn't time dependent.
For my current use case - no, it's not time dependent. I can imagine it could be in the future, though. What would the impact be then?
How do I find any docs on that Get() method? I'm passing it an int because that made it work. Is that actually a float time value or something?
Cheers,
r.
Solaris and Karma » Expressions to prim attributes
- rangi
- 306 posts
- Offline
Dragging and dropping things into the Python terminal, and using the method completion in the Python shell, I've discovered this incantation:
hou.node("/stage/cam").stage().GetAttributeAtPath("/cameras/s2730_cam.stabshot").Get(0)
Is that the most straightforward way for me to read my attribute?
It's working, so happy days.
Solaris and Karma » Expressions to prim attributes
- rangi
- 306 posts
- Offline
Hey loppers,
I feel like I'm fundamentally still not getting a few things with solaris, so excuse me if I'm just going about it arse backwards. Please correct me!
My workflow is to write out a USD file for each shot as it's authored - camera, instanced geo, instanced lights. I'm sublayering those into another hip which applies materials and light trims for final batch rendering. Standard stuff, I hope.
I want to pass some information along from layout to lighting - such as whether the lights should be flickering. Since this goes to disk it's not a candidate for Store Parameter Values. Tags, essentially.
An Attribute Wrangle easily adds an attribute to my camera. I can access it in another wrangle.
How do I access it in a parameter, such as an input parm of the Switch LOP?
My guess is a Python expression, and I've found hou.pwd().inputs().stage() on the forum... which feels like the right path - but where to from there? Can't find docs... sorry.
Or is there a much better way of thinking about this?
Thanks!
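For what it's worth, here's the kind of parameter-expression fragment I'm imagining for the Switch LOP's input parm - an untested sketch, and the prim path and attribute name are just placeholders from my scene:

```python
# Python expression on the Switch LOP's input parm (untested sketch).
# hou.pwd() is the node the expression lives on; inputs() returns a
# tuple of nodes, hence the [0] before asking for the stage.
stage = hou.pwd().inputs()[0].stage()
attr = stage.GetAttributeAtPath("/cameras/s2730_cam.stabshot")
return int(attr.Get(0))
```

No idea yet if that's the blessed way to do it.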
Solaris and Karma » Solaris and hqueue - out of the box
- rangi
- 306 posts
- Offline
Hey Brad,
Thanks again for sharing your HDA... got me in the right direction. It is tonnes faster and more elegant than sending the whole ROP to hqueue.
I'm attaching my version of the asset in case anyone's reading along. I've:
* Moved the code to the python module
* Fixed an issue with the missing last frame
* Added a Redshift flag
* Set up expressions to pull most of the info from a USD Render ROP LOP
* Added an eval so it supports per-frame USD
There's a little bit (the ver_cntl_path and jobname expressions) that'll only work with my setup - but hopefully this helps others the way your asset helped me.
Cheers,
r.
Solaris and Karma » Solaris and hqueue - out of the box
- rangi
- 306 posts
- Offline
Yeah - generating the USD - I guess that's defined with the configure-layer LOPs - I don't have a final one in my scene... and there's no way to specify a path like there is with IFDs. So I need to do more reading/playing to work out what exactly is happening. I'm pretty sure it's cooking way too much stuff given the time it's taking. Solaris is still pretty mysterious to me.
Thanks for sharing your asset - that's exactly the workflow I'd expect. Will check it out next week when I have a moment.
3rd Party » When I render with Karma output is spamming about Redshift
- rangi
- 306 posts
- Offline
I'm seeing the same thing. I've been ignoring it, though I'm wondering if it affects load times - it must at least a little. Removing the env vars that load RS seems to be the only obvious solution. For me, I'm rendering with both, so I'm just living with it.
Solaris and Karma » Solaris and hqueue - out of the box
- rangi
- 306 posts
- Offline
Hi,
I'm rendering on a second workstation using hqueue. Submitting from the stage with a Fetch ROP grabbing the USD Render ROP, which is then wired into an HQueue Render ROP.
The behaviour I'm seeing is that it creates the job, with the first task generating the subsequent tasks, which are the frame batches.
I'm used to a workflow where that would be generating IFDs, and the subsequent tasks are just Mantra. Quick loads, only requires a render license.
What I see is that those tasks for rendering the frames just point back to the original hip. Fine - means it uses the license, but it's a one-node farm, so who cares.
However - the first task creating those frame tasks is incredibly slow, tying up the machine for perhaps half an hour. Really significant. Since I'm not waiting for it to generate USDs, what is it that is taking the time here?
( I know the proper response is to write submission and wrapper scripts - which I've not done for hqueue and wanted to avoid doing. This is the out-of-the-box experience which I was hoping would suffice for my one man show )
Thanks in advance for any tips.
r.
Technical Discussion » VEX cracktransform() equivalent in python
- rangi
- 306 posts
- Offline
Hi,
Writing a standalone utility to process fSpy JSON output. I have a nice example from VEX land which pulls the rotations from the supplied transform like this:
v@rot = cracktransform(0, 0, 1, {0,0,0}, transpose);
Is there a Python library I can grab that'll give me a similar function?
nuke.math.Matrix4() has a rotationsZXY() method, which is exactly what I'm after. That'd be an answer for me if it didn't require a Nuke license - I can't see a way of calling it without one.
Cheers,
r.
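In case it's useful to anyone later, a plain-Python fallback sketch I'm considering - this one decomposes in XYZ order with column vectors (R = Rz·Ry·Rx), so the order flag and handedness would still need checking against the cracktransform()/fSpy conventions before trusting it:

```python
import math

def rotx(a):
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def roty(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rotz(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matmul(a, b):
    # 3x3 matrix product, nested-list convention
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def extract_rotations_xyz(m):
    """Euler angles in degrees from a 3x3 rotation matrix, XYZ order.

    Assumes column-vector convention with m = rotz(rz) @ roty(ry) @ rotx(rx);
    clamp guards asin against floating-point drift just outside [-1, 1].
    """
    ry = -math.asin(max(-1.0, min(1.0, m[2][0])))
    rx = math.atan2(m[2][1], m[2][2])
    rz = math.atan2(m[1][0], m[0][0])
    return tuple(math.degrees(v) for v in (rx, ry, rz))
```

Round-trips against matrices built from known angles, away from the gimbal case.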
Solaris and Karma » Solaris overscan workflow
- rangi
- 306 posts
- Offline
Hi,
Looking for the most elegant/preferred way to approach adding overscan to allow for lens distortion in post.
Render settings offer "Data Window NDC" and the rollover help suggests using this, so I dial in (-0.05,-0.05,1.05,1.05).
Reading into Nuke, the bounds are expanded as I'd expect, but the picture is scaled up by the same amount. So it feels like I'm missing the "Screen Window Size" setting to adjust that back. I can modify the camera's aperture to correct it, but that doesn't feel like the right way to do it.
How are people doing this? Ideally this would work for any delegate...
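For reference, the arithmetic I'm assuming - a sketch of the compensation factor only, not a statement about what any delegate does internally: 5% added on each edge grows the data window by 1.1x, so keeping pixel size constant means growing the resolution (or equivalently the apertures) by the same factor.

```python
def overscan_setup(res, overscan=0.05):
    """Data Window NDC and compensated resolution for symmetric overscan.

    `overscan` is the fraction added on each edge (0.05 -> 5%). Pixel
    size stays constant if resolution scales by (1 + 2*overscan); the
    same factor would apply to the camera apertures instead.
    """
    scale = 1.0 + 2.0 * overscan
    data_window_ndc = (-overscan, -overscan, 1.0 + overscan, 1.0 + overscan)
    rx, ry = res
    return data_window_ndc, (int(round(rx * scale)), int(round(ry * scale)))
```

e.g. a 1920x1080 frame with 5% overscan would want a 2112x1188 output.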
Solaris and Karma » Instancing input and orientation
- rangi
- 306 posts
- Offline
Hey,
I'm trying to figure out the best workflow for managing instances across multiple shots, and figure creating point clouds in a layer to be instanced after the shot switching is the best approach.
However, I lose the orientation and point attributes.
I've attached a hip to show the different behaviour I'm seeing:
First tree - using "Internal SOP" gives me the orientation and point attributes I'm after. Then to get my second set of instances I'm object-merging the same point cloud. This is how I've been doing things, but it doesn't suit my intended workflow and involves object merging at SOP level, which is too old school.
Second tree - using "Internal SOP" again for the first instance; for the second I'm attempting to instance the first input's primitives. This gives me orientation, but I lose the point attributes. (Sometimes as I click around in the Houdini GL view the balls turn coloured, but never in Karma.)
Third tree - this is my intended workflow - create the point cloud, then instance to "First Input's Points". It works in that I can repeat the instancing a second time using the same technique, but I don't get orientation or point attributes on either instanced set.
Is that third tree a bug I should be reporting? Is there a different workflow someone might suggest?
Cheers
Edited by rangi - Dec. 9, 2021 03:24:57