We've gone back to rendering in Houdini 19.0 for now, which is a painful solution even short-term: the resolver format changed between the two versions, so we're forced to flatten scenes down before sending them to the farm.
It's definitely frustrating that multiple studios have flagged this for multiple renderers and yet no one seems to have any ideas.
Personally I feel like if the same RenderMan version stalls in 19.5 but not in 19.0, that puts the ball in SideFX's court, but ultimately we just need it fixed :\
Found 60 posts.
Search results
Solaris and Karma » Husk.exe stuck not finishing task
- blented
- 61 posts
- Offline
Technical Discussion » No module named sidefx_stroke or kinefx.stateutils
Amazing tip, this has been such an annoyance, thank youuu!
The Right click > Properties > Start In path is %HOMEDRIVE%%HOMEPATH% by default on our Windows install, it should be:
"C:\Program Files\Side Effects Software\Houdini 20.0.506\bin"
Solaris and Karma » Husk.exe stuck not finishing task
We've spent the day debugging this to no avail unfortunately.
Tested:
- fast exit = 0
- rendering locally then copying to the network (Houdini consistently hates our SMB server)
- reducing scene complexity / instance count
- testing w/ several other scenes
There seems to be a loose correlation between the scene complexity and our chances of getting hung frames, with greater complexity giving a much greater chance. We reduced the instance count by half, and then to 20%, and went from most frames hanging to perhaps a third of the frames hanging.
We've also pulled in additional scenes to test, and are getting the same hanging in a simple high poly river scene that we're seeing in a scatter-heavy instance scene, but w/ less consistency, only hanging or slowing about 10% of the frames.
Some of our hung frames do eventually complete, often taking 4-20x longer than the original frames. Our best test case for this scenario has been the river scene, which is normally 2 minutes per frame, but consistently has several frames out of 100 taking 30+ minutes or never completing.
Progress on the hung frames often stalls out at 50-99%, with the logs simply stopping after that.
Apologies for the lengthy post, just trying to give as much info as we can to help solve this.
Windows 10 / Houdini 19.5.569 / RenderMan 25.2
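In case it helps anyone stuck in the same place, one stopgap we could imagine (not something from this thread; every name and threshold below is made up) is a farm-side watchdog that kills a husk process once its log stops growing, so the scheduler can retry the frame instead of holding the slot forever:

```python
import os
import time

# hypothetical threshold: seconds of log silence before we assume a hang;
# tune to well above your scene's normal frame time
STALL_TIMEOUT = 30 * 60

def is_stalled(last_growth, now, timeout=STALL_TIMEOUT):
    """True once the log has gone quiet for longer than the timeout."""
    return (now - last_growth) > timeout

def watch(proc, log_path, poll=30):
    """Poll a render log; kill the process if it stops growing.

    Returns True if the process exited on its own, False if we killed it.
    `proc` is a subprocess.Popen wrapping the husk command line.
    """
    last_size, last_growth = -1, time.time()
    while proc.poll() is None:
        size = os.path.getsize(log_path) if os.path.exists(log_path) else 0
        if size != last_size:
            # log grew, reset the stall clock
            last_size, last_growth = size, time.time()
        elif is_stalled(last_growth, time.time()):
            proc.kill()
            return False
        time.sleep(poll)
    return True
```

This obviously doesn't fix the underlying bug, and frames that legitimately render slowly will get killed if the timeout is too tight, so it's only a way to keep the farm moving.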
Solaris and Karma » Husk.exe stuck not finishing task
There's a corresponding RenderMan forum post w/ sadly a similar lack of info / traction:
https://renderman.pixar.com/forum/showthread.php?s=&postid=265604#post265604 [renderman.pixar.com]
Definitely seems like the non-Karma renderers aren't "releasing resources" or "finishing" properly. Really hoping devs on both sides can put their heads together and figure this out, as it's top of list for us right now as we attempt to migrate to RenderMan 25.2 / Houdini 19.5.
We're running a test w/ Houdini 20 tomorrow to see if the issue persists there, will update this thread w/ the results 👍
Solaris and Karma » Husk.exe stuck not finishing task
Just to hop in here, we're experiencing the same issue w/ Renderman 25.2, Houdini 19.5.569
Agree it seems suspicious that it's happening w/ both Renderman and Arnold
Technical Discussion » Primvar AOVs in RenderMan for Solaris?
PDG/TOPs » Cook a single work item via Python
Quite a bit of digging to make this happen, figured I'd leave the answer here for posterity per usual
First, you'll need to use a python scheduler instead of the regular local scheduler to cook things.
The defaults are all fine, just add these lines right after the imports in the scheduling tab.
    # auto-succeed if this isn't the work item we're meant to be cooking
    if os.environ['PDG_ACTIVE_WORK_ITEM'] != str(work_item.id):
        return pdg.scheduleResult.CookSucceeded
This will make PDG skip any work items that aren't meant to be cooking. Without this, you'll end up re-cooking all the upstream items on the farm, even if they've already been cooked by prior dependent jobs.
Be sure to update the default scheduler on your topnet.
Next, to actually cook a single work item, you'll want this bit of code. The comments explain it further but essentially graphContext.cookItems was the only function I could find that actually took individual items to cook, so we use that alongside setting PDG_ACTIVE_WORK_ITEM on the environment such that everything else auto-succeeds.
    import hou
    import os

    def cookWorkItem(node, index, block=True):
        # generate static work items for this node, which will
        # generate parents as needed
        # likely that this only really works with static work items
        node.generateStaticWorkItems(block)

        # info about the work item we're cooking
        pdgNode = node.getPDGNode()
        context = pdgNode.context
        workItem = pdgNode.workItems[index]
        print('cooking:', workItem.id)

        # set the active work item as an environment variable
        os.environ['PDG_ACTIVE_WORK_ITEM'] = str(workItem.id)

        # use the context to cook this work item
        # our custom onSchedule function in the python scheduler
        # will skip anything that's not this PDG_ACTIVE_WORK_ITEM
        return context.cookItems(block, [workItem.id], pdgNode.name)
All of this lets us run PDG jobs on the farm similar to how ROPs jobs would run, but with all the great features that come w/ PDG.
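For anyone adapting this: the whole farm-side contract boils down to one string comparison per scheduled item. A standalone sketch of the gate (hypothetical names, no Houdini required):

```python
import os

def should_cook(work_item_id, env=os.environ):
    # mirror of the onSchedule gate above: only the work item named in
    # PDG_ACTIVE_WORK_ITEM actually cooks; everything else auto-succeeds
    return env.get('PDG_ACTIVE_WORK_ITEM') == str(work_item_id)

# each farm task would export the variable for exactly one item id
task_env = {'PDG_ACTIVE_WORK_ITEM': '42'}
print(should_cook(42, task_env))  # → True
print(should_cook(7, task_env))   # → False
```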
Solaris and Karma » how do multishot setup
There's a nice section in the 18.5 launch keynote that talks about how to set up multi shot with context options:
https://youtu.be/zMrvCWy85xM?t=3425 [youtu.be]
PDG/TOPs » Cook a single work item via Python
Is there a way via Python to cook only a single work item on a node in a PDG graph? I'm working on improving how PDG is handled on the farm and struggling to find the appropriate method.
TopNode [www.sidefx.com] only has cookWorkItems() which doesn't take an index.
Technical Discussion » No module named sidefx_stroke or kinefx.stateutils
Worth noting, on a fresh install w/ no tools we're still seeing this on 19.5
The above fix works, modified for python 3.9 as such:
PYTHONPATH = $PYTHONPATH;$HFS/packages/kinefx/python3.9libs;$HFS/houdini/viewer_states;$HFS/packages/kinefx/viewer_states
But we're getting a new error which we're still trying to track down:
    Error running event handler:
    Traceback (most recent call last):
      File "labs::Lop/karma::2.0, ViewerStateModule", line 18, in <module>
      File "C:\Program Files\Side Effects Software\Houdini 19.5.303.9\python39\lib\site-packages-forced\shiboken2\files.dir\shibokensupport\__feature__.py", line 142, in _import
        return original_import(name, *args, **kwargs)
    ModuleNotFoundError: No module named 'ParseExrMetadata'

    Error running event handler:
    Traceback (most recent call last):
      File "labs::Lop/karma::2.0, opdef:/labs::Lop/karma::2.0?ViewerStateInstall", line 1, in <module>
      File "C:\PROGRA~1/SIDEEF~1/HOUDIN~1.9/houdini/python3.9libs\viewerstate\utils.py", line 948, in register_pystate_embedded
        raise ViewerStateException("createViewerStateTemplate not found in {}".format(node_type.sourcePath()))
    viewerstate.utils.ViewerStateException: createViewerStateTemplate not found in oplib:/labs::Lop/karma::2.0?labs::Lop/karma::2.0
Solaris and Karma » Solaris default hair thickness in viewport
Thanks Tomas! Hiding under wireframe, so obvious 🤦♂️
The # of questions I come across here where you're the one answering is truly incredible.
Screenshot for others who may find this thread, also random crowd person cheering appropriately :P
Solaris and Karma » Solaris default hair thickness in viewport
In Houdini 19, BasisCurves are shaded as ribbon geometry as opposed to lines.
Couple of questions:
- is there any way to disable this? In most cases we'd prefer the previous shading behavior.
- is there any way to set the default ribbon width (currently seems to be 1m)? .001 would probably be a better default.
- is there a fast way to set widths for all points? Or make it a constant instead of a per point array?
We have tons of hairstyles published without width, as it's set later procedurally, but we can no longer view them in the viewport as they either crash Houdini or look like a ball of thick ribbons.
It's impractically slow to add widths in a sopModify just to view the hairstyles.
In the attached image, left is Solaris viewport, right is what we'd like them to look like.
Thanks for any help you can provide!
Technical Discussion » No module named sidefx_stroke or kinefx.stateutils
Thanks for checking, we'll look through our env more and try to figure out what's causing the hang-up.
Technical Discussion » Adding to Layout Asset Gallery from Python
Thanks for the help!
I can confirm that the following code adds the asset as expected:
hou.qt.AssetGallery.addAsset('toy', 'c:/test/toy/toy.usd', 'c:/test/toy/thumbnail.png')
Is the source for this module somewhere? The arguments don't line up with this function signature from $HFS/houdini/python2.7libs/husd/assetutils.py at all, and all 3 arguments are required:
    def addAsset(asset_file_or_dir_path, thumbnail_file_path = None, asset_name = None):
        """ Adds a USD asset to the asset gallery, given the asset file or
        directory, and optionally the thumbnail file and asset name.

        If thumbnail file is not provided or does not exist in a given asset
        directory, it is automatically generated, so that the asset gallery
        entry always has an icon. If asset name is not provided, it is
        deduced from the file/dir name.
        """
Regardless happy to have it working
Technical Discussion » Adding to Layout Asset Gallery from Python
I'm looking to add to the layout asset gallery via python. I've found:
$HFS/houdini/python2.7libs/husd/assetutils.py
which contains:
def addAsset(asset_file_or_dir_path, thumbnail_file_path=None, asset_name=None)
However regardless of how I run it I never get an asset added. The code I'm running:
    import husd.assetutils as au
    au.AssetGallery.addAsset('c:/test/toy/toy.usd')
Furthermore, when querying the database directly via sqlite3 to see if everything's working, I get this error after attempting to add an asset via python:
sqlite3.OperationalError: Could not decode to UTF-8 column 'thumbnail' with text ' ╪ α'
Is there a guide on how this should work? Any pointers would definitely be appreciated!
Technical Discussion » No module named sidefx_stroke or kinefx.stateutils
On a fresh Houdini 19 install we were getting
ImportError: No module named sidefx_stroke
and
ImportError: No module named kinefx.stateutils
To fix this we had to add the following to our houdini.env
PYTHONPATH = $PYTHONPATH;$HFS/packages/kinefx/python2.7libs;$HFS/houdini/viewer_states;$HFS/packages/kinefx/viewer_states
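Leaving a small sanity-check sketch here too, in case it saves someone time. Nothing Houdini-specific, and the $VAR expansion is deliberately simplified to plain substitution: it expands a ';'-separated list like the one above and reports entries that don't exist on disk.

```python
import os

def missing_entries(pythonpath, env):
    """Expand $VAR references in a ';'-separated path list and
    return the entries that don't resolve to a directory on disk."""
    expanded = pythonpath
    for key, val in env.items():
        expanded = expanded.replace('$' + key, val)
    # drop empty entries and anything with an unexpanded variable left in it
    entries = [p for p in expanded.split(';') if p and '$' not in p]
    return [p for p in entries if not os.path.isdir(p)]
```

Call it with your install root, e.g. `missing_entries(line_from_houdini_env, {'HFS': hfs_path})`; anything it returns is a path worth double-checking before blaming the import error on something else.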
Curious if this is expected behavior? Leaving it here in hopes that it helps anyone else experiencing the same issue.
Edited by blented - Nov. 20, 2021 10:06:15
Houdini Lounge » Tumbling in Houdini
The Shift+Z trick is great and works in LOPs as well, but it doesn't work in the render view.
Anyone have a fix for that? Testing w/ Renderman specifically if that helps.. ideally we'd be able to tumble around while IPR-ing
Technical Discussion » HDA with primary input + secondary inputs
Yeah, the UI/UX of it isn't great without the first input separated. We've been treating that as our “B” pipe, in the Nuke comp sense, and it's been working well for assembling larger scenes. Without it separated it's a pain to connect a different input to #1, and you also can't leave it empty and use it as a “group” node, which graft currently lets you do.
Technical Discussion » HDA with primary input + secondary inputs
Yeah, we'd just need a checkbox under basic that specified “split out first input” or something, then as Tomas said you can get it to “look” like a multi-input by just setting max inputs to like 50+.
Our aim is to put an expression in the graft node to control the numbering of duplicates, with additional options to “group” incoming prims. None of that really works though if you can't split out the first input.
Solaris and Karma » Viewer State: Set Solaris viewport selection via Python
One minor issue, when I set the viewport selection via python, if I press Alt to tumble the selection highlighting disappears. If I mouse out of the viewport then back over the selection will come back. Bug? Seems to work w/ the default selection tools just not pythonically.