The solution is to append -w as a launch argument at the end of the environment variable value. This waits for the process to close, which Houdini requires according to support, and keeps the item loaded in a single thread.
EDITOR = "C:/Users/XXXX/AppData/Local/Programs/Microsoft VS Code/Code.exe -w"
Found 150 posts.
Search results
Houdini Indie and Apprentice » Set external script editor not working
- Andrew Graham
- 150 posts
- Offline
Technical Discussion » Running pytest on UI from command line?
- Andrew Graham
- 150 posts
- Offline
I had the same question and support suggested:
houdini -foreground script.py
And if your test needs to query or manipulate the Houdini UI, then:
houdini -foreground waitforui script.py
Also, we normally set these environment variables beforehand for testing purposes:
- HOUDINI_DISABLE_CONSOLE=1
- HOUDINI_NO_SPLASH=1
- HOUDINI_DISABLE_BACKGROUND_HELP_INDEXING=1
Technical Discussion » Import Hou in External Python Process
- Andrew Graham
- 150 posts
- Offline
Python. Interestingly it works fine on my workstation, I just can't identify the cause for why it occurs in some other systems.
PDG/TOPs » No Pre-Render/Post-Render script for Geometry Top node?
- Andrew Graham
- 150 posts
- Offline
Thinking out loud, you might also append another script command and alter the actual work item command to execute your initial Python script for cleanup/deletion before the main payload runs too. I would do this if the pre-frame operation is slow and can run in parallel. You may wish for it to run on the same worker.
If it's a fast operation that must be sequential, or must run before submission, then I would prefer running it during the onScheduled callback.
The onScheduled callback is also the only appropriate place to set parms in your hou session (like versions).
Also keep in mind that if files already exist at the result location, no cooking will occur at all unless items upstream cook. So you would probably have to clean up the target location first, use Delete Outputs From Disk in the menu, or increment a version to force a cook.
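The cleanup step described above can be sketched as a small helper that removes any existing files at the expected output location so PDG finds no cached results and has to re-cook. The function name and pattern argument are illustrative, not part of the PDG API:

```python
import glob
import os

def clear_expected_outputs(pattern):
    """Delete files matching the expected-output pattern so the
    scheduler sees no cached results and re-cooks the work items.
    (Hypothetical helper; call it before submitting the graph.)"""
    removed = []
    for path in glob.glob(pattern):
        os.remove(path)
        removed.append(path)
    return removed
```

Incrementing a version parm achieves the same thing without deleting anything, since the expected output path changes and no file exists there yet.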
Edited by Andrew Graham - June 24, 2020 00:56:52
PDG/TOPs » No Pre-Render/Post-Render script for Geometry Top node?
- Andrew Graham
- 150 posts
- Offline
This is also the best place to handle asset creation and auto versioning, setting up paths etc, since we only want to do that when an item will cook, but before it executes.
PDG/TOPs » No Pre-Render/Post-Render script for Geometry Top node?
- Andrew Graham
- 150 posts
- Offline
My approach to this currently is to replace code in the onScheduled callback, or the onPreSubmit method (which can be patched in the local scheduler).
onScheduled is the right place, since we only want to do those operations if the work item will cook.
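As a framework-free sketch of that patching idea (the class and method names here are stand-ins, not the real PDG scheduler API), the callback can be wrapped so the extra setup runs only when an item is actually about to cook:

```python
class ToyScheduler:
    """Hypothetical stand-in for a local scheduler with an
    onSchedule-style hook."""
    def onSchedule(self, work_item):
        return "scheduled:%s" % work_item

def patch_on_schedule(scheduler, pre_hook):
    """Wrap scheduler.onSchedule so pre_hook runs first, then defer
    to the original scheduling behaviour."""
    original = scheduler.onSchedule
    def wrapped(work_item):
        pre_hook(work_item)  # e.g. set version parms, prepare paths
        return original(work_item)
    scheduler.onSchedule = wrapped
```

Patching the bound method this way keeps the original behaviour intact while guaranteeing the setup only runs for items that actually reach scheduling.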
PDG/TOPs » Q: telling PDG "failed" is actually "OK"
- Andrew Graham
- 150 posts
- Offline
That's good to know. So with hqueue - would it bail out on a sim if other tasks downstream are failing or would that sim be safe to finish? It would be great to see this in Deadline too if it isn't already there.
PDG/TOPs » Q: telling PDG "failed" is actually "OK"
- Andrew Graham
- 150 posts
- Offline
This ability would be useful. So far I have been using PDG in interactive sessions, where failed frames are fine if you just resubmit something that is fast to execute. But anything that takes a long time, or that submits PDG on a remote system, will need some number of task retries before bailing out anything affected downstream. We also wouldn't want to exit simulations, for example, if a sibling task hits the max failure limit, so stopping the whole graph would be undesirable.
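The retry behaviour described above can be sketched as a plain-Python policy (illustrative only; farm schedulers implement this natively): a failing task is retried a bounded number of times, and only exhausting the budget marks that task, not the whole graph, as failed.

```python
def run_with_retries(task, max_retries=3):
    """Call task() until it succeeds or the retry budget is spent;
    re-raise the last error only after max_retries extra attempts."""
    for attempt in range(max_retries + 1):
        try:
            return task()
        except RuntimeError:
            if attempt == max_retries:
                raise
```

The key property is that the exception only propagates, and so only affects downstream items, after the per-task budget is exhausted.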
PDG/TOPs » access to pdg event handlers added to the node
- Andrew Graham
- 150 posts
- Offline
Being able to list all event handlers for a graph and for nodes would be useful to selectively remove a handler of a certain type / name.
Without being able to list handlers, it's also not possible to determine if a handler needs to be added for some process.
Edited by Andrew Graham - Sept. 8, 2019 05:20:40
PDG/TOPs » Where is the hook located that saves a copy of the hip file in the working directory?
- Andrew Graham
- 150 posts
- Offline
Apologies, there might be a bit of confusion. That RFE is discussing how to create hooks in order to set version parms on ROP nodes as a callback.
I created RFE 99047 now to suggest providing a hip path parameter, used for all TOPs submissions on schedulers, that a user can override. This would also easily allow tracking of submissions by date/time if a user set that path with a timestamp during preflight.
https://www.sidefx.com/bugs/#/bug/99047 [www.sidefx.com]
PDG/TOPs » Where is the hook located that saves a copy of the hip file in the working directory?
- Andrew Graham
- 150 posts
- Offline
Thanks Chris, I can locate that now thanks to you. However, I can't find any reference to that function being called with the Deadline scheduler. During submission with Deadline, where would be the place to call transferFile on the hip file and set the item command path to the saved hip path?
PDG/TOPs » Where is the hook located that saves a copy of the hip file in the working directory?
- Andrew Graham
- 150 posts
- Offline
Upon submission in pdg, a hip file is saved in the working directory for slaves to execute their workloads from.
I'm wondering where the hook is located that does this save/copy operation, and assigns the part of the item.command string relevant to this path.
I'd like to override that path location completely and specify it with my own so that it doesn't reside in the working directory.
The reason is I have jobs running on a cloud site, and access to the NFS shared working directory is fine for small payloads like most things I see in the pdg working dir, but hip files are too large for that, so I'd like to replace it with a localised path for the location to ease the VPN traffic per task.
I do currently have a solution to this by altering the item command during the onScheduled callback, but it seems a bit of a post-hoc hack. It would be good if I could do these alterations where all this is initialised in the first place, to ensure the hip files in both locations are identical.
Thanks if you have any ideas on how this might be possible.
Andrew Graham
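The onScheduled workaround mentioned above amounts to a path substitution on the work item command. A minimal sketch, where the command layout and paths are assumptions for illustration:

```python
def localize_hip(command, shared_dir, local_dir):
    """Swap the shared working-directory prefix of the hip path in a
    work item command for a node-local path, so each worker reads the
    hip from local storage instead of over the VPN."""
    return command.replace(shared_dir, local_dir)

command = "hython job.py /mnt/farm/pdgtemp/scene.hip --item 7"
localized = localize_hip(command, "/mnt/farm/pdgtemp", "/local/cache/pdgtemp")
# localized == "hython job.py /local/cache/pdgtemp/scene.hip --item 7"
```

This only rewrites the command, of course; something still has to copy the hip file to the localized path on each worker beforehand, which is why doing it at submission time would be cleaner.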
PDG/TOPs » Can't channel ref in a python partitioner?
- Andrew Graham
- 150 posts
- Offline
Thanks to jens for this one. The solution is
self['some_int'].evaluateInt()
Edited by Andrew Graham - Aug. 6, 2019 03:57:37
PDG/TOPs » Can't channel ref in a python partitioner?
- Andrew Graham
- 150 posts
- Offline
I'm trying to simply eval an int from a channel in a python partitioner, but seeing as I can't do this on a python partitioner…
import hou
node = hou.pwd()
value = node.parent().parm('some_int').eval()
Then I'm not sure how to eval any parm/int on the parent HDA; in this case pwd() will just return ‘/’.
The hacky way is to set the int as an attribute and pull it that way, but then I'm finding I have to make the partitioner dynamic in order to reliably read that attribute, which is not desirable.
Thanks if anyone has a pointer for this.
PDG/TOPs » Free tutorial on PDG Fundamentals in Houdini
- Andrew Graham
- 150 posts
- Offline
In the next 24 hours from this post I will premiere a free tutorial on PDG fundamentals [youtu.be]
This is a tutorial on a boilerplate setup I've used as a basis for VFX production with PDG in houdini, and serves as a template for many use cases.
I produced it to intentionally cover areas I knew I wanted to learn more about when I started implementing PDG in my day-to-day workflows, and how to predictably understand how it generates output with versioning.
I'll cover a common implementation of how to wedge for multiprocessing of elements and being able to explore parameter variations for each element. We use a data structure to support this workflow in most cases for flexibility. We will also cover how to ensure per-frame dependency for all variations is possible for rapid iteration with OpenGL flipbooks, introduce some basic version management ideas (which I'll extend upon in the future), and briefly discuss the implications of PDG and hybrid workflows between a local machine, farm, and cloud compute.
If you like what I'm sharing and you'd like to see me produce more, you can support me on https://patreon.com/openfirehawk [patreon.com]
Those funds primarily go toward supporting open source cloud infrastructure for VFX, and getting the most affordable rendering cost possible - openfirehawk.com
You can also get openfirehawk houdini tools here on github, which represent my current sandbox that I also use in production - https://github.com/firehawkvfx/openfirehawk-houdini-tools [github.com]
I've been using PDG in my pipeline daily since its release a few months back from the date of this video. SideFX implement rapid changes in PDG continuously, so some of the situations I discuss will likely no longer apply in time.
You will occasionally see that some of my usage is not as efficient as it perhaps could be, and there are existing RFEs for some of these areas that I work around; nonetheless, I observe rapid improvement in SideFX's implementation and they deal with these issues regularly.
I hope the devs might watch as well to observe some of my implementation and improve some areas we bump into throughout this use case. Overall, it's a good demonstration of the efficiency that is bound our way.
Edited by Andrew Graham - Aug. 2, 2019 22:43:49
PDG/TOPs » How to get all dependencies recursively for work items / find the top most work items contributing to a workitem downstream.
- Andrew Graham
- 150 posts
- Offline
I'm currently using this for archival purposes and disk cleanup, which is another bonus of TOPs being output-path aware!
This approach below seems to work better. Traverse up all inputs, then evaluate all work items once you have all nodes in the tree.
import numpy as np  # used for the set difference between node lists

def get_upstream_workitems(self):
    # This will generate the selected work items.
    self.pdg_node = self.node.getPDGNode()
    self.node.executeGraph(False, False, False, True)
    added_workitems = []
    added_nodes = []
    added_node_dependencies = []

    def append_node_dependencies(node):
        # Record that this node has been expanded, then queue any
        # upstream nodes connected to its inputs.
        added_node_dependencies.append(node)
        for input in node.inputs:
            input_connections = input.connections
            print("input_connections", input_connections)
            for connection in input_connections:
                dependency = connection.node
                if dependency not in added_nodes:
                    added_nodes.append(dependency)

    added_nodes.append(self.pdg_node)
    for node in added_nodes:
        append_node_dependencies(node)
    # Keep expanding until every discovered node has been expanded.
    diff_list = np.setdiff1d(added_nodes, added_node_dependencies)
    while len(diff_list) > 0:
        for node in diff_list:
            append_node_dependencies(node)
        diff_list = np.setdiff1d(added_nodes, added_node_dependencies)
    print("added_nodes", added_nodes)
    for node in added_nodes:
        for workitem in node.workItems:
            added_workitems.append(workitem)
    return added_workitems
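The same traversal can be illustrated without Houdini or NumPy using a toy graph (ToyNode is a hypothetical stand-in for a pdg node): walk every input breadth-first, visit each node once, then collect work items from all reached nodes.

```python
class ToyNode:
    """Hypothetical stand-in for a pdg node: a name, some work items,
    and a list of input (upstream) nodes."""
    def __init__(self, name, work_items, inputs=()):
        self.name = name
        self.work_items = list(work_items)
        self.inputs = list(inputs)

def upstream_work_items(node):
    """Collect work items from node and every node reachable through
    its inputs, visiting each node at most once."""
    seen, queue, items = set(), [node], []
    while queue:
        current = queue.pop(0)
        if current.name in seen:
            continue
        seen.add(current.name)
        items.extend(current.work_items)
        queue.extend(current.inputs)
    return items
```

Tracking visited nodes in a set replaces the setdiff1d bookkeeping above and also guards against diamond-shaped graphs where one upstream node feeds two inputs.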
Edited by Andrew Graham - Aug. 2, 2019 08:21:55