tpetrick The approach you described should also work. From HOM, you can use hou.TopNode.setSelectedWorkItem(..) to select a work item by id.
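For example (a minimal sketch; the node path and the work item id are placeholders):

```python
import hou

# Minimal sketch: select a work item on a TOP node by its id.
# The node path and the id (42) are placeholders for your own setup.
top_node = hou.node("/obj/topnet1/wedge1")
top_node.setSelectedWorkItem(42)
```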
Thanks, this works great. Is there a way to generate a selected node? I get segmentation faults if I don't generate first. I found executeGraph, but it seems to work on the whole network; I just want to generate a specific node.
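For reference, this is the per-node generate I am after (a sketch, assuming the build exposes hou.TopNode.generateStaticWorkItems; the node path is a placeholder):

```python
import hou

# Sketch of a per-node generate, assuming generateStaticWorkItems
# operates on just this node. The node path is a placeholder.
top_node = hou.node("/obj/topnet1/wedge1")
top_node.generateStaticWorkItems(block=True)  # generate just this node
top_node.setSelectedWorkItem(42)              # then select by id as above
```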
On a side note, I have noticed that PDG isn't using all of my processing power. I have a 28-core (56-thread) CPU, and I am only getting about 30% CPU usage on my tasks. I am running 6 pyro sims that normally give me about 85% usage when running locally without TOPs. I have limited each work item to 8 CPUs (to avoid running out of RAM), but I also get low CPU usage with no limits at all. I have Maximum CPUs to Use set to Use All CPUs Except One.
Is there a way I could get some more use out of my CPUs? I am running individual pyro sims, not clustering.
I was wondering if there was a way I could cook a PDG node, then select a specific work item index on that node.
I am doing something where I am submitting some ROPs to our farm (NOT using PDG), but a lot of the variables are being driven by a CSV Input PDG node. I can select the work item and send the ROP to the farm; however, the work item is not selected when the farm blade picks up the job. My workaround would be to run a pre-render Python script that selects the correct work item to get all the variables, just like I do locally.
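Something like this rough sketch (the node path and work item id are placeholders, and I am assuming generateStaticWorkItems is available to make sure the items exist before selecting):

```python
import hou

# Rough pre-render sketch, run by the farm blade before the ROP renders.
# The node path and the work item id are placeholders for the real setup.
top_node = hou.node("/obj/topnet1/csvinput1")
top_node.generateStaticWorkItems(block=True)  # make sure items exist first
top_node.setSelectedWorkItem(42)              # then select the right one
```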
Tyler Britton2 I am working in 17.5.173, could that be why?
Yes, unfortunately it looks like that is the case. Please let us know if you have trouble with a more recent version.
Hey Chris, I finally got to test the latest production build and it works like a charm, thanks. However, I am having problems filtering some attributes I am loading in with a CSV Input.
In the screenshots I have a "shotindex" integer attribute I am bringing in, and I am using a Partition by Attribute to partition my above work items by the "shotindex" attribute; however, it doesn't seem to be working the same way it did with the wedges in my other case. Am I doing something wrong?
My company recently got HQueue working as a render manager for our Houdini jobs. It works great; I just have a couple of questions.
1. We load a couple of environment variables with a 123.py/456.py script. Is there a way to get those to load automatically per job so that I do not have to specify them in my HQueue launcher nodes?
2. When launching jobs to the farm, I got an error: "hou.PermissionError: Failed to modify node or parameter because of a permission error. Possible causes include locked assets, takes, product permissions or user specified permissions". I only had a Font and a File Cache node in my scene, and after unlocking the File Cache node the problem went away and I was able to render the scene correctly. Ideally I wouldn't have to unlock all of my assets before rendering them; is there something I am doing incorrectly?
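(For reference, the unlock I did by hand could presumably be scripted; a stopgap sketch, assuming allowEditingOfContents is the right call, with a placeholder node path:)

```python
import hou

# Stopgap sketch: unlock the locked node before rendering so parameter
# edits from the job don't raise hou.PermissionError.
# The node path is a placeholder for the asset that fails.
node = hou.node("/obj/geo1/filecache1")
node.allowEditingOfContents()
```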
Tyler Britton2 it looks like the Blur process is only getting the first frame of the Sim step, even though the frame attribute is picking up the current frame…
Yes, sorry that wasn't quite right. If you look at the Output files on one of the partitions in partitionbyattribute1 you'll see that all the wedge frames are there, because the outputs have been merged. Since the expression on the File SOP is `@pdg_input`, it always reads the first element in that list of inputs, which is always element 0 (frame 1).
There are two ways to fix this: 1. Change the File expression to use the correct element of the list ($F-1): `pdginput($F-1, "", 1)`
Or
2. On Blur_FrameRange, set the Expand Input Files Across Frame Range toggle. This means the input list of each work item will be set to only one of the elements from the partition's outputs, and so your original expression will work.
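(If it helps, fix #1 can also be applied from Python; a minimal sketch with a placeholder File SOP path:)

```python
import hou

# Sketch: set fix #1 as an HScript expression on the File SOP's
# file parameter. The node path is a placeholder.
file_sop = hou.node("/obj/geo1/file1")
file_sop.parm("file").setExpression(
    'pdginput($F-1, "", 1)', language=hou.exprLanguage.Hscript)
```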
Hey Chris, I tried both of those; #1 is erroring on the expression, and the Expand Input Files solution for #2 is still only bringing in the first frames. I am working in 17.5.173, could that be why?
chrisgreb If you just want to run a sim again for each wedge, you can use a partitionbyattribute for the wedgeindex value and then generate new simulations from each of those partitions.
Thank you for the help, but it looks like the Blur process is only getting the first frame of the Sim step, even though the frame attribute is picking up the current frame…
Thanks for the clarification, I never knew you could do `@pdg_input.1`. However, I am trying to set up work items for them individually, so I get 5 sets of 1-100 frames, each outputting the blurred result of the incoming sim. I am sorry for not making that clearer.
Thanks for taking a look at it. 1) Yes, this was a mistake. Fixed. 2) Fixed. 3) It's in the noise offset of my emitter. I am not really going for something amazing; I am just trying to figure out this partitioning.
I uploaded the scene with the fixes, but I am not getting the 5 wedged blurred results of the sim that I am looking for (only getting one of them). Thanks for your help.
Thanks for the help, guys. The partitioning works great, but I am only getting it for one of my sims. I have not used partitioners a lot, so I am sure it is something I am doing incorrectly. I made an example hip file with the setup; it would be great if I could have some more guidance on how to complete it, maybe even using some mappers if I need to…? I have not found many good examples on mappers, as Ken said.
The frame range is 1-20, but in my real production case the wedges all have different frame ranges, so it would be nice to have the setup handle that case.
I have a PDG network where I am generating 5 work items from a @wedge attribute using a Wedge TOP node. I am then running a sim to wedge the emission of the source, from frames 1-100. I then want to do another post operation on those results (for example, blurring the volume), per wedge, using my original @wedge attribute. I can use the `@pdg_input` variable inside of the File node to pick up the sim from the corresponding work item, and it all works fine. However, when I set the frame range for my 2nd step (the blurring post operation) to the 1-100 frame range, I get 50,000 new work items for that second step (because it takes my original 500 work items (5 wedges × 100 frames) and expands each of them across 100 frames again for the next step). However, I only want 100 work items per wedge index, since I am only picking up the resulting frames from my original sim. I have tried a number of things, including isolating the first frame of each wedge and then stepping through the sim with a @frame attribute in a File SOP, but it still sticks to that first frame and does not run through the whole sim.
I know this might sound a bit confusing, and I can make a hip file when I get home, but I was wondering if someone could point me in the right direction off the bat. I was trying to use Partition TOP nodes to group my original @wedge work items into single work items after my initial pyro sim, but with no success.
Is there a way to run HScript commands in PDG as well, something like `opparm -c /out/mantra1 execute`? I tried it with the Generic Generator with no success.
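(This is the sort of thing I am imagining; a rough sketch of running the command from a Python Script TOP, assuming the TOP cooks in a session where /out/mantra1 exists:)

```python
import hou

# Rough sketch: run an HScript command from Python during a TOP cook.
# hou.hscript returns a (stdout, stderr) tuple.
out, err = hou.hscript("opparm -c /out/mantra1 execute")
if err:
    raise RuntimeError(err)
```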
I am having trouble getting this to work. I have geometry written out, and based on it I want to create @wedge wedges according to how many points it has. I can't upload a scene, but it is something like this:
Guide Geometry ROP Geometry Output (to find how many points it has) - Geometry Import (with Work Item Generation set to Dynamic, Geometry Source set to Upstream Result File, Load Geometry During Cook checked, pointing to the correct External File Path, and with Item Index set to Upstream Item Index) - Wedge driving @wedge (which picks up the number of points from the Guide Geometry, from the same file that the Geometry Import is importing)
No matter what, it doesn't set @wedge to the number of points in the Guide Geometry. I don't get any errors, just 1 work item. Everything is set to Dynamic.
I was wondering how I would run a Linux terminal bash script using PDG. Something like the Unix SOP node, but in TOPs. It is just a one-line script (it is actually just `deadlineslave`, to start a Deadline slave on my workstation).
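(A rough sketch of what I mean, assuming a Python Script TOP is an acceptable substitute for a dedicated shell node:)

```python
import subprocess

# Rough sketch: launch the one-line shell command from a Python Script TOP.
# "deadlineslave" is assumed to be on the PATH of the machine doing the cook.
subprocess.Popen(["deadlineslave"])
```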
I am working for a company that is new to setting up Houdini on their farm. We have Houdini 17.5 working just fine, and want to roll it out onto their Linux farm with some Engine licenses. The Deadline version they are using is 6, not 10, which is the latest version.
I was wondering if it is possible to get Houdini PDG working on their farm without upgrading to Deadline 10. If this is not possible, and we had to get Deadline 10 on a few machines to get PDG working, what would the setup process be, and are there good walkthroughs on how to set it up? We have Deadline working with Gaffer and Nuke, so we should just have to add Houdini PDG to it.
Thanks Brandon, I will start doing this. When there are no jobs left to start and some jobs are still going, will the CPUs from the finished jobs go over to help the jobs still going, or does it not work like that?
This is very helpful. Just to be clear: I have 24 threads on my machine, and I was getting 6 jobs at most when rendering a bunch of tasks. I assume that is because each job was assigned 4 threads, is that correct? When I switch CPUs Per Task to 2, from what I understand, I should get a maximum of 12 jobs, but I only seem to get 3. When I set Maximum CPUs to Use to 24, it then opens up and I get my 12 jobs. Why do I need to specify the Maximum CPUs to get my 12 jobs? And it looks like I go over 12 jobs when I set Maximum CPUs to over 24; is there a way to limit that number to the number of CPUs my computer has automatically?
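(My rough mental model of the slot math, which may well be wrong; the formula is my assumption, not documented scheduler behavior:)

```python
# Assumption: the local scheduler hands out concurrent jobs roughly as
# available CPUs divided by CPUs per task.
def max_concurrent_jobs(maximum_cpus, cpus_per_task):
    return max(1, maximum_cpus // cpus_per_task)

print(max_concurrent_jobs(24, 4))  # 6 jobs, matching what I saw
print(max_concurrent_jobs(24, 2))  # 12 jobs, once Maximum CPUs is set to 24
```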