Found 132 posts.
PDG/TOPs » Empty or Missing file path
- kenxu
- 544 posts
- Offline
PDG/TOPs » Working with HDAs that contain hou package python code
Hi there,
Basically, the restriction is not to use hou to modify the scene. Reading should be OK, but modifying geometry or parameters is not. It's also not OK to create nodes with hou (PDG runs this code on a background thread). If you want to create nodes with hou, the right way to do it is via the command chain feature.
These restrictions apply to the Python Script node while it is set to “run in process”, and to any of the Python callbacks you implement (such as the Python Processor node's onGenerate, onRegenerate, etc. callbacks). They do NOT apply to any SOP node, or to any HDA you create. You can do whatever you want with hou in those places.
PDG/TOPs » Doing sequential execution on parallel tasks
We are definitely planning more improvements to the loop and topfetch features - these are powerful constructs that are still under-explored. That said, for your use case the basic structure for solving the problem won't change - the right way to dynamically launch a variable number of for loops is through the topfetch feature.
Regarding the last part of your problem, it sounds like there is an issue with a specific ROP, so the problem is not related to PDG itself?
PDG/TOPs » Object network in top network
This is addressed in this thread:
https://www.sidefx.com/forum/topic/69324/ [www.sidefx.com]
Basically it's not a PDG bug but rather an issue with that particular ROP.
PDG/TOPs » Scheduled nodes cooking in farm or local
Many nodes in PDG/TOPs, such as Attribute Create, cook instantly without needing to be scheduled at all. Workitems that do need to be scheduled will by default use whatever scheduler you specified as the default on the TOP network. You can achieve per-processor scheduler settings by pointing a specific processor at a different scheduler. So if you want to make sure a particular processor uses the local scheduler while the rest use your custom farm scheduler, specify your farm scheduler as the default, and override the specific processors that need to run on the local machine with the local scheduler.
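The fallback logic described above can be sketched in plain Python. Note the names here (`resolve_scheduler`, the scheduler strings) are illustrative assumptions, not the actual PDG API:

```python
# Sketch of per-processor scheduler resolution: a processor may carry an
# explicit scheduler override; otherwise the TOP network's default
# scheduler is used. (Illustrative model, not the PDG API.)

def resolve_scheduler(processor_override, network_default):
    """Return the scheduler a processor should submit work to."""
    return processor_override if processor_override else network_default

# Example: one processor pinned to the local scheduler while the
# network default is a custom farm scheduler.
default = "farm_scheduler"
processors = {
    "wedge1": None,                     # no override -> uses farm default
    "ropfetch_sim": "localscheduler",   # pinned to the local machine
}
assignments = {name: resolve_scheduler(ovr, default)
               for name, ovr in processors.items()}
```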
PDG/TOPs » Doing sequential execution on parallel tasks
Ah, ok, I understand better. So, we basically need to launch as many for loops as there are sortids. The way to do this is via the topfetch feature, which allows a separate top network to be launched per workitem. I put the for-loop in that separate top network (top_fetch_net). In the for loop, we generate the number of iterations based on the ‘partitionsize’ attribute, which is inherited from the partition. Finally, notice there is the promote_partitioned_item_attrs node immediately above the topfetch. This is so we can promote whatever attributes you need from the partitioned items to the top level, so it is accessible in the for loop. In this case, I aggregated the names of the characters onto the ‘upstream_names’ attribute.
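The promotion step described above can be sketched in plain Python. This is a hypothetical model of a partition's items, not the PDG API; the attribute names `partitionsize` and `upstream_names` come from the post:

```python
# Sketch of promote_partitioned_item_attrs: gather the partition size and
# the partitioned items' names to the top level, so the per-partition
# for-loop can read them. (Illustrative model, not the PDG API.)

def promote_partition_attrs(partition_items):
    return {
        "partitionsize": len(partition_items),
        "upstream_names": [item["name"] for item in partition_items],
    }

partition = [{"name": "alice"}, {"name": "bob"}, {"name": "carol"}]
promoted = promote_partition_attrs(partition)
# The for-loop would then generate promoted["partitionsize"] iterations,
# one per character name in promoted["upstream_names"].
```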
PDG/TOPs » Processing Multiple files
PDG/TOPs » Batch renders with Arnold using PDG
Could you kindly post something on the Arnold forum, in that case? We’ll raise this as well with them whenever we get to meet. If you could upload a reference file, we can potentially work with them to correct the issue. Note however this may take a while, as it requires coordination between our two companies.
Edited by kenxu - Sept. 18, 2019 17:53:46
PDG/TOPs » Preview problem
Hi Ostap,
The visualization works with respect to the currently selected workitem, not node. A PDG node can contain many workitems, so it's unclear which one should be visualized.
PDG/TOPs » Object network in top network
Yes, it currently is. We support only absolute paths with PDG at the moment. This is because the ROP Fetch can point to a path in a totally different hip file, in which case a relative path doesn't mean anything. For the specific (but common) case where the ROP Fetch points to something in the local file, though, we have an RFE to support relative paths.
PDG/TOPs » Doing sequential execution on parallel tasks
Currently, trying to expand workitems from an upstream partition in a loop is hard to do. This is because each iteration of the loop depends on the previous iteration, so trying to “trace things back” to the upstream partition (so you can find the partitioned workitems to be expanded again) can be very tricky. It can be done as a work-around in some cases (which I'm not going to show here, unless you really, really want to see it), but we don't recommend it.
I've attached a file here that I *think* does what you want: you have this “sortid” attribute - I think the goal is to execute workitems based on their sortid in ascending order, using a feedback loop. I've used a sort node instead of a partition to do the sorting, and pass that off to the loop. Because we are not partitioning, there is no need to re-expand the workitems in the loop.
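The sort-then-serialize idea above can be sketched in plain Python. The dict-based workitems and `run_in_sortid_order` are illustrative stand-ins, not the PDG API; only the `sortid` attribute name comes from the post:

```python
# Sketch: sort workitems by their "sortid" attribute ascending, then
# execute them one after another, as a feedback loop would.
# (Illustrative names, not the PDG API.)

def run_in_sortid_order(workitems, execute):
    results = []
    for item in sorted(workitems, key=lambda w: w["sortid"]):
        results.append(execute(item))  # strictly serial, ascending sortid
    return results

items = [{"name": "c", "sortid": 2},
         {"name": "a", "sortid": 0},
         {"name": "b", "sortid": 1}]
order = run_in_sortid_order(items, lambda w: w["name"])
# order is ["a", "b", "c"]: execution followed ascending sortid
```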
PDG/TOPs » Doing sequential execution on parallel tasks
Hi there,
Yes, feedback loops are likely the construct you are looking for to solve your problem. However, there are some gotchas with them that we have not yet explained well (but I'm in the process of making another master class to better explain it):
1. Dynamic partitioning is not currently supported in feedback loops.
2. Only actual outputs are capable of being fed back to the next loop iteration. Regular attributes cannot currently be fed back.
There are work-arounds to the above restrictions. If you post your scene file, we'll work with you to get it to go.
PDG/TOPs » Processing Multiple files
Hi there… it's hard to tell from the screenshots alone where the problem might be. Could you please post the file itself?
PDG/TOPs » How to correctly use ROP Alembic
The ropalembic expects regular workitems, not a partition, as input. Try putting the ropalembic just above the Partition by Attribute, turn on “all frames in one batch”, and leave the partitioning to later?
Edited by kenxu - Sept. 11, 2019 17:32:32
PDG/TOPs » Force ROP geometry to follow index
In general PDG cooks things in parallel, so unless there is a dependency to force a specific order, it'll cook asynchronously in random order. But in case I didn't understand you correctly, please post a file?
PDG/TOPs » force PDG network to run completely in-process?
We have an FBX ROP, and you could just ROP Fetch it … why not just use that one? Are there concerns beyond performance?
PDG/TOPs » force PDG network to run completely in-process?
Hey there,
A couple of things to address here. First, it's unfortunately not possible for many things (such as, but not limited to, RopGeometry) to run in process. This is because Houdini is in general not thread safe. We allude to some of the problems here:
https://www.sidefx.com/tutorials/pdg-core-concepts/ [www.sidefx.com]
time 39:20
If you must use the Python Script node (as opposed to the Python Processor, which is always out of process), there are some tips here on how to enforce thread safety, basically by running the workitems in serial:
https://www.sidefx.com/forum/topic/68479/ [www.sidefx.com]
Now that said, there are ways to run stuff in process safely - chiefly through the invoke node, which runs a chunk of compiled SOPs and is thread safe. We are also doing some things in H18 to make things better in this regard: there will be a native geometry attribute that can be inherited from workitem to workitem and handed off to the invoke node, and we are making HDA processor pooling work better, which will also eliminate the spin-up cost of hython in many cases.
Lastly, regarding the gamedev CSV exporter not working with PDG - in general I'm afraid the gamedev toolset is not well tested against PDG, and we can't recommend their use in combination at this point. If you could please upload a sample with that problem, it would help us track down some of these issues and hopefully get to a point where we can recommend their use in combination.
PDG/TOPs » Making The output file size an attribute
Yes, the output file size is an intrinsic property on workitems (not a regular attribute). It's something we don't want downstream workitems to inherit, which would be the case if it were an attribute. To turn it into an attribute, we are looking into making an attribute promote node that can do such things (and more), but in the meantime the best approach is to use a Python Processor. You can access the upstream workitem and create an attribute from its file size. Note that by doing this your node (and thus the downstream graph) will need to be dynamic - the file size is not known until the upstream item is actually computed.
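The Python Processor idea above can be sketched in plain Python. `FakeWorkItem` and `promote_file_size` are hypothetical stand-ins; the real node would use the PDG work item API instead:

```python
# Sketch: read the upstream workitem's output file size and store it as
# a regular attribute on the downstream workitem.
# (FakeWorkItem is a hypothetical stand-in, not the PDG API.)

import os
import tempfile

class FakeWorkItem:
    def __init__(self, output_file=None):
        self.output_file = output_file
        self.attribs = {}

def promote_file_size(upstream, downstream):
    # The size is only known after the upstream item has cooked,
    # which is why the downstream node must be dynamic.
    size = os.path.getsize(upstream.output_file)
    downstream.attribs["file_size"] = size
    return size

# Demo with a temporary "output" file of known size.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"x" * 1024)
up, down = FakeWorkItem(f.name), FakeWorkItem()
promote_file_size(up, down)
os.unlink(f.name)
# down.attribs["file_size"] is 1024
```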
Edited by kenxu - Sept. 5, 2019 17:49:28
PDG/TOPs » PDG support with tractor (ropfetch and native pdg ris node)
… Also, the RfH ROP is something maintained by Pixar. We'll raise it with them in our next meeting, but it would help also if you raise it on their forums.
Edited by kenxu - Aug. 23, 2019 10:55:47
PDG/TOPs » how to set file dependencies procedurally
The (likely) only time you'd need file dependencies for the HDA processor is if the HDA it's running is dependent on other HDA files. Those sub-HDAs should be listed on the file dependencies list.
That said, it sounds like in your case you want to apply an HDA to some files generated upstream? In that case, you don't need to list them as file dependencies. If those files are listed as results of upstream workitems, you can refer to them as `@pdg_input.0`, `@pdg_input.1`, etc. in your HDA parameters tab.
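To make the `@pdg_input.N` referencing concrete, here is a plain-Python sketch of the idea. The token syntax comes from the post; the expansion function itself is illustrative, not how PDG implements it:

```python
# Sketch: resolve "@pdg_input.<N>" tokens in a parameter string against
# a list of upstream result files. (Illustrative only; PDG performs this
# expansion itself when evaluating parameters.)

import re

def expand_pdg_inputs(parm_value, upstream_results):
    def repl(match):
        return upstream_results[int(match.group(1))]
    return re.sub(r"@pdg_input\.(\d+)", repl, parm_value)

results = ["/out/geo_0.bgeo", "/out/geo_1.bgeo"]
expanded = expand_pdg_inputs("merge @pdg_input.0 @pdg_input.1", results)
# expanded == "merge /out/geo_0.bgeo /out/geo_1.bgeo"
```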
If you truly want to procedurally drive the file dependencies tab on the HDA Processor, it's a little harder - that tab is a multiparm. I suppose you could do it if you were able to bound the maximum number of file dependencies by pre-creating that many slots in the multiparm…