1. I've attached the sample files from the launch here.
2. PDG has a feature that calculates the “expected output” of workitems. It checks on disk to see if the expected output is already there; if it is, it marks the workitem as cooked instantly. This is currently implemented for ROP Fetch and the HDA Processor, and it only works if you are using the @-syntax. So if you're using environment variables, or a node outside of those mentioned above, that feature won't kick in. This may explain the inconsistencies you're seeing - if not, please give us a file that we can take a closer look at. Note also that we recently added some niceties to ROP Fetch to make this work better - there is an “Output Parm Name” field you can use to specify what the expected output files should be. Details here:
https://www.sidefx.com/forum/topic/66828/
3. Yes, those nodes are pretty agnostic. Try using a File Pattern node, for example, to pick up the images as workitems, and then send them to the ImageMagick or FFmpeg nodes.
4. I don't have too much to add in terms of Redshift, but again, try the File Pattern node to pick up the Redshift-rendered images, or use the PDG API to attach result data to a workitem (the addResultData method on pdg.WorkItem).
I also see there is already a whole bunch more discussion around Redshift, so perhaps the issue is settled:
https://www.sidefx.com/forum/topic/66828/
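As an illustration of the file-pattern approach from points 3 and 4, here is a rough Python sketch of the idea — expand a glob into one workitem per matched image. The directory and file names are made up for the example, and the commented `addResultData` call is only indicative (check the pdg.WorkItem docs for the exact signature):

```python
import glob
import os
import tempfile

# Hypothetical stand-in: pretend these are Redshift renders on disk.
workdir = tempfile.mkdtemp()
for frame in range(1, 4):
    open(os.path.join(workdir, "shot.%04d.exr" % frame), "w").close()

# A File Pattern TOP does essentially this: expand a glob into one
# workitem per matched file.
images = sorted(glob.glob(os.path.join(workdir, "shot.*.exr")))
for path in images:
    # Inside a Python TOP you would instead attach the file as result
    # data on the workitem, e.g. something like
    #   work_item.addResultData(path, "file/image", 0)
    # (illustrative only; see the pdg.WorkItem documentation).
    print(path)
```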
Found 132 posts.
PDG/TOPs » General PDG/TOPs Issues and a Sprinkling of Redshift
- kenxu
- 544 posts
- Offline
PDG/TOPs » Renderman
It's not wrapped as nicely as the ROP Mantra node, but there are RenderMan ROPs, and you can use a ROP Fetch to point at them.
In fact, if you dive into the ROP Mantra TOP node, you'll see it's actually an HDA that is doing exactly that.
PDG/TOPs » Is it wrong to use double colon in TOP node hda's type name?
PDG/TOPs » Pyro Clusters with PDG
Re. CPU utilization, try turning up the “Houdini Max Threads” setting on the local scheduler. You've already set the Maximum CPUs to use, so that's not the issue.
PDG/TOPs » PDG Debugging
So this is a specific issue with the HDA Processor node. It's a wrapper program we wrote to just run an HDA, implemented with HAPI. It's quite possible that it's not digging deep enough into the failure to get all the logs it can out of HAPI. Attaching the debug hip paths as attributes makes sense too. So yes, all things we should be improving on. I've logged RFE 97150 to capture it.
The other bit of advice for now would be to use a Filter by Expression node to limit the number of workitems, then run the Debug HIP file function on the subset to cut down the number of files being dumped.
PDG/TOPs » Processing multiple wedged simulations
Hi Christopher,
It has to do with the way PDG generates downstream work from upstream. If we already have all the frames represented separately as workitems, then PDG can only generate 1 or more workitems (in this case frames) per upstream workitem or frame. The only thing that really makes sense here is to generate just 1 frame per upstream frame, given that we've already fully expanded the frames. If we do that, though, we'd end up creating one separate task per frame, which is not batching and would suffer from a lot of setup/teardown overhead. If we go this route, at a minimum we'd need to partition the expanded frames, so that we can generate a batch workitem for the frames in each partition. That adds more complexity, but we'll think about it a bit more to see if it's worth it.
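The batching idea described above can be sketched as a simple partitioning step. This is only an illustrative stand-in, not the actual PDG implementation, and the batch size and frame range are made up:

```python
def partition_frames(frames, batch_size):
    """Group fully expanded frames into batches so each batch becomes
    one task, amortizing process setup/teardown over many frames."""
    return [frames[i:i + batch_size] for i in range(0, len(frames), batch_size)]

# One task per frame (no batching) vs. one task per batch of 10:
frames = list(range(1001, 1025))          # made-up expanded frame range
batches = partition_frames(frames, 10)
print(len(frames), "unbatched tasks ->", len(batches), "batched tasks")
```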
PDG/TOPs » Can I get the second output of a HDA in PDG?
Hi Eric,
PDG is a templated processing graph - as such, when nodes connect, it's really talking about the flow of the workitems. With that semantic in mind, nodes like Split can have multiple outputs, because there are two ways the workitems can flow with a Split node. This concept is quite distinct from the multiple results within a workitem, which is probably what you're thinking of when you mention multiple outputs of the node. To address the multiple outputs (bake results) of a workitem, please use @pdg_input.0, @pdg_input.1, @pdg_input.2, etc. from a downstream node to refer to the upstream workitem's various bake results.
That said, there is an outstanding RFE (RFE 94236) to actually capture multiple bake results of HDA Processor, so it can be addressed downstream as @pdg_input.0, @pdg_input.1 etc. Right now, for HDA Processors, it is indeed only grabbing the first output of the HDA itself.
Edited by kenxu - May 22, 2019 13:51:15
PDG/TOPs » Processing multiple wedged simulations
Hi Christopher, I think I get what you're saying: break out the functionality that generates the right frames so it lives outside the ROP and can be used elsewhere. The trouble with that is that a lot of the time you'll want to use the batch functionality of the ROP (to minimize startup and shutdown costs), and breaking it out like this wouldn't allow you to take advantage of that.
PDG/TOPs » Filtering highest file / folder version
Maybe try the “Filter by Expression” node, with an expression like @pdg_index < threshold. That will get rid of workitems whose index is below that threshold. If it's some other attribute and not the index, it would be just @whatever_attrname.
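As a rough illustration of what such an expression does, here is a plain-Python equivalent, with a made-up threshold. Whether matching workitems are kept or discarded depends on how the Filter by Expression node is configured; the sketch below keeps the ones matching the expression:

```python
# Stand-in for workitems: each carries its index as an attribute.
workitems = [{"pdg_index": i} for i in range(6)]

threshold = 3  # made-up value

# Rough equivalent of filtering with "@pdg_index < threshold".
kept = [w for w in workitems if w["pdg_index"] < threshold]
print([w["pdg_index"] for w in kept])
```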
PDG/TOPs » Processing multiple wedged simulations
Hi Christopher,
Now that I think about it some more, in this particular case we may not need a mapper. The reason is that it is in fact possible in this case to procedurally derive the right relationships with the information given. If we already know the upstream frame range and downstream frame range ahead of time, then we have all the information we need to connect the right frames together. However, we are missing an “Evaluate Using” mode to allow for disjoint frame ranges like that. We will add this in, and this should solve your problem neatly.
In the meantime, here is a workaround. It's not elegant - what I described above is the right way to solve the problem - but it's instructive, so we'll post it here. The idea is to generate all the needed frames, then filter out the frames you don't want.
Finally, what I said about mappers and their functionality earlier stands. The key to solving this problem without mappers is that we've figured out a way to procedurally derive the connectivity information from the settings the user has entered ahead of time. If, for example, we wanted to pick out a few procedurally generated faces of a building for further processing (e.g. adding decorations), those faces won't even exist until the building is generated, so we won't know how to connect the decoration operations to the faces ahead of time like we did here.
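The generate-then-filter workaround might look roughly like this in plain Python, with made-up upstream/downstream frame ranges standing in for the actual node settings:

```python
upstream_frames = range(1, 11)   # made-up upstream frame range
frames_per_item = 5              # expansion factor per upstream frame
wanted = set(range(20, 31))      # made-up downstream range to keep

# Step 1: generate all the frames each upstream workitem could need.
generated = [
    up * frames_per_item + offset
    for up in upstream_frames
    for offset in range(frames_per_item)
]

# Step 2: filter out the frames we don't actually want.
filtered = [f for f in generated if f in wanted]
print(filtered)
```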
Edited by kenxu - May 14, 2019 11:54:19
PDG/TOPs » RFE: DnD parms into Wedge TOPs
PDG/TOPs » Processing multiple wedged simulations
Hi ChristopherC, I'm not sure I totally understand, but if one could procedurally determine the current frame range from upstream, then partitioning is the way to go. If the frame range is manually specified and not procedurally derived from upstream, then put a mapper in between with the appropriate specifications.
I don't think it makes sense to have the mapped range passed down from a workitem because then the specification on how to do the mapping is per workitem, and mappers are supposed to specify dependency relationships from all upstream workitems to all downstream workitems. The rule with which it does so is not on a per workitem basis.
Edited by kenxu - May 13, 2019 13:46:41
PDG/TOPs » Processing multiple wedged simulations
Ok, here is the problem - the blur part of the network only accepts 1 of the 5 inputs. A single File SOP can only read 1 of the 5 upstream images. In order to blur all 5 images, you'll need something with 5 File SOPs (or a single File Merge SOP), each set to `@pdg_input.0`, `@pdg_input.1`, `@pdg_input.2` …etc. Then the rest of your SOP network will need to properly blend those together.
Edited by kenxu - May 9, 2019 14:34:07
PDG/TOPs » Processing multiple wedged simulations
Hi Tyler,
I took a quick look at your file. While I may not know for sure whether what I see is causing the problem, I did notice at least a few problems:
1) The wedge node is creating 5 wedges on an integer attribute from 0 - 1. This means that 2/5 wedges will have a value of 0, and 3/5 wedges will have a value of 1. So for all intents and purposes, there are 2 wedges, not 5.
2) The Blur_FrameRange node appears to be pointing at the wrong part of the network - it is pointing to the original sim and not the blur part of the network.
3) I looked around and saw no mention of @wedge (the name of the wedge attribute being created in the beginning) being used anywhere in the DOP or SOP network. Maybe I missed something, but unless that is actually used in those networks, there will be no variation.
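The wedge-count arithmetic in point 1 is easy to verify: five evenly spaced samples across an integer range of 0 to 1, rounded to the nearest integer, collapse to just two distinct values. A quick sketch (assuming evenly spaced samples and nearest-integer rounding, which is not necessarily exactly what the Wedge node does internally):

```python
wedge_count = 5
lo, hi = 0, 1  # integer attribute range from the wedge node

# Five evenly spaced samples across [0, 1], rounded to the nearest int.
samples = [lo + (hi - lo) * i / (wedge_count - 1) for i in range(wedge_count)]
values = [int(s + 0.5) for s in samples]

print(values)  # only two distinct values instead of five
print(values.count(0), "wedges at 0,", values.count(1), "wedges at 1")
```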
PDG/TOPs » Processing multiple wedged simulations
Hi Ostap,
Very good point. If your second frame range cannot be procedurally derived from the first, then it is a separate chain of proceduralism, and mappers are the right construct for tying the two chains together.
To this point we have not pushed hard on this topic (there are literally no tutorials that touch mappers at this moment), because we felt the community needs time to even absorb the first two major constructs in PDG: processors and partitioners.
That said, mappers are powerful and necessary constructs that can model any kind of human (and thus non-procedural) interaction with a procedural system. For example, say I have a procedural building and want to do some manual decorations on top of the procedurally generated content, but I want to maintain those manual edits even if I go back and update the procedural building by changing its parameters. The manual edits are a human interaction that cannot be derived procedurally, and to this point, how to properly maintain manual edits on top of procedural content has been a tough problem plaguing the community. Mappers offer a framework that can solve this problem and others like it, such as topology-independent editing of procedural content, which is another example of manual interaction with a procedural system.
You have hit on another example with the above. Kudos - this shows your understanding of the system is becoming quite advanced.
PDG/TOPs » Processing multiple wedged simulations
Hi Tyler,
It sounds like for your second step, at the very least, you should be setting the “Evaluate Using” parameter to “Single Frame” instead of “Frame Range”. If your blurring operation needs multiple upstream frames, then consider a Partition by Frame node. You shouldn't need mappers for this (mappers are an advanced topic that mostly applies to tying together two procedural chains, such as handling manual edits on top of an existing procedural chain; we'll have tutorials around it in the future once people get more used to using partitioners).
PDG/TOPs » PDG product configurator
WRT the custom columns in the task graph table, it's a well-known RFE. We are working on it.
PDG/TOPs » hou.PermissionError
Hi Wolrajh,
So hopefully we are one step closer to getting it deployed properly for you guys. A solution for the DHCP issue is on the way as mentioned before - any other blockers, please let us know.
PDG/TOPs » hou.PermissionError
Hi Wolrajh,
I think the paths and variables issue with Deadline has been solved. DHCP you seem to have a work-around for, and we are working on a longer term solution that we may be able to deliver in a few weeks (have a “tracker” run on the farm, so everything talks to the tracker, and the tracker talks to PDG thus bypassing the DHCP issues).
The asset unlocking - we've been working with the assumption that parameters that need to be set are exposed at the top level, or nodes are appropriately marked as editable. However, we see this is not the case here. We'll try to put in a change to “auto-unlock” these things on the farm. We can probably do this fairly quickly. If there are other issues with Deadline that we're not yet aware of, please point them out.
Help is on the way…thanks for your patience - bear with us here
Edited by kenxu - May 3, 2019 11:18:04