Thanks for your time. In 18.0.566 your file also works with the cache option.
Another question I still have is rendering with v blur; it does not work in any case here.
Edit:
The problem gets even more unpredictable as soon as particles are reaped as they die. That is another point for the v blur approach, but I still found no way to use vblur.
As soon as I keep all dead particles, the motion blur works again. But then I have to deactivate/hide them with the Prune node based on the “dead” attribute.
My initial guess was to use the “usd_flattenedprimvarelement” VEX function to get the right value for the instance, but I need an index, and @elemnum does not work. The documentation is vague, to say the least.
I think it is not supported yet, but I do hope I'm wrong.
Solaris » Instances and motionblur
-
- sanostol
- 557 posts
- Offline
I tried the cacheLop, but maybe (I mean, chances are good) I'm doing something wrong here.
The network looks like this for all cases; just the last one works.
Edited by sanostol - Sept. 21, 2020 17:08:33
Solaris » Instances and motionblur
Should it work at all in version 18? As Karma is still beta and Solaris quite new, it would be good to know if I just keep running against a solid “not yet” wall.
Thanks for any hints
Solaris » Instances and motionblur
This should be working, right? What am I missing here?
I tried both motion samples and the v attribute, and neither has any effect. What do I have to add here?
In addition, vblur on a mesh does not seem to work the way I would expect; only the tree with samples on a mesh blurs at all.
Please have a look at the stage in the file.
Thank you
Solaris » Transferring attributes from USD graph objects to instances
Hello,
I have some issues with instancing and USD. When the “Instance to Point” node is set to Internal SOP as the Location Source, all point attributes on the target points are copied as expected: “test”, “Cd”, and “v” in my example.
But if I use the first input as the target, I cannot copy them. One point is that the attributes on the target points are renamed to “primvars:test”, “primvars:displayColor”, and “velocities”. This happens with SOP sources and abc file sources as well.
So how do I copy the attributes? I tried “primvars:test”, but it does not work.
I attached a file showing my problem; it is an archive with a hip file, two bgimages for Houdini, and an abc.
Thanks for your help
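For what it's worth, the renaming pattern described above can be summarized as a small lookup: the two special cases are the USD-side names mentioned in the post, and the generic primvars: prefix is the usual convention for everything else. This is only a sketch of the naming rule, not the actual import code:

```python
# Hedged sketch of the SOP-attribute -> USD-property renaming described above.
# The special cases are the USD-side names from the post; other point
# attributes typically just gain a "primvars:" prefix.
SPECIAL = {
    "Cd": "primvars:displayColor",  # color attribute
    "v": "velocities",              # velocity attribute
}

def usd_property_name(sop_attr):
    """Guess the USD property name a SOP point attribute is imported as."""
    return SPECIAL.get(sop_attr, "primvars:" + sop_attr)

print(usd_property_name("test"))  # → primvars:test
print(usd_property_name("v"))     # → velocities
```

So when matching attributes on the LOP side, querying the primvars:-prefixed name (or velocities/displayColor for v/Cd) rather than the raw SOP name is the thing to try.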
Solaris » slow VolumeLop for writing usd files
Hi
I read in some VDB data with the Volume LOP, and it takes quite some time to write out a complete USD sequence. It seems the LOP reads in the volume for every frame, which takes quite some time.
Is there a way to speed this up, maybe by providing bounds information and field descriptions?
The field setup will not change over time.
thanks
Martin
PDG/TOPs » Cooking top nodes by script
Thank you!
EDIT:
The “executeGraph” method works fine in a running session; all the PDG nodes are accessible. But when I tried to use it in an HDA “on loaded” event, I again got None back from the “getPDGNode” method.
The “executeGraph” method seems to have no effect in an “on loaded” event.
Is there a way to get this running in that kind of event?
Edited by sanostol - April 10, 2020 08:39:50
PDG/TOPs » Cooking top nodes by script
Hi, I try to cook nodes with a Python script, but … well, it could run better.
I attached a file. I would like to open that file and run this Python script:

top = hou.node('/obj/topNet/wedge1')
print(top.path())
top.setDisplayFlag(True)
top.getPDGNode().cook(True)
work_items = top.getPDGNode().workItems
print(work_items)

When you run it directly after opening the file, it fails: top.getPDGNode() returns None.
After cooking the TOP nodes with the UI and deleting the results, the script works.
I wonder what is going on here. It seems that an initial cook by hand initializes something so that getPDGNode works. How can I do this?
thanks for any hints
Martin
PDG/TOPs » PDG Graphs on Farm
Hi Manuel,
Yes, we are using the Muster scheduler. The vvertex developer is very supportive and very cool, and we are in contact with him.
Hi Chris,
I get your point (more or less): with a typical static job description you lose a lot of functionality. I'm still trying to grasp the broader concepts of submitting stuff to farms; I think at some point I have to install HQueue to be able to compare. The downside I see right now with a full PDG submission to the farm is the lack of overview. Jobs pop into existence depending on previous jobs, which makes it hard to keep track when you have multiple jobs with dependencies. Right now, if a job in the chain fails, I'm not too sure whether I can restart that specific job/chunk and everything is just fine.
I specifically have tasks in mind that are low maintenance, proven, and of high count: generic stuff, creature FX, things like that. They are less dynamic in job creation: asset imports, collision preparation, presim, sim and postsim passes, flipbook and ffmpeg, maybe a bit of wedging. They are not difficult, but most of the time you have to deal with huge amounts and just want to throw them onto the farm and see result clips before you push them into the next department.
It would be easier with the TOP interface embedded into the scheduler.
What position does Pilot have here?
Thanks for your insight
Martin
PDG/TOPs » PDG Graphs on Farm
Hi Manuel, thank you.
I'm a bit confused, as we currently connect to the farm renders with a Muster scheduler, and I'm not sure where an additional scheduler fits in here. But I see there are “submit graph as job” buttons in HQueue and Deadline as well. Does that mean the complete graph is processed as a job on the farm, just as if you opened the scene and did a Cook Output Node?
I was looking for an even more static way: all static tasks are evaluated with all their dependencies, and then every node that creates files is submitted as an “independent” job to the dispatcher, but with the correct dependencies set on job or chunk level.
I tried to resolve the dependencies with the API, but I could not easily get past nodes like Wait for All, as they seem to have no staticWorkItems, just staticWrappers. That made me wonder whether there is maybe already a defined way in the API to resolve all dependencies.
About the expanding-jobs issue, I think this is possible. Right now, when I start a job with several TOP nodes, the first batches of tasks get launched on the assigned computers, and no other job exists for Muster at this moment. As soon as one batch is finished, a dependent batch pops into existence on Muster. The only info it has is the TOP node, task name, and frame range.
It looks like this: --batch -p “filename” -n “\obj\geo1\fileCache1\render” -i “ropfetch40” -fs 1 -fe 1 -fi 1
Muster seems to just connect to the farm; all the dispatching happens in PDG. Hope that helps.
Martin
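A static resolution pass like the one described above can be sketched in plain Python, outside of any Houdini API. All node names below are hypothetical, and a “wait for all” node is modeled simply as a node that depends on every upstream task:

```python
# Hedged sketch: resolving a static TOP-style dependency graph into a
# submission order with Kahn's algorithm. A "wait for all" node is just an
# ordinary node depending on every upstream task. Node names are made up.
from collections import deque

def submission_order(deps):
    """deps maps node -> list of upstream nodes; returns a valid cook order."""
    indegree = {n: len(up) for n, up in deps.items()}
    downstream = {n: [] for n in deps}
    for node, ups in deps.items():
        for up in ups:
            downstream[up].append(node)
    ready = deque(n for n, d in indegree.items() if d == 0)
    order = []
    while ready:
        node = ready.popleft()
        order.append(node)
        for nxt in downstream[node]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                ready.append(nxt)
    if len(order) != len(deps):
        raise ValueError("cycle in dependency graph")
    return order

graph = {
    "sim_slice_0": [],
    "sim_slice_1": [],
    "wait_for_all": ["sim_slice_0", "sim_slice_1"],
    "ifd_generate": ["wait_for_all"],
    "ifd_render": ["ifd_generate"],
    "ffmpeg": ["ifd_render"],
}
print(submission_order(graph))
# → ['sim_slice_0', 'sim_slice_1', 'wait_for_all', 'ifd_generate', 'ifd_render', 'ffmpeg']
```

Each job could then be handed to the dispatcher in this order, with its upstream nodes listed as job dependencies, which is roughly the static submission the post is asking about.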
Edited by sanostol - Oct. 14, 2019 11:14:42
PDG/TOPs » PDG Graphs on Farm
Hello,
We successfully connected our dispatcher (Muster) to PDG, and the tasks are executed as expected on the assigned computers. That's very cool, but I must admit I do not fully understand how this is supposed to work.
In this setup I can launch PDG graphs on the dispatcher, but as soon as I quit Houdini, or it dies (which happens from time to time), the graph is not executed anymore. It seems all the actual dispatching work is done in PDG, and as soon as it goes down, the current tasks are finished, but tasks that should be started afterwards are not started anymore. As expected.
But often we would need a more disconnected way to work with PDG graphs. For example, it would be quite unrealistic to have a three-day render on the farm and keep Houdini open for three days so it can proceed with some ffmpeg and comp tasks.
Let's say I have a TOP network with a sim, an IFD generate, an IFD render, and an ffmpeg step, and I just want to send it to the farm and then close Houdini or do something different.
Are there any methods already to do this?
Where does Pilot fit into this, by the way?
thanks
Martin
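As a thought experiment, the disconnected submission described above might look like this in plain Python. Stage names, the chunk size, and the job-ID scheme are all made up, and the conservative rule that every chunk waits on every chunk of the previous stage stands in for whatever a real scheduler integration would do:

```python
# Hedged sketch: expanding a sim -> ifd_generate -> ifd_render -> ffmpeg chain
# into fully static, chunked job descriptions. Each job records the scheduler
# job IDs it depends on, so the farm (not a live Houdini session) enforces the
# ordering. All names and the chunk size are hypothetical.
def build_jobs(stages, frame_start, frame_end, chunk):
    """stages: ordered list of stage names; returns a flat list of job dicts."""
    jobs, prev_ids = [], []
    for stage in stages:
        stage_ids = []
        for fs in range(frame_start, frame_end + 1, chunk):
            fe = min(fs + chunk - 1, frame_end)
            job_id = f"{stage}_{fs}_{fe}"
            jobs.append({
                "id": job_id,
                "stage": stage,
                "frames": (fs, fe),
                # every chunk waits on all chunks of the previous stage;
                # a matching per-chunk mapping would be a finer alternative
                "depends_on": list(prev_ids),
            })
            stage_ids.append(job_id)
        prev_ids = stage_ids
    return jobs

jobs = build_jobs(["sim", "ifd_generate", "ifd_render", "ffmpeg"], 1, 24, 8)
print(len(jobs))  # 4 stages x 3 chunks = 12 jobs
```

Once such a list exists, Houdini can close: the dependency information lives in the job descriptions on the dispatcher, which is exactly the "three-day render without a live session" scenario above.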
Technical Discussion » Blackbox otls by python?
Can we create blackboxed HDAs with Python?
I could not find anything in the documentation.
thanks Martin
PDG/TOPs » PDG/TOPs grounded - Open letter for SideFX
I'm still learning, but my impression of TOPs is a bit different; I guess my expectations are also quite different.
I see it more like a scheduler on steroids that greatly increases your throughput on the farm. To have all these finely controlled dependencies is just amazing.
Just some examples:
- Clustered and sliced simulations combined with wedging were doable in a custom scheduler environment, but you had to use environment variables to communicate the wedge/slice/cluster information to the batch process. It worked, but it broke much more easily and was tricky while in development. PDG, on the other hand, just works, and with huge artistic usability.
- All kinds of repetitive workflows are now much easier to set up, without the need to code a lot. You can plug your workflow together and modify it on the fly. You can create Digital Assets from these workflows and build higher levels of abstraction, so that one node in fact does several things. Coding and maintaining this is of course doable, but it can get quite difficult as soon as you have multiple deep dependencies that should be parallelized on the farm to use all the resources.
- But even without farm computers, we now see CPUs with up to 64 cores in a single-CPU setup; you have to use those resources as well.
For example, you process all the collision geometry preparation in the background while already working on some other parts of your scene, or even let PDG completely build all your hips to the point where all necessary assets are loaded and the mandatory steps are done in the background as well. You open a hip where everything is done up to the point where you and your artistic skills are really needed.
After this, PDG takes care of all the other stuff that is necessary in a production. It can free up a lot of your time for the stuff that no machine can do. This means you are faster and more productive. A lot of work is still very repetitive, and if you can shift it to PDG, great.
I might miss a lot here, but that is my initial impression of PDG. It is probably much more, but step by step.
PDG/TOPs » Making TOPs less confusing
We are switching our toolset completely to TOPs right now, still keeping our old submission tools available. The PDG system is quite new, and I guess it will evolve quite fast; that keeps one busy, but I think it is totally worth it. So far I just love it, but having a fallback is kind of relaxing.
We are not using Tractor or Deadline but Muster as a scheduler, and the Muster developer is picking up stuff quickly. But I do have some rather basic questions, as the tool right now is heavily work in progress on the scheduler side.
When I cook a TOP it seems to be bound to the Houdini session running it; is there a way to submit a TOP tree to be processed in a more old-school way?
To be more specific (and this is just my impression), right now TOP cooking is bound to the current Houdini session. That's very cool for fluid working and developing, but I miss a way to just submit a dependency tree to the farm and then close Houdini or open a new scene while the TOP processes are calculated on the farm.
Is that possible in HQueue already?
Am I missing something here, or is that right?
thank you
PDG/TOPs » SendmailTOP with attachment?
Ah, it works! I had provided the input, but that did not work; a string attribute works fine. Thank you again.
PDG/TOPs » How to lock a TOP Node?
PDG/TOPs » SendmailTOP with attachment?
Hello,
I cannot get the attachment to work; what do I have to use in the attachment field?
thanks a lot
Martin
PDG/TOPs » How to lock a TOP Node?
Hi,
How can we lock a TOP node so that it never gets executed anymore and its cache does not get deleted when Delete Results on Disk is executed?
Sometimes TOPs evaluates a node even if it has no changes.
When locked, it should still deliver all wedge information, though.
thanks
Martin
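The behavior being asked for, skipping the work entirely when a cached result already exists, can be illustrated in plain Python. File names are hypothetical, and this is only an analogy for a node-level cache check, not actual TOPs code:

```python
# Hedged sketch: a task that is skipped entirely when its output file already
# exists, similar in spirit to a file-existence cache check on a locked node.
# Paths and the produce() callback are made up for illustration.
import os
import tempfile

def cook(path, produce):
    """Run produce() only if path is missing; otherwise reuse the cache."""
    if os.path.exists(path):
        return "cached"
    with open(path, "w") as f:
        f.write(produce())
    return "cooked"

tmp = os.path.join(tempfile.mkdtemp(), "wedge_1.txt")
print(cook(tmp, lambda: "result"))  # first run cooks
print(cook(tmp, lambda: "result"))  # second run hits the cache
```

A "locked" node in this picture would always take the cached branch while still reporting its outputs (the wedge information) downstream.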