A more robust, job-global system for sorting newly ready commands produced by “expand” tasks. This change addresses the “Cmd not Ready?” error, which was caused by sorting-key collisions (floating-point precision) on large, recursively expanded jobs.
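The bug class described above can be illustrated in pure Python (all names here are hypothetical, not PDG internals): a float-only sort key can collide once enough items are generated, while a tuple key with an explicit sequence number stays unique.

```python
from itertools import count

# Hypothetical sketch: a float-only priority key collides under
# double-precision limits, while a (priority, sequence) tuple key
# is always unique and gives a well-defined ordering.
_seq = count()

def make_key(priority):
    # Tie-break equal (or precision-collided) priorities with a
    # monotonically increasing sequence number.
    return (priority, next(_seq))

# Two commands whose float priorities collide at double precision:
a = make_key(1.0 + 1e-17)   # 1.0 + 1e-17 == 1.0 in IEEE doubles
b = make_key(1.0)
assert a != b               # the sequence number keeps them distinct
ready = sorted([b, a])      # stable, collision-free ordering
```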
Found 424 posts.
PDG/TOPs » Tops and Tractor diary (plus bugs/rfe/discussion points)
- chrisgreb
- 603 posts
- Offline
Tractor 2.4 actually has more related fixes. I think this fix is for a bug we reported a while back for 2.3. The workaround in place now is to rate-limit the expanded job submissions.
PDG/TOPs » tractorscheduler submit 17.5 vs 18.0
- chrisgreb
The log indicates that no work items were generated and the cook finished right away.
Are you showing the complete task log? My 18.0.460 log has more lines:
Running Houdini 18.0.460 with PID 29084
Loading .hip file /...
Given Node 'topnet1', Cooking Node 'smoke_src'
PDG Callback Server at upton.sidefx.com:60095
Finished Cook
Is this only happening with your script? Does the cook work if you open the saved hip file and submit as job manually?
Edited by chrisgreb - June 10, 2020 10:13:04
PDG/TOPs » Caching Improvements in Today's Daily Build (18.0.436)
- chrisgreb
No, but that's a bug. It will be fixed soon so that empty string is the fallback handler. Currently you have to supply a non-empty string, so you would need to register a handler for every possible tag prefix of at least one character.
Edited by chrisgreb - June 9, 2020 12:29:53
PDG/TOPs » Q: telling PDG "failed" is actually "OK"
- chrisgreb
If you want to ignore failed items you can use a Filter by Expression node with a python expression that removes the failed ones:
> pdg.workItem().state != pdg.workItemState.CookedSuccess
However, every time you cook, the failed items will be re-run.
Edited by chrisgreb - June 7, 2020 15:32:10
PDG/TOPs » tractorscheduler and "Reset $HIP on Cook" output location?
- chrisgreb
Probably, yes - you'd want to do something like:
hou.hscript('set HIP={}'.format(original_hip))
hou.hscript('set HIPFILE={}/{}'.format(original_hip, hip_basename))
hou.hscript('varchange')
PDG/TOPs » tractorscheduler and "Reset $HIP on Cook" output location?
- chrisgreb
I fixed the issue in the next build of 18.0, but it sounds like you're on 17.5?
> there's some sort of .hip copy concept possibility in ropfetch itself
No, there's only the general mechanism that when the TOP graph cooks, it will copy the hip file to the scheduler working directory if it's not already there. The problem here is that submit-as-job doesn't work with it, which is what I've fixed in 18.
PDG/TOPs » TOPS Wedge Hip File Versioning
- chrisgreb
I think your best bet is a HOM shelf script that, given your TOP node, selects each work item in turn and saves out the hip file.
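A minimal sketch of such a shelf script, assuming the H18 HOM/PDG calls (`getPDGNode`, `setSelectedWorkItem`, `workItems`) behave as named here - verify against your build's documentation. The save function is passed in (it would be `hou.hipFile.save` in an actual shelf tool) so the sketch stays self-contained:

```python
def save_hip_per_work_item(top_node, save_hip, basename="wedge"):
    # For each work item on the TOP node, select it so its wedge
    # attributes are applied to the scene, then save a numbered copy.
    # top_node is assumed to behave like hou.TopNode; save_hip would
    # be hou.hipFile.save in practice.
    saved = []
    for item in top_node.getPDGNode().workItems:
        top_node.setSelectedWorkItem(item.id)   # apply this wedge's attributes
        path = "{}_{:03d}.hip".format(basename, item.index)
        save_hip(path)
        saved.append(path)
    return saved
```

Cook the TOP node first so its work items exist before running the script.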
PDG/TOPs » tractorscheduler and "Reset $HIP on Cook" output location?
- chrisgreb
Oh I misread your question and was testing with an interactive cook.
Because you are using `submitAsJob`, the hip file is first copied to the working directory and then the TOP network is cooked. So at that point the ‘real’ $HIP is considered to be WORKINGDIR as far as the ropfetch node is concerned. When those jobs cook, the hip is not re-copied because $HIP == WORKINGDIR, and `Reset $HIP` does nothing, so $HIP continues to be WORKINGDIR.
Logged as issue #105717
PDG/TOPs » tractorscheduler and "Reset $HIP on Cook" output location?
- chrisgreb
Could you attach a hip here or to a support ticket so we can take a look? In my test the hip file is written to WORKINGDIR along with pdgtemp, but the output files which are relative to $HIP are written to ORIGHIP.
PDG/TOPs » renderman rop generates no images with tractor scheduler
- chrisgreb
I tried your hip file with 18.0.460 and RMan 23.3, and it works for me. In the work item log I see:
PDG_RESULT: ropfetch1_1;-1;u'__PDG_DIR__/render/renderman_pdg_v02_cg01.ris1.0001.exr';;0

And the exr files are indeed generated at $HIP/render/…
Do you see that in the log?
Are you opening your hip file from the same file mount path that the blade is using?
PDG/TOPs » top.py spawned by pdgjobcmd.py, and Tractor cancels
- chrisgreb
davidoberst
> Did this make it into the 17.5 branch?
> Also, the last production build of 17.5 is 17.5.460 (from Dec 5/2019), although there is a daily as recent as 17.5.631 from May 27. Are there plans for another production build of 17.5?
It's there:
https://www.sidefx.com/changelog/?journal=17.5&categories=54&body=&version=&build_0=&build_1=&show_versions=on&show_compatibility=on&items_per_page=
There are no plans right now for another production build.
Houdini Engine API » Problems with setting parameters in an HDA
- chrisgreb
The call to HAPI_SetParmIntValue is asynchronous when you are initialized to use the cooking thread, so you need to wait for HAPI_GetStatus to return a ‘ready’ enum before proceeding to the next call.
As far as debugging, you may want to use the debugger / SessionSync in an interactive Houdini session so that you can see what changes are occurring in the scene.
http://www.sidefx.com/docs/houdini/ref/henginesessionsync.html
/// @note In threaded mode, this is an _async call_!
///
/// This API will invoke the cooking thread if threading is
/// enabled. This means it will return immediately. Use
/// the status and cooking count APIs under DIAGNOSTICS to get
/// a sense of the progress. All other API calls will block
/// until the cook operation has finished.
///
/// Also note that the cook result won't be of type
/// ::HAPI_STATUS_CALL_RESULT like all calls (including this one).
/// Whenever the threading cook is done it will fill the
/// @a cook result which is queried using
/// ::HAPI_STATUS_COOK_RESULT.
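The wait-for-ready step can be sketched generically in Python (the `get_status` callable here is a stand-in for the real status query, e.g. polling HAPI_GetStatus in a HAPI session - names and intervals are illustrative, not part of the API):

```python
import time

def wait_until_ready(get_status, ready_value="ready",
                     poll_interval=0.01, timeout=10.0):
    # Generic polling loop: call get_status() until it reports the
    # ready value, or give up once the timeout elapses. Returns True
    # if ready was observed, False on timeout.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if get_status() == ready_value:
            return True
        time.sleep(poll_interval)
    return False
```

The same pattern applies to any async cook: issue the call, poll status, and only then issue the next parameter change.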
Edited by chrisgreb - June 3, 2020 12:34:27
PDG/TOPs » Renderman rop errors via pdg local scheduler - [fixed]
- chrisgreb
This should be fixed in 18.0.481. You can try forcing HOUDINI_PATH on local scheduler.
Edited by chrisgreb - June 3, 2020 11:24:54
PDG/TOPs » Programmatic triggering of tractorscheduler node
- chrisgreb
It should only require an Engine license; I'm not sure why that would fail in hython but not in Houdini. Do you have the same HOUDINI_PATH in both cases? I would suggest submitting a support ticket.
PDG/TOPs » PDG_TEMP or workaround
- chrisgreb
__PDG_TEMP__ is only really used for commands on jobs that run out of process where the $PDG_TEMP environment variable is automatically defined.
If you want to use it in-process, you'll have to use the scheduler Python API:
work_item.node.scheduler.localizePath('__PDG_TEMP__')
# Or
work_item.node.scheduler.tempDir(True)
PDG/TOPs » PDG (Local Scheduler) bad performance compared to FG-render
- chrisgreb
What version of Houdini is this? Can you tell by looking at your OS performance monitor whether there is a difference in core utilization? You could try enabling the Houdini Max Threads parm and setting it to 0 or -1.
PDG/TOPs » Send Command can't find variable declared previously
- chrisgreb
tamte
> is something like this advisable or does it have any potential drawbacks that I may not see right now?

Yes, that works. You just have to watch out for accidentally shadowing the global var with a local of the same name if you forget the global statement.
PDG/TOPs » Send Command can't find variable declared previously
- chrisgreb
Any variables you create go into a local namespace that gets thrown away when the script ends. If you want to keep the variable around you should hang it somewhere global. For example:
a = 'hello'
hou.session.a = a
print(hou.session.a)
PDG/TOPs » running TOPS local scheduler and custom houdini env
- chrisgreb
Some studios need to run hython via a wrapper script. If that's the case, you can override the hython that is used by PDG by setting the environment variable PDG_HYTHON. If you don't already have a wrapper script, you probably don't need one for PDG and don't have to worry about it.
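If you do need one, a wrapper can be as small as a shell script that sets up the studio environment and then execs the real binary. The paths and variables below are placeholders for illustration, not real defaults:

```shell
#!/bin/sh
# Hypothetical hython wrapper -- paths are examples only.
# Set up whatever studio environment the blades require:
export HOUDINI_PATH="/studio/houdini:&"
# Hand off to the real hython, preserving all arguments:
exec /opt/hfs18.0/bin/hython "$@"
```

Then point PDG at it before cooking, e.g. `export PDG_HYTHON=/studio/bin/hython_wrapper`.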