Generic Processor and Nuke

Member
55 posts
Joined: Nov. 2015
Hello,
I'm trying to call the Nuke executable from a Generic Processor to run a Nuke script, passing some arguments. I type the fully qualified path for everything, including the arguments, and in the work item log I can see the Nuke version and build message, so I know the command has run. However, the work item fails, and the log shows no other messages or errors. Is there a special way to write the custom command string?
The command-line string I currently have follows this structure:


/opt/Nuke12.0v5/Nuke12.0 -xi pathtomyscript argument1 argument2 1-300
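One way to narrow down a failure like this is to capture the command's exit status, since schedulers typically mark a work item failed on a nonzero exit code. A minimal sketch (the helper name is hypothetical, and the Nuke path is the one from the post above):

```shell
#!/bin/sh
# hypothetical helper: run the given command and report its exit status,
# so the work item log shows why the scheduler marked it failed
run_with_status() {
    "$@"
    status=$?
    echo "command exited with status $status" >&2
    return "$status"
}

# example usage, with the command from the post:
# run_with_status /opt/Nuke12.0v5/Nuke12.0 -xi pathtomyscript argument1 argument2 1-300
```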
Member
55 posts
Joined: Nov. 2015
I tried wrapping the Nuke command in a shell script and executing that instead, and now I'm getting a segmentation fault in the log file when the command runs. Any idea what's going on?
Staff
467 posts
Joined: Aug. 2019
Hi there, thanks for bringing this to our attention.

When you run Nuke inside a Nuke Command Chain, TOPs sanitizes a few environment variables so that it does not run into library conflicts. This sanitization does not occur when you use a Generic Generator. To avoid these library conflicts, you must add an Environment Variable scheduler job parm to the Generic Generator:

1. Select your Generic Generator, click the gear icon in the parameter window, and choose "Edit Parameter Interface".
2. Choose the "Node Properties" pane, then go to Scheduler Properties > Local.
3. Drag the "Environment Variables" folder over to the "Existing Parameters" panel.
4. Click Apply, then Accept.
5. In the parameter window, click the "+" button beside "Environment Variables".
6. Set the name field to PYTHONHOME and leave the value field blank.

You should now be able to run Nuke without the segmentation fault.

More information regarding scheduler job parms can be found here: https://www.sidefx.com/docs/houdini/tops/schedulers.html#jobparms
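For anyone who prefers the shell-wrapper route from earlier in the thread, a minimal sketch of the equivalent sanitization looks like the following. This assumes that clearing PYTHONHOME (and possibly PYTHONPATH, which is not mentioned in the post and is my own addition) before launching Nuke is enough to avoid the conflict:

```shell
#!/bin/sh
# hypothetical equivalent of the blank PYTHONHOME job parm: clear the
# Python variables inherited from Houdini so Nuke loads its own libraries
sanitize_python_env() {
    unset PYTHONHOME
    unset PYTHONPATH  # assumption: PYTHONPATH can cause similar conflicts
}

# usage in a wrapper script (Nuke path from the original post):
# sanitize_python_env
# /opt/Nuke12.0v5/Nuke12.0 -xi "$@"
```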
Member
55 posts
Joined: Nov. 2015
Thank you so much, that makes sense and works perfectly! Would the same concept apply when using the HQueue scheduler? Does the executing node need its own environment variable parameters in order to execute successfully on remote machines?
Staff
586 posts
Joined: May 2014
The problem occurs with the Local scheduler because the job is spawned as a child process of an existing Houdini process, which has its own Python libraries loaded. If you're running the job on a farm, the behavior will depend on the farm software you're using and how it runs jobs.

HQueue runs jobs using a Python wrapper; you can, however, configure the Python version used on the HQueue scheduler node itself. You'll likely need to do the same thing with the HQueue scheduler, which has the same Environment job parms as the Local scheduler.
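A quick way to confirm what a farm job actually inherits (a hypothetical debug step, not an official HQueue feature) is to log the Python-related variables at the top of the job's command before Nuke is launched:

```shell
# hypothetical debug step for a farm job: print the Python-related
# environment the job inherits, so conflicts show up in the job log
env | grep '^PYTHON' || echo "no PYTHON* variables set"
```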
Edited by tpetrick - Aug. 10, 2020 16:27:42