I'm using the code below to log the output of external commands so that it appears in the Python Script node's info window. However, it feels a bit too low-level for PDG, especially having to retrieve the path to the logs directory, create it if necessary, and so on. I would have expected to be able to fetch, say, a file descriptor directly from the PDG API to feed the log into, or something of that sort.
I'm new to PDG and my Python is rusty, so I'm wondering, is there a simpler or cleaner way to do this that I have missed?
```python
import os, subprocess, urlparse, urllib

# Get log path
log_url_path = urlparse.urlparse(self.scheduler.getLogURI(work_item)).path
log_local_path = urllib.url2pathname(log_url_path)

# Create logs directory if missing
log_local_parent = os.path.dirname(log_local_path)
try:
    os.makedirs(log_local_parent)
except:
    if not os.path.isdir(log_local_parent):
        raise

# Run external command
try:
    with open(log_local_path, 'w') as log_sink:
        subprocess.call(['date.exe', '/T'],  # placeholder
                        stderr=subprocess.STDOUT,
                        stdout=log_sink,
                        shell=True)
except:
    raise
```
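As a side note, the snippet above uses the Python 2 `urlparse`/`urllib` modules. Under Python 3 the same functions live in `urllib.parse` and `urllib.request`, and `os.makedirs(..., exist_ok=True)` plus `subprocess.run` shorten the boilerplate. A minimal sketch of the equivalent, with the log URI and the external command stubbed out as placeholders (in the node they would come from `self.scheduler.getLogURI(work_item)` and the work item):

```python
import os
import subprocess
from urllib.parse import urlparse
from urllib.request import url2pathname

# Placeholder for self.scheduler.getLogURI(work_item)
log_uri = 'file:///tmp/pdg_logs/item_0.log'

# Convert the file:// URI to a local path and ensure the directory exists
log_local_path = url2pathname(urlparse(log_uri).path)
os.makedirs(os.path.dirname(log_local_path), exist_ok=True)

# Run the external command, sending stdout and stderr to the log file
with open(log_local_path, 'w') as log_sink:
    result = subprocess.run(['echo', 'hello'],  # placeholder command
                            stderr=subprocess.STDOUT,
                            stdout=log_sink)
```

`subprocess.run` also hands back the exit code via `result.returncode`, which the Python 2 `subprocess.call` form returns directly.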
You can use the Generic Generator node to run a custom command string. The scheduler will spawn it as a standalone process and write the log to disk. You can then MMB on the work item dot in the Generic Generator to see the command that was executed, as well as view the log.
If you need lower-level control over work item generation, the Python Processor lets you create work items, add attributes to them, and set a command string that the scheduler node will execute.
I need to handle the external command's return code, so I gave the Python Processor a try. It captures the command output with no extra work on my part, but only if I run it out-of-process. If I run it in-process, I need code like the above to capture and show the command output in the node info window. Is this expected?
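For the in-process case, one alternative to redirecting into the log file is to capture the output and print it, since anything printed from the node's cook code shows up in the info window, and the return code is available on the same result object. A sketch with a placeholder command (in practice the command would come from the work item):

```python
import subprocess

# Placeholder command; capture stdout and fold stderr into it
result = subprocess.run(['echo', 'hello'],
                        stdout=subprocess.PIPE,
                        stderr=subprocess.STDOUT,
                        universal_newlines=True)

# Printing from the node's cook code surfaces the text in the info window
print(result.stdout, end='')

# React to the exit code
if result.returncode != 0:
    raise RuntimeError('command failed with exit code %d' % result.returncode)
```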