hython TOPs progress reporting, logs, etc.

Member
258 posts
Joined: July 2006
Hi, I am trying to figure out a way to visualize hython running TOPs tasks. Right now I am manually printing out the frame number, etc., and I could build a UI on top of that, but is there a more elegant way that is already built in?

When I run a TOPs task with hython, I get no feedback in the terminal, so I have to check my logs to see what it's doing.

Thanks
Head of CG @ MPC
CG Supervisor/ Sr. FX TD /
https://gumroad.com/timvfx [gumroad.com]
www.timucinozger.com
Member
17 posts
Joined: Feb. 2019
Hi!

We do have an experimental feature that might help you. The PDG Data Layer (pdgd) lets you access PDG data through a subscription system: you subscribe to a PDG graph, node, or work item and receive updates every time the data changes.

Using pdgd you can choose to access local PDG data or to connect to a remote instance.

Today's build should have an updated version of the examples. A good starting point is $HFS/houdini/pdgd/examples/visualizer/, which gives a brief intro to pdgd and shows very basic usage of the library. I hope to add another example this week showing how to start a server and how to connect to a remote instance, using the same code to inspect local and remote PDG data.

It's important to note that this is an experimental feature; we are still working on it, as well as on documentation and examples.

Please let me know if you have any questions,
Joab
Member
17 posts
Joined: Feb. 2019
Hi again,

There is one other thing that might solve your problem. TOPs actually uses pdgd, and we have a node called “Remote Graph” that allows you to connect to a remote instance; all you need is to make sure a pdgd server is running on that instance.

If, before you cook your PDG graph, you run this code:
import pdgd

server_manager = pdgd.DataLayerServerManager.Instance()

# The first parameter is the server type, the second is the port
# the server will listen on; zero means the system will choose an
# available port. PORT_NUMBER is a placeholder for your own port.
PORT_NUMBER = 9000
server = server_manager.createServer('DataLayerWSServer', PORT_NUMBER)
server.serve()

You can then use Houdini to attach to this instance and visualize the graph: in the `tasks` context, drop a Remote Graph node, set the host address, and connect. The Remote Graph node will create a replica of the remote graph and display progress just as if it were a local graph.
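
For reference, a minimal hython script combining the two steps might look like the sketch below. The hip file path, TOP node path, and port number are placeholders, and cookWorkItems(block=True) is assumed to be available in your Houdini build; the pdgd calls are the ones shown above.

# Sketch: start a pdgd server, then cook a TOP network from hython.
# The hip file path, node path, and port are placeholders.
import hou
import pdgd

hou.hipFile.load("/path/to/scene.hip")

# Start the pdgd server before cooking so a Remote Graph node in
# another Houdini session can attach and watch progress.
server_manager = pdgd.DataLayerServerManager.Instance()
server = server_manager.createServer('DataLayerWSServer', 9000)
server.serve()

# Cook the TOP node; block=True keeps hython alive until the cook finishes.
top_node = hou.node("/tasks/topnet1/ropfetch1")
top_node.cookWorkItems(block=True)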
Member
258 posts
Joined: July 2006
That's great, digging in right away. Thanks a lot.
Member
13 posts
Joined: Feb. 2016
I have a very similar question.
When I run a TOPs task with hython, I get no feedback in the terminal, so I have to check my logs to see what it's doing.
Is there a way to grab the logs of a workItem and then collect the information? I am not looking for a visual representation; I would like to print the logs from hython so that we get a decent traceback message in case of a failed workItem. I couldn't find anything online to get the same logs we can see in the topnet UI.
Member
13 posts
Joined: Feb. 2016
To add to this, I am looking for a way to get the logs from a workItem in a hython session. You would think the stdout or logs of a workItem could be accessed from a property of the workItem object, but I couldn't find how. What if we just want to query and format the logs with hython?
Staff
585 posts
Joined: May 2014
For a work item that cooks in-process, you can access its internal log buffer using the work_item.logMessages property (https://www.sidefx.com/docs/houdini/tops/pdg/WorkItem.html#logMessages).

For work items that cook out of process, the Scheduler provides an API method to query the URI of the log. The local scheduler stores the log files in the PDG_TEMP dir on disk -- farm schedulers store them somewhere on the farm itself, depending on which scheduler you're using: https://www.sidefx.com/docs/houdini/tops/pdg/Scheduler#getLogURI [www.sidefx.com]
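
A rough sketch of how those two pieces could fit together in hython: the node path is a placeholder, and the pdg.Scheduler reference is passed in as an argument because how you obtain it depends on your scheduler setup.

# Sketch: print whatever log data is reachable for each work item.
# "/tasks/topnet1/ropfetch1" is a placeholder path; "scheduler" is an
# optional pdg.Scheduler reference from your graph.
import hou

def print_work_item_logs(top_node_path, scheduler=None):
    pdg_node = hou.node(top_node_path).getPDGNode()
    for work_item in pdg_node.workItems:
        # In-process work items keep their log in an internal buffer (H19.0+).
        log_text = work_item.logMessages
        if log_text:
            print(work_item.name, log_text)
        elif scheduler is not None:
            # Out-of-process work items: ask the scheduler where the log lives.
            # With the local scheduler this is typically a file in PDG_TEMP.
            print(work_item.name, scheduler.getLogURI(work_item))

print_work_item_logs("/tasks/topnet1/ropfetch1")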
Member
13 posts
Joined: Feb. 2016
OK, I have built a case in Houdini 19.0.663 with a failing workItem where I can see the logs on the item in the houdinifx GUI, but when run in the Python shell,
hou.node("/out/topnet/test_node").getPDGNode().workItems.logMessages just gives me an empty string. My first instinct was to dismiss this as a PDG logging feature where it would only show the logs from a certain PDG logger. Glad to hear there's a way to instead just get the full logs we can see in the UI under normal circumstances.
Also, this doesn't seem to exist in 18.5+, at least not in 18.5.727.
I am going to investigate the scheduler log you are talking about to see if it contains the same info and get back on this thread after.
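
For what it's worth, pdg.Node.workItems is a list, so logMessages has to be read from an individual work item rather than from the list itself; even then, as the reply above notes, it only has data for work items that cooked in-process. A minimal check, using the node path from the post above:

import hou

pdg_node = hou.node("/out/topnet/test_node").getPDGNode()
for work_item in pdg_node.workItems:
    # Empty for out-of-process cooks; the property only exists in H19.0+.
    print(work_item.name, repr(work_item.logMessages))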
Member
13 posts
Joined: Feb. 2016
Perfect, the second approach of using the scheduler itself to bring up the log file works for what I have to do. Thank you for your answer.
Staff
585 posts
Joined: May 2014
pdg.WorkItem.logMessages only exists in H19.0 and newer, and will only have log data for work items that cooked in-process. The log files for any work items that run out of process are always managed by the scheduler, and will be a file on disk or a URL depending on which scheduler is being used.
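
When the URI points at a local file (as with the local scheduler), a small helper along these lines could turn it into text; this is a sketch that only handles the file:// case:

from urllib.parse import urlparse

def read_local_log(log_uri):
    # Only handles file:// URIs, e.g. the ones the local scheduler returns.
    parsed = urlparse(log_uri)
    if parsed.scheme != "file":
        raise ValueError("Not a local log file: %s" % log_uri)
    with open(parsed.path) as f:
        return f.read()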
Member
13 posts
Joined: Feb. 2016
Why does logMessages only support log data for in-process workItems, though? If a workItem is actually tracking the result of an out-of-process cook via pdg.WorkItem.isSuccessful, shouldn't the log also map to pdg.WorkItem.logMessages?
Is it because it is hard to know which scheduler is used for the pdg.Node?
Staff
585 posts
Joined: May 2014
pdg.WorkItem.logMessages is just a string buffer that in-process work items can use to write log messages, since an in-process work item doesn't have an external file to write to (and we don't want to create one).

For work items that cook out of process, the log is managed by the farm system. In order to make it show up in pdg.WorkItem.logMessages, PDG would need to download the log data from the farm system for each work item as it finishes. This could end up being expensive and result in a lot of extra RPC/network calls to the farm system for something that may not even be used. In the UI, the work item MMB panel downloads the logs as needed when you click on work items -- only log data that's actually requested by the user is fetched.

For example, on Deadline the logs are archived and we actually have to submit a new Deadline task to query the log data for a completed job.
Member
13 posts
Joined: Feb. 2016
I am still having a hard time figuring out why pdg.WorkItem.logMessages couldn't just wrap that code and open the file only when called. The cost would only be paid when someone actually asks for it, much like when people click individual work items in the UI. If we are essentially just putting the implementation that reads external logs behind a property or method as a shortcut, it would be super convenient for everyone. If it's bad to cache it, can we just not cache it but still provide the implementation to fetch it?
Staff
585 posts
Joined: May 2014
It certainly could -- that's just not how it works currently.

I've logged an RFE to provide a work item API method that queries the full log data via the work item's scheduler. It'll likely have to be exposed as a new method when it gets added, since the pdg.WorkItem.logMessages property is directly bound to the in-process log buffer.