I have a customized attribute-baking setup that needs to do some work in SOPs, pass a few UV-sampled attributes to COPs, and then write out that data as a UV tile image. Cooking this setup is very quick, but I'm working with assets that have hundreds of UDIMs which I need to traverse tile-by-tile, and I'd like to leverage PDG to automate things.
I have a network with two wedge nodes that accomplishes this:
- wedge1 generates work items for each UDIM on the input asset
- wedge2 generates work items for each attribute that needs to be parsed per UDIM
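For scale, the two wedges effectively produce the cross product of UDIMs and attributes downstream. A rough pure-Python sketch of the resulting work-item count (the tile range and attribute names here are made up for illustration, not taken from the actual asset):

```python
from itertools import product

# Hypothetical inputs: a few hundred UDIM tiles and the attributes
# parsed per tile (names are illustrative only).
udims = [1001 + i for i in range(300)]          # wedge1: one item per UDIM
attributes = ["basecolor", "roughness", "ao"]   # wedge2: one item per attribute

# Each (udim, attribute) pair becomes one work item downstream.
work_items = [(u, a) for u, a in product(udims, attributes)]
print(len(work_items))  # 900 work items, i.e. 900 hython launches if unbatched
```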
On the ropcomposite1 TOP node, which saves the resulting image to disk, I have it set to cook all frames in a single batch, but as expected this doesn't work, since my work items aren't actually "frames".
The result is that a new instance of hython is spun up for each work item, which is wildly inefficient compared to the actual cooking of the work items (~60 s to start hython, ~0.5 s to cook).
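With those numbers, the launch overhead dominates almost entirely; a quick sanity check of the arithmetic (using the approximate timings above):

```python
startup_s = 60.0   # approximate hython launch time per work item
cook_s = 0.5       # approximate cook time per work item

# Fraction of wall time spent just launching hython when unbatched.
overhead = startup_s / (startup_s + cook_s)
print(f"{overhead:.1%} of wall time spent launching hython")  # ~99.2%
```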
I've tried partitioning by attribute, but then only the first work item of each partition is written to disk, though I may be missing something there.
What is the best way to structure this kind of setup to get the desired batching behavior?
Thanks for any ideas!