Optimizing Python Script Execution in Houdini TOPs for Large Ranges
Soothsayer
I'm working with a very large frame range (tens of thousands of frames), where a simple Python script processes each frame. I've noticed that running the script once over the whole range is much faster than processing the frames as separate parallel work items. This seems to be due to the overhead of loading the Python libraries for every work item, which adds up significantly over a range this large.

I'd like to optimize the workflow by running a chunk of work items sequentially within a single work item, so that the Python modules load only once per chunk and the script then processes several frames in the same session. However, I'm having trouble setting this up.

I would like to group, say, 10 work items together and run them sequentially as if they were one. So far, my attempts at partitioning work items either end up processing the chunks one after another (with the loading overhead still paid for each item) or treat each work item separately, which brings me back to the original performance issue.

How do I approach this?
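
For reference, here is roughly the kind of chunking I have in mind, using a Python Processor TOP to collapse the per-frame work items into one work item per chunk. The chunk size and the "chunkstart"/"chunkcount" attribute names are just placeholders, and I'm not sure this is the idiomatic way to do it:

# Python Processor TOP, "Generate" script. item_holder and upstream_items
# are provided by the node; this collapses the upstream per-frame work
# items into one work item per chunk of CHUNK_SIZE frames.
# Assumes the frames within each chunk are contiguous.
CHUNK_SIZE = 10  # placeholder chunk size

for i in range(0, len(upstream_items), CHUNK_SIZE):
    chunk = upstream_items[i:i + CHUNK_SIZE]
    new_item = item_holder.addWorkItem(parent=chunk[0])
    # Record which frames this chunk covers; "chunkstart" and "chunkcount"
    # are made-up attribute names.
    new_item.setIntAttrib("chunkstart", int(chunk[0].frame))
    new_item.setIntAttrib("chunkcount", len(chunk))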
eguquan
You can try using "Frames per Batch".
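
If I remember right, that parameter lives on the ROP Fetch TOP and groups N frames into one batch work item that cooks in a single session, so the Python startup cost is paid once per batch instead of once per frame. If your processing isn't ROP-based, the same idea works with the chunking you sketched: once the frames are grouped into one work item, the script only imports its modules once per chunk. A rough sketch of the downstream script, assuming the chunkstart/chunkcount attributes from your snippet (the heavy import and process() are placeholders):

# Downstream Python Script TOP: the heavy imports run once per chunk,
# then every frame in the chunk is processed in the same Python session.
import numpy as np  # stand-in for whatever heavy modules the script needs

def process(frame):
    # placeholder for the actual per-frame work
    pass

start = work_item.attribValue("chunkstart")
count = work_item.attribValue("chunkcount")
for frame in range(start, start + count):
    process(frame)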
This is a "lo-fi" version of our main content. To view the full version with more information, formatting and images, please click here.
Powered by DjangoBB