My strategy so far has been this:
- Slice the simulation volume into N 2D slices.
- Render each slice individually from the camera's point of view (10 renders per frame if N = 10).
- Composite all N renders into a single frame with a COP network (where I can do the edge detection).
I'm stuck on step 3. I've been able to produce N renders per frame using a Wedge node, but I can't find any way to load an arbitrary number of images with a COP node. I want the number of slices to be a variable I can change at will, but as far as I can tell the only way to load images is the COP File node, which loads one image at a time.
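To make the setup concrete, here's a minimal Python sketch of the naming scheme I have in mind: one image per (frame, slice) pair, with the slice index baked into the filename (e.g. via the Wedge node's wedge number in the output path). The `pattern` string and the `slice_render_paths` helper are just illustrations of my intent, not actual Houdini API; the point is that the compositing side would need to resolve N paths per frame, where N is a variable.

```python
def slice_render_paths(frame, num_slices,
                       pattern="render/slice_{slice:02d}.{frame:04d}.exr"):
    """Return the N per-slice image paths a compositing stage
    would need to load for a single frame.

    `pattern` is a hypothetical naming convention: the slice index
    and frame number are zero-padded into the filename.
    """
    return [pattern.format(slice=s, frame=frame) for s in range(num_slices)]

# For frame 12 with N = 3 slices:
# ['render/slice_00.0012.exr',
#  'render/slice_01.0012.exr',
#  'render/slice_02.0012.exr']
print(slice_render_paths(12, 3))
```

Ideally the COP network could iterate over a list like this and composite all N inputs, with `num_slices` exposed as a single parameter.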
Any thoughts on how you would approach this instead? Any suggestions on how to load in a bunch of renders per frame in a COP?
For reference, this is my current setup:
I do a volumeslice at N positions along the volume and merge the slices together, then use a Wedge node to render all N images per frame.
Let me know if I can be clearer!