Hi there;
Thank you! Yes, indeed, I tried that as well. It doesn't seem to affect anything; the parameter I'm trying to modify remains unchanged.
I even tried removing the parameter and remaking it, but this attempt at a workaround fails as well. It seems as though once the range has been set, it can't be changed, which makes me wonder what the min/max parmTemplate attributes are useful for...
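A possible culprit worth checking: since this is an HDA, the parameter interface may be coming from the asset's type definition rather than the node instance, in which case the definition's template group is the one to edit. A hedged sketch (the node path and new max are illustrative, and this is untested against your asset):

```python
import hou

node = hou.node("/obj/geo1/my_hda")   # hypothetical instance path
definition = node.type().definition()  # the HDA's type definition
group = definition.parmTemplateGroup()
pt = group.find("index")
if pt is not None:
    pt.setMaxValue(42)                 # illustrative new maximum
    group.replace("index", pt)
    # Write the modified group back onto the asset definition itself:
    definition.setParmTemplateGroup(group)
```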
Technical Discussion » Adjusting range of parm templates
- dhemberg
- 207 posts
- Offline
Hi;
I have an HDA I'm building that does some stuff, and builds a list, and I would like to adjust the min/max range of a parameter to reflect the length of this list:
group = node.parmTemplateGroup()
index_parm_template = group.find("index")
if index_parm_template:
    index_parm_template.setMaxValue(some_new_length)
    index_parm_template.setMaxIsStrict(True)
    group.replace("index", index_parm_template)
This is almost exactly what's described in the docs, yet this does not seem to work.
How does one specify the min/max range of a parameter in Python?
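One common gotcha with this pattern: hou.ParmTemplateGroup is a detached copy, so edits only take effect once the group is written back to the node. A hedged sketch of the likely missing step (the node path and new max are illustrative):

```python
import hou

node = hou.node("/obj/my_hda")  # hypothetical node path
group = node.parmTemplateGroup()
pt = group.find("index")
if pt is not None:
    pt.setMaxValue(52)           # e.g. the length of the generated list
    pt.setMaxIsStrict(True)
    group.replace("index", pt)
    # Without this call the node never sees the modified group:
    node.setParmTemplateGroup(group)
```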
Edited by dhemberg - Feb 20, 2023 22:11:12
Technical Discussion » Lens Flare & other effects in COPs
- dhemberg
- 207 posts
- Offline
Hi, I'm interested in trying to build up a kit of various lens effects in COPs. A starting point in what I'd like to aim for is Redshift's very awesome postprocessing COP, which lets you enable or disable individual effects like diffusion, bloom, flare, etc. I'd also like to do some stuff I've seen in Premiere/Aftereffects lens flare plugins like dust on the lens, anamorphic flare, etc.
I realize this is an uphill battle. What I'm interested in finding are some resources that might outline how effects like this are built up from first principles in compositing packages elsewhere...things like nuke gizmos, old compositing books, etc.
Compositors who have been doing this for a few decades: how did you build these back when they were painfully uncool?
Solaris and Karma » Physical sky in Karma?
- dhemberg
- 207 posts
- Offline
jsmack
Physical can also be unitless. There are many physical properties that are unitless as they are ratios or other values where the units have canceled out.
Of course I understand that; my intent wasn't to say "this is how it SHOULD be", and certainly not "this is the only way that is correct". I said I wanted to simulate the real world. I wanted to build a system that behaves the way I understand it to behave as a real-world photographer; I wanted knobs that behave the way the knobs on my real Canon 5D behave. Of course I understand that in a different scenario these controls are wholly uninteresting.
It's fine if folks are uninterested in it - it's clear you aren't interested in it. There's no need to 'out-pedantic' me here - I had a thing I wanted to make, I couldn't make it without building what I think are interesting tools, and I wanted to share. That's it.
Solaris and Karma » Physical sky in Karma?
- dhemberg
- 207 posts
- Offline
Hi @robp; this is nice to hear!
Since the time I first asked this question, I forged ahead with the solution I described above; I wanted to offer a description here in the event that whoever is developing this for Karma finds any of it interesting:
"Physical" to me indicates some ability to represent the sky in absolute - rather than relative/arbitrary/unknown - units. Physical brightness units (lumens, cd/m², etc.) necessitate a camera with physical properties, which is to say shutter/f-stop/ISO parameters that do something meaningful with exposure in addition to depth of field and motion blur. It seems like *some* of the necessary bits and bobs to do this are lying around in Houdini, in various states of function (e.g. the Physical Lens shader worked, then didn't, then I found an Exposure parameter on the camera itself that kinda-sorta helped).
To tape all this together, I:
- Spent a month climbing onto the roof of my apartment every hour throughout the day to capture unoccluded IBLs of the sky, also measuring sky brightness with a light meter and noting the lux value. I built a Python script that adjusts pixel values to absolute nits, roughly following the procedure outlined in this paper. [blog.selfshadow.com] Together, this collection of IBLs is my "physical sky", and I set time of day and overcastness by lerping.
- I have a scene file that places a grey card near the subject at which my USD camera is pointing, and renders it at 32-bit depth. This yields (most of the time) a very blown-out image that I then evaluate using Python, algorithmically choosing "autoexposure" values (shutter/f-stop/ISO) based on models common to dSLRs like "shutter priority" or "aperture priority". I geeked hard over this part of it and I'm proud of it.
- I use these calculated values to set the various parameters on my USD camera. Because there's (currently) no notion of ISO, I create this attribute on my camera, then use it to drive an exposure compensation.
- I captured grain profiles from a Canon 5D; I select amongst these based on my calculated ISO and composite them into my image. This could easily be driven more artistically with a Grain COP node, but 1) I wanted to be as physically based as possible, and 2) I figured that the Grain node isn't terribly sophisticated and doesn't try to mimic real physical grain.
I'm happy with the results I get [www.instagram.com]. I wish my setup worked a little more like a real dSLR, where 'autoexposure' can be calculated without needing a prerender; but to do this I'd need to actually evaluate my sky dome. With a Physical Sky light in Houdini, much of this math could be precalculated and solved without needing a grey-card prerender.
I realize there's probably not a huge audience trying to do what I'm trying to do...there's also not many solutions for what I'm trying to do, either. It's neat to me that I CAN do this in Houdini.
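The "autoexposure" solve described above boils down to standard light-meter math. A minimal, self-contained sketch of the aperture-priority branch (the reflected-light calibration constant K = 12.5 and the sample luminance are assumptions for illustration, not values from the original setup):

```python
import math

K = 12.5  # assumed reflected-light meter calibration constant (cd/m^2)

def ev100_from_luminance(luminance):
    """EV at ISO 100 for an average scene luminance in cd/m^2."""
    return math.log2(luminance * 100.0 / K)

def aperture_priority_shutter(luminance, f_number, iso=100):
    """Solve 2^EV = N^2 / t for the shutter time t in seconds,
    holding the aperture (f_number) and ISO fixed."""
    ev = ev100_from_luminance(luminance) + math.log2(iso / 100.0)
    return f_number ** 2 / 2.0 ** ev

# Sanity check: ~4000 cd/m^2 at f/16, ISO 100 lands near the
# "sunny 16" shutter of roughly 1/125 s.
t = aperture_priority_shutter(4000.0, 16.0)
```

A "shutter priority" branch is the same equation solved for N instead of t.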
Solaris and Karma » Shadow pass in XPU
- dhemberg
- 207 posts
- Offline
Thanks for this.
I do see I can generate a shadows pass using Karma CPU. But, is it really the case this isn't possible in XPU? This seems like pretty basic, core functionality for visual effects.
I really want to love Karma XPU, I really really do, and I really want to feel justified in jettisoning Redshift a year ago in the hopes that I could cross the finish line with projects as effectively with Karma as I'd been able to do with Redshift for years. But, dang, some days I really do feel like I'm slamming myself against a brick wall.
Technical Discussion » H19 Karma no camera motion blur
- dhemberg
- 207 posts
- Offline
Following up on my question above: closer inspection of my imported USD camera revealed that its open and close shutter times were both 0, indicating an instantaneous shutter. So no amount of jiggling settings on Cache nodes or Motion Blur LOPs helped, but using an Edit Camera node to set the open/close shutter times to nonzero values (-0.25 and 0.25 in my case) solved my issue, and I can now see motion blur.
Leaving this here in the event it's helpful for anyone else.
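For anyone who wants to bake the same fix into the USD file itself rather than use an Edit Camera LOP, the shutter attributes can be set with the USD Python API; a sketch (file and prim paths are hypothetical):

```python
from pxr import Usd, UsdGeom

stage = Usd.Stage.Open("shot_camera.usda")  # hypothetical file
cam = UsdGeom.Camera(stage.GetPrimAtPath("/cameras/shotCam"))  # hypothetical path
cam.GetShutterOpenAttr().Set(-0.25)   # frame-relative shutter open time
cam.GetShutterCloseAttr().Set(0.25)   # frame-relative shutter close time
stage.Save()
```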
Solaris and Karma » Shadow pass in XPU
- dhemberg
- 207 posts
- Offline
Hi;
Following the concepts noted here for generating a shadow AOV using LPEs:
https://www.youtube.com/watch?v=p5sEhWI2Iwc&ab_channel=VFXMagic [www.youtube.com]
namely, creating a custom AOV using "holdouts;shadow;CDL" - this works great in Karma CPU, but does not seem to produce any results (a black AOV) in XPU. The XPU docs suggest AOVs and LPEs are working...is there more needed to produce a similar result in XPU?
thanks!
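For reference, an LPE render var like the one described above can also be authored directly in USD rather than through the render settings UI, which can help isolate whether the AOV definition itself or the XPU backend is at fault; a sketch using the UsdRender schema (the prim path is hypothetical):

```python
from pxr import Usd, UsdRender

stage = Usd.Stage.CreateInMemory()
var = UsdRender.Var.Define(stage, "/Render/Vars/shadow")  # hypothetical path
var.CreateDataTypeAttr("color3f")
var.CreateSourceTypeAttr("lpe")
var.CreateSourceNameAttr("holdouts;shadow;CDL")
```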
Technical Discussion » H19 Karma no camera motion blur
- dhemberg
- 207 posts
- Offline
Hi;
I have a usda file that contains a camera with animated xforms. I can see the attributes on the camera are green, indicating that it's animated (and, of course, when I scrub the timeline, I can see it moves).
However, I cannot seem to get camera motion blur going, and can't understand why. My Reference node that imports the USDA camera does not have a little watch icon next to it; I'm unsure whether this is significant. I've tried all manner of cache nodes, motionblur nodes, toggling settings, etc but still no camera motion blur. What might I be overlooking?
Solaris and Karma » Refracting a geometry light in Karma
- dhemberg
- 207 posts
- Offline
Hi;
I have some geometry 'beneath' a water surface; I would like to make the water murky, and would like the light to glow beneath the water surface.
I have an emissive shader on my geometry, and have marked it 'treat as light source' via a renderGeometrySettings node. Currently I am using materialX for the water surface; ignoring the 'murky' bit at the moment, simply setting transmission to 1, I can see my geom beneath the water, but the geo renders black. If I bypass the renderGeometrySettings node, it renders emissive. Is this expected? It's not clear to me what further I should adjust to produce a light source that can be refracted.
The 'murky' bit is ideally handled via transmission scattering; the shader says this isn't implemented yet, so I hope to use either a subsurface approach or a volumetric approach...but need to get the light actually illuminating things first.
Thanks!
Solaris and Karma » Karma shadow artifacts when rendering with motion blur
- dhemberg
- 207 posts
- Offline
Bumping on this for visibility.
A friend from Pixar with extensive USD experience seemed puzzled by the pictures I showed him (from above), and was skeptical it was an animated bounding box issue. He suggested I verify that my point counts (onto which my leaves are being instanced) are not changing frame to frame, as that would likely cause problems, but I stepped through the timeline of the file I shared above and confirmed that the point count for the leaves does stay the same (which is to say: the arrays containing animated orientation/scale/etc. data all seem to be the same length frame to frame). I'd love any advice about how to troubleshoot further, but the aforementioned friend worried this might indeed be a render issue rather than something wrong with my setup.
Solaris and Karma » Karma XPU failure on 3090ti
- dhemberg
- 207 posts
- Offline
Thank you sir! I hadn't spotted that, but I will test it. I'm currently also troubleshooting what I think is likely an unrelated issue (https://www.sidefx.com/forum/topic/87458/), but hopefully the original crashing issue does not resurface. If it does I'll report back. Thanks again!
Solaris and Karma » Karma shadow artifacts when rendering with motion blur
- dhemberg
- 207 posts
- Offline
Sure, here is a complete setup:
https://www.dropbox.com/sh/baahmqh16tk3c9q/AABweKLaE1N08Sbo8OW_XQ3Ha?dl=0 [www.dropbox.com]
Though, I don't believe it's a bug (or, at least, I strongly suspect that the bug is me). My intuition tells me I'm doing something wrong with how I'm sublayering the animation onto the static trees; it feels as though there might be an issue where the geometry moves but the extents/bounding boxes of the geo don't update...or something.
I've seen several references to this notion of exporting static geo USD + animation as a separate USD layer; the tone always suggests this is a widely-accepted, common technique. I find it very compelling and want to do it this way, but have yet to find a complete guide demonstrating how to do it, and the various threads I've read about it on the forum don't make me feel like I have a complete understanding in my head just yet. It seems very easy to get wrong, which I suspect is what's causing this issue.
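One way to test the stale-extents theory directly is to sample the authored extent attribute across the frame range and check whether it actually animates; a sketch with the USD Python API (file and prim paths are hypothetical):

```python
from pxr import Usd, UsdGeom

stage = Usd.Stage.Open("trees_with_anim.usda")  # hypothetical file
leaves = UsdGeom.Boundable(stage.GetPrimAtPath("/trees/leaves"))  # hypothetical path
extent_attr = leaves.GetExtentAttr()
for frame in range(1, 53):
    # If these values never change while the points move, stale extents
    # are a plausible cause of the blocky motion-blur artifacts.
    print(frame, extent_attr.Get(Usd.TimeCode(frame)))
```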
Solaris and Karma » Karma shadow artifacts when rendering with motion blur
- dhemberg
- 207 posts
- Offline
Hi;
I have some trees that I'm generating via the Labs Trees tools; the leaves are copied as packed prims onto the branches and are then animated.
I'm exporting a single static model of my trees as a series of USD files (1 file per tree variant), then exporting a second USD file per tree that contains only the animation data for the leaves.
In Solaris, I bring in the static trees, then sublayer on the animation. The leaves are represented by a point instancer. I then use a second point instancer to copy various trees around in an area.
This all seems to work great, and when I render a sequence of frames, I see this:
However, when I enable Motion Blur on my Karma Render Settings LOP, and render again, I see these weird shadow-like artifacts:
In trying to pare back my scene to try to understand where the breakage is occurring, it seems to be with the use of motion blur (I have a separate Motion Blur node in my scene downstream of my RenderSettings node; this blocky artifact is present whether I use this Motion Blur node or not).
What am I doing wrong here?
Thanks!
Edited by dhemberg - Nov 18, 2022 15:59:24
Houdini Lounge » Mplay in Houdini Launcher
- dhemberg
- 207 posts
- Offline
Hi;
I feel a little silly for asking this, but: I use MPlay a lot for reviewing animation frames. Each time I update Houdini, I have to go noodle around to find MPlay and add it to my Windows taskbar or dock (basically, it's not trivial to find and launch its UI the way I'm used to launching Houdini).
Would it be possible to add MPlay to the Houdini Launcher, to make it easier to get at after updating Houdini?
Solaris and Karma » Karma XPU failure on 3090ti
- dhemberg
- 207 posts
- Offline
Hey Brian!
I'm hesitant to wave the victory flag too wildly, but upgrading to the latest Game driver as you suggested (526.86) yields:
- The setup I shared on Dropbox - which previously consistently crashed for me - now seems to work.
- The larger scene file that touched off this thread also seems to work without crashing.
This bug has been mysterious enough for me that I'm afraid to imagine a magic-bullet fix like this is truly real, so I'm proceeding with some caution and double-checking other elements of my scene to make sure I've not just inadvertently enabled something that might sidestep the bug. But...I am cautiously optimistic!
I've been told strictly by support@ that I should always, always use the Studio driver, so it wouldn't have occurred to me to try the Game driver; I would have just presumed I'd be exacerbating the issue. So, thank you for the pointer!
Solaris and Karma » Karma XPU failure on 3090ti
- dhemberg
- 207 posts
- Offline
I'm still wrestling with this issue; it has unfortunately severely impeded my project. One thing I wanted to note: @jsmack suggested at one point that I could force Houdini to crash (making this Optix bug more evident, rather than simply noticing unexpectedly long render times when Optix fails) by disabling Embree in my houdini.env file. I thought this was a good idea, so I tried it.
Unfortunately, however, if I disable Embree, Karma simply outputs an empty image in the event of an Optix failure...which I found even more confusing behavior than the long render times in the event of an Embree fallback.
Properly catching this issue seems like it would rely on some sort of post-render script that looks at the error log and parses it for the Optix failure line.
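A sketch of such a post-render check, assuming the failure leaves a recognizable line in a captured render log (the marker string below is a guess; match it to whatever Karma actually prints):

```python
# Hypothetical marker; adjust to the actual text of the failure line.
OPTIX_FAILURE_MARKER = "OptiX"

def render_failed(log_text, marker=OPTIX_FAILURE_MARKER):
    """Return True if any line of the log mentions the failure marker."""
    return any(marker in line for line in log_text.splitlines())
```

Wired into a post-render script, a True result could flag the frame for re-render instead of letting an empty image pass silently.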
PDG/TOPs » ROP Geometry Output 'hangs'
- dhemberg
- 207 posts
- Offline
Hi;
I have a scene file that generates some procedurally-generated geometry based on top Wedge attributes. I use a ROP Geometry Output TOP to write each bit of geo to disk, before continuing on down my PDG graph to render the geo.
I notice that maybe half the time (this is a very rough estimate), the ROP Geometry TOP seems to "hang": it saves out one or two bits of geo, but then just seems to stop doing work. Left uninterrupted, Houdini can stay in this hung state for hours. This is all being done via hython, so I usually have to kill my Windows shell and restart the process. I can't identify a pattern that causes it to hang sometimes and not others.
I've set my localscheduler to just use 1 slot, and my ROP geo node is set to run in-process. I can't imagine what about this might be causing this hanging; curious if anyone might have some advice as to how I might troubleshoot?
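Until the root cause is found, one blunt mitigation for nondeterministic hangs like this is to run the hython cook under an external watchdog, so a stuck process gets killed (and can be retried) instead of sitting for hours; a minimal, generic sketch:

```python
import subprocess

def run_with_timeout(cmd, timeout_s):
    """Run a command, killing it if it exceeds timeout_s seconds.
    Returns (returncode, timed_out); returncode is None on timeout."""
    try:
        proc = subprocess.run(cmd, timeout=timeout_s)
        return proc.returncode, False
    except subprocess.TimeoutExpired:
        # subprocess.run kills the child before raising.
        return None, True
```

The command would be the hython invocation that cooks the TOP network; a wrapper loop could retry a timed-out cook a fixed number of times.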
PDG/TOPs » Outputting progress
- dhemberg
- 207 posts
- Offline
Thank you so much! This works splendidly.
A slightly-related question: I notice when my render TOP (which is a ROP Fetch of a USD render LOP) executes, it seems to cook the resulting work items in a wonky 'diagonal' pattern, rather than sequentially. I can't recall this always being the case for me, so I'm not sure if this is default behavior or if I've inadvertently twiddled a setting somewhere to cause this.
Is the execution order for ROP fetches controllable? This isn't a huge deal, but it does make playback of in-progress animation sequences a bit weird, so I'm curious if I can state that I'd like the tasks to run sequentially (which is to say: in ascending-by-frame-number order).
[BEAUTIFUL SHELVES] Rendering animation...
[BEAUTIFUL SHELVES] Frame range: 1 - 52
[BEAUTIFUL SHELVES] Frame 1 of 52 complete.
[BEAUTIFUL SHELVES] Frame 2 of 52 complete.
[BEAUTIFUL SHELVES] Frame 6 of 52 complete.
[BEAUTIFUL SHELVES] Frame 7 of 52 complete.
[BEAUTIFUL SHELVES] Frame 13 of 52 complete.
[BEAUTIFUL SHELVES] Frame 14 of 52 complete.
[BEAUTIFUL SHELVES] Frame 19 of 52 complete.
[BEAUTIFUL SHELVES] Frame 20 of 52 complete.
[BEAUTIFUL SHELVES] Frame 21 of 52 complete.
[BEAUTIFUL SHELVES] Frame 26 of 52 complete.
[BEAUTIFUL SHELVES] Frame 27 of 52 complete.
[BEAUTIFUL SHELVES] Frame 32 of 52 complete.
[BEAUTIFUL SHELVES] Frame 33 of 52 complete.
[BEAUTIFUL SHELVES] Frame 39 of 52 complete.
[BEAUTIFUL SHELVES] Frame 40 of 52 complete.
[BEAUTIFUL SHELVES] Frame 3 of 52 complete.
[BEAUTIFUL SHELVES] Frame 4 of 52 complete.
[BEAUTIFUL SHELVES] Frame 5 of 52 complete.
[BEAUTIFUL SHELVES] Frame 8 of 52 complete.
[BEAUTIFUL SHELVES] Frame 9 of 52 complete.
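Whatever the cook order ends up being, in-progress playback can be made less weird by reviewing only the contiguous prefix of finished frames; a small sketch that derives it from completion lines in the format shown above:

```python
import re

def contiguous_prefix(log_text, start=1):
    """Return the last frame N such that frames start..N are all complete,
    based on 'Frame X of Y complete.' lines in the log text."""
    done = {int(m.group(1))
            for m in re.finditer(r"Frame (\d+) of \d+ complete\.", log_text)}
    n = start - 1
    while n + 1 in done:
        n += 1
    return n
```

Feeding it the log above would report the highest frame safe to play back without gaps; the same idea works by scanning the output directory instead of the log.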
PDG/TOPs » Outputting progress
- dhemberg
- 207 posts
- Offline
Hi there;
I have a TOP graph that uses a ROP Fetch to render a sequence of images. I'd like to cobble together a Python script that periodically outputs progress as the images are rendered. Currently the render runs in-process on my local machine; after the ROP fetch, I have a Python script node that just does something like:
import hou
current_frame = hou.frame()
print(" Frame", current_frame, "complete.")
This sort of works; I get an output message once each frame renders, except the output is always:
"Frame 1.0 complete."
I'm curious what I could use other than hou.frame() to correctly output the frame that is/was rendered?
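A possible alternative to hou.frame(): inside a TOPs Python Script node, PDG exposes the current work item as `work_item`, and its `frame` attribute carries the work item's own frame rather than the playbar frame. A sketch (untested outside a TOPs context):

```python
# Runs inside a TOPs Python Script node, where PDG provides `work_item`.
frame = int(work_item.frame)
print(" Frame", frame, "complete.")
```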