Technical Discussion » Auto generate render previews with multiple HDRIs
TOPs is ideal for this:
- Folder with HDRIs.
- Wedge the HDRI files.
- Wedge the rotation parm.
- Render all wedges.
It's really a nice and simple project to get into TOPs.
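By way of illustration, here's a minimal Python sketch of that network; the TOP node names ('filepattern', 'wedge', 'ropfetch') should match, but the file pattern, wedge count and ROP path are assumptions to adapt:

import hou

# Build a small TOP network: one workitem per HDRI file,
# wedged rotations, then a ROP Fetch to render every workitem.
topnet = hou.node("/obj").createNode("topnet", "hdri_previews")
files = topnet.createNode("filepattern")
files.parm("pattern").set("$HIP/hdri/*.exr")  # assumed HDRI folder

wedge = topnet.createNode("wedge")
wedge.setFirstInput(files)
wedge.parm("wedgecount").set(8)  # e.g. 8 rotation steps per HDRI

render = topnet.createNode("ropfetch")
render.setFirstInput(wedge)
render.parm("roppath").set("/out/karma1")  # assumed render ROP

topnet.layoutChildren()
# The light's HDRI path and rotation parms would then reference the
# workitem attributes (e.g. @pdg_input for the file) on the light node.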
Solaris and Karma » Clone Control Panel - save images?
Besides the batch thing, even a single shot can get off the rails quite quickly during development.
The Karma ROP has a 'render to disk' and a 'render to MPlay' option, neither of which is nice for doing iterative renders in a comfortable way.
The gallery has a 'render in background' button, but it's tied to a viewport, so that's a very loose thing with very 'low confidence' about what you're going to get, as eikonoklastes puts it. It's quite easy to accidentally change a viewport, jump out of a camera, or even mess a camera up.
The clones can be locked to a certain LOP/cam/frame, which is great, but they don't have a render button. They are either paused or start to render every time you touch a node, which is nice in demos but quite a waste of energy and money with power-hungry GPUs and such. Needing to toggle the pause button for this is quite annoying UX.
As a stopgap I made something [www.sidefx.com] that adds a 'render' button to the clone panel, giving a 'single shot' render-on-demand so I can use a clone as a locked-in background render. The only issue is that AOVs are a bit clunky to handle, since the clones only deliver one AOV to the gallery at a time.
So what we need is a simple 'render' button that renders to the gallery using an explicit set, or sets, of camera/LOP/frame and only renders when you tell it to. Ideally this is added for the 'normal' render as well as for the clones, so you can check multiple shots/settings with a single-click render action.
I know it's not as sexy as interactive viewport renders, which certainly have their place, but they are just too volatile and hard to work with if you need a locked-in shot.
btw: @eikonoklastes if you need to control a lot of shots, have a look at https://jdbgraphics.nl/script/prosequencer-2-0-houdini/ [jdbgraphics.nl]; you can visually control ranges and whatnot, use a single ROP, and have ProSequencer control it for each shot.
Edited by Jonathan de Blok - Feb 8, 2024 03:53:19
Technical Discussion » Trigger Python SOPs
btw.. for simple one-after-the-other execution, you're best off using the 'inProcessScheduler'. It just runs things in the current Houdini session, even on the main thread if you want, and it's the safest option when you're using TOPs as a glorified for-loop.
Technical Discussion » Windows Environment Variable instead of .env file?
And if you're really lazy, you can place a .bat file in the root of the pipeline folder; when a user runs it, the path of the batch file itself is used to set certain env vars.
For example, the code below sets JDB_ROOT to the location of the batch file itself. It also sets the package dir and some other vars. So if you're on a new machine and have pulled the pipeline contents from a repo to any random folder, running the batch file sets up the env so all software can find what it needs. This is also a good test case to weed out any hardcoded paths, since the pipeline root location can vary per machine; if nothing breaks, it's all good.
:: %~dp0 expands to the drive and folder of this batch file
set mypath=%~dp0
echo %mypath%
:: setx stores the variables permanently in the user environment
setx JDB_ROOT %mypath%
setx SYNTHEYES_SCRIPT_PATH %mypath%SynthEyes\Scripts\
setx HOUDINI_PACKAGE_DIR %mypath%Houdini\Packages
setx NUKE_PATH %mypath%Nuke\nuke_dir
pause
And the code below is the bare minimum to put in a JSON package file if you want to add a home dir. Save it as myAwesomePipeline.json in the folder that HOUDINI_PACKAGE_DIR points to (which was set by the batch file).
{
    "env": [
        {
            "HOUDINI_PATH": "E:/GIT/JDB_Pipeline/Houdini/JDB"
        }
    ]
}
In that folder you can now place items using the same folder structure as the one in /documents (so HDAs go in /otls, etc.).
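For illustration, assuming the package above, the layout could look like this (folder names mirror the standard Houdini preferences folder):

JDB/          <- the folder HOUDINI_PATH points to
  otls/       <- HDAs
  desktop/    <- .desk desktop layouts
  toolbar/    <- .shelf shelf tools
  scripts/    <- startup scripts such as 456.py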
Make sure to clean out the old-school .env file, if there is any, and transfer everything to packages.
Edited by Jonathan de Blok - Feb 7, 2024 08:33:49
Solaris and Karma » Clone Control Panel - save images?
It's a bit of a detour, but I had something for this in the works; the lack of proper API calls for this makes it dependent on the UI and a bit tricky.
You can run the code below if:
- A clone has produced output (100% completed).
- Output from that clone is being shown in the gallery.
It saves all AOVs for clone snapshots to $HIP/CloneOutput.
And to state the obvious: the code below is not something you want to run in production.
import hou
from datetime import datetime
from pathlib import Path
from time import sleep
import hdefereval

@hdefereval.do_work_in_background_thread
def save_all_AOVs():
    print("init")
    yield
    for clone in hou.clone.clones():
        if clone.imagePath() and clone.percentComplete() == 100:
            for aov in clone.availableAovs():
                clone.setDisplayAov(aov)
                yield
                sleep(0.5)
                snapshots = clone.renderGalleryDataSource()
                if snapshots != None:
                    snap = clone.renderGalleryItemId()
                    if snap:
                        snapshots.setLabel(snap, aov)
                        cam = Path(clone.cameraPath()).name
                        lop = clone.lopNode().name()
                        frame = hou.text.expandString(clone.frameExpression())
                        if not frame:
                            frame = str(int(hou.frame()))
                        tmpfile = Path(hou.text.expandString("$HIP")) / "CloneOutput" / f"{lop}_{cam}_{aov}_{frame}.exr"
                        snapshots.prepareItemForUse(snap)
                        try:
                            f = hou.node("/img/comp1").createNode("file")
                            membuf = snapshots.filePath(snap)  # clone.imagePath()
                            f.parm("filename1").set(membuf)
                            tmpfile.parent.mkdir(exist_ok=True, parents=True)
                            f.saveImage(tmpfile.as_posix())
                            f.destroy()
                            print(f"{aov}: {tmpfile}")
                        except:
                            pass
                yield
    yield
    print("Done!")

save_all_AOVs()
Edited by Jonathan de Blok - Feb 7, 2024 08:19:54
Technical Discussion » Windows Environment Variable instead of .env file?
Have a good look at https://www.sidefx.com/docs/houdini/ref/plugins.html [www.sidefx.com]
You can do it all. For example, I have an additional second home folder that's layered on top of the default one in /documents.
In that folder are all the HDAs, desktops, shelf tools etc. that I want shared between my workstation, laptop and render node. It's in a repository, so I can push updates and keep all the others in sync, but a shared network drive, Dropbox etc. will work fine as well.
Edited by Jonathan de Blok - Feb 6, 2024 18:15:32
Technical Discussion » Trigger Python SOPs
Yeah what Tomas says. If you just want to run a series of actions the node graph is not the place for that.
Besides the mentioned shelf and HDA options, it's also possible to make your own Python panels, where you can use the power of Qt for fancy UIs if required.
And another thing: if you need to sample values at certain times, it's better to use the '...AtFrame' version of certain methods, for example someParm.evalAsFloatAtFrame(frame).
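A minimal sketch of that (the node and parm path are hypothetical):

import hou

# Sample a parm at explicit frames instead of relying on the playbar time.
parm = hou.parm("/obj/geo1/transform1/tx")  # assumed parm path
values = [parm.evalAsFloatAtFrame(f) for f in range(1, 25)]
print(values)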
Technical Discussion » Trigger Python SOPs
The quick and dirty solution is to make a comment line in the Python source and use backticks to inject the current frame into it with an expression. That triggers a recook, because Houdini thinks the source has updated.
Same concept as this, where transforming the camera triggers a Python recook: https://www.sidefx.com/forum/topic/90339/?page=1#post-392629 [www.sidefx.com]
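As a sketch of the trick described above, the top of the Python SOP's snippet would look something like this (the backtick expression makes the source text change every frame, which is what forces the recook):

# force recook, current frame: `$F`
node = hou.pwd()
geo = node.geometry()
# ...the rest of the snippet stays as-is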
Technical Discussion » How to manage viewport display options?
https://www.sidefx.com/docs/houdini/hom/hou/GeometryViewportSettings.html [www.sidefx.com]
It looks easy to script; almost all of those values can simply be put in a JSON file for saving and loading.
I've run into this as well a few times, but more on a per-project basis: different projects using different viewport setups. I might make something for this if I find the time.
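A rough sketch of the save half of that idea (the getter names must be picked from the hou.GeometryViewportSettings docs linked above; restoring would call the matching setters):

import json
import hou

# Grab the settings object of the current viewport.
viewer = hou.ui.paneTabOfType(hou.paneTabType.SceneViewer)
settings = viewer.curViewport().settings()

# Getters to persist; fill this in from the docs page.
SETTING_NAMES = []

data = {name: getattr(settings, name)() for name in SETTING_NAMES}
with open(hou.text.expandString("$HIP/viewport_settings.json"), "w") as fp:
    json.dump(data, fp, indent=2)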
Solaris and Karma » Animation best practices?
eikonoklastes
Option C) should not require any Python. It's what USD was built to do out-of-the-box. You can bring your static model into Solaris for lookdev, and then bring the animation in as a separate layer that you can just overlay on the static model.
Something like this (you'll see the animation layer comes in with no geometry - just transform info):
https://www.youtube.com/watch?v=cFqUBPFU07E [www.youtube.com]
Ok, good to know, and it helps a bit. If I import the transforms only, using the "sceneimport1" set to not import geo/materials, and sublayer that onto a frozen stage ("timeshift1" set to frame 0), it goes from around 3 to 15 fps (Houdini GL, flat-shaded viewport).
Just the frozen layer plays back at 120+ fps, so that's not causing any performance drops. The xform-only "sceneimport1" does 19 fps, and when combining those with the "sublayer1" I see the animation working fine but at a measly 15 fps, even when I import the xform of a single object.
We're talking fewer than 100 low-res objects here, and not even 20 are animated. And when I simply use a USD transform node after the timeshift, I can animate a few prims and get roughly 30-35 fps, which isn't great either.
For reference, the /obj context does everything at 100+ fps easily.
Edited by Jonathan de Blok - Feb 2, 2024 06:14:12
Solaris and Karma » Animation best practices?
I'm sure this has come up before, but I'm a bit late to the USD party here.
So what's the consensus on doing animations that would normally be done at /obj level?
Think assembly-type animations with a lot of moving parts, tweaking animation curves and keyframes to get a nice flow.
Workflows so far:
A) Simply using a Scene Import node. Works, but this quickly bogs down, since it re-imports everything each frame and triggers recooks downstream. Scrubbing the timeline feels like it's 1999 again.
B) Importing a static scene and using transform nodes after that, preferably somewhere near the end of the USD network to minimize downstream updates. It's faster but a bit cumbersome, and not all the animation features work (pressing 'K' does not set keys, for example). It's also not as performant as the /obj context, so it can be hard to judge animation flow properly.
C) Import a static scene, do the animation in /obj, and use a bit of Python to reference those transform values into USD transform nodes that are automatically added to the stage for all /obj-level nodes that have a keyframed transform (see the sketch below). This allows animating in the fast and comfortable /obj context while keeping a relatively fast USD stage. The downside is that it requires some automated housekeeping and gets messy with complex hierarchies and such.
D) Caching things out. Fast playback, but it quickly becomes a drag when doing lots of iterations.
E) Don't go into USD until all animation is completed. Sounds like a plan, but slow scrubbing is also annoying when doing lookdev/lighting.
Are there any other/better ways of getting max performance from USD in relatively simple scenes?
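For what it's worth, a minimal sketch of the option C idea; the /stage node names, the 'xform' parm names and the geo-only filter are assumptions for illustration:

import hou

# For every /obj geo node with keyframed translates, append a Transform LOP
# whose parms channel-reference the /obj parms, keeping the stage animated.
stage = hou.node("/stage")
head = stage.node("sceneimport1")  # assumed: the static scene import

for obj in hou.node("/obj").children():
    if obj.type().name() != "geo":
        continue
    if not any(p.keyframes() for p in obj.parmTuple("t")):
        continue
    xf = stage.createNode("xform", "anim_" + obj.name())
    xf.setFirstInput(head)
    xf.parm("primpattern").set("/" + obj.name())  # assumed prim path
    for name in ("tx", "ty", "tz", "rx", "ry", "rz"):
        xf.parm(name).setExpression('ch("%s/%s")' % (obj.path(), name))
    head = xf

stage.layoutChildren()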
Edited by Jonathan de Blok - Feb 2, 2024 03:39:20
Solaris and Karma » Use same persp cam between stage and inside SOP Create LOP
I guess you can make all those cams and just link their transform and other relevant parms?
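Something along these lines, for example (the camera paths and the parm list are hypothetical):

import hou

# Keep a second camera in sync by channel-referencing the first one's parms.
src = hou.node("/obj/cam1")            # assumed source camera
dst = hou.node("/obj/cam1_sopcreate")  # assumed camera inside the SOP Create setup
for name in ("tx", "ty", "tz", "rx", "ry", "rz", "focal", "aperture"):
    dst.parm(name).setExpression('ch("%s/%s")' % (src.path(), name))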
Edited by Jonathan de Blok - Feb 1, 2024 16:21:23
Solaris and Karma » How to snapshot a Clone's output?
robp_sidefx
The background renders are linked to a viewport to the degree that the viewport is the mechanism by which they access the USD stage and information about the current camera (which may be an actual USD camera, or an arbitrary view).
Ahh, so that's why it works that way; that makes a bit of sense. And with the clones you explicitly point them at a LOP node to tell them where to fetch the USD stage, which cam to use, etc. for rendering. I guess technically the background render and the clones are the same thing in a different wrapper?
And I must say I've been using my shiny new 'render' button all day, and it instantly improved my workflow quite a lot: I can manually trigger a single render when I want, or unlock the clones for live updates when required. Ideally, for more control, each clone could have its own 'render' button in a table column as well, so you can render a specific shot from one of the clones. I hope something like this, or something better, will eventually be there right out of the box!
Edited by Jonathan de Blok - Feb 1, 2024 06:42:37
Solaris and Karma » How to snapshot a Clone's output?
Good to know clone snapshots are on the radar!
About the 'background' button: I too thought it was about backgrounds; pressing it and setting it to 'live render' shows this popup:
When increasing the limit to 2 it still pops up (do the clones count towards this as well?). Increasing it higher leads to 'Unable to find a Scene View pane to launch a render from', so I guess even when this works it's still linked to a viewport, which is not really what I want; I want it locked to a camera.
I think the general issue is that everything is designed for a workflow where rendering and a viewport are tightly connected. That can be nice, but it's really frustrating if you just want to tweak some things and then update the render through a fixed camera with no relation whatsoever to a viewport. Using a clone for this at least disconnects it from any viewport and locks it to a camera; all that's missing is a 'render' button to trigger manual updates.
(Btw, the clones only pick up the cameras that are in the USD stage, not the ones floating at /obj level. I understand why, and it makes sense from a technical point of view, but when using the Karma ROP the only cam available to the clones is the cam assigned to the ROP. Maybe just gathering all cameras into the stage might be an option for the Karma ROP?)
Anyway, I've made it myself. Running the code below in the Python Source Editor (or putting it in a shelf tool) adds a 'Render' button to all of the currently opened Clone Control Panels. Ctrl-clicking creates a new set of snapshots in the gallery, so you can compare against previous renders; normal clicking overwrites the clone's active snapshot. The older snaps get the creation date prefixed to their label.
In the screenshot below you can see the added button at the top of the 'Clone Control Panel' and some older and current snapshots from the two clones. It's a bit hacky, but it's all I could do with the currently exposed Python bits.
import hou
from time import sleep, time
from datetime import datetime
from hutil.Qt import QtWidgets, QtCore

def render():
    new_snap = QtWidgets.QApplication.keyboardModifiers() == QtCore.Qt.ControlModifier
    for clone in hou.clone.clones():
        snapshots = clone.renderGalleryDataSource()
        if snapshots:
            # check if there are active snapshots for the clone(s)
            if len([x for x in snapshots.itemIds() if snapshots.label(x) == clone.name()]) == 0:
                new_snap = True
        # rename previous snapshot if new snapshot is requested
        if new_snap:
            if snapshots:
                for snap in snapshots.itemIds():
                    if snapshots.label(snap) == clone.name():
                        snap_date = datetime.fromtimestamp(snapshots.modificationDate(snap)).strftime("%m/%d/%Y, %H:%M:%S")
                        snapshots.setLabel(snap, f"{clone.name()} {snap_date}")
                        # write membuf to disk and set as source for old thumbs
                        tmpfile = snapshots.generateItemFilePath(snap, "exr")
                        f = hou.node("/img/comp1").createNode("file")
                        f.parm("filename1").set(snapshots.filePath(snap))
                        f.saveImage(tmpfile)
                        f.destroy()
                        snapshots.setFilePath(snap, tmpfile)
        clone.setProcessUpdates(False)
        # touching a clone's LopNode will generate a new set of snapshots,
        # touching the camera path will only trigger an update
        if new_snap:
            tmp = clone.lopNode()
            clone.setLopNode(None)
        else:
            tmp = clone.cameraPath()
            clone.setCameraPath(None)
        clone.setProcessUpdates(True)
        if new_snap:
            clone.setLopNode(tmp)
        else:
            clone.setCameraPath(tmp)
        # small delay required so clones can start rendering
        sleep(0.1)
        clone.setProcessUpdates(False)

# find open clone control tabs
cc_tabs = [item for item in hou.ui.paneTabs() if getattr(item, "activeInterface", False) and item.activeInterface().name() == "clone_control"]
uiscale = hou.ui.globalScaleFactor()

# shift table down and insert pushbutton
for cc in cc_tabs:
    root = cc.activeInterfaceRootWidget()
    root.layout().setContentsMargins(0, 35 * uiscale, 0, 0)
    if not getattr(root, "but_render", False):
        root.but_render = QtWidgets.QPushButton(root)
        root.but_render.clicked.connect(render)
        root.but_render.setText('Render')
        root.but_render.setGeometry(5 * uiscale, 5 * uiscale, 100 * uiscale, 24 * uiscale)
        root.but_render.show()
Edited by Jonathan de Blok - Jan 31, 2024 13:10:48
Solaris and Karma » How to snapshot a Clone's output?
Currently I'm using a single cloned Houdini session, via the clone panel, to render into the render gallery. Let's say I want to iterate on some samples/denoise/filter options and do an A/B comparison: how do I stash/duplicate the current clone output (so I can compare it to future output from the same clone)?
I can fire up a new clone and stop the other one, but that's a bit over the top.
And I'm probably missing some knowledge here, but is using a clone the only way to render into the gallery such that it's not a nondeterministic snapshot of the viewport? (As in, exactly what would be written to disk on a render-to-disk run.)
And I'm still missing the 'render' button here. I can lock/unlock a clone to sort of get the same effect, but just making some updates and then pressing a render button to see the results in the gallery is still a bit of an enigma.
I'm paying my own energy bill here, and for environmental reasons as well, I don't want my GPU rendering flat out on every change I make for no good reason.
Solaris and Karma » Karma rendering VFB (Visual Frame Buffer)
robp_sidefx
Thanks Jonathan for the original post, this is great feedback!
I appreciate your desire to avoid drowning in the USD ocean, but I'll echo what's been said above and suggest at least getting your toes wet with the proposed two-node SceneImport + KarmaRenderSettings setup (or possibly three-node, adding a UsdRender) in /stage.
A lot (not everything) of what you've described is available through a combination of the Render Gallery and the Cloning introduced in Houdini 20.0.
If you haven't yet seen it, have a glance at https://www.youtube.com/watch?v=R4SLw5EdzQ8 [www.youtube.com] (I think from minute 9-19 would be the most relevant section).
One thing you mentioned was "Also simply drawing a rectangle in the render to just render that part or click to focus rendering power on specific area" ... and this you *can* already do in the Karma viewport. See attached video.
Ok wow, maybe it's more a discoverability issue than a missing-features issue then; I totally missed those region tools and the bit about cloning render instances and such. It looks brilliant for a lot of things I do, and maybe I was also a bit stuck in a 'this is what I'm used to' mindset.
That minimal USD setup that was mentioned, or just the contents of the Karma ROP reworked to do its thing directly on the USD stage, should be manageable. And I do a lot of Houdini/Unreal back and forth, so maybe I should look at leveraging USD in Unreal as well, since that also works with a native USD stage.
Solaris and Karma » Rendering 100's of variants - what would a houdini genius do
I do quite a lot of these kinds of projects for big fashion brands. PDG is brilliant for this; in the most basic setup, you create workitems for all the variants using wedges.
More advanced setups involve checking input/output modification dates and only redoing stale items (see the sketch below), submitting jobs to Deadline, hooking into review systems, etc.
But start by looking into PDG wedges.
(Not in Solaris, btw; PDG simply controls all the parm values that make up the different variants.)
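As an illustration of the stale-item check, a minimal helper (pure Python; the function and its arguments are hypothetical):

from pathlib import Path

# A variant is stale when its output is missing or older than any input file.
def is_stale(output, inputs):
    out = Path(output)
    if not out.exists():
        return True
    out_time = out.stat().st_mtime
    return any(Path(p).stat().st_mtime > out_time for p in inputs)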
Edited by Jonathan de Blok - Jan 27, 2024 15:54:54
Solaris and Karma » Karma rendering VFB (Visual Frame Buffer)
jsmack
have you tried the render gallery?
It looks like it's not really designed to work from /obj level; things like 'revert network to this state' do nothing. Besides that, I guess it's more of a lookdev tool, as it doesn't exactly represent what will be saved to disk in the end, because it's based on snapshots of the viewport, capturing handles and all.
Having a 1-to-1 representation of what will actually be saved during a production render, in a panel somewhat similar to the gallery, is quite a necessity imho.
Solaris and Karma » Karma rendering VFB (Visual Frame Buffer)
..to continue the conversation about a VFB that started in another thread: https://www.sidefx.com/forum/topic/94120/#post-411195 [www.sidefx.com]
What I'm really missing is a proper VFB for Karma. Currently there are the "viewport preview" and "render to MPlay"; both have their uses, but that leaves quite a gap in features and UX. (To SideFX: UX means User eXperience; it's a strange concept, but bear with me here..)
Obviously I can't speak for everyone's workflow, but I think in general creating a render consists of 3 modes:
1) General lookdev: basically tweaking materials, lights etc.
2) Optimizing: trying to keep the same look but in a fraction of the initial rendering time.
3) Solving tech issues: getting things into the right AOVs or finding ways to avoid steps in comp, hunting for workarounds and glitches and so on. Basically all the non-artistic things.
For step 1: the viewport is alright, gives instant feedback and gets the job done; it's actually quite nice.
For step 2: the viewport doesn't help; it's hard to tell rendering times and hard to compare different setups. In theory MPlay could do this, but in reality it just doesn't work for this, or for anything actually, tbh. Feedback is minimal, and usually there is no clue anything is going on until images pop in there. For this you need all the info possible, from sampling coverage to how the cpu/gpu time is spent on each step from scene to pixel to file on disk.
For step 3: quickly toggle features, override shaders etc. to find the source/trigger of any issues that might arise. This could involve both bugs in the engine and things like insane light values, 1.0 albedos etc.
I'm mostly just using the Karma ROP and staying out of the USD stage for now. I'm a one-man band and I've really tried, and I appreciate what USD is on a technical level, but I'm having a hard time justifying the overhead for what it brings me. It could be that the whole experience in the USD stage is much better, so the following is purely from a /obj-level view.
The Karma ROP has 3 relevant buttons: 'Karma Viewport', 'Render to Disk' and 'Render to MPlay'.
'Karma Viewport': the viewport that pops up does the job, but it feels a bit disjointed, and I can't find a way to use the current viewport as a Karma viewer or set one up manually, docked in the main UI. Looking at the code behind the buttons I could probably script something, but for now it does the job.
'Render to Disk': does what it does.
'Render to MPlay': does nothing when pressed, or so it seems; it quietly starts rendering, pops up when it's done, and I don't think it actually renders again until you dump the frame from MPlay or scrub to another frame. Anyway, not useful for the iterative renders needed during steps 2 and 3. It's hard to tell rendering progress, and it just pops in when finished; might as well save it to disk then. Also, it uses the 'frame range/current frame' settings. What we need here is a 'production render' vs 'iterative render' workflow. Again, that's easy to set up with a few nodes, but it's such a basic workflow feature that it should be built in.
And I'd like to see the image evolve, buckets and all, during the render; I don't have to wait till the end to judge whether something is alright or not. Also: simply drawing a rectangle in the render to render just that part, or clicking to focus rendering power on a specific area, comparing with previous versions, viewing various stats etc. Some photometric camera properties could also be set by clicking/sampling the viewport: focus distance, white balance, exposure etc. I could write a really long list here, but have a good look at the VRay and Redshift VFBs; they are a very big part of the whole rendering workflow.
I think a good starting point and a big improvement would be a simple panel that's like the Karma scene viewer but just shows the rendering/rendered rasterized bitmap. By default it should only update when the 'render' button is pressed, so its content isn't as volatile as the viewport preview. Also, it should render just the current frame by default, ignoring any ROP settings unless told otherwise. Then add some non-lookdev features to accommodate steps 2 and 3, and take it from there.
Solaris and Karma » Karma properties on objects
robp_sidefx
Jonathan de Blok: Call me spoiled, but when doing lookdev for renders I'd like a 'Render' button and a panel that shows the render output being rendered in all its glory.
You're definitely not the first to ask for this, but I'd like to hear more definition from you about what such a thing is/isn't.
Jonathan de Blok: There is the Karma viewport, which is excellent for previewing the visual output but doesn't provide a way to dive into the AOVs etc.
On the AOV front, there is a button that'll let you switch between AOVs.
Ahh, I overlooked that AOV button.. thx.
I'll start a new thread about the VFB!