
dhemberg
Recent Forum Posts
How do I render with ACEScg color space in Solaris? May 9, 2023, 4:40 p.m.
Back with another question about working with ACES/OCIO in Houdini.
Let's say sometimes I want to dot all my i's and cross all my t's and work in a proper ACES pipeline. This pipeline, as I understand it, requires a few extra steps to go from an ACES linear EXR (which seems to be what Karma most wants to output) to an image I can share on the web (which is to say: in sRGB).
I say this because I can do lookdev in the Karma viewport and make a picture that looks nice, then render to, say, png or jpg, and the resulting image looks different from what I see in the viewport: it's more contrasty, highlights blow out more easily, etc. If I instead render to EXR and then step through a COP2 workflow where I manually do an OCIO transform to move my color from ACEScg to sRGB, I get a png/jpg that matches what I see in the Karma viewport.
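(For reference, my quick-and-dirty stand-in for that COP2 step, done as a one-off script with OpenImageIO's Python bindings, looks roughly like the sketch below. The bindings, the file paths, and the colorspace names are my own assumptions; the names are spelled the way the ACES 1.2 OCIO config spells them and may differ in other configs.)

# Rough stand-in for the COP2 OCIO transform: ACEScg EXR -> 8-bit sRGB PNG.
# Assumes OpenImageIO's Python bindings and $OCIO pointing at an ACES config;
# "ACES - ACEScg" / "Output - sRGB" are the ACES 1.2 config's names for these spaces.
import OpenImageIO as oiio

src = oiio.ImageBuf("render_acescg.exr")   # placeholder path: Karma's linear ACEScg EXR
dst = oiio.ImageBufAlgo.colorconvert(src, "ACES - ACEScg", "Output - sRGB")
if dst.has_error:
    raise RuntimeError(dst.geterror())

dst.set_write_format("uint8")              # force 8-bit output
dst.write("render_srgb.png")               # should match what the viewport showed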
Sometimes, though, I'm trying to go fast and don't necessarily care to take my image through a compositing step. I'm curious: how can I get an 8-bit jpg/png output that matches what I see in the Karma viewport when I have OCIO enabled in Houdini? Just setting my output image type to jpg/png yields the higher-contrast, 'wrong-looking' (or, at least, not-matching-the-viewport) images noted above.
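(Related: the one comp-less route I've poked at is asking OCIO directly for the display/view transform that I assume the viewport is applying, via PyOpenColorIO. This is only a sketch, under the assumptions that the config is OCIO v2 and that its default display/view matches the Karma viewport settings; "ACES - ACEScg" is again the ACES 1.2 config's name for the rendering space.)

# Sketch: apply the config's default display/view transform to pixel values,
# which I assume is what the viewport layers on top of the ACEScg render.
# Assumes PyOpenColorIO v2 and $OCIO pointing at the same config Houdini uses.
import PyOpenColorIO as OCIO

config = OCIO.GetCurrentConfig()
display = config.getDefaultDisplay()
view = config.getDefaultView(display)

dvt = OCIO.DisplayViewTransform()
dvt.setSrc("ACES - ACEScg")   # assumed rendering/working space name
dvt.setDisplay(display)
dvt.setView(view)

cpu = config.getProcessor(dvt).getDefaultCPUProcessor()
print(cpu.applyRGB([0.18, 0.18, 0.18]))   # mid-grey in ACEScg -> display-referred values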
I'm sure someone will point out I'm still not fully grasping this, and that's probably true!
Viewport "gets stuck" April 3, 2023, 9:33 p.m.
I don't know how to describe this problem, but since its introduction I routinely have cases in the Solaris viewport where Houdini decides it really loves some geometry I'm viewing, and it won't 'un-display' it when I move on to viewing other nodes. Like this:
[screenshot: a second, frozen copy of the camera lingering in the viewport]
This is meant to just be a single USD camera I've imported, but for some reason I'm seeing a second 'copy' of the camera at a different timeline location, frozen. Sometimes this problem goes away if I rapidly hop in and out of a different context, sometimes I have to refresh my entire workspace.
It's fairly irritating, particularly if I'm troubleshooting some node's behavior and realize I'm looking at stale or erroneous viewport information.
Anyone else see this? It's very common on my MacBook and on a Dell Linux laptop (though I think I run into it less on Windows).
How to divide in COPs March 31, 2023, 5:04 p.m.
jsmack wrote:
Downscaling an image doesn't sample all of the pixels; it uses a sample filter and some subset of pixels. Otherwise downscaling a high-res image to a thumbnail would take minutes, not the fraction of a second we expect it to.
Hm, OK; I understand that. But I've used this strategy (from this Unity paper) with success, and thought replicating it in COPs would be a fairly simple affair, though clearly I'm mistaken (for reasons I don't understand yet).
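(To make sure I'm comparing the right things, here's the sanity check I've been running outside of COPs: a true per-channel mean over every pixel versus a filtered downscale to 1x1. The OpenImageIO/numpy usage and the file path are my own stand-ins, not anything from the thread.)

# Sanity check (outside Houdini): compare an exact per-channel mean with the
# value produced by a filtered resize down to a single pixel.
# Assumes OpenImageIO's Python bindings and numpy; "source.exr" is a placeholder.
import OpenImageIO as oiio
import numpy as np

buf = oiio.ImageBuf("source.exr")
pixels = buf.get_pixels(oiio.FLOAT)        # (height, width, channels) numpy array

exact_mean = pixels.reshape(-1, pixels.shape[-1]).mean(axis=0)

# Filtered downscale straight to 1x1, loosely analogous to a Scale COP shrink.
tiny = oiio.ImageBufAlgo.resize(buf, roi=oiio.ROI(0, 1, 0, 1))
filtered_mean = tiny.get_pixels(oiio.FLOAT)[0, 0]

print("exact mean:   ", exact_mean)
print("filtered mean:", filtered_mean)

(If the two disagree noticeably, that would line up with jsmack's point that the filtered downscale isn't a true average of every pixel.)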