Found 206 posts.
Search results
Solaris and Karma » Orthographic cameras in Solaris
- dhemberg
- 207 posts
- Offline
How does one adjust an orthographic camera's view in Solaris? I can position the camera but no amount of adjusting params seems to actually change the view.
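For context, a USD orthographic camera ignores focal length entirely; the visible view size is driven by the camera's `horizontalAperture`/`verticalAperture` attributes, which (per the UsdGeomCamera spec, worth double-checking for your build) are expressed in tenths of scene units for orthographic projections. A minimal sketch of that unit conversion, with the hypothetical Solaris/USD usage in comments:

```python
# Sketch, assuming the UsdGeomCamera convention that orthographic aperture
# values are in tenths of scene units. Prim path and attribute usage below
# are hypothetical illustrations, not a tested Solaris setup.

def ortho_aperture_for_view_width(view_width_world_units):
    """Aperture value giving an orthographic view of the requested world width."""
    return view_width_world_units * 10.0

# In a Python LOP you might then do something like (untested sketch):
#   cam = stage.GetPrimAtPath("/cameras/ortho_cam")
#   cam.GetAttribute("horizontalAperture").Set(ortho_aperture_for_view_width(5.0))

print(ortho_aperture_for_view_width(5.0))  # 50.0
```

So adjusting the translate/scale of the camera prim won't change the framing; editing the aperture attributes (e.g. with an Edit Properties LOP) should.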
Solaris and Karma » How do I render with ACEScg color space in Solaris?
- dhemberg
- 207 posts
- Offline
Back with another question about working with ACES/OCIO in Houdini.
Let's say sometimes I want to dot all my I's and cross all my T's and work in a proper ACES pipeline. This pipeline, as I understand it, requires a few extra steps to go from an ACES linear EXR (which seems to be what Karma most-wants to output) to an image I can share on the web (which is to say: in sRGB). I say this because I can do lookdev in the Karma viewport and make a picture that looks nice, then render to, say, png or jpg, and the resulting image looks different than what I see in the viewport; it looks more contrasty, highlights blow out more easily, etc. If I render to EXR, then step through a COP2 workflow where I manually do an OCIO transform to move my color from ACEScg to sRGB, I can get a png/jpg that matches what I see in the Karma viewport.
Sometimes, though, I'm trying to go fast, and don't necessarily care to take my image through a compositing step. I'm curious how I can get an output 8-bit jpg/png that matches what I see in the Karma viewport if I have OCIO enabled in Houdini? Just setting my output image type to jpg/png yields the higher-contrast 'wrong-looking' (or, at least, not-matching-the-viewport) images noted above.
I'm sure someone will point out I'm still not fully grasping this, and that's probably true!
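A sketch of my own (not the poster's actual setup) of why the direct jpg/png looks more contrasty: the viewport applies an OCIO display/view transform that tone-maps bright values, while a naive 8-bit write typically just clamps linear values to [0, 1] and applies the sRGB encoding curve (IEC 61966-2-1), so highlights clip instead of rolling off:

```python
# Illustration only: the plain sRGB encode that a naive 8-bit write would
# apply, with a hard clamp standing in for the missing tone map.

def linear_to_srgb(x):
    """Encode a linear-light value to an sRGB code value."""
    x = max(0.0, min(1.0, x))  # a naive write clamps out-of-range values
    if x <= 0.0031308:
        return 12.92 * x
    return 1.055 * (x ** (1.0 / 2.4)) - 0.055

# A linear value of 4.0 (well over white) clamps straight to 1.0 here,
# whereas an ACES view transform would roll it off gradually:
print(round(linear_to_srgb(4.0), 6))  # 1.0
```

Matching the viewport therefore means applying the same display-view transform on output, not just an sRGB encode.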
Solaris and Karma » Viewport "gets stuck"
I don't know how to describe this problem, but since its introduction I routinely have cases in the Solaris viewport where Houdini decides it really loves some geometry I'm viewing, and it won't 'un-display' it when I move on to viewing other nodes. Like this:
This is meant to just be a single USD camera I've imported, but for some reason I'm seeing a second 'copy' of the camera at a different timeline location, frozen. Sometimes this problem goes away if I rapidly hop in and out of a different context, sometimes I have to refresh my entire workspace.
It's fairly irritating, particularly if I'm troubleshooting some node's behavior and realize I'm looking at stale or erroneous viewport information.
Anyone else see this? It's very common on my MacBook and a Dell Linux laptop (though I think I run into it less on Windows).
Technical Discussion » How to divide in COPs
jsmack
downscaling an image doesn't sample all of the pixels, it uses a sample filter and some subset of pixels. Otherwise downscaling a high res image to a thumbnail would take minutes, not the fraction of a second we expect it to.
Hm, ok; I understand that. But, I mean...I've used this strategy according to this Unity paper with success, and thought replicating it in COPs would be a fairly simple affair, though clearly I am mistaken (for reasons I don't understand yet).
Technical Discussion » How to divide in COPs
Like, here's an example: I make a 300x300 ramp from black to white. Using the VOP approach (which takes like 20 seconds to run on this small image), the resulting output is a 300x300 image that's 0.5 everywhere.
On the left though, I scale down to 1x1 pixel, and that pixel is totally black. Of course when I scale back up again, I get a 300x300 black image.
Technical Discussion » How to divide in COPs
Hm...to be honest, I'm not sure how to be sure. It's very slow (like, it locks up Houdini for 30 seconds or so).
I tried the much more sensible strategy of first downscaling my original image to 1 pixel, then scaling it back up again to its original resolution. Theoretically this should yield something like the behavior I'm after (and when I test it using a constant-colored test swatch, it works fine...the output of this process exactly matches the input). What I'm seeing though is that if the input image isn't uniform, it produces results that seem incorrect to me (though I don't understand why)...the computed result is darker than I expect (and darker/different than what's produced when I do this much more explicit VOP approach). I'm not sure if it has to do with a filtering/sampling issue or what (I've tried jiggling the knobs on the scale nodes to see what clues I can find, but nothing seems to cause much of a change in the result).
FYI, this strategy is basically what's described in this [blog.selfshadow.com] very excellent paper.
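A plain-Python sketch (my own illustration) of the filtering issue jsmack describes: a resize filter weights a limited kernel of source pixels rather than all of them, so on a sparse or non-uniform image a subset-sampling "filter" can miss the bright pixels entirely, which would make the 1x1 result darker than the true mean:

```python
# Illustration: true mean over all pixels vs. a crude stand-in for a
# filter that only taps a subset of them.

def full_average(pixels):
    """The true mean: one pass over every pixel."""
    return sum(pixels) / len(pixels)

def subset_average(pixels, step):
    """Crude stand-in for a filter that samples only every step-th pixel."""
    taps = pixels[::step]
    return sum(taps) / len(taps)

img = [0.0] * 100
img[5] = 1.0  # one bright pixel in an otherwise dark image

print(full_average(img))        # 0.01 -- the true mean
print(subset_average(img, 10))  # 0.0  -- the bright pixel is never sampled
```

Scaling down in several smaller steps (e.g. halving repeatedly to 1x1) keeps each kernel dense and usually gets much closer to the true average than one giant jump.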
Technical Discussion » How to divide in COPs
Neat, thanks! The insane constraints in COPS make them oddly fun to noodle through problems like this. And I hadn't played with snippets, that's a great tool to learn about.
Here's another question: I would like to calculate the average brightness of an image. Ideally I could stash that somewhere in COPS as a single vector, but the image-y way to do it is compute an image with a constant color value of the average brightness.
Tinkering with this today, I came up with this:
It's wildly inefficient though; doing this in a VOP Cop means I'm basically looping through every pixel to find an average, but repeating this for every pixel in the image, rather than just a single time. I'm curious if there's a way to perform the evaluation once, then set it once?
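The intent can be sketched in plain Python (my own illustration, not a COPs node): do the O(N) reduction exactly once, then broadcast the result, instead of repeating the full loop per output pixel as a VOP COP does:

```python
# Sketch: compute the average once, then fill a constant image with it.
# In COPs terms this might map to a single box-filtered scale to 1x1
# (or a one-shot snippet/Python COP), rather than a per-pixel VOP loop.

def average_brightness(pixels):
    """One pass over the image, performed once -- not once per output pixel."""
    return sum(pixels) / len(pixels)

def constant_image(value, n):
    """An n-pixel image holding a single constant value."""
    return [value] * n

img = [0.0, 0.25, 0.5, 0.75, 1.0]
avg = average_brightness(img)        # 0.5
flat = constant_image(avg, len(img))
print(avg, flat)
```

That turns an O(N^2) evaluation into O(N) plus a trivial fill.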
Technical Discussion » How to divide in COPs
How does one divide one image by another in COPs? There seems to be no divide node; the VEX COP filter is very confusing and gives no syntax clues as to how I might write this operation in text, and I cannot figure out how I might use the VOP COP to do this, as it's unclear how to pull in channel information from any input other than the first (despite this node having 4 inputs).
I feel like an idiot...
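For what it's worth, the operation itself is just per-pixel division with a zero guard; whatever node ends up hosting it (VEX snippet, VOP, or otherwise), the logic is equivalent to this plain-Python sketch (names mine):

```python
# Sketch: divide image A by image B per pixel, guarding against
# division by zero (a near-zero denominator maps to 0.0 here; other
# conventions, like passing A through, are equally valid).

def divide_images(a, b, eps=1e-6):
    """Per-pixel a / b with a zero-denominator guard."""
    return [x / y if abs(y) > eps else 0.0 for x, y in zip(a, b)]

num = [0.5, 1.0, 0.25]
den = [0.5, 2.0, 0.0]
print(divide_images(num, den))  # [1.0, 0.5, 0.0]
```

The zero guard matters in practice: dividing by a black pixel otherwise produces infinities that poison any later filtering.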
Solaris and Karma » How do I render with ACEScg color space in Solaris?
jsmack
Studios write their own configs to suit their needs. OCIO was designed with studios in mind. The template configs are just that, templates, not really meant to be used as-is except for demonstration purposes.
Oh, this seems like an important insight (and wasn't at all obvious to me; I presumed these downloadable configs were meant to be gospel). So the way a studio might work is - rather than relying on some token in the filename for the desired colorspace of the file itself (e.g. "ACEScg") - checking for a particular disk location or a different naming convention (e.g. 'all files with "diffuse" in the name located within a "model/textures" folder should be presumed to be in ACEScg'...or whatever). This makes a ton more sense to me.
jsmack
A config file has an OCIO version, the ACES version is determined by the colorspaces defined in the config file. It's possible to have looks defined in the config to view with different ACES versions.
...
There would need to be a new node "display transform" to do display color spaces. You can add the display transform as a color space to your config to make it more like a 1.X config and still use this workflow with COPs.
Ahhh, ok I'm starting to get it; because these configs are intended to be mutable, and because I can mix and match various ACES version colorspaces in a single config, I can basically build my own setup to give myself a way to 1) use color management (rather than sidestepping it as I am now) while 2) navigating any bugs in, say, COPS or hypothetical changes that MaterialX might offer as it matures.
Thanks for taking the time to explain all that!
Solaris and Karma » How do I render with ACEScg color space in Solaris?
Hmm; I think this might leave me more confused, not less.
jsmack
OCIO 2.1 changes how colorspaces are detected, so I'm not surprised you might see a change in how your input is treated.
...
Depending on your config, that may do nothing. The token used in the template config is "ACES - ACEScg", so "acescg" won't match anything unless you made your own config with that colorspace. OCIO 2.0 introduced aliases, so it's possible to have an alias to that space called "acescg", but I think these are not working correctly in Houdini in my experience.
I suppose this begs the question "how are people expected to work with ocio/aces then?"; if OCIO working/not working properly hinges on some correct token to be in every file name (which seems incredibly tenuous to begin with), and new versions of OCIO change the naming convention of this token, how are artists - both at big studios and hobbyists - expected to work with it? Renaming the files in a texture library to suit the whims of an OCIO config seems totally impractical. (I realize this has nothing to do with sidefx).
Is there a resource that explains - from start to finish - how one should set up their pipeline to work nicely with OCIO/ACES?
jsmack
Why bother with a config at all if you're just turning off color handling?
I suppose that's a reasonable question; let me explain where my head's at. I'm trying to work in Solaris, with Karma XPU, with MaterialX. I've been laboring on a personal project for about a year and a half that's sought to use this workflow. Frequently, when I ask other questions to either sideFX or on the forums about various ins and outs of USD/Solaris/Karma/XPU, I'm told "this is in beta", "this does not work yet", "this has not yet been implemented", "this is not reliable", "this is not worth it", etc. These answers collectively have painted a picture to me that I should not take for granted that OCIO and ACES 'just work' in Houdini.
Separately, I asked a question here [www.sidefx.com] a while back about using ACES in Karma/Cops, and a response (by you!) that stuck with me was "your images are in acescg by fiat/they are in acescg because you know it to be true". This implies to me that if I'm very careful and observant about what's happening with my images every step of the way, and keep careful accounting of what colorspace everything is in at every step, then I can more or less guarantee that what is written by Karma *has to be* in aces.
So, to answer your question: my habit of disabling color space management and managing it myself is rooted in a lack of confidence that there is any other way to do it. But of course this is fraught with its own problems, and I would LOVE to learn that all of this works well in H19.5/Karma, and that embracing ACES/OCIO is simply a matter of following some clear instructions. Is this the case?
In any event, the insight you offered that this whole pipeline hinges on me divining the correct token to include in my filenames - and that this token may have changed, and may be causing my images to suddenly appear different - is a very good clue, and makes me suspect I have NOT successfully disabled color handling. So I'm trying to get to the bottom of which path I should take: either wholly embrace color handling (again, with Karma XPU, with MaterialX as it exists in H19.5), or continue to not trust it and manage it all on my own, in my head, by fiat.
jsmack
COPS reads follow the same OCIO rules defined in the config as other reads in Houdini. Disabling linearize is useful for manual handling of color spaces. If the file is named correctly, you shouldn't have to though.
This implies that my output from Karma needs to have "acescg" (or some permutation of this token that matches my config) in my output filename before reading it into COPS, if I want COPS to be using OCIO, is that right? OR I disable this, tell COPs not to do anything fancy, and then I manage the colorspace transform myself, yes?
jsmack
The operation for converting to the display space is actually missing. You want an OCIO Display-View transform that contains the "look" of ACES, as well as the display transform. (I think this is what you're after, maybe you don't want a tone mapped image. The display-view transform is what you see in the renderview)
Hm, interesting; this implies that the "operation for converting to the display space" is missing from this particular config, which is again confusing because this config is described thusly:
The "display" section there implies I should be able to, um, display my images? But when I drop down an OCIO Transform VOP in COPS, I do not see any obvious display transforms, which is why I picked the "Texture-sRGB" one (apparently incorrectly).
jsmack
Correct, when viewing display referred values, color correction should be disabled in the viewer by setting it to Raw. Disabling OCIO may apply the viewers default 2.2 gamma, so you probably don't want that.
Yikes, ok this is very helpful, because that is hella confusing/unexpected.
jsmack
I don't think it's possible to get wysiwyg with the renderview/cop viewer by applying a transform in cops with OCIO 2.1. You'll need to use a standalone commandline tool such as oiiotool or ocioconvert to apply the display-view transform. If you're feeling adventurous, you can edit the config and add a colorspace that implements a display-view transform. This would allow using a colorspace transform in legacy software that lacks display-view transforms.
Hm, is it possible that I waylaid myself (or jumped the gun) by trying to migrate to OCIO 2.1? Maybe I need to retrace my steps with some more basic questions:
- ACES and OCIO are two different things (YES/no)?
- Does setting the path to a .ocio config file set both OCIO version AND ACES version simultaneously (YES/no)?
- Are there compatibility issues between Houdini 19.5 and OCIO (YES/no)?
- Are there compatibility issues between Houdini 19.5 and ACES (yes/NO)?
- Is there a recommended OCIO and/or ACES version that is considered stable/good to use (i.e. a "studio driver" version rather than a "game driver" equivalent)?
I really do want to understand this and use it effectively, it just seems like it changes rapidly and unexpectedly, or I'm not reading the right documentation or something...I'm consistently confused by it. But I'm trying hard not to be!
Edited by dhemberg - March 1, 2023 11:19:14
Solaris and Karma » How do I render with ACEScg color space in Solaris?
Hi pals, I'm at least a little sorry to re-ignite this somewhat-aged thread. But here I am!
I recently upgraded my OCIO setup to the latest from the OCIO website [opencolorio.readthedocs.io], which is to say OCIO v2.1, for Aces 1.3, using cg config (cg-config-v1.0.0_aces-v1.3_ocio-v2.1). This upgrade has changed how my image outputs look, and I'm trying to retrace my steps to make sure I understand how this is supposed to work.
- I'm using Karma (XPU, materialX) in Houdini 19.5
- I have $OCIO defined in my houdini.env, describing the path to my aforementioned cg-config.ocio.
- albedo surface textures have been converted using oiiotool to acescg; albedo files have "acescg" in the name (e.g. albedo.acescg.exr)
- Other surface textures have not been converted; height, normal maps, specular roughness, etc. are sRGB jpg/png
- All surface textures are being read via mtlx image nodes, using a vector3 signature (rather than a color signature), acknowledging that materialX is not aware of ACES. My understanding is that reading everything as vector3 circumvents any "black box" color transformation guess-ery that may/may not be happening, giving me manual control over this.
- All IBL/DomeLight textures have been created in acescg space, and have "acescg" in the filename (e.g. ibl_acescg.exr)
- I render using Karma XPU, to an EXR file. I'm not sure the Karma Render Settings node has any control over output color space. Instead, I am "trusting" that my careful grooming of input file textures is what asserts that Karma-output exrs are in acescg. Do I need to actually include acescg in the output filename in order to control this?
- I import my rendered exr into COPS for postprocess (I am not using Nuke). I disable "linearize non-linear images", because I am unsure how COPS deals with aces/ocio, if at all.
- I do my postprocessing: exposure adjustment, color grading, bloom, diffusion, etc.
- I (try to) manually convert my file from acescg to sRGB using a VOP that contains an OCIO_TRANSFORM vop, where my inSpace is "ACEScg" and my outspace is "sRGB - Texture" (I ultimately want to make a png that I can share online, so I'm looking for the 'right' way to get into sRGB)
- I have a ROP File Output COP, which has "Convert to Image Format's Colorspace" disabled. By disabling this and doing the previous color conversion step, I *think* I am again circumventing black box color transforms and manually managing this, but I'm not sure.
- I disable OCIO in my Composite View's display options, again because I am trying to manually manage everything so I can understand what's happening where. Allegedly this should let me see the 'raw' pixel value, but I also see a "RAW" option in the OCIO options in the display viewport, and my image looks different when selecting this option than when I totally disable OCIO.
- I write out a PNG
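The manual ACEScg-to-sRGB step in the list above can be sketched in plain Python. This assumes the commonly published AP1-to-Rec.709 matrix (verify the values against your own config), and note it is a plain colorimetric conversion: it does NOT include the ACES "look"/tone map the viewport shows, which is one candidate explanation for a mismatch:

```python
# Sketch: gamut-map ACEScg (AP1 primaries, D60) to linear Rec.709, then
# apply the sRGB transfer curve. Matrix values are the commonly published
# ACEScg -> Rec.709 coefficients -- treat them as an assumption to verify.

ACESCG_TO_REC709 = [
    [ 1.70505099, -0.62179212, -0.08325888],
    [-0.13025649,  1.14080476, -0.01054827],
    [-0.02400336, -0.12896898,  1.15297233],
]

def srgb_encode(x):
    """IEC 61966-2-1 sRGB encoding of a linear-light value."""
    x = max(0.0, min(1.0, x))
    if x <= 0.0031308:
        return 12.92 * x
    return 1.055 * (x ** (1.0 / 2.4)) - 0.055

def acescg_to_srgb(rgb):
    """ACEScg triple -> display-ready sRGB triple (no tone mapping)."""
    lin = [sum(m * c for m, c in zip(row, rgb)) for row in ACESCG_TO_REC709]
    return [srgb_encode(c) for c in lin]

print(acescg_to_srgb([0.0, 0.0, 0.0]))  # black stays black
```

If the viewport is showing a display-view transform on top of this, an output produced this way will match a "Raw"/display-disabled view, not the tone-mapped one.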
All this rigamarole, however, leaves me in a spot where I'm no longer confident that what I see on-screen is what I should expect to see. Prior to moving to OCIO v2, I was reasonably confident that the above was giving me a WYSIWYG workflow, but moving to OCIO v2 seems to have changed this behavior in a way I don't understand. So, I'm wondering if anyone can poke holes in what I'm describing above, or otherwise point me to a more current resource for how to properly set this up (including COPS) than what seems to exist in the Houdini docs.
Thanks!
--a
Edited by dhemberg - Feb. 28, 2023 22:17:33
Technical Discussion » Tricky parameter shenanigan
Heh, *facepalm*. Awesome, thank you! And, thanks again for your help with my bigger question, it's working great! I was overcomplicating it a bit and lost sight of the forest for the trees.
Technical Discussion » Tricky parameter shenanigan
tamte
here is an example
Amazing! This is so super helpful, thank you so much! Also, re-posing the silly question I asked above: Is there a way to disconnect a multiparm that's been previously connected via opmultiparm? I managed to get my arg order backwards and now no amount of expression-deleting seems to break the connection...
Technical Discussion » Tricky parameter shenanigan
tamte
that "asking" is what expressions are for
expression can lookup how many instances there are, generate random number within that range based on current HDA settings and then lookup such instance value and return it, live in the parameter
OHHHH, I think it just clicked; the stuff I'm trying to do via callbacks should just be in expressions on the params themselves. Ok, I think I got it...thank you!
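The expression logic tamte describes can be sketched like this (my own illustration; the parameter names "colors"/"colorr" and node paths are hypothetical). The pure-Python part is the seeded random pick; the Houdini expression usage is shown in comments:

```python
# Sketch: a parameter expression can count the controller's multiparm
# instances, pick a seeded-random index in that range, and read that
# instance's value -- all evaluated live, no callbacks needed.

import random

def pick_instance(count, seed):
    """Deterministic random index in [0, count): stable until the seed changes."""
    return random.Random(seed).randrange(count)

# As a Python parameter expression this might look like (untested sketch,
# hypothetical names; multiparm instance numbering typically starts at 1):
#   n = int(ch("../controller/colors"))
#   i = pick_instance(n, int(ch("seed"))) + 1
#   return ch("../controller/colorr%d" % i)

print(pick_instance(5, 42))
```

Seeding from a parameter keeps the choice stable per-node while still re-evaluating whenever the controller's count changes.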
Technical Discussion » Tricky parameter shenanigan
Heh, let's say you tried the opmultiparm command, hypothetically, and - hypothetically, of course - you mis-ordered the parameters and now have made a connection you don't want to make. How, hypothetically, do you permanently UN-opmultiparm connect a multiparm?
Technical Discussion » Tricky parameter shenanigan
tamte
that's a different question, I thought your second HDA was referencing a specific one parameter from multiparm (even if random)
for which you can just have an expression in that HDA parameter that pulls the right value from the right multiparm instance based on that HDA settings
Hm, yeah, see this is my problem. The number of colors is not static; it can vary (sometimes the randomizer picks 5 colors from an image, sometimes it picks 9, etc). So my ColorPicker HDA needs to "ask" how many colors are available before it can know how to randomly choose one.
How it does this 'asking' is what I'm struggling with. I can have all my ColorPicker nodes do this with an OnLoaded script in the HDA, but this isn't entirely ideal, and it fails to 'notice' when the Controller node is re-randomized (or, at least, I'm unsure how to create a dependency between the two nodes without directly connecting a parameter).
The other approach I've played with is mirroring the Colors multiparm on the ColorPicker HDA, then connecting the whole multiparm via channel references. This way, the ColorPickers should go along for the ride whenever the Controller is re-randomized, without the need for callbacks (I think). But I wasn't able to figure out how to connect an entire multiparm (the structure of which might change) in Python once, at node creation time, without using a callback.
Technical Discussion » Tricky parameter shenanigan
Cool, thanks! Is there a way to channel reference an entire multiparm from one node to another using python?
I seem to be able to read the multiparm "counter", then iterate through each of the multiparm params themselves to connect them one by one, but I have to have something to "trigger" this process, which is where I'm stuck.
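The iterate-and-connect idea can be sketched as follows (my own illustration; node and parameter names like "controller"/"colorr" are hypothetical, and the hou calls are shown untested in comments so the runnable part stays self-contained):

```python
# Sketch: build the ch() reference targets for each multiparm instance.

def multiparm_ref_paths(src_node_path, parm_name, count, start=1):
    """Relative channel-reference paths for multiparm instances
    start .. start + count - 1 (instance numbering usually starts at 1)."""
    return ["%s/%s%d" % (src_node_path, parm_name, i)
            for i in range(start, start + count)]

# Applying them in Houdini might look like (untested sketch):
#   n = int(ctrl.parm("colors").eval())
#   picker.parm("colors").set(n)  # match the instance count first
#   for parm, ref in zip(instance_parms,
#                        multiparm_ref_paths("../controller", "colorr", n)):
#       parm.setExpression('ch("%s")' % ref,
#                          language=hou.exprLanguage.Hscript)

print(multiparm_ref_paths("../controller", "colorr", 3))
```

The missing "trigger" is the real question: a parameter callback on the controller's Randomize button, or live expressions as suggested later in the thread, are the usual candidates.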
Edited by dhemberg - Feb. 26, 2023 16:22:46
Technical Discussion » Tricky parameter shenanigan
Hi.
I have kind of a tricky setup I'm trying to puzzle through. I have a "global control node" in my scene that's powered predominantly with Python. One of the things it does is pick a random image from a folder, analyze the image, and generate a selection of colors drawn from the image:
This works great in and of itself; I can poke the Randomize button and get a new color palette each time.
Throughout my scene, I would like to be able to refer to these colors, ideally in a nice user-friendly way. Sometimes I want to manually choose one of the colors, other times I want to say "give me a random color from the current 'scene palette'". So, I have a separate HDA I'm trying to build that can do this. Let's call this my ColorPicker node. It also is handled primarily with Python:
Where I'm having trouble is trying to figure out how to connect the two. I can create a callback on the ColorPicker whereby I specify the path to the "Controller" node so that it populates itself internally; but if I re-randomize the controller, all the child colorpicker nodes don't go along for the ride...I have to manually go refresh them all to get them to 'pull' new colors from the controller node.
I reasoned that maybe making a direct connection (via a channel reference or something) might cause the child ColorPicker nodes to update themselves when the Controller node is refreshed, but I'm having trouble figuring out how to do that efficiently...it seems that connecting multiparms programmatically doesn't work quite as simply as channel references, or at least I'm having trouble figuring out how to get this working.
I've also tried managing this with Context Options, but that's messy in a different way (Context Options don't have a Color type, so it's a little messier to see what I'm doing when I try to store a list of colors in a Context Option). My current multiparm on the Controller node is the best solution I've found to visualize a variable list of colors.
Anyway. This is a weird problem; I'm curious if anyone has ideas for how I might do this?
Technical Discussion » Adjusting range of parm templates
- dhemberg
- 207 posts
- Offline
Thank you! If I had a nickel for every time I'm told something I'd like to try is "uncommon" I could probably buy SideFX.
Unfortunately, though, I'm unsure what value to draw from that...it doesn't really dissuade me from needing to do the thing I need to do, which is to dynamically change the UI. Specifically, I am trying to design a colorpicker of sorts. There are no "menu" item options for the color UI widget, so I'm trying to hammer out a way to represent the various colors I want to offer as choices by way of a multiparm, then allow a user a way to choose which color they want to use.
I see there was a comment (now deleted) suggesting running the code on the HDA definition itself, rather than a node instance of the HDA, but I worry that would run into the same issue @ajz3d is describing (updating it in one location would propagate to all created node instances, rendering the operation useless).
The solution proposed by @EJaworenko does seem to be a likely path forward, though I find it kind of confusing (particularly for my future self, who will wonder why it had to be done this way). It seems odd to me that this is what's required.
Anyway, thanks again for humoring all this!
Technical Discussion » Adjusting range of parm templates
- dhemberg
- 207 posts
- Offline
No kidding! This exact code does NOT work for me; maybe some more details are needed here.
My parm "index" is one that I'm creating on an HDA. I have another parm ("set_index") that has a callback on it; this callback is defined in the HDA's Python Module, and is intended to resize my "index" parm when it's adjusted.
My function in the pythonModule exactly matches your code:
def resize_index_slider(**kwargs):
    node = kwargs["node"]
    scene_controller = hou.node(node.evalParm("scene_controller"))
    if not scene_controller:
        raise hou.NodeError("Error: Please provide a Scene Controller")
    num_scene_colors = scene_controller.evalParm("scene_colors")
    group = node.parmTemplateGroup()
    parm_template = group.find("index")
    if parm_template:
        parm_template.setMaxValue(num_scene_colors)
        parm_template.setMaxIsStrict(True)
        group.replace("index", parm_template)
        node.setParmTemplateGroup(group)
Is there some nuance here that I'm misunderstanding? Am I not allowed to do this in a parameter callback? Or, do I need to create the initial "index" parameter dynamically, rather than creating it via the Parameters pane of the HDA UI?
Edited by dhemberg - Feb. 23, 2023 12:36:14