Creating a custom Materialbuilder hda
Furrinator
Hey guys

I'm trying to build my own Material Builder (a Karma Material Builder with some exposed parameters etc.). However, when I turn the customized subnet into an HDA, it shows up in the /mat context but not inside a Material Library in Solaris. I got it to show up inside Karma Material Builders by setting the VopNet Mask to "karma", but I want it to show up on the same level as all the other "materialbuilder" subnets.

How can I achieve this?
arx_anima
Instead of "karma" you can say "subnet" or "genericshader", and this will show it on the same level as the other material builders.
Not sure though if this is the official way; I only know that it works.
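
A quick way to check that the mask change took effect is to try instantiating the HDA inside a Material Library LOP from Python. A minimal sketch, assuming a hypothetical HDA type name ("mystudio::my_materialbuilder"):

    import hou

    # Build a throwaway Material Library LOP and try to create the
    # custom builder inside it. Substitute your own HDA type name.
    stage = hou.node("/stage")
    matlib = stage.createNode("materiallibrary")
    builder = matlib.createNode("mystudio::my_materialbuilder")
    # If the VopNet mask still excludes this context, createNode()
    # should raise hou.OperationFailed instead of printing a path.
    print(builder.path())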
Furrinator
Works perfectly, thanks!
rafal
The Material Library LOP has a "Tab Menu Mask" parameter (inside the Tab Menu collapsible folder) that filters the entries shown in the Tab menu inside it.

You can add your type name to that list. Or, if you name your HDA with the "builder" suffix, the globbing entry '*builder' should already allow it.
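
That parameter can also be extended from Python. A minimal sketch, assuming the parameter's internal name is "tabmenumask" (an assumption based on the UI label, so verify it by hovering over the label) and a hypothetical node path:

    import hou

    # Append a custom HDA type name to the Material Library LOP's
    # tab menu mask; the parm name and node path are assumptions.
    matlib = hou.node("/stage/materiallibrary1")
    mask = matlib.parm("tabmenumask")
    mask.set(mask.eval() + " mystudio::my_materialbuilder")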
Furrinator
Follow up question...

I noticed that when I'm using this shader HDA, Karma XPU falls back to using CPU only. I could track it down to the use of a mix node inside the shader. I know that XPU doesn't support mixing multiple surface shaders, so I'm only using it to mix some color corrects before inputting the result into the shader. I tested the same shader network in a normal Karma Material Builder and it doesn't cause any XPU problems, so I'm really confused why it wouldn't work in my HDA...

Any ideas how to fix that?
GnomeToys
AFAIK the XPU renderer has no such thing as a CPU fallback; it's the same engine on CPU and GPU, otherwise it would be unable to use both at the same time and produce a consistent result.

Karma just doesn't support mtlxmix of more than two surface shaders in a material (as in mtlxStandardSurface inputs).

The manual's Karma XPU page says: "Supports blending of two mtlxStandardSurface nodes via mtlxmix."
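
For reference, the supported pattern looks something like this from Python. A minimal sketch with hypothetical node names inside an existing Material Library LOP (you may still need to switch the mtlxmix to its surface-shader signature in the UI):

    import hou

    # The Material Library path is an assumption.
    matlib = hou.node("/stage/materiallibrary1")

    surf_a = matlib.createNode("mtlxstandard_surface", "surf_a")
    surf_b = matlib.createNode("mtlxstandard_surface", "surf_b")

    # One mtlxmix blending exactly two surface shaders is what the
    # manual says XPU supports; chaining more is where the limit is.
    blend = matlib.createNode("mtlxmix", "blend_surfaces")
    blend.setNamedInput("fg", surf_a, "out")
    blend.setNamedInput("bg", surf_b, "out")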

Mixing inputs to the surface itself is far less complex, since it's just weighted averaging of color / vector / float maps or values being fed into a given channel, and it has no limits except memory-related ones. I have a "parametric chrome" map from AMD's open MaterialX libraries that has 8 or 9 mix nodes chained to varying depths, plugged into different inputs of the mtlxStandardSurface, and it works fine. On a 4090 and a 32-core/64-thread Threadripper it's about 95% GPU / 5% CPU.

I only start hitting high CPU usage under XPU when the scene overflows from VRAM somehow, or in certain cases where the material uses very intensive mixtures of transmission, SSS, coating, and sheen, and has a complicated displacement map where every ray the card's RT cores have to trace is probably maxing out on allowable reflections / refractions. So I'm not sure what's happening in your case. I think trying to mix too many surface shaders might actually spit out an error rather than failing to render the surface texture, but I'm not sure.
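
For what it's worth, that per-channel mix is just a linear interpolation (per the MaterialX spec's mix node). In Python terms:

    # MaterialX mix semantics for value inputs (colors, vectors, floats):
    # a straight lerp between bg and fg, so chaining them is cheap.
    def mtlx_mix(fg, bg, mix_amount):
        return fg * mix_amount + bg * (1.0 - mix_amount)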

I'm not even sure if the two-mixed-surface-shader limit applies to mtlxdisplacement or not; the manual only mentions surfaces, and displacement seems like it would be a simpler blend operation.