Hi there,
I mentioned during our talk [www.sidefx.com] that I would make my slides available "later", since there wasn't really enough time to cover all the technical details on the day. Well, it seems we have reached "later", so here they are.
All videos have been removed to reduce size, but the important bits are still there.
This is the second part of our talk, describing some of the challenges we faced in creating a (minimal) procedural hair and fur generator (which we named "Phur"). It was specifically developed as a plugin for Arnold, but the analysis is generic and can be applied to any target.
Hope it's useful.
Cheers!
Technical Discussion » HIVE23 Slides - Phur
- Mario Marengo
- 941 posts
- Offline
Technical Discussion » [GLSL] Multiple Texture Buffers
- Mario Marengo
- 941 posts
- Offline
I'm trying to see if I can get an array of textures into my glsl fragment shader, but I'm only partially able to do this, so I'm thinking there must be a better way to do it than what I'm trying right now.
Context:
- H19.5
- GLSL-330
Current Approach:
- My Houdini-side shader node (VOP) has a multiparm of image-path string parms, equivalent to an array of images. This gets filled programmatically and so can shrink and grow dynamically. Let's say each image-path entry in the array is named image#.
- The image index that should be used for a given prim is stashed as an int prim attribute that gets picked up by the glsl-geo shader and passed to the glsl-frag shader. All of this works as expected.
- The glsl-frag shader declares a set (finite, but large, say 50 or so) of uniform texture buffers named in the same way as the Houdini-shader multiparm entries: uniform sampler2D image0; uniform sampler2D image1; etc. These appear to be generated (loaded/assigned) correctly by the GL engine, and it all works fine... but only up to some limit...
Problem:
It appears, from testing, that H19.0 has a limit of ~15 such buffers available/generated for the fragment shader, and H19.5 only gets around 10 max.
Is there an alternate/better way to do this so that I don't hit these texture buffer limits?
If the above approach is correct, is there maybe a way to increase these limits? (didn't see an obvious envar via hconfig)
Or maybe I'm going about it the wrong way?
Thanks in advance for any/all help!
Edited by Mario Marengo - July 18, 2023 19:57:50
Technical Discussion » [HDK] CVEX Woes With HtoA
- Mario Marengo
- 941 posts
- Offline
-- SOLVED --
There was a "placement new" operation in the code which used an Arnold allocator. This interfered with Houdini's own allocation strategy and is explicitly warned against in the HDK docs:
TFM
Custom Allocators
Houdini is built with special custom allocator libraries that override the default allocation routines (eg. malloc, free, etc.). As such any code you build or link into HDK plugins cannot use their own such libraries such as tbbmalloc_proxy. If you do, then conflicts and crashes can occur as a result.
Serves me right for being lazy and copy-pasting someone else's code!
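For what it's worth, the problematic pattern looks roughly like this (a generic sketch, not the actual plugin code; third_party_alloc here is a hypothetical stand-in for the Arnold allocator that was being used):

```cpp
#include <cassert>
#include <cstdlib>
#include <new>

struct Payload { int value; Payload(int v) : value(v) {} };

// Stand-in for a renderer-supplied allocator (e.g. Arnold's).
// Inside an HDK plugin, memory obtained this way must NOT be mixed
// with Houdini's overridden malloc/free routines.
void* third_party_alloc(std::size_t n) { return std::malloc(n); }
void  third_party_free(void* p)        { std::free(p); }

Payload* make_payload(int v)
{
    // The "placement new" pattern that caused the trouble: construct
    // an object inside a buffer that came from a foreign allocator.
    void* buf = third_party_alloc(sizeof(Payload));
    return new (buf) Payload(v);
}

void destroy_payload(Payload* p)
{
    p->~Payload();        // explicit destructor call...
    third_party_free(p);  // ...and release via the SAME allocator
}
```

In the plugin the fix was simply to drop the foreign allocator and let a plain new/delete (and therefore Houdini's own allocation strategy) handle it.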
Edited by Mario Marengo - Sept. 11, 2022 13:38:03
Technical Discussion » [HDK] CVEX Woes With HtoA
- Mario Marengo
- 941 posts
- Offline
Hi all,
I've been hitting a runtime (as opposed to compile-time) issue with a custom Arnold plugin (VOP shadeop) that calls a CVEX function.
I'm building against a bunch of HtoA/Hou versions, but the issue pops up in all of them, so for this post's sake I'll pick HtoA-6.1.3.1 (Arnold-7.1.3.0) and Houdini-19.0.657, using gcc-9 and hcustom's default -std=c++17 in this case (built in CentOS-7 using SCL devtoolset-9).
I'm linking against -lHoudiniUT and -lai from the Houdini and HtoA installations respectively. The plugin compiles/links without errors, and ldd -r on the resultant .so reports no missing symbols or any other dependency issues.
The problem happens at runtime, exactly at the point when CVEX_ContextT<VEX_32>.run() is called (I've already verified that the context is valid, one per thread, and that the function is loaded successfully, i.e. no VEX errors before that call). At this point, the plugin crashes with the following output (calling kick directly on an .ass file for this test):

...
00:00:00 224MB | starting 20 bucket workers of size 64x64
...
UT_TBBProxy: WARNING: Failed to bind to TBB - TBBPROXY_CreateTaskArena()
UT_TBBProxy: WARNING: Failed to bind to TBB - TBBPROXY_DestroyTaskArena()
UT_TBBProxy: WARNING: Failed to bind to TBB - TBBPROXY_TaskArenaExecute()
UT_TBBProxy: WARNING: Failed to bind to TBB - TBBPROXY_ScalableAllocationCommand()
UT_TBBProxy: WARNING: Failed to bind to TBB - TBBPROXY_InitializeTaskArena()
kick: symbol lookup error: <houdini19.0.657>/dsolib/libpxr_plug.so: undefined symbol: _ZN3tbb10interface78internal20isolate_within_arenaERNS1_13delegate_baseEl
<crash>

$ c++filt _ZN3tbb10interface78internal20isolate_within_arenaERNS1_13delegate_baseEl
tbb::interface7::internal::isolate_within_arena(tbb::interface7::internal::delegate_base&, long)
$

So it seems TBB-related. Taking a closer look at libHoudiniUT.so shows that it contains the "Failed to bind to TBB" string, though the code that issues the warning is not exposed in the HDK headers. Regardless, I believe the "undefined symbol..." error is the root cause from which all those "failed to bind" warnings cascade.
Looking into the provenance of the signature tbb::interface7::internal::isolate_within_arena(tbb::interface7::internal::delegate_base&, long), which was demangled by c++filt above, I end up at the header <tbb/task_group.h> (in $HDK/include), which only defines it when #if TBB_PREVIEW_ISOLATED_TASK_GROUP && __TBB_TASK_ISOLATION. Okay, so then I took a look at hcustom's compile flags and noticed they include -DTBB_PREVIEW_ISOLATED_TASK_GROUP, so the compile side of that seems to be taken care of (I added both defines in my module, and explicitly via $HCUSTOM_CFLAGS, just in case, but no improvement).
So the only thing left that I can think of to look at is the 3 tbb libraries (from ${HFS}/dsolib) that we're linking against (indirectly, via -lHoudiniUT). But when I check their (demangled) public symbols, the required signature is reported as available:

$ nm -g libtbb.so.2 | c++filt | grep -i isolate_within_arena
00000000000220a0 T tbb::interface7::internal::isolate_within_arena(tbb::interface7::internal::delegate_base&, long)

Just in case, and grasping at straws at this point, I built my own TBB-2019_U9, making sure that TBB_PREVIEW_ISOLATED_TASK_GROUP && __TBB_TASK_ISOLATION were both defined, then linked against that instead of the HDK-supplied tbb, but the runtime "undefined symbol..." error persists <sigh>. Running out of ideas here...
Does any of this ring any bells for anyone here?
Thanks in advance for any pointers!
Edited by Mario Marengo - Sept. 11, 2022 08:21:02
Technical Discussion » Camera Projection and NDC
- Mario Marengo
- 941 posts
- Offline
jason_iversen
I really do wish there was some deeper support for camera transforms within Houdini. We at R+H have obviously written our own formulas like the one above, but a good encapsulated camera definition and functions to transform by them are sorely needed.
Currently you're going to account for Window Size, Crops and Pixel Aspect yourself too; all three of these are commonly used in an FX pipeline for oversized camera projections (to account for lens warping) and such.
Completely agree.
In fact, I would love to not have to look at another NDC matrix ever again.
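To illustrate the kind of formula being discussed, here's a minimal sketch of a perspective projection into a Houdini-style [0,1] NDC window (my own illustration, assuming a symmetric frustum, and deliberately ignoring the window size, crop, and pixel-aspect issues mentioned above):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };

// Project a camera-space point (camera looks down -Z) into a
// [0,1]x[0,1] NDC window, given focal length and horizontal
// aperture (same units) and the image aspect ratio (width/height).
Vec3 toNDC(const Vec3& p, double focal, double aperture, double aspect)
{
    double zoom = focal / aperture;  // Houdini-style "zoom" factor
    Vec3 ndc;
    ndc.x = 0.5 + zoom * (p.x / -p.z);
    ndc.y = 0.5 + zoom * aspect * (p.y / -p.z);
    ndc.z = p.z;                     // keep camera-space depth as-is
    return ndc;
}
```

A point on the camera axis lands at (0.5, 0.5), and a point whose lateral offset equals half the aperture (scaled by depth over focal) lands on the window edge.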
Technical Discussion » pyro2 noise volume vop context
- Mario Marengo
- 941 posts
- Offline
Glad to hear it.
(and yes, they're meant to work in all contexts, though I clearly missed at least one of them :roll: )
Technical Discussion » pyro2 noise volume vop context
- Mario Marengo
- 941 posts
- Offline
Ah!
An oversight. Thanks for catching this.
I'll be submitting a few fixes soon, but if you need it to work “right now” and you're feeling adventurous, you can insert the following in $HH/vex/include/pyro_utils.h at line 660:
float pyro_vopfw_VOP_CTXT (float p) { return 0; }
float pyro_vopfw_VOP_CTXT (vector p) { return 0; }
float pyro_vopfw_VOP_CTXT (vector4 p) { return 0; }
Alternatively, you can replace that file with the one I'm attaching here (which has those 4 extra lines in it).
Cheers.
Houdini Lounge » Axyz Animation Now Hiring
- Mario Marengo
- 941 posts
- Offline
Axyz Animation [axyzfx.com], in beautiful Toronto, Canada, is now looking for a Houdini person experienced with lighting and shading. The position does not require technical knowledge of VEX and writing shaders per se (though some knowledge of VOPs is a plus), but rather an intimate knowledge of Mantra, shading concepts, and preparing scenes for efficient rendering, as well as an excellent eye for lighting, texturing, tone mapping, and generally integrating CG with live elements.
This is a full-time position, starting now.
Candidates should have 2 years experience or more.
Please contact:
John Stollar, General Manager,
js@axyzfx.com
(Please do not respond to this forum).
Thank you.
Houdini Lounge » Creating BDSF for PBR
- Mario Marengo
- 941 posts
- Offline
Wolfwood
From what I gather (pun!) on reading the GGX paper, the BRDF component of the GGX BSDF is the same as the Cook-Torrance except they divide by 4 instead of a factor of PI.
I don't think so. The difference is in the distribution (the “D” term), and all 3 versions are pretty different (as are their shapes). All the D's have a 1/PI term that comes from normalizing to projected solid angle (an extra cos(theta) term in the integral), which the original Cook-Torrance “D” didn't have, so the 1/PI term was missing there. And the 1/4 or 1/(4(w.h)) factor (for the whole BRDF, not the D term) comes from mapping an “h-distribution” (a distribution of halfway vectors about n) to an “o-distribution” (a distribution about the outgoing direction).
…at least that's how I understand it at the moment
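That 1/PI normalization can be sanity-checked numerically: a D term normalized to projected solid angle should integrate to exactly 1 against cos(theta) over the hemisphere. A quick sketch of that check (my own illustration, using the common GGX form of D):

```cpp
#include <cassert>
#include <cmath>

static const double PI = 3.14159265358979323846;

// GGX microfacet distribution D(h), parameterized by cos(theta_h).
// Note the 1/PI from normalizing to projected solid angle.
double ggx_D(double cos_t, double alpha)
{
    double a2 = alpha * alpha;
    double d  = cos_t * cos_t * (a2 - 1.0) + 1.0;
    return a2 / (PI * d * d);
}

// Integrate D(h) * cos(theta_h) over the hemisphere (midpoint rule).
// A properly normalized D should return ~1 for any roughness alpha.
double projected_integral(double alpha, int n = 200000)
{
    double half_pi = PI / 2.0, sum = 0.0;
    for (int i = 0; i < n; ++i) {
        double t = (i + 0.5) * half_pi / n;
        sum += ggx_D(std::cos(t), alpha) * std::cos(t) * std::sin(t);
    }
    return 2.0 * PI * sum * half_pi / n;  // 2*pi from the phi integral
}
```

Dropping the 1/PI (as in the original Cook-Torrance D) makes this integral come out to PI instead of 1, which is exactly the missing-normalization issue described above.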
Houdini Lounge » Creating BDSF for PBR
- Mario Marengo
- 941 posts
- Offline
Well… even though you can't currently define your own bsdf object, you can still define a bunch of (VEX) functions that mimic the features of a built-in bsdf. That is: implement your own versions of sample_bsdf(), eval_bsdf(), albedo(bsdf,<dir>), etc., and then use them in a Multiple Importance Sampling setting, similar to what's happening in pbrpathtrace.h (which you would also need to write yourself, but you can use that header as a guide).
I'm coincidentally in the middle of trying just that (for a Cook-Torrance based model) and it seems to be working pretty well so far (very efficient sampling), though I still have to tame some of the weights in the MIS portion :?
Houdini Lounge » pyro shader gamma?
- Mario Marengo
- 941 posts
- Offline
Hi Jason,
Just in case this is just a viewing problem, let me first mention that if you're comparing the rgb and alpha values in mplay, then you should keep in mind that the viewing gamma (the gamma value you set in mplay) is *not* applied to the alpha channel – just the rgb. So if you set gamma>1 (in mplay) and then flip between rgb and alpha to compare, you might wrongly conclude that their falloff rates are wildly different (they *are* in the viewer – mplay is correcting rgb but not alpha – but not in the actual image).
With that out of the way, I'd say yes, you probably want to be viewing your renders (and adjusting your shading) at gamma=2.2 (or whatever correction mimics the final display device). If you want to check how the volume will composite (i.e: check how its opacity+rgb levels play with background objects), then render a background object as well (const-colored plane, whatever).
Now to the shader…
I'm pretty sure the shader isn't doing any tone mapping or other color-space manipulations. It works in “linear” space to the extent that all the inputs are used directly, without any correction – so I guess a more correct statement would be to say that it *assumes* linear space, like every other shader I know of.
Light intensity arriving at a microvoxel is used directly as reported by Mantra, and attenuated (or “premultiplied” if you like) by the fragment's opacity (1-exp(-k*density*dPdz)) which I believe is the correct composition in a ray-marching context. It also ensures that the fragment's opacity ends up in the expected range (which is guaranteed if ‘k’, ‘density’, and ‘dPdz’ are >=0, and which the shader goes out of its way to enforce).
Radiance, however, has no upper-limit applied to it (the shader only enforces that it be non-negative). This means that the “luminance(RGB)” of the result is allowed to have a higher value than the Mantra-composited opacity (or the Alpha of the final image). This (rgb>alpha) is guaranteed to happen when there is emission (directly from fire, or indirectly from the scattering), or when the sum of intensities of all the lights illuminating a fragment is >1… and there are probably other cases I'm forgetting. However, it's expected, not an error.
So… no, there's no “gamma correction” going on with any of the inputs (tone-mapping of the black body intensities excepted, since that doesn't apply here). At least, not that I remember.
Hope that makes sense.
Cheers.
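A stripped-down sketch of the compositing described above (my own illustration of the 1-exp(-k*density*dPdz) opacity, and of why emission can push rgb above alpha; not the actual shader code):

```cpp
#include <cassert>
#include <cmath>

// Fragment opacity for a ray-march step of thickness dPdz through a
// medium with extinction k and local density (all assumed >= 0, which
// keeps the result in [0,1)).
double fragment_opacity(double k, double density, double dPdz)
{
    return 1.0 - std::exp(-k * density * dPdz);
}

// Premultiplied fragment radiance: incoming light attenuated by the
// fragment's opacity, plus (unclamped) emission. Emission can make
// the result exceed the opacity -- expected, not an error.
double fragment_radiance(double light, double emission,
                         double k, double density, double dPdz)
{
    return light * fragment_opacity(k, density, dPdz) + emission;
}
```

With zero density the fragment is fully transparent; with any emission at all, the radiance exceeds the opacity, which is the rgb>alpha situation described above.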
Technical Discussion » macroses in VEX
- Mario Marengo
- 941 posts
- Offline
morzh
May be it's worth to post that stuff to RFE
Maybe… but I wouldn't hold my breath. The default behaviour of most cpp implementations is to *not* expand macros in pragmas (Microsoft might be an exception though).
Technical Discussion » macroses in VEX
- Mario Marengo
- 941 posts
- Offline
I'm pretty sure vcc doesn't do macro expansion for #pragmas.
So no, I don't think you'll be able to “macrotize” your pragmas.
Technical Discussion » speed up PBR
- Mario Marengo
- 941 posts
- Offline
Almost impossible to help you out by just looking at those images. I agree that H11 shouldn't be more than 3X slower than H10, but in order to find the culprit you're going to need to do a simple version of that scene that you can upload and we can all look at.
Maybe keep all the lighting, a simple box for the overall room, and a grid+box combo for the computer stacks – plus some of the shaders of course.
Technical Discussion » PBR and Shader with Alpha Channel
- Mario Marengo
- 941 posts
- Offline
winkel
but that was the first I've done…. a opacity Map with no results. I took a TGA-file with an Apha Channel - wrong?
Detlef
No; not wrong *if* you used the map's alpha channel to define the opacity output (Of). But you have to actually tell the shader to do that (a Texture VOP won't do it automatically for you, for example).
But setting opacity (Of) – whether from a map's alpha channel or somewhere else – is definitely the right thing to do.
Technical Discussion » op: syntax no longer works?
- Mario Marengo
- 941 posts
- Offline
Unless things have changed, I don't think the “op:” syntax will work in a SHOP parameter – there's no “local” or “remote” distinction going on that I know of. The scene format is always an IFD.
…but I could be wrong.
Technical Discussion » bullet time effect
- Mario Marengo
- 941 posts
- Offline
graham
I'd also most likely go the method of manipulating the simulated geometry in SOPs, but just for something slightly different there is a way you can actually “pause” a simulation.
That's a nice trick to keep in mind.
Thanks Graham.
Technical Discussion » bullet time effect
- Mario Marengo
- 941 posts
- Offline
i9089303
How is it possible to pause a simulation as the time keep running in order to create a bullet-time effect?
The simplest way I can think of is using the TimeShift SOP.
See attached.
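The TimeShift idea boils down to remapping playback frames to simulation frames: the sim keeps its own clock, and you feed it a (possibly frozen) time. A hypothetical sketch of such a remap (the hold interval and frame numbers are made up for illustration):

```cpp
#include <cassert>

// Map a playback frame to a simulation frame, holding the sim frozen
// between hold_start and hold_end (the "bullet time" span), then
// resuming seamlessly afterwards.
double sim_frame(double play_frame, double hold_start, double hold_end)
{
    if (play_frame < hold_start) return play_frame;          // normal
    if (play_frame < hold_end)   return hold_start;          // paused
    return play_frame - (hold_end - hold_start);             // resumed
}
```

In Houdini terms, this is the expression you'd feed the TimeShift SOP's frame parameter while the camera keeps animating on the unmodified timeline.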