Yeah… at the geometry level, what Jason said. But since you mentioned “shader” and “cavity map” I figured you had some map with the cavity amplitudes. No matter what you do, if you want an actual shader (as opposed to some SOP-side point coloring OP), you'll have to pass the cavity information down to the shader somehow, and your only choices are maps or point attributes (and point attributes require a very hi-res mesh). The only other “quick” alternative on the shader side would be a curvature-based shader, but this works on all curvatures, not just the cavities (unless you masked it with another map) – and you'd have the same issue with a small-radius-occlusion approach.
So… since I haven't seen this “cavity map” that you mentioned… what is it exactly? It's not a texture map then?
Houdini Lounge » zbrush cavity shader in houdini?
- Mario Marengo
- 941 posts
- Offline
… real quick sketch (untested!):
#pragma hint uv hidden
surface cavity (
    string bumpmap = "";   // cavity map (floating-point, preferably)
    float  bfloor = 0;     // pixel value for "no bump"
    int    keepbelow = 1;  // whether to use values below/above the floor
    float  strength = 1;   // darkening strength
    vector uv = 0;         // texture uv's (hidden)
)
{
    float Kcavity = 1; // cavity value
    if (bumpmap != "" && strength != 0) {
        vector tuv = isbound("uv") ? uv : set(s, t, 0);
        vector mapval = texture(bumpmap, tuv.x, tuv.y, "filter", "catrom");
        float depth = luminance(mapval) - bfloor;
        depth = keepbelow ? abs(depth) : max(0, depth);
        Kcavity = exp(-5.0 * strength * depth);
    }
    // Do usual shading…
    Cf = diffuse() * Kcavity; // + …
    F = bsdf(diffuse()) * Kcavity; // + …
}
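For anyone who wants to poke at the darkening curve outside of VEX, here's the same falloff as a small Python sketch (the function name is mine, not part of the shader):

```python
import math

def cavity_weight(map_value, bfloor=0.0, keepbelow=True, strength=1.0):
    """Map a cavity-map sample to a darkening factor in (0, 1],
    mirroring the VEX sketch above."""
    depth = map_value - bfloor
    # keepbelow darkens on both sides of the floor; otherwise only
    # values above the floor contribute.
    depth = abs(depth) if keepbelow else max(0.0, depth)
    return math.exp(-5.0 * strength * depth)

print(cavity_weight(0.0))                   # 1.0 (no deviation from the floor)
print(cavity_weight(-0.3, keepbelow=False)) # 1.0 (values below the floor ignored)
```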
Technical Discussion » VEX: 'reset' a pciterate loop
nanocell
pcopen() looks up a bunch of points within the specified radius from a given point. These points are stored in memory (??) and a pciterate can be used to iterate over them. Now, to iterate over them a second time, I would have to close the handle, pcopen() again (which does the whole lookup thing again), store the points in memory, and only then iterate over them again.
I don't know the details of how pcopen() was implemented either, but I'm guessing it just builds a new kd-tree on the same point data. It seems you're thinking about always using a single handle for both iterations, but you shouldn't (at least not if they're nested). Anyway… I had posted some more information in your odForce incarnation of this thread [forums.odforce.net] (sorry, didn't know you'd posted here as well).
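To make the "two handles" point concrete, here's a toy brute-force stand-in for pcopen() in Python (the real one presumably builds a kd-tree, but the handle semantics are the same idea):

```python
def pcopen(points, center, radius):
    """Toy pcopen(): return indices of points within radius of center.
    Brute force -- no kd-tree -- but each call yields its own result set."""
    r2 = radius * radius
    return [i for i, p in enumerate(points)
            if sum((a - b) ** 2 for a, b in zip(p, center)) <= r2]

pts = [(0, 0, 0), (0.5, 0, 0), (2, 0, 0)]
outer = pcopen(pts, (0, 0, 0), 1.0)  # first "handle"
inner = pcopen(pts, (0, 0, 0), 1.0)  # independent second lookup
# Nesting is safe only because each lookup owns its own iteration state.
pairs = [(i, j) for i in outer for j in inner]
```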
Technical Discussion » VEX language description
suz
Haai all!
i was wondering if there exists a full VEX language description,
not only the http://www.sidefx.com/docs/houdini9.5/vex/lang [sidefx.com]
i'm looking for a list of the MACRO's -
in particular the equivalent of MINFILTWIDTH from renderman
and the equivalent of filterwidth() (also renderman)
if anyone could point me in a direction please,
it'll be much appreciated!
These macros are not part of either language (VEX or RSL). They are all user-defined. The ones you mention sound like the stuff from Larry Gritz's filterwidth.h header (from waaaaay back during the Blue Moon days, I think) which have become sort-of standard-ish in the PRMan camp.
Similar macros (similarly named) can be found for VEX in the bundled header $HH/vex/include/voplib.h, which defines VOP_MIN_FILTER_SIZE and the wrappers FILTERSIZE and AREA. Another place where you can find vex versions of those Gritz macros is in the odWiki's “Writing Shaders” section [odforce.net].
Alternatively, you can write your own and improve on all of the above
A (probably incomplete) list of symbolic constants that are part of the VEX language:
__vex is set to 1
__vex_major is set to the major number of the Houdini release
__vex_minor is set to the minor number of the Houdini release
And inside the Vex Builder environment:
VOP_OP is defined for Sop, Cop, Chop, and Pop networks
VOP_SHADING is defined for all shading vop nets
VOP_DISPLACE is defined for the displacement context
VOP_FOG is defined for the volume/fog context
VOP_LIGHT is defined for the light context
VOP_PHOTON is defined for the (now deprecated) photon context
VOP_SHADOW is defined for the shadow context
VOP_SURFACE is defined for the surface context
… and I'm probably missing quite a few
Technical Discussion » point clouds and gather
jacob clark
Thanks Mario. I got it working based on a file I found on point cloud occlusion.
Though it seems that when I run the setup my refraction is actually darker than the original image I trace.
I've attached a pic of my surface with a grid behind it. The grid is a pretty bright white, while the refraction seems rather dark. Attached is the file used to generate this pic.
Any help with this is much appreciated, thanks!
-j
Hey Jason,
Digging through vop spaghetti is like getting a root canal… or two.
Anyway, I managed to spot a couple of things:
* Bias was way too big. Changed it to 0.005.
* For a 1-degree cone angle, 40 samples is way overkill, I changed it to 4.
* For an IOR of 1, you don't need to call fresnel(), just use the incidence direction.
* The incidence direction on a PC point is not the global “I”. This global is only valid for the shade point. For a PC point, the incidence is “PC.P - Eye”, where PC.P is the PC point's position transformed to current space (or camera space).
* Fresnel was giving you a Kt that was less than 1, even though your IOR was 1. Dunno what that's all about (bug?), but that's what was causing it to go dark.
* For the filtering, I'd recommend either working with the radius (in object-space units) or the number of points, but not both. So I used the radius (0.2) and set the number of points to a maximum of 1e+6.
* Maximum distance of -1 means “search infinitely far”, which is probably a better value than 30, since you might get flicker during animation.
I made those changes (though there are probably other issues I missed, but vops defeated me), and removed fresnel altogether. Seems stable now… I think.
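On the incidence direction specifically, here's a tiny Python check of the "PC.P - Eye" point (assuming both positions are already in the same space):

```python
import math

def pc_incidence(pc_p, eye):
    """Incidence direction for a point-cloud point: eye to point,
    normalized. The global I is only valid at the shade point itself."""
    d = [p - e for p, e in zip(pc_p, eye)]
    n = math.sqrt(sum(c * c for c in d))
    return [c / n for c in d]

# A camera at (0,0,5) looking at the origin sees incidence (0,0,-1).
print(pc_incidence((0, 0, 0), (0, 0, 5)))  # [0.0, 0.0, -1.0]
```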
Edited by - Oct. 7, 2008 13:17:59
Technical Discussion » point clouds and gather
jacob clark
Does anyone know of an example of the gather loop being used with Point Clouds? I find myself stumbling over the concept of the two.
I don't know of any specific examples for this, but gather() just takes a direction and a cone angle, so I don't think there should be any problem calling it for each point in a point cloud. The only thing I would caution against is using derivatives, as I'm not sure if they're reliable for PC points.
Houdini Lounge » multiparm "#" symbol? how it works?
tamte
please, anyone?
can you at least tell me if it should work that way?
can this be considered as a bug?
or am I doing it wrong?
It sure looks like a bug to me (please report it).
As a possible workaround in the meantime, you could try an inline function instead of the `ifs()` expression… something like:
`{ id = #; string out = ""; if(id) out = opsubpath("./obj"+id); return out; }`
Although it looks to me like that call to opsubpath() will always give you what you already know, namely: “obj#”…. and if that's true, then you could just turn it into:
`{ id = #; string out = ""; if(id) out = "obj"+id; return out; }`
But maybe there's more going on that I'm missing.
HTH.
Technical Discussion » how to rotate() around custom axis?
edward
Well, if you set up the TransformAxis SOP's parameters correctly, then you can just turn off Recompute Point Normals to rotate N.
How would you set that up?
As I understood it, he wants to rotate a point attribute (vector “v”) of the current geometry (“.”) about an axis that results from the cross product of two point vector attributes (“N” and “v”) which come from two separate external sources (“../a” and “../b”).
Doesn't TransformAxis operate on the whole geometry?
P.S: Sorry ykcosmo. I should have said “houdini expression language” (HEL? ) instead of “hscript”. We're talking about the same thing.
Technical Discussion » how to rotate() around custom axis?
ykcosmo
I want to rotate my v around the axis which is the cross product of my N and v. I tried rotate(35, cross(points("../a", $PT, "N"), points("../b", $PT, "v"))), and it shut down my houdini at once….
Judging by your syntax, I'll assume you're talking about hscript and not VEX.
The function points() returns a string, and is meant for fetching string attributes, not numerical ones – you're clearly looking for a number not a string, so you'd need to use point() instead.
The function rotate() returns a matrix, and is meant to post-multiply your “v” in order to transform it. However, it can only rotate about three predefined axes: the basis vectors “x”, “y”, and “z”, which is not what you want in this case. To rotate about an arbitrary axis, you want the function rotaxis().
Putting it all together, and assuming the vector to be rotated (“v”) is locally available as $VX, $VY, $VZ, you'd end up with something like this:
{
    float ax = point("../a", $PT, "N", 0);
    float ay = point("../a", $PT, "N", 1);
    float az = point("../a", $PT, "N", 2);
    vector a = vector3(ax, ay, az);
    float bx = point("../b", $PT, "v", 0);
    float by = point("../b", $PT, "v", 1);
    float bz = point("../b", $PT, "v", 2);
    vector b = vector3(bx, by, bz);
    vector v = vector3($VX, $VY, $VZ);
    float angle = 35; # <- some expression for angle goes here
    return (v * rotaxis(angle, cross(a, b)))[0];
}
The array index at the end (“[0]”) extracts element zero of the result (i.e: “x”); you'd change it to [1] or [2] to extract y and z respectively, or remove it altogether if you want the whole vector returned.
… and you can write it all in a single line if you wish. I split it up for clarity, not because you have to.
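If you want to sanity-check the rotation outside Houdini, here's Rodrigues' formula in Python, which is (up to handedness convention) what rotaxis() computes:

```python
import math

def rotaxis(angle_deg, axis):
    """Return a function rotating vectors about an arbitrary axis via
    Rodrigues' formula: v*cos(t) + (k x v)*sin(t) + k*(k.v)*(1-cos(t))."""
    x, y, z = axis
    n = math.sqrt(x * x + y * y + z * z)
    x, y, z = x / n, y / n, z / n  # normalize the axis
    c = math.cos(math.radians(angle_deg))
    s = math.sin(math.radians(angle_deg))
    def rot(v):
        vx, vy, vz = v
        dot = x * vx + y * vy + z * vz
        cx = (y * vz - z * vy, z * vx - x * vz, x * vy - y * vx)  # k x v
        return tuple(c * vi + s * ci + (1 - c) * dot * ai
                     for vi, ci, ai in zip(v, cx, (x, y, z)))
    return rot

# A 90-degree rotation of the x-axis about the z-axis gives the y-axis.
r = rotaxis(90, (0, 0, 1))
print([round(c, 6) for c in r((1, 0, 0))])  # [0.0, 1.0, 0.0]
```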
HTH.
Houdini Lounge » isolate texture on surface
Hey Rob,
Looks almost right… but you kind'a mangled the inputs to the smooth VOP.
The output of multiply1 goes into the “amount” input of the smooth vop, and “roloff” gets no input, so disconnect the output from stripes2 from it. You can replace “constantB” and “subtract1” with a ComplementVOP, and lose “floattovec1”.
With those changes, the output of smooth1 is just a weight – it's meant to multiply the texture pattern, whatever that might be (in this case it's the output of stripes2, but it could just as easily be the output of a TextureVOP or anything else).
I'm attaching a modified version of your hip file with the changes.
HTH.
Houdini Lounge » isolate texture on surface
Assuming the fadeout is happening in the u dimension (also typically inside the [0,1] range), then you could try:
float weight = 1.0-smooth(ws,ws+fs,2*abs(u-center));
where:
center => center of window, float, range [0,1], default 0.5
ws => window size, float, range [0,1], default 0.5
fs => fade size, float, range [0,1], default 0.5
Those defaults would give you a 0.5-wide window centered at u=0.5 with the two sides fading out over a 0.25 range… filling out the [0,1] interval in u.
… I think…
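A quick numeric check of that weight in Python (using a plain cubic smoothstep as a stand-in for VEX's smooth()):

```python
def smoothstep(a, b, x):
    """Cubic ease from 0 at x<=a to 1 at x>=b -- stand-in for VEX smooth()."""
    t = min(max((x - a) / (b - a), 0.0), 1.0)
    return t * t * (3.0 - 2.0 * t)

def window_weight(u, center=0.5, ws=0.5, fs=0.5):
    """weight = 1 - smooth(ws, ws+fs, 2*|u - center|)"""
    return 1.0 - smoothstep(ws, ws + fs, 2.0 * abs(u - center))

print(window_weight(0.5))  # 1.0 (center of the window)
print(window_weight(0.0))  # 0.0 (fully faded at the edge)
```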
Houdini Lounge » Rendering biological iridescences with RGB-based renderer
Hey Andrew,
It would seem that the preprint version of that paper (which was available for free for a very long time) has been taken down. It's now only available through the ACM digital library. You could purchase the final version from the library if you wish (as far as I can tell, both versions are identical).
I hesitate to post the preprint here as I'm unsure of the copyright issues (even though that version was available for free for.. oh… a year at least).
Technical Discussion » energy conserving shader
anamous
fresnel weighting is definitely better than simply combining the two components - see attached sample.
Agreed – that's kind'a what I use too. Unfortunately it doesn't answer your initial question: is it energy conserving? – my guess is that it's probably close to being energy conserving, but not guaranteed.
anamous
also, what exactly happens when I sum two BSDFs? is the result of each calculation added or something else? interesting to know
Yeah, the results of the two distributions get added – at least that's how I think of it.
anamous
BTW your spectral Mantra talk is fantastic, thanks for sharing, very much appreciated!
Thanks!
Houdini Lounge » Implementing Spectral Colors in Mantra Now Posted
Thanks for the kind words, everyone.
I just now gathered enough courage to listen to it myself.
It didn't turn out as bad as I thought, given that the mic kept falling off every time I moved… the fun of live content
Next up is Wolfwood, who's been hallucinating some pretty cool stuff with POPs and VEX.
May the mic be with you, brother
Cheers!
Technical Discussion » energy conserving shader
anamous
I was thinking that since most reflecting materials utilize a fresnel coeff, maybe I could just primitively use that as a multiplier for the different components. so if there's a base lambert and a specular reflection on top, I would use kr as multiplier for the specular and (1-kr) as multiplier for the Lambert BSDF.
Ahhhhh…. my friend Fresnel…
Yeah. Things definitely get more complicated when Fresnel enters the picture. To quote Ashikhmin: “Another, perhaps more important limitation of the Lambertian diffuse term is that it must be set to zero to ensure energy conservation in the presence of a Fresnel-weighted term.”
To balance the two he uses the relationship:
diffusebrdf = C * Rd*(1-R(k1))*(1-R(k2))
where R(k) is the total hemispherical reflectance of the specular term in both the incident (k1) and outgoing (k2) directions, Rd is the diffuse albedo of the surface, and C is a normalization constant computed such that for Rd=1 the total incident and reflected energies are the same.
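To make that concrete — and going from my recollection of the Ashikhmin-Shirley paper, not anything in this thread — their version uses Schlick-style falloffs for the (1-R(k)) factors, with the normalization working out to C = 28/(23*pi):

```python
import math

def as_diffuse(rd, rs, cos_k1, cos_k2):
    """Ashikhmin-Shirley-style Fresnel-aware diffuse term (hedged
    recollection): C * Rd * (1 - R(k1)) * (1 - R(k2)) with Schlick-style
    angular falloffs and C = 28/(23*pi)."""
    one_minus_R = lambda c: 1.0 - (1.0 - c / 2.0) ** 5
    return (28.0 * rd / (23.0 * math.pi)) * (1.0 - rs) \
           * one_minus_R(cos_k1) * one_minus_R(cos_k2)

# A mirror-like specular term (rs -> 1) drives the diffuse term to
# zero -- exactly the limitation the quote describes.
print(as_diffuse(1.0, 1.0, 1.0, 1.0))  # 0.0
```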
@Andrew: would you say albedo(bsdf) returns the “total hemispherical reflectance” of the given bsdf?
Anyway, he then goes on to compute C using the particular flavour of phong and fresnel that he's writing about. In our case, using the built-in fresnel, I guess weighing diffuse by (1-kr) – or just kt – is probably not a bad approximation, though I have no idea how successfully it may or may not conserve energy.
Transmission, I think, shouldn't be forced to share weights with lambert's (1-kr) because it's from a different source – true, it is also weighted by kt = (1-kr), but its contribution is additional to whatever is going on at the surface with phong/lambert.
For sss, at least the portion of it that is not directly reflected at P but comes from a far field, again I think shouldn't be forced to share lambert's (1-kr) portion, for the same reasons as transmission. In reality, sss *should* take care of both what we consider to be lambert diffuse as well as the subsurface component (using its own set of fresnel terms along the way, actually). So, in a perfect world, I see sss as replacing lambert, not adding to it… but some of the models may have limitations that require a tiny bit of lambert to be added to the mix.
In short, when Fresnel comes into the picture, I personally have no clear idea of the “correct” way of doing things. I've tried several different things over time, and some of them look pretty convincing, but I'm in no way qualified to say whether any of it is physically valid, or even plausible. Fresnel and I have been fighting for a while now… and I always lose! :shock:
Cheers.
Technical Discussion » energy conserving shader
Hey anamous,
anamous
this means that the kx constants that are used as multipliers in your code snippet would have to be variant
Hmmm…. you lost me a little bit with the “variant weights” comment. As I understand it, energy conservation means that the bsdf shouldn't distribute any more or less energy than the incident amount (well, possibly less if there is absorption, and possibly more if you include interference, I think), but over the entire scattering volume, not in a particular viewing direction (i.e: energy distribution/conservation is completely defined by the bsdf, and only sampled in a particular viewing direction). And I guess what I'm saying is that I think that the built-in bsdf's are already taking care of that. So, as I see it, the energy conservation problem is only a result of applying more than one bsdf to the same incident energy: so adding 3 bsdf's, for example, will result in 3X the energy output, but a constant (not varying) set of scaling factors is all that's needed to restore it to unity, no? (assuming each bsdf itself is conserving, which again, I think they are).
anamous
and depending on a procedure (get direct reflection, get direct transmission, get scattering, and finally get indirect reflection) and using them in that order, each dictating the multiplier for the next one. the most important factor is the amount of incoming energy that these components get to share. so if the surface is a full mirror, the direct reflection component uses almost all of the incoming energy, leaving little to nothing as multipliers for the other components.
I think that direct transmission and direct reflection would have come from different sources to begin with, and so wouldn't be distributing the same energy, so I don't see why they should have to share any limits on their outputs (outside of those calculated by their respective brdf/btdf's, that is). Ditto for their indirect cousins, including scattering. Again, I think that it is only when you're applying more than one bsdf to the same energy source that you need to worry about wrecking conservation (like when adding lambert and phong, which are essentially operating on the same incident energy). And then it is only a problem because we're splitting what should really be a single brdf into separate components (diffuse and phong).
I get the feeling that what you want to do is have a shader that smoothly goes all the way from flat diffuse to a perfect mirror in a physically plausible way. Let's talk about just reflectance for now (leave out transmission and scattering and whatever else). I think that for something like that to work you'd first need to restrict the user parameterization to a single surface quality like, say, “glossiness” with e.g, an internal mapping of glossiness to bsdf and all the “correct” or “expected” (as well as energy-preserving) mixtures in between.
In that case you could use a single unified brdf… but we can't write our own, as you well know… so we have to balance up to three bsdf's (which are the result of redistributing the same input energy) in such a way that the final scaling is still unity (for the combined scattering volumes not for a particular viewing direction).
Maybe you could try using lambert as the controlling bsdf and fill the energy gap with one of the specular models – all the way to perfect mirror (i.e: specular()) at glossiness=1. So, again assuming each built-in bsdf is already energy-conserving, the bsdf weights, based on a single user input of “glossiness”, might go something like:
Glossiness(g) | Lambert | Phong   | PhongExponent  | Specular
--------------|---------|---------|----------------|---------
0 to 1        | 1-g     | g*(g<1) | 400*exp(6*g-6) | (g>=1)
And of course I'm just guessing at the exponent: replace 400 for whatever you like as the smallest dot size, and replace 6 to change the rate at which it “shrinks”.
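As a quick check that the table's weights behave, here's the mapping in Python (same guessed exponent, so treat the numbers as placeholders):

```python
import math

def glossy_weights(g):
    """Weights from the table above: blend lambert() -> phong() ->
    specular() from a single 'glossiness' control g in [0, 1]."""
    lam = 1.0 - g
    pho = g if g < 1.0 else 0.0
    spec = 1.0 if g >= 1.0 else 0.0
    exponent = 400.0 * math.exp(6.0 * g - 6.0)  # guessed dot-size curve
    return lam, pho, spec, exponent

for g in (0.0, 0.5, 1.0):
    lam, pho, spec, expo = glossy_weights(g)
    print(g, lam, pho, spec, round(expo, 2))
```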
All other possibilities that come to mind right now for doing something more “correct” would involve writing a custom brdf (like an ashikhmin that includes the diffuse part, or his D-BRDF thing which is also pretty cool, etc), so yeah… the above hack is all I can think of at the moment (for the reflectance case anyway)
Andrew? Mark?
anamous
BTW, how was your spectral mantra talk? :wink:
Heh. I think it went well, thanks. At the very least, the slides might be useful to some people… I hope – SESI will be posting the stuff up sometime soon, I think.
Cheers!
Technical Discussion » energy conserving shader
anamous
How would an energy conserving shader be approached in vex/vops etc? I have an idea involving using the reflected light vop to get the amount of incoming light then doing some math to distribute that amount among the shading components (diffuse, reflection, transmission, perhaps sss).
Hey anamous,
I think the bsdf's are already energy-conserving (on their own, that is). So it may be just a matter of distributing the weights among all the bsdf's you end up using in your composition (again, assuming that each bsdf, on its own, is energy-conserving). Here's a sketch of what I mean:
surface blah ( float k1=1,k2=1,k3=1; ) {
bsdf F1 = specular() * k1;
bsdf F2 = phong() * k2;
bsdf F3 = lambert() * k3;
F = (F1+F2+F3) / (k1+k2+k3);
}
Is that kind'a what you're after?
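The division by (k1+k2+k3) is the whole trick — numerically it just renormalizes the weights. A scalar Python analogue:

```python
def combine(weights):
    """Scalar analogue of F = (F1 + F2 + F3) / (k1 + k2 + k3):
    renormalize the per-bsdf weights so they sum to unity."""
    total = sum(weights)
    return [k / total for k in weights]

print(combine([1.0, 1.0, 2.0]))  # [0.25, 0.25, 0.5]
```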
Edited by - May 2, 2008 02:57:11
Houdini Lounge » Implementing Spectral Colors in Mantra Lecture!
Houdini Lounge » Physically Based Rendering book
Well, maybe you should get it then, if for no other reason than that it does a great job at explaining all the concepts very clearly. Also, from the point of view of a shader writer, the more technical details are priceless.
I'm no expert on the subject, but at least conceptually, the thing itself is not complicated: PBR is an attempt at modelling how light interacts with objects in a way that is physically plausible. So instead of talking about a “diffuse and specular color”, as in the standard local illumination model for example, it describes emission, reflectance, and transmission as distribution functions (of probability or power) over a (hemi)sphere. It then uses these distributions to choose in which directions to sample the scene as efficiently as possible.
In Mantra's implementation (as well as in pbrt), all the distribution functions for the different aspects of light scattering (reflectance, transmission, etc) are bundled into a single object: a “bidirectional scattering distribution function”, or “BSDF”. That's what the F slot in VOPs (and the global var ‘F’ in the shading context) stands for: it describes the shape of these distributions.
So yes, you can plug in the F output from some lighting model VOP to the F input of any other VOP (and ultimately to the “F” input of the Output VOP). There are some basic arithmetic operations that can be applied to the bsdf data type as well (the data type of that “F”), like addition, multiplication by a scalar, etc, which allows you to manipulate them somewhat.
Currently though, you can only select among a set of pre-defined distributions (F's), you don't write your own in your shaders.
For more details on the VEX/VOP interface to Mantra's PBR implementation, you can start by looking here:
1. Physically Based Rendering [sidefx.com]
2. Writing VEX Shaders For Physically Based Rendering [sidefx.com]
HTH.
Houdini Lounge » Physically Based Rendering book
That is one of my all-time favorite rendering books, but it's all about the technology and algorithms behind PBR. It will help you understand how a PBR renderer might be implemented (with wonderful clarity and in minute detail), but I'm afraid it will not help you much with *using* Mantra's implementation.
So, if you're interested in the gritty details of how to write a PBR renderer, then I highly recommend it. Otherwise I'd say don't waste your money.
As an aside: pbrt now has a fork called “luxrender” which aims to make pbrt a more production-ready renderer.
HTH.