Well; I managed to escape Harry Potter long enough to have some time to look at this bssrdf thing a little more closely.
Here's a first stab at the single-scattering term (in VEX):
WARNING: This is *NOT* a shader (yet) – so please don't download these files expecting a full subsurface-scattering solution. I'm posting these to start a discussion on some of the issues involved in implementing something like this in VEX. And in the end, eventually, yes: maybe even a shader.
At the moment, these files represent only one half of one function (one term in an equation) that, when coupled with the other half, will form a type of illumination model (like Diffuse, Lambert, Oren-Nayar, and so on).
The good news: it works!
The bad news: well… there really isn't any *bad* news, but there are a bunch of issues, as with anything meaty like this.
Tech Notes

Maps:
Among the key elements in Hery's approach are shadow maps. In the prman implementation, a shadow map is used to extract both the depth of the projected sample point and the percent occlusion for the light. This, as it turns out, is not possible in VEX. The shadowmap() functions return only occlusion, and the texture() function only RGB or RGBA (according to the docs). That leaves only the depthmap() function for accessing depth, which requires a “mantra -Z” pass – an exclusive pass that generates nothing *but* z-depth: you can't, e.g., generate the z-depth *and* the normal map *and* the shadow map for the light (if you choose not to use traced shadows) all in one pass.
This means that the map-making side of the pipeline becomes:
Pass 1: z-map with light as camera.
Pass 2: N-map plus shadow map with light as camera.
What I ended up doing instead is changing the singlescatter() function so it uses a map with a special format: the normal (N) is expected to be stashed in channels R, G, and B, and the depth in alpha. This means you can generate all three maps (shadow, z-depth, and normal) in one pass. It also means the pass has to be generated with “-b float”, since both z and N should be floating-point values. For this pass you can probably also get away with setting the filter to “box 1 1” and super-sampling to 1x1, which makes it really fast.
The bad news is that you need a different shader active for all objects (or a different set of objects) for this pass to work. Here is a simple test shader [members.rogers.com] that puts N and z in the right slots.
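For reference, here's a minimal sketch of what such a baking shader could look like – this is my own guess, not the linked code, and both the choice of space and the distance-vs-z depth convention are assumptions:

```
// Sketch of a "baking" surface shader that stashes the normal in R,G,B
// and the depth in alpha. Render with "-b float" so nothing gets
// clamped or quantized. Space name and depth convention are guesses.
surface
bake_Nz()
{
    // Normal in the rendering camera's space (the light, for this pass)
    vector nn = normalize(ntransform("space:camera", N));

    // Depth as distance from the camera/light to the shaded point
    float  z  = length(ptransform("space:camera", P));

    Cf = nn;    // N -> R,G,B
    Of = 1;     // fully opaque
    Af = z;     // z -> alpha
}
```

The fact that mantra's filtering would otherwise smear these values together is exactly why the “box 1 1” filter and 1x1 super-sampling settings mentioned above are a good idea.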
Spaces:
I ended up writing it backwards: from the singlescatter() function back to VEX/Houdini, simply assuming that things like xform matrices would be dealt with by the caller and passed in properly… well, it was a nice dream while it lasted.
The result was that doing the singlescatter() translation took about 1/10th the total time, and the other 9/10ths were spent hallucinating ways to extract the necessary bits from VEX :shock:
The function needs two matrices: one to go from “current” to “light” space, and one to go from “current” (through “light”) to the light's “NDC” space. In PRMan, both of these are stored with the shadow map, so it's a no-brainer. In VEX, AFAIK, they're only accessible at run-time, and only from within the light context… except they aren't, really… at least not in the form of matrices.
You can xform a vector from/to one of these spaces, for example, but you can't (again, AFAIK) extract the actual matrices.
Long story short: I ended up building them by hand from inside the light context.
The current-to-light construction assumes the xform will always be affine/linear, which should be a safe assumption (I hope). And light-to-NDC is not a matrix at all! It's just the scaling factors (minus the 0.5 offset that puts (0,0) at the bottom-left corner) for x and y, based on a light-to-NDC projection of the (light-space) point (1,1,1). This means the client has to do the z-divide and add back the offset to finally get NDC.
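To make the two pieces concrete, here's a rough sketch of what I mean – the function names are made up for illustration, and whether the two-space ptransform() overload behaves this way from the light context is itself part of the problem being discussed:

```
// 1) Build the current-to-light matrix by hand, transforming a frame
//    point-by-point -- valid only because the xform is affine/linear.
matrix
current_to_light()
{
    vector o = ptransform("space:current", "space:light", {0,0,0});
    vector x = ptransform("space:current", "space:light", {1,0,0}) - o;
    vector y = ptransform("space:current", "space:light", {0,1,0}) - o;
    vector z = ptransform("space:current", "space:light", {0,0,1}) - o;
    return set(x.x, x.y, x.z, 0,
               y.x, y.y, y.z, 0,
               z.x, z.y, z.z, 0,
               o.x, o.y, o.z, 1);
}

// 2) Client side: given the x/y scale factors extracted in the light
//    context, finish the job -- the z-divide plus the 0.5 offset.
vector
light_to_ndc(vector plight; vector scale)
{
    float u = (plight.x / plight.z) * scale.x + 0.5;
    float v = (plight.y / plight.z) * scale.y + 0.5;
    return set(u, v, plight.z);
}
```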
It is messy. I don't like it… :evil:
I got tired of thinking about it after a while, but if someone can think of a way to generate the NDC transform as a *matrix* from inside the light context (i.e.: a way to get at the near, far, left, right, top, etc.), I'm all ears.
Oh, I also snuck a much-needed bias parameter in there… we're dealing with maps, after all. Here's a small shader [members.rogers.com] to test-drive this side of things.
Sampling:
Hery pushes the samples away from the refracted outgoing ray “To” (and into the volume projected by the shading area) using a Poisson lookup table. There are two problems with this: one conceptual, and one practical:
1. It's hard (for me at least) to infer *exactly* how the Poisson distribution was intended to be used. My gut tells me it's meant to be centered at 0.5, so the majority of the samples lie close to the central axis of the volume – i.e.: lambda=0.5, with a spread of …
But it could also be interpreted as an amplitude distribution based on sample number – earlier samples get jittered more than later samples… although that wouldn't make much sense, would it?
2. Regardless of what the correct interpretation may be, VEX does not support arrays (and you can't do it with the pre-processor; I don't know what I was thinking when I suggested that, lol), so the possible implementations in VEX are quite limited:
A) Do it as a DSO shadeop. The most sensible solution (and the one I'll adopt), but unfortunately non-portable: platform/compiler issues, not to mention the fact that you'd need the HDK to port it to Windoze… but if others chip in, then…
B) Write it as a VEX function and call it from inside a surface shader. Apply the shader to a grid that fills the frame, render a square, point-sampled image, and use *it* as the lookup table. You'd need to pass it to the singlescatter() function, which would then use a point-sampled texture() call to get at the Poisson distribution, indexed as u=(i%resx)/resx and v=(i%resy)/resy.
C) Use a uniform distribution until SESI adds support for arrays in VEX.
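Option (C), for what it's worth, is basically one-liner territory. Something along these lines – the loop structure and names are made up for illustration, and how the real singlescatter() maps (u, v) into the sample volume may well differ:

```
// Sketch of option (C): uniform jittered offsets in place of the
// Poisson lookup table. "nsamples" and the unit-square mapping are
// placeholders, not the actual singlescatter() parameters.
float u, v;
int   i;
for (i = 0; i < nsamples; i++)
{
    u = nrandom();      // uniform in [0,1)
    v = nrandom();
    // ... push the i-th sample point away from "To" using (u, v) ...
}
```

It converges slower than a good Poisson set would, but it's bias-free and needs no table at all.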
For this test, I've left it with a uniform distribution, but I also sketched in some of the necessary bits for a possible DSO implementation [members.rogers.com]. The static lookup-table-like output from such a function would look something like this [members.rogers.com].
The Single Scattering Term:
I wrote the translation with both the Hery slides and Jensen's SIGGRAPH paper side by side, checking every line of one against the other. As you can imagine, *lots* of little details came up, but I've included copious amounts of comments in the code [members.rogers.com], so I won't repeat them here. I basically dissected it line by line, trying to explain each and every step.
Anyway… fun stuff!
Ideas, suggestions, corrections, flames… all welcome.
Edit:
Here's a little shader [members.rogers.com] to drive the singlescatter() function.