Another subsurface scattering method

Member
511 posts
In response to a question on the 3DWorld contest forum; there's also a question from me at the end.

Here's how it works.

point2:
All this does is aim the normals towards a light, using the expression point("../KeyLight", 0, "P", 0) - $TX.
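The aiming itself is just vector math. Here's a minimal Python sketch of the idea (plain lists instead of the `hou` API; `light_pos` stands in for the KeyLight's position):

```python
def aim_normal_at_light(P, light_pos):
    """Unit vector pointing from the surface point P toward the light.

    The expression itself just computes light position minus point position
    per component; the Point SOP writes that into the normal.  Normalized
    here for clarity -- the Ray SOP only cares about the direction.
    """
    d = [l - p for l, p in zip(light_pos, P)]
    length = sum(c * c for c in d) ** 0.5
    return [c / length for c in d]

# A point at the origin with a light 5 units up +Y gets a normal of (0, 1, 0).
print(aim_normal_at_light([0.0, 0.0, 0.0], [0.0, 5.0, 0.0]))  # [0.0, 1.0, 0.0]
```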

Ray1:
Using the modified normals, the “Ray1” SOP shoots rays towards the lights. The “intersect farthest surface” option is necessary because it stops shadowed rays from reaching the light, so you get self-shadowing.

The Ray SOP's second input is the geometry that gets used to test for ray collision. We can manipulate this to have more control over how the shadow will look. “facet2” returns the normals to what they originally were, and “ShadowBias” (it's a push-deform operator) uses this to push the surface in… it has the effect of shrinking the size of the shadow.

The “Point Intersection Distance” option puts the distance that the ray travelled into the “dist” attribute (middle-click on the Ray SOP to verify its presence). This is very important because we will use it to control how far light can penetrate into the surface.

Throughout all this I had a point sop that converted the “dist” attribute into color, so I could see what I was doing.


Facet:
This re-creates the normals as they were before we modified them for the Ray Sop.

sssController:
This node exposes some controls that I set up inside its associated VOP network. It is also used to bring in the light position; you will see the expressions that do this if you click on the poorly named “Parameter” label.

Jump inside the vop network sssController1:
Essentially what we are doing here is processing the dist attribute with simple Lambert shading to achieve something that looks somewhat like sss.

I compute the Lambert shading by importing the light position through “parameter2” (this is the thing that we connected the light position to earlier) and calculating the dot product with the surface normal.

The resulting float value is converted to Vector and multiplied with the “dist” attribute (this is imported via the “Parameter1” node). The result of this also gets multiplied with a “Color” parameter, and this is what gets exposed in the Sop network as the “Diff Intensity” control.
At this point we have simple Lambert Shading with realtime raytraced shadows in OpenGL.
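As a rough Python sketch of what the network computes per point so far (the clamp of the dot product at zero is my assumption; the attribute and parameter names follow the post):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def diffuse_term(N, L, dist, diff_intensity):
    """Per-point diffuse value: Lambert term (N . L) scaled by the ray-travel
    distance attribute and the exposed "Diff Intensity" color.  N and L are
    assumed to be unit vectors."""
    lambert = max(dot(N, L), 0.0)  # back-facing points go to black
    return [lambert * dist * c for c in diff_intensity]
```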

For the sss part, we manipulate the “dist” attribute with a Shift Range node. The “shift1” node has a parameter connected to it so that we can control how far light can penetrate the surface; it gets exposed in the SOP network as the “depth” slider. The result is multiplied with a color parameter that again gets exposed, as “sss intensity”, and finally we add this to the Lambert shading at the end of the network. The color that this process computes gets passed back up to the SOP network and applied to our model.
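In Python terms, my guess at what the shift-range step boils down to (the exact remap in the hip file may differ):

```python
def depth_falloff(dist, depth):
    """Remap the ray-travel distance so points at the surface get full
    scattered intensity and anything deeper than the "depth" slider gets
    none.  An assumption about the Shift Range mapping, not a copy of it."""
    if depth <= 0.0:
        return 0.0
    t = 1.0 - dist / depth  # 1 at the surface, 0 at the depth limit
    return min(max(t, 0.0), 1.0)
```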

The result of all this looks like the simple sss shading that I used to get with Lightwave's G2 shader plugin or older versions of project-messiah's rendering engine. In other words, light is shadowed and can penetrate the surface, but it doesn't get bounced around or scattered in any way.

Which brings me to the last part of the process:
A long time ago I read an article on how they did the skin shading for the Matrix movies, so I set about experimenting in Lightwave to do the same. Basically, what they did was bake the diffuse lighting onto a texture map, blur it, and apply it back to the model. By blurring the RGB channels differently you could even fake how light scatters more or less depending on its wavelength. The result looks awesome, but it's very impractical to bake a texture for every frame, it's impossible to blur an image continuously across discontinuous UV edges, and there's the problem of keeping the blur size consistent across a UV map that is inevitably stretched. Btw, they combined this with a raytraced back-scattering shader (for the glowing-ears part of the effect).
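The per-channel blur idea is easy to sketch. Below, a simple 1-D box blur stands in for the 2-D map blur, and the per-channel radii are arbitrary example values, not anything from that pipeline:

```python
def box_blur(values, radius):
    """1-D box blur: each sample becomes the average of its neighbourhood."""
    out = []
    n = len(values)
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        window = values[lo:hi]
        out.append(sum(window) / len(window))
    return out

def blur_rgb(red, green, blue):
    # Red scatters deepest in skin, blue the least, so red gets the
    # widest blur.  The radii (3, 2, 1) are illustrative only.
    return box_blur(red, 3), box_blur(green, 2), box_blur(blue, 1)
```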

But since we aren't baking any images in Houdini, there's nothing to stop us from doing the same sort of thing to our colored points.

So… the output from our sssController goes into both inputs of an Attribute Transfer node; the effect of transferring the color onto itself and sampling/averaging neighbouring points looks remarkably as if the light is scattering inside the material.
Play with the parameters in there to get different looks.
Blurring the Lambert shading with the dist gives the material the appearance of solidity.
The smooth node processes this a bit further to remove slight banding artefacts caused by the attribute transfer.
The result of this is imported into a surface shader so that we can multiply it with highres texture maps and some diffuse and specular lighting.
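The transfer-onto-itself step amounts to averaging each point's color with its neighbours. A crude Python stand-in (the real Attribute Transfer SOP distance-weights its samples; this plain average does not):

```python
def scatter_colors(positions, colors, radius):
    """Replace each point's color with the average of all colors within
    `radius`, which is roughly what transferring Cd from the geometry
    onto itself does.  Brute-force O(n^2) for clarity."""
    out = []
    r2 = radius * radius
    for pi in positions:
        acc = [0.0, 0.0, 0.0]
        count = 0
        for pj, cj in zip(positions, colors):
            if sum((a - b) ** 2 for a, b in zip(pi, pj)) <= r2:
                acc = [a + c for a, c in zip(acc, cj)]
                count += 1
        out.append([a / count for a in acc])
    return out
```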

You're probably asking: why do all that instead of using Mario's excellent sss shaders?
Well, just to see if I could make the idea work, for fun… there is no other software that gives this much freedom to put an idea from my head into practice… I know absolutely nothing about programming, so it helps a lot.
Plus there are some things where I think this approach has advantages; not sure the good makes up for the bad, though.

The bad:
- accuracy is limited to the resolution of the mesh
- unlike proper physically based sss, light isn't scattered INTO the shadowed area; instead the shadow creeps outward into the light. The Bias control is what was supposed to alleviate this.
- it's really only suited to materials that are highly translucent
- for every additional light you must duplicate parts of the network and add the colours together
- can only handle light from point sources

The good:
- The whole effect is pre-computed so it renders blazingly fast.
- Believable results with very translucent materials
- no render artefacts
- Don't have to manage point clouds
- works well with displacement shaders; there are no artefacts from a point cloud not being displaced as accurately as the displacement shader
- uses a lot less memory than generating point clouds on highly subdivided and displaced meshes (point clouds crash Houdini on my machine at about 300K polys because of the additional attributes it's trying to generate)

Where I want to take it further (swap some good for some bad and vice versa):
- remove the limitation of the geometry resolution by doing the computation with point clouds and applying the result at shader time.
- see if there's a way to make shallow scattering look good.
- try to separate forward from back scattering for control.
- simulate distant light sources
- build a proper HDA for it

but I need help with the point cloud stuff… All I want to do with them for now is have a cloud of colored points and use them to paint the splotches onto a surface shader. Can anyone with more experience with PClouds help with an example?

Feel free to do whatever you want with the Hip, just share the improvements with us

Cheers
Sergio

Attachments:
sss.hip (155.0 KB)

Member
941 posts
Joined: July 2005
Good stuff, Sergio!

It's cool to see this done in SOPs (it's kind'a the reverse of Christophe Hery's shadowmap approach). And, as you say, it has a lot of nice quick-feedback and ease-of-use qualities that the pointcloud method lacks.
Probably the way to go for highly translucent materials.

Thanks for sharing!

P.S.: I'm pretty sure Simon had a SOP-based method that was somewhat similar to yours (if memory serves). If it's not in the Exchange, then search od… or maybe Simon can post it again here. Maybe you guys can share insights and come up with the ultimate, killer SOP_SSS HDA!
Mario Marengo
Senior Developer at Folks VFX [folksvfx.com] in Toronto, Canada.
Member
511 posts
Thanks for the comments Mario

Just done a new version, which scatters differently on each RGB channel; it's starting to look like skin!

Lets see how much we can mix and match the different techniques

Serg

Attachments:
fakesss.hipnc (173.3 KB)

Member
2199 posts
Joined: July 2005
The original is still up on the Exchange somewhere. I extended it to use a similar idea for doing shadows, by checking for occlusion of each point, but I don't think I posted it. The version that is currently up uses a shadow map, I think…

The method I came up with uses the Divide SOP to create a dual of the input surface; I then use that to calculate the area that each point covers in the mesh. That way you can compensate for how sparse or dense the point sampling is. So the point cloud is in fact the points of the mesh, which can often be a low-res version of the final one for large scattering distances.
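If I follow the area compensation correctly, it amounts to weighting each sample by the area its point covers instead of counting points equally. A sketch under that assumption (not the actual network):

```python
def area_weighted_average(values, areas):
    """Average per-point samples weighted by the surface area each point
    covers, so dense regions don't dominate just because they contribute
    more points."""
    total = sum(areas)
    return sum(v * a for v, a in zip(values, areas)) / total
```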
The trick is finding just the right hammer for every screw
Member
2 posts
Joined: Nov. 2006
Hi friend,
the shader looks really nice, and I want to know if this model is a multiple-scattering sss or a di-pole shader, which is needed for skin, and also how to add a Cook-Torrance specular model and disp maps.
No fear SmallpixeL are here.
Member
2199 posts
Joined: July 2005
Mine was based on the di-pole technique.
Member
99 posts
Joined: Sept. 2006
hi
can you please post the second version hip file so I don't get the non-commercial version?
thanks
z
Member
2199 posts
Joined: July 2005
I'll try and get one up on my tools site in the next few days

http://www.houdinitools.com
Member
99 posts
Joined: Sept. 2006
ok thanks
z
Member
14 posts
WOW!
I can't believe it is made in SOPs!!! This is amazing!
Member
2199 posts
Joined: July 2005
I've uploaded a newer commercial version. It's still a bit slim on the help-file front, but it's a start.

You'll find it at the bottom of the otls page

http://www.houdinitools.com/
Member
99 posts
Joined: Sept. 2006
thank you simon
z