Mantra VS Mantra VS Etc...

Member
511 posts
Recently I have begun to evaluate and update our rendering pipeline for H11, in particular to look at whether we should start using PBR and photon maps for indirect lighting.

A bit of background.

Our current method has been to use the Gather raytracing mechanism to look up specific exports rather than Cf (for control and optimization purposes). This has so far been successful in matching (H10) PBR renders/speed with the regular Raytrace renderer, and it was faster when optimizations were in use (max distance, object scopes, not gathering expensive stuff, etc.).
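For anyone unfamiliar with the technique, here is a minimal sketch of what such a Gather loop can look like in VEX (the shader name, parameters and the gi_out export are hypothetical placeholders, not our actual shader):

```
surface gather_gi(
    int    nsamples = 64;         // indirect rays per shade point
    float  maxdist  = 10.0;       // optimization: ignore distant geometry
    export vector gi_out = {0,0,0} // what OTHER gather loops read from us
)
{
    vector nn  = normalize(frontface(N, I));
    vector acc = {0,0,0};
    int    i;

    for (i = 0; i < nsamples; i++)
    {
        // Build a cosine-weighted direction around the shading normal.
        float  u1  = nrandom();
        float  u2  = nrandom();
        vector up  = (abs(nn.z) < 0.9) ? {0,0,1} : {1,0,0};
        vector t   = normalize(cross(up, nn));
        vector b   = cross(nn, t);
        float  r   = sqrt(u1);
        float  phi = 2.0 * M_PI * u2;
        vector dir = normalize(r*cos(phi)*t + r*sin(phi)*b + sqrt(1.0-u1)*nn);

        // Import the custom "gi_out" export from whatever the ray hits,
        // instead of the hit shader's full Cf.
        vector hitgi = {0,0,0};
        gather(P, dir, "bias", 0.005, "maxdist", maxdist, "gi_out", hitgi)
        {
            acc += hitgi;
        }
    }
    gi_out = acc / nsamples;
    Cf = gi_out;   // a real shader would add direct lighting etc.
}
```

Because the loop imports a named export instead of Cf, you control exactly what indirect rays are allowed to see, which is where the optimization headroom comes from.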

It's amazing that the flexibility to do this stuff exists in Houdini, and although I have a great deal of fun doing it, I'd rather Mantra had V-Ray/Modo-like lighting accelerators to begin with. It's quite a lot of work for me to create and support this method, when I should probably spend more of my ubershader maintenance time on building better shading interfaces for my users.

My test environment started as a Cornell box… I needed a scene that was fast to work with but also challenging for the renderer, so my Cornell box quickly got extruded all over so that I could test more complicated light paths. My main interest is lighting things almost completely indirectly, because that is hard to solve…
As a result my Cornell box now has a tunnel going round the back, and this is where the only light source resides. There are a couple of other features intended to expose defects, of the kind that are extremely likely to appear in an actual shot.
Glossy reflections are also important, so the scene has them everywhere (it's all the same shader).

Apologies if it isn't the most pleasing model, it was totally random. In hindsight I would have done it using the golden ratio throughout.

So my first port of call was PBR with indirect photon mapping, since there is no hope in hell I'm going to be able to brute-force this scene this week with PBR or anything else alone. Max distance tricks and all that are irrelevant, because the point is for the light to actually reach the darkest corners fast and with no noise.

Unfortunately it quickly becomes clear that photons are not good at solving general global illumination (caustics excluded). After just one bounce they become total garbage, with most of the photons focused where they are LEAST needed, i.e. nearest the light source, where direct illumination is most prominent. This is set to ten million photons; clearly, to be able to solve the area around the Stanford bunny and teapot I'm going to need about 100 million, and 99.9% of them are going to be in the wrong place. They are already pixel-sized around the bright area behind the teapot.
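To put rough numbers on that intuition (idealizing the light as a small source and assuming a uniform surface albedo $\rho$, which is a simplification I'm adding here, not a measurement):

$$ d(r) \;\propto\; \frac{N}{4\pi r^{2}}, \qquad N_{k} \;\approx\; \rho^{k} N $$

i.e. the stored photon density falls off with the square of the distance from the source, and only about $\rho^{k}$ of the $N$ emitted photons survive to the $k$-th bounce, so the multiply-bounced corners that need the most help receive a geometrically vanishing share of the budget.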

There is no hope for using them directly as a sort of light bake. Their usefulness seems limited to being used for indirect lighting lookup only.

Continued on next post… there's a lot more to come.

Attachments:
scene.jpg (57.6 KB)
photon_map.jpg (65.9 KB)

Member
511 posts
But it's immediately obvious that using the photon map for indirect bounces is EXTREMELY helpful in reducing noise… clearly, raytracing against a lighting cache of some sort is the way to go henceforth.

As you can see, there is a massive difference in noise level between the renders with and without the photon map. This makes sense if you look at it from the POV of a ray: without a photon map, the ray can only see a noisy mess like the one it's sitting on, and it has to pick a random value from it; with one, it sees a nice smooth view, and picking a random value from that isn't likely to be dramatically different each time you do it.
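In Monte Carlo terms (a standard variance argument, not anything Mantra-specific): averaging $n$ indirect rays gives an estimator with

$$ \operatorname{Var}\!\left(\hat{L}\right) \;=\; \frac{\sigma^{2}}{n}, $$

where $\sigma^{2}$ is the variance of whatever the rays see at their hit points. Tracing against a smooth cache makes the integrand nearly constant, so $\sigma^{2}$ collapses; recursively sampling a noisy estimate keeps $\sigma^{2}$ large, and the only cure is many more rays.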
Because the photon map is so inadequate for this scene, the Photon Distance Threshold has to be very large in order to hide this (it is set to 1, which is the same size as the hole the light is shining through), so most of the scene is ignoring the photon map on the first bounce. Using this parameter can cause some darkening of corners; it doesn't look bad, but it could be a big pita if a director decides to pick on it.
The overall light levels in these areas are not correct, because the large blobs still manage to illuminate areas that should actually be darker, but this is acceptable. Bigger problems will manifest if something manages to interfere with the path of these mahossive photons; low-frequency strobing is bad.

You can also see a render with glossy reflections + photon map; unfortunately the photon map is ignored by them (I think), therefore they are noisy as hell. Basically the reflection sees that noisy render instead of the good one.
Not sure how I can improve them independently without adding to the diffuse lighting expense.

All PBR renders were conducted with 5x5 pixel samples, min 4 / max 32 ray samples with a 0.001 noise level, 1 diffuse bounce (the indirect photons account for another 2, I guess) and 2 reflection bounces. The non-photon-mapped render was set to 2 diffuse bounces to account for the missing photon bounce.

Continued on next post…

Does anyone know if I can do text-pic-text-pic?… this is the only reason I'm breaking up these posts.

Attachments:
photon_map_and_pbr_and_glossy_11m40s.jpg (127.6 KB)
just_pbr.jpg (159.2 KB)
photon_map_and_pbr.jpg (120.7 KB)

Member
511 posts
So it seems the two biggest things we need in order to produce a reasonable image in a reasonable amount of time are:

- A better-than-photons way to cache lighting, something like an irradiance cache, and use it for indirect rays.
- Reflections having access to that same cache.

It turns out that we can already do this the manual way. There is nothing to stop you from writing your own irradiance cache to point clouds and using the Gather system to look up these clouds for indirect rays!
What you won't have is as nice a workflow as one that is built into the software by default… most notably, generating the point cloud without having to generate another IFD and render sequence. And regarding the Gather system, it seems like it's going to be deprecated some time and support/new features will cease, such is the enthusiasm for this PBR… That is what it seems like to me, anyway.
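Concretely, the write side of such a cache can be tiny. A sketch, with a made-up cache file name and channel layout; irr stands in for the output of the diffuse bounces shader:

```
surface bake_irradiance(
    string cachefile = "irradiance.pc";
    vector irr = {0,0,0}   // computed diffuse-bounce result, plugged in from elsewhere
)
{
    // Only write cache points for primary (camera) rays; without this
    // guard every ray event appends its own point (see the PS further
    // down the thread).
    if (getraylevel() == 0)
        pcwrite(cachefile, "P", P, "N", normalize(N), "irradiance", irr);

    Cf = irr;
}
```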

Anyway that is what I did to produce the following render.

The output of a diffuse bounces shader was written to a point cloud cache from the POV of a camera where the entire scene was within the view frustum.
The Raytrace renderer was used, with the resolution set to 256x256 (this governs the cache resolution), and the vm_hidden property was also used (it forces the rendering of back-facing and hidden faces).
This was then rendered onto the surfaces with a point cloud shader, and the result was exported to a variable that is picked up by the diffuse and reflection Gather loops (see the sketch below).
The Gather loops shot 64 samples; this was further refined in the Raytrace ROP by 5x5 pixel samples. The comparison PBR render was set the same, except that min rays was set to 4 and max to 32.
Variance antialiasing doesn't do anything at all for Gather loops, apart from making them really slow if min samples > 1, so it is kept at the default. That doesn't bother me, since it clearly didn't help PBR much… it doesn't like the gradual increase in noise amplitude as the light darkens.
I gave up optimizing the PBR quality, because an increase in quality would inevitably require a disproportionate increase in render time. I limited the settings so that both the PBR and RT/Gather renders took roughly the same time.
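The lookup side, again with hypothetical names; cached_gi is the exported variable the Gather loops then import from secondary rays:

```
surface read_irradiance(
    string cachefile = "irradiance.pc";
    float  radius    = 0.5;    // search radius in world units
    int    maxpts    = 16;     // points to filter
    export vector cached_gi = {0,0,0}
)
{
    // Filter the nearest baked samples; the export is what the diffuse
    // and reflection Gather loops pick up.
    int handle = pcopen(cachefile, "P", P, radius, maxpts);
    if (handle >= 0)
    {
        cached_gi = pcfilter(handle, "irradiance");
        pcclose(handle);
    }
    Cf = cached_gi;
}
```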

On a 4-core (HT disabled) i7 940:
PBR was 48m28s
RT was 43m30s

I will upload a clean scene for you guys to try to achieve a better result than I did with the PBR renderer, but I doubt very much that better quality can be achieved without a heck of a lot more render time.

First is the irradiance cache image; it looks a heck of a lot better than photons, although it takes roughly the same amount of time to compute.

Next is the PBR image… not good. I know you can do better, but I can't wait an hour every time to find out.

Then it's the raw RT render, followed by a tone-mapped and noise-reduced version (Neat Video under eyeon Fusion).

cheers

S

Attachments:
Manual_Irradiance_cache.jpg (81.9 KB)
pbr_with_photons_46m28s.jpg (471.7 KB)
rt_with_irradiance_cache_43m30s.jpg (326.6 KB)
rt_with_irradiance_cache_43m30s_ToneMapped.jpg (104.5 KB)

Member
511 posts
I will also post the same scene under the only other renderer I have available to me, Modo, sometime this weekend… I actually think H will compare favorably.
Member
1002 posts
Joined: July 2005
Just a note that photon maps were never intended to be rendered directly. Your image rendered with PBR and indirect photons (which looks pretty good) is the intended workflow. Directly visualizing photon maps is available mainly for debugging.

Also, the photon distance threshold is not an absolute distance but is relative to the distance between photons - which appears to be smaller than the opening in your scene based on the photon render.

I think your custom solution appears to be another method of generating photons that is view-dependent?

Andrew
Member
511 posts
andrewc
Just a note that photon maps were never intended to be rendered directly. Your image rendered with PBR and indirect photons (which looks pretty good) is the intended workflow. Directly visualizing photon maps is available mainly for debugging.

Perhaps the parameter should be named “Visualize Photons” or something. It sure makes you wish there was a good-enough approximation to render diffuse bounces extremely quickly.

andrewc
Also, the photon distance threshold is not an absolute distance but is relative to the distance between photons - which appears to be smaller than the opening in your scene based on the photon render.

Some of the photons around the entrance to the cavity in the top right really looked that big. When I looked at the cloud in SOPs, it was extremely dense near the light; elsewhere it was very sparse.

andrewc
I think your custom solution appears to be another method of generating photons that is view-dependent?

I generated the cloud from a camera looking down on the entire scene isometrically… so as not to end up with too few samples at grazing angles. But they aren't photons (you made me look up how they fundamentally work in the beta forum); I'm just running my regular diffuse bounces shader with 512 samples and pcwriting the stuff at a low 256x256 resolution.
Btw, I would love to be able to set up pre-rendering processes in the same way we can calculate the photon map, without rendering a whole separate IFD and render sequence.

For diffuse bounces this seems much better than photons, because the resolution is relatively uniform, and you can just do “sampling” as opposed to shooting an enormous number of photons (generation time and storage become significant, not to mention massively wasteful); because of this you don't have to use the Distance Threshold. It's just a lot easier to deal with… you can always just turn up the samples or sample resolution, whereas the photon method could prove impossible to deal with when given a very adverse situation, and sod's law says that is precisely what will happen the first time you use it for real.

In a shot where the camera walks through a large scene and there is animated stuff, it may be better to bake with a polar projection camera parented to the shot camera…. The drop in samples at very grazing angles may turn out to be problematic, though. It really needs super brains such as yourself to implement an irradiance cache, as opposed to me doing yet another hack.

An as-simple-as-possible irradiance cache (one that renders as simply as a pcloud shader) would be very useful right now, even limited to being visible to indirect rays; the fancy stuff can wait… like the fancy sample interpolation schemes Modo/V-Ray use to be able to render the stuff directly, super quick.

I think the biggest thing to take away from this is, by far, the massively dramatic impact of using the irradiance cache to trace glossy reflections against! This is more than enough of a win for me to stick to my current Gather technique and skip PBR altogether for another good while!
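In VEX terms the reflection side is just another Gather loop importing the cached export instead of Cf. A fragment, with the same hypothetical cached_gi name as before (a proper glossy loop would jitter rdir over a cone, one jittered ray per sample):

```
// Inside the reflection part of a surface shader.
vector rdir  = reflect(normalize(I), normalize(N));
vector hitgi = {0,0,0};
vector refl  = {0,0,0};

gather(P, rdir, "bias", 0.005, "cached_gi", hitgi)
{
    refl = hitgi;   // the hit is lit from the smooth cache, not its noisy Cf
}
```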

Cheers

Serg

PS. OT
Btw, I noticed the pcwrite stuff now writes a point every time a ray event happens (this is cool, even though I can't think of anything to use it for right now)… but it took me ages to realise I had to block the writing if the ray bounce level > 0, or else I would quickly accumulate 100s of millions of the things. There should be a section in the help where we can view changes to existing functions as well as new stuff… e.g. the tangent-space function now has a load more arguments than before, so my normal map shaders are now broken. Have a look at the “What's New” section and compare that to what's actually happened.
Member
345 posts
Hey Serg, very cool Cornell box; share it on the forum please! I'm very keen to have a go with it.

I'm not sure if you are aware of it, but there's a VEX pathcache asset on the Exchange which does roughly something similar to what you described.

Have you ever tried to incorporate light portals into your render tests? Render times should be cut down significantly (I'm curious by how much in your case). On the other hand, I have no idea if that works correctly with the manual irradiance cache baking.

Lastly, I was expecting that SESI would eventually implement irradiance caching in H11. They didn't, and I don't understand why (I remember that a long time ago MTL and LightCache for PBR were already partially implemented (at least in the documentation) and later removed). My guess is that irradiance caching might in fact be a step backwards, producing more problems in motion than it gains in still renders. And in the end Mantra will support the GPU in the next release anyway :roll: . But since I'm not an expert in this matter, I'd love to hear reasonable argumentation for why irradiance caching is still missing from PBR.

Cheers,
kuba.
Member
511 posts
Hi,
Here's the PBR scene.

I may get round to setting up other “standard” scenes like Sponza and all that. Very useful for direct comparison against other renderers.

cheers

S

Attachments:
CornelBox_pbr.hip.rar (1.4 MB)

Member
511 posts
Btw, these images were all saved with a 3.0 gamma, or they would be just too dark to really show what's up.
Member
511 posts
Naturally, people reading this might wonder why I don't just write the cache with the Irradiance Cache stuff that's already there in the ROP. After all, it already does a lot of what I need; it already knows to place samples in important places like corners, and with a Poisson disc distribution (from looking at the cloud).

Unfortunately my attempts to use it didn't work out… the technical reasons for not using the ROP irradiance cache are below (some of these are bugs), along with RFEs for my ideal caching system:

- It is totally buggered when used with the Irradiance function; there are large black unpainted areas everywhere that are impossible to solve.

- The point cloud saved to disk seems incomplete/corrupted relative to what you can see sampled on screen. This means I can't use the cloud in a simple pcloud shader. Maybe this is related to the above.

- Oddly, it seems to work properly with “occlusion”, though the saved cloud is still corrupted.

- I need a cache which the Gather loops can be directed to trace against for secondary rays. To do this, the cache system has to output from a VOP so that I can plug it into the parm that gets exported to the Gather loop.

- I need a cache that is not limited to specific functions or to one-time use. i.e. currently it only works with occlusion or irradiance, but not both at the same time, else you get garbage out… You can't, for example, use irradiance for irradiance AND use another irradiance for the cached-reflections trick, because they step on each other's toes.

I want to be able to add as many caches per shader as I want and read them wherever I want. Basically I would like this to be as easy as the AOV system:
- Plug the stuff you want to cache into a cache VOP and set a name.
- Go to the ROP, use a multiparm (like the AOV system) and match the names.
- Each instance in the multiparm is basically an independent, full-featured cache UI; here you choose whether it's view-dependent, the interpolation/error options, whatever.
- Back in the VOP net, plug an import-cache VOP into whatever you want. Done.

Or maybe the better answer is to make the pcwrite/generate/read functions more full-featured, so that they can generate irradiance-cache-like point distributions (maybe this is already possible and I just don't know how) and pcread can have more sophisticated interpolation methods? Or both… the former method being enough for 99% of cases and easy to understand, since it's analogous to AOVs, with the more obscure stuff for when the job calls for extra-clever coolness of some sort…
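For what it's worth, you can already hand-roll a smarter filter than pcfilter with pciterate/pcimport. A fragment, using inverse-distance weighting as a stand-in for a real interpolation scheme (same hypothetical cache and channel names as before):

```
string cachefile = "irradiance.pc";   // same hypothetical cache as before
float  radius    = 0.5;
int    maxpts    = 16;

int    handle = pcopen(cachefile, "P", P, radius, maxpts);
vector gi     = {0,0,0};
float  wsum   = 0;

while (pciterate(handle))
{
    vector val = {0,0,0};
    float  d   = 0;

    pcimport(handle, "irradiance", val);
    pcimport(handle, "point.distance", d);

    // Weight nearby samples more heavily (inverse-distance weighting).
    float w = 1.0 / max(d, 1e-4);
    gi   += w * val;
    wsum += w;
}
pcclose(handle);

if (wsum > 0)
    gi /= wsum;
```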

cheers
S
Member
89 posts
Joined: July 2006
Can I just say that Serg's implementation looks awesome!
Member
243 posts
Joined: Oct. 2007
Wow, this all sounds so interesting, but I understand only half of it…
Is there any chance you could explain the process in a really simple, step-by-step way? Or is this process already in Houdini's manual, by any chance?

thanks!
JR Gauthier
Character Animation & Design
www.turboatomic.com
http://www.vimeo.com/user2847970
Member
3 posts
Joined: Aug. 2010
That's a great thing!