A bit of background.
Our current method has been to use the Gather raytracing mechanism to look up specific shader exports rather than Cf (for control and optimization purposes). This has been successful so far: it matches (H10) PBR renders in both look and speed against the regular Raytrace renderer, and it was faster whenever optimizations were in use (max distance, object scopes, not gathering expensive components, etc.).
It's amazing that Houdini has the flexibility to do this sort of thing, and although I have a great deal of fun doing it, I'd rather Mantra had Vray/Modo-style lighting accelerators to begin with. Creating and supporting this method is quite a lot of work, when I should probably be spending more of my Ubershader maintenance time building better shading interfaces for my users.
My test environment started as a Cornell box. I needed a scene that was fast to work with but also challenging for the renderer, so my Cornell box quickly got extruded all over so that I could test more complicated light paths. My main interest is lighting things almost completely indirectly, because that is hard to solve.
As a result my Cornell box now has a tunnel going round the back, which is where the only light source resides. There are a couple of other features intended to expose defects, all of them situations that are extremely likely to appear in an actual shot.
Glossy reflections are also important, so the scene has them everywhere (it's all the same shader).
Apologies if it isn't the most pleasing model; it was totally random. In hindsight I would have used the golden ratio throughout ;)
So my first port of call was PBR with indirect photon mapping, since there is no hope in hell of brute-forcing this scene this week with PBR or anything else alone. Max distance tricks and all that are irrelevant here, because the whole point is for the light to actually reach the darkest corners quickly and without noise.
Unfortunately it quickly becomes clear that photons are not good at solving general global illumination (caustics excluded). After just one bounce they become total garbage, with most of the photons concentrated where they are LEAST needed, i.e. nearest the light source, where direct illumination is most prominent. This render is set to ten million photons; to solve the area around the Stanford bunny and the teapot I'm clearly going to need about 100 million, and 99.9% of them are going to be in the wrong place. They are already pixel-sized around the bright area behind the teapot.
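To get a feel for why the photon counts explode, here is a minimal back-of-the-envelope sketch (my own illustration, not how Mantra actually traces photons): photons emitted uniformly from a point light land with a surface density that falls off with the square of the distance travelled, so areas reached only via long indirect paths are starved before bounce losses are even counted. The photon count and distances below are hypothetical.

```python
import math

def photons_per_unit_area(total_photons, distance):
    """Photon density on a sphere of radius `distance` around a point light
    (uniform emission, no absorption): falls off as 1/r^2."""
    return total_photons / (4.0 * math.pi * distance ** 2)

# Hypothetical numbers: 10 million photons, a wall right next to the light
# versus a corner the light only reaches after a path ten times longer.
near = photons_per_unit_area(10_000_000, 1.0)
far = photons_per_unit_area(10_000_000, 10.0)

print(f"near: {near:,.0f} photons/unit^2")
print(f"far:  {far:,.0f} photons/unit^2")
print(f"ratio: {near / far:.0f}x")  # 100x sparser, before any bounce losses
```

So a corner ten times further along the light path sees 1% of the photon density of the area near the light; matching the near-light density out there means roughly 100x the photons, which lines up with needing on the order of 100 million to clean up the darkest corners, with almost all of them wasted where direct lighting already dominates.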
There is no hope of using them directly as a sort of light bake. Their usefulness seems limited to indirect-lighting lookups only.
Continued in the next post… there's a lot more to come.