bounced light / object lights

User Avatar
Member
55 posts
Joined: July 2005
Offline
There's a video tut for the Irradiance stuff. It's here:
http://sidefx.vislab.usyd.edu.au/houdini_video/by_topic/rendering/index.html [sidefx.vislab.usyd.edu.au]

Yeah, that's the tutorial I mentioned earlier - where I got my method for GI Full Irradiance from. There is also a caustics one which, as far as I remember, is the only one that deals with photon mapping, but it does so only for caustics.

my biggest problem when first learning was trying to think relative to how things are done in other apps
Yeah, very true mark2 - as much as I'm a whiney git, I really do enjoy learning CG techniques and then finding out how to do them in Houdini.

That's what I mean about my lack of experience - I didn't think to relate the caustics photon mapping to being able to create a global map with photons too. Not that I studied GI that hard as a lighting technique anyway; I'm only just starting to consider stuff like that - better to get my head round direct lighting first. I just remember going through the tutorial descriptions and thinking the caustics one wouldn't have the information I wanted.

My bad - they are great tutorials.

J

ooohh - look at the new ‘add attachment’ button! 8)
Edited by - Oct. 14, 2005 07:13:11
Just wondering - as Mark mentioned, the Irradiance help tutorials are dead in Houdini.

One that is mentioned is…
Incandescence - Objects with constant shading that simulates incandescence can also be used as sources of light in a global illumination rendering.

Any clues as to how to do this - or is this done purely through photon mapping as established in this thread?

Cheers

J
It should just happen automatically if you use Full Irradiance in the GI Light shader… Place a constant shaded sphere on a grid, render with only the GI light and see what happens. Of course, I haven't actually tried it :)

Cheers,

Peter B
Thanks Pete!

I'll let you know if it works - gonna give it a shot now.

I've just been playing with Deecue's setup, and it seems to me that for this scene photon mapping made very little difference compared to a normal Full Irradiance render without mapping - except for a slight increase in overall brightness and a really long render.

I assume photon mapping would actually serve better in a scene that had more objects or varied colours and lights around the room, for a more realistic look? But for a single light source, simply setting Full Irradiance seems just as good to me. Thanks though, Deecue - that was very helpful.

spaceboy - have you managed to get what you wanted? If you are still seeing black, I'm pretty certain it has something to do with the global tint not being high enough.
Yeah, one of the reasons that the photon mapping hasn't been pushed harder is that it generally isn't needed except in extreme cases. As you observe, irradiance gives you the look without the fuss and bother of generating (and keeping track of) photon maps. I've really only used photon maps for caustics, never for lighting. Nice to have though.

Incidentally, a neat trick is to convert your photon maps to .bgeo files (use i3dconvert) then you can load them in and actually see where the photons lie on the geometry. Very helpful debugging photon renders.

Cheers,

Peter B
Any clues as to how to do this - or is this done purely through photon mapping as established in this thread?

stu's lightbulb example over at the odforce thread is doing just this.. using a constant shaded bulb as the source geometry for light..

and it seems to me that the for this scene photon mapping made very little difference to a normal Full irradiance render without mapping - except for a slight increase in overall brightness and a really long render.

Yeah, what Peter said - I've actually only used photon mapping for caustics as well.. but do realize the difference (whether it's visible in this particular case or not).. as mentioned earlier, it allows for multiple bounces as opposed to a single one, which can be useful at times.. also, each of your objects can have a specifically designed photon shader attached to it, each with a level of control (in the case of the VEX photon plastic shader: diffuse, specular, and transmission, as well as probabilities for each).. this will give you more control over how each object receives and casts photons.. will you always need this level of control? obviously not, as you found out - the looks can be quite similar.. but still nice to have if you did find it to be more accurate, realistic, etc.. also, check out the irradiance caching (explained in Peter's vid tut..) - that will help you shorten the render times as well..
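The per-lobe probability controls described above correspond to a standard Russian-roulette decision in photon tracing: at each surface hit the photon is diffusely bounced, specularly bounced, transmitted, or absorbed, with probabilities taken from the shader. A minimal sketch of that decision in plain Python (illustrative only - the names and numbers are not the VEX photon shader API):

```python
import random

def scatter_photon(p_diffuse, p_specular, p_transmit):
    """Russian-roulette choice for one photon hit.

    p_* are the shader's probability controls; whatever remains
    (1 - their sum) is the chance the photon is absorbed.
    """
    assert p_diffuse + p_specular + p_transmit <= 1.0
    xi = random.random()
    if xi < p_diffuse:
        return "diffuse"      # bounce in a cosine-weighted direction
    xi -= p_diffuse
    if xi < p_specular:
        return "specular"     # mirror bounce
    xi -= p_specular
    if xi < p_transmit:
        return "transmit"     # refract through the surface
    return "absorb"           # photon dies; its energy stays in the map

# With 60/20/10 probabilities, roughly 10% of photons should be absorbed.
random.seed(0)
events = [scatter_photon(0.6, 0.2, 0.1) for _ in range(10000)]
print(events.count("absorb") / len(events))
```

The point of the probabilities is exactly the control deecue describes: a surface with a high transmit probability passes photons through, while a high absorb remainder kills them early.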

Thanks though, Deecue - was v helpful

glad it helped

Incidentally, a neat trick is to convert your photon maps to .bgeo files

wow.. i totally forgot how cool it looks when you do that..

@sesi: muchas gracias… :wink:
Dave Quirus
Deecue's file prompted me to dive into Houdini's global illumination again.
In my humble opinion, there are some limitations in the GI implementation here that put the quality of renders behind the output of other GI-capable renderers, such as Brazil or Vray for example (in terms of realism at least). I'll try to go into more detail.

Take this scenario: A room, with a window or skylight. The only source of illumination can be sunlight and/or hemispheric light (ambient occlusion) from outside. Here is such a scene modeled in Houdini:

The expectation here is that, in real life, if the sun is shining directly into the room, there ought to be enough light bouncing around to illuminate the entire space. I think most would agree that this is an accurate assumption.

Now, any GI implementation worth its salt should be able to solve this if you ask me. There are going to be various degrees of “accuracy” depending on the renderer used, as they all do things a bit differently (the most physically correct being Maxwell I suppose), but generally they should at least be able to give you a solution that approaches the behavior of real light.


There are two ways of doing GI in HDN at the moment: Irradiance and Photon Mapping.
I first tried Full Irradiance with Background Color (direct light + occlusion). The sunlight hits the wall on the left and bounces the light once.

Now right off the bat, we've hit a problem. After our light has bounced once, the Irradiance method has exhausted itself. There is no more light to go around. No amount of adjusting Global Tint or light intensity will make that sphere in the bottom right of the frame visible. Notice that the shadow cast by the shelf is not illuminated by any bounced light.

Here is the same scene in 3ds max, using Brazil. It's set to use 7 bounces.

Say what you want about the artistic merit of the two renders, but there is no doubt that the Brazil version shows more realistic light transport. I wonder why Full Irradiance is limited to one bounce. It seems to me that it's doing the same thing as the quasi Monte Carlo algorithms in many other GI renderers, but then again, I don't know the first thing about the nitty-gritty details of GI rendering.
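A toy back-of-envelope model shows why the bounce count matters so much here. If every surface reflects a fraction rho of the light it receives, the total energy after N bounces is the geometric series E(1 + rho + … + rho^N); a single-bounce solution truncates that series after the second term, so areas lit only by second- or later-bounce light (like the sphere in the corner) stay black. Plain Python, purely illustrative:

```python
def bounced_energy(emitted, albedo, bounces):
    """Total energy in a toy closed room after a fixed number of
    diffuse bounces: each bounce re-injects `albedo` times the
    previous bounce's energy (a geometric series)."""
    return emitted * sum(albedo ** k for k in range(bounces + 1))

E, rho = 1.0, 0.5
one  = bounced_energy(E, rho, 1)   # what a single-bounce solution sees: 1.5
many = bounced_energy(E, rho, 7)   # a 7-bounce solution, as in the Brazil render
full = E / (1 - rho)               # the infinite-bounce limit: 2.0
print(one, many, full)
```

With a 0.5 albedo the single bounce already captures 75% of the full energy in open areas, which is why the two renders can look globally similar while the occluded corners differ so dramatically.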

Same scene, different angle:

“But Juice, what about Photon Mapping?”

Ok, where to start…
One problem that pbowmar already pointed out: there is no way to “aim” the photons. We're trying to simulate sunlight here, which means our light is not actually inside our space. As a result, only a fraction of the total number of photons seems to make it into the interior. I think the only thing you can do is bring the light really close to the point of entry. The problem is, now your solution looks completely different depending on how far or close you've placed your light. How would this integrate into scenes where you need to be able to see this interior as well as the outside? I'm not saying it's not workable, but it does create more problems. Not exactly robust or predictable.

Also, in my test rendering last night it seemed like changing the number of photons significantly changed the look of the solution - and I don't mean that fewer photons looked crude while more photons looked refined.

The controls for the photon shader are pretty confounding too; I didn't find them very intuitive. All in all, I found it very difficult to get predictable or good-looking results with this scene using photon maps, and none of my renders looked remotely “correct”. Something else - using both Irradiance and Photon Mapping combined was much slower than using them separately.

I've uploaded the file in case anyone wants a shot at it.

http://www.pixelheretic.com/misc/GI_test.zip [pixelheretic.com]

Finally, I'm sorry if my post sounds a bit harsh. Just putting it out there that this is an area where Houdini can be improved.
So, I just realised that I'm being stupid with the photon projection stuff. It should be fairly easy to build a reflector geometry with Specular set to 100% in the photon shader. Basically, duplicating what a “real” spotlight does! After all, a real spotlight is simply a point light with a really shiny hemisphere behind it.
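That reflector workaround can be sanity-checked with a few lines of plain Python (illustrative only; a flat mirror stands in for the shiny hemisphere): photons emitted backwards from the point source get mirror-reflected, so every photon ends up travelling into the forward half-space.

```python
import math
import random

def reflect(d, n):
    """Mirror-reflect direction d about unit normal n: r = d - 2(d.n)n."""
    dot = sum(di * ni for di, ni in zip(d, n))
    return tuple(di - 2.0 * dot * ni for di, ni in zip(d, n))

def random_direction(rng):
    """Uniform direction on the unit sphere."""
    z = rng.uniform(-1.0, 1.0)
    phi = rng.uniform(0.0, 2.0 * math.pi)
    r = math.sqrt(1.0 - z * z)
    return (r * math.cos(phi), r * math.sin(phi), z)

def emit_spotlike(n_photons, seed=0):
    """Point emitter with a perfect mirror behind it.

    Photons heading backwards (-z) bounce off the 100%-specular
    reflector, so every photon ends up travelling into the +z
    half-space -- the 'aimed photons' workaround described above.
    (The hemisphere is simplified to a flat mirror here.)"""
    rng = random.Random(seed)
    out = []
    for _ in range(n_photons):
        d = random_direction(rng)
        if d[2] < 0.0:                        # heading into the reflector
            d = reflect(d, (0.0, 0.0, 1.0))   # send it forward instead
        out.append(d)
    return out

dirs = emit_spotlike(1000)
print(min(d[2] for d in dirs))  # no photon travels backwards anymore
```

With a properly curved hemisphere instead of a flat mirror, the reflected directions would also converge rather than just flip, which is what makes a real spotlight reflector focus its beam.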

I don't disagree that we should be able to focus photons wherever we like natively (via an Emitter VEX context) however this might be a useful workaround…

The upper limit on the number of photons is too bad, but again we could work around that by using multiple photon maps and modifying the GI Light shader to read and use multiple photon maps.

Having said all that, I do think SESI should focus some love on the GI capabilities, given how far behind Brazil, Vray and others Mantra is in this area. Of course, VEX makes Mantra far _ahead_ of these renderers so it's always a tradeoff :)

Cheers,

Peter B
Thanks a lot DaJuice… this really explained to me the current limitations and workflow for Global Illumination… I'm also trying to render an interior scene with Houdini (I'll post the WIP once I get the first lighting setup) to learn lighting in Houdini itself….

I just hope Mantra will be improved this way, because right now it's pretty hard to achieve correct, or maybe just nicer, results..

Any word from someone at SideFx on this point?

cheers.
JcN
VisualCortexLab Ltd :: www.visualcortexlab.com
Thanks for providing such a clear example, DaJuice. I think most people would incorrectly assume it worked like in your Max example.

I'm just one non-commercial user, but I can't think of anything I'd prefer to see at the top of the priority list (mostly because solutions/workarounds are beyond my skillz).

/vote multibounce irradiance
DaJuice
In my humble opinion, there are some limitations in the GI implementation here that put the quality of renders behind the output of other GI-capable renderers, such as Brazil or Vray for example (in terms of realism at least). I'll try to go into more detail.

I think that this thread has raised a lot of valid concerns - we are aware of these limitations in the mantra architecture with respect to physically based rendering, and I'm hoping that we can address these in a clean way at some point in the future. However, I'd like to bring up some of the technical difficulties with integrating a physically based shading model into a traditional renderman-type pipeline.

First, let's look at the surface shader. The surface shader is responsible for providing the renderer with a color and opacity (in vex, Cf and Of). The computation of these values in a physical renderer involves a number of distinct stages:
- Sampling and filtering texture functions defined over the surface
- Finding the illumination at the surface due to light sources, and other surfaces
- Shadowing (finding visibility)
- Evaluating the surface BRDF for this illumination
- Integrating all the components into the resulting color

In the vex surface shader, all these operations can be computed in whatever way that the user wishes - this provides a lot of flexibility to the user. Unfortunately, it does not provide much flexibility to the renderer, which for physically based rendering is quite important. For example:
- For efficiency, we would want to use Monte Carlo sampling to select one feature to sample. The integrated surface shader makes it difficult to isolate features to sample (e.g. how do we decide when shadow shaders need to be called, or which lights need to be sampled? How do we select between a phong or diffuse lobe for sampling?)
- Different features might need different sampling rates (e.g. a filtered texture may need to be sampled about once per pixel, while the illumination might need to be sampled a lot more for accurate shadows).
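The one-sample selection described above looks roughly like this in plain Python (names and lobe values are made up for illustration): pick a single lobe with probability proportional to its weight, and divide the result by that probability so the estimator stays unbiased. A monolithic surface shader gives the renderer no hook to make this choice per feature.

```python
import random

def sample_one_lobe(lobes, rng):
    """One-sample Monte Carlo estimator over BRDF lobes.

    `lobes` maps a name to (weight, eval_fn). We pick one lobe with
    probability proportional to its weight and divide its value by
    that probability; the expectation equals the sum over all lobes."""
    total = sum(w for w, _ in lobes.values())
    xi = rng.uniform(0.0, total)
    for name, (w, fn) in lobes.items():
        if xi < w:
            return name, fn() / (w / total)
        xi -= w
    name, (w, fn) = list(lobes.items())[-1]   # numerical edge case xi == total
    return name, fn() / (w / total)

# Constant stand-ins for lobe evaluations (hypothetical values, not real BRDFs).
lobes = {
    "diffuse": (0.7, lambda: 0.5),
    "phong":   (0.3, lambda: 2.0),
}
rng = random.Random(42)
estimates = [sample_one_lobe(lobes, rng)[1] for _ in range(100000)]
print(sum(estimates) / len(estimates))   # converges to 0.5 + 2.0 = 2.5
```

This is why the split matters: the renderer needs to see the individual lobes and their weights to build the selection probabilities, which an opaque shader computing a final Cf cannot provide.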

I think we could see similar limitations in the light shader and in the fog shader, which afford flexibility to the user at the expense of sampling flexibility in the renderer. Some of these problems are already solved for specific cases, but the architecture falls short when we want to try something more complex (ever try to render GI in mantra with area lights?)

The problem of repurposing this type of shading pipeline to a physically based renderer seems difficult, and it's something that I've given a lot of thought to. Here's one alternative:

- Split up the surface shader into components:
- texture shading, which computes texturing on the surface
- brdf shading, which given an input/output vectors and incoming energy will compute outgoing energy
- light shading, which computes an energy distribution
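As a rough illustration of the proposed split (plain Python, not the actual VEX architecture; all stage names are hypothetical), the renderer rather than the shader composes the stages, so it can cache or re-sample each one at its own rate:

```python
def texture_stage(uv):
    """Texture shading: spatially varying surface albedo (a toy checkerboard)."""
    u, v = uv
    return 0.9 if (int(u * 8) + int(v * 8)) % 2 == 0 else 0.1

def brdf_stage(albedo, cos_in):
    """BRDF shading: given incoming energy geometry, return outgoing energy.
    Lambertian here: outgoing = albedo * cos(theta_in)."""
    return albedo * max(cos_in, 0.0)

def light_stage():
    """Light shading: an energy distribution; here a single directional
    light, as (intensity, cosine-of-incidence) pairs."""
    return [(1.0, 0.8)]

def shade(uv):
    """The renderer composes the stages itself: the texture stage could be
    sampled once per pixel and cached, while the light stage could be
    re-sampled many times for accurate shadows."""
    albedo = texture_stage(uv)
    return sum(i * brdf_stage(albedo, c) for i, c in light_stage())

print(shade((0.05, 0.05)))   # bright square: 0.9 albedo * 0.8 cosine
```

Because each stage is separately visible to the renderer, it can also do the Monte Carlo feature selection described earlier, e.g. choosing which light or which BRDF lobe to sample on a given ray.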

Treat these comments as my own and not the opinions of Side Effects Software.

Andrew
Hey Andrew,

andrewc
- Split up the surface shader into components:
- texture shading, which computes texturing on the surface
- brdf shading, which given an input/output vectors and incoming energy will compute outgoing energy
- light shading, which computes an energy distribution

That sounds really, really good to me!
Just recently I've been thinking how great it would be to be able to specify a custom brdf to the gathering functions like reflectlight(), refractlight(), etc. – and possibly/hopefully the sampling density distribution as well. Right now I'm shooting my own rays in a few cases, but having it all split up like that would be… well… “beyond kewl”!

Cheers!
Mario Marengo
Senior Developer at Folks VFX [folksvfx.com] in Toronto, Canada.
andrewc
- Split up the surface shader into components:
- texture shading, which computes texturing on the surface
- brdf shading, which given an input/output vectors and incoming energy will compute outgoing energy
- light shading, which computes an energy distribution

spiffy.

Would you want to make all new contexts? Or add some special sauce to the surface context?

Like have some special blocks where you promise to the renderer that you'll only be doing texturing here, and lighting here…. ie)


surface mySurf() {
    // ... init stuff ...
    textureshading "" {
        // ... all those fast noise functions ...
    }

    lightshading "" {
        // ... only illuminance loops ...
    }

    brdfshading "" {
        // ... brdfs ...
    }
}


These blocks would be hints for the renderer. The reason for the “” is so you can define different subcontexts…

like lightshading "physical" { /* empty */ } would tell the renderer to do physically based rendering,

or if you are tracing a secondary ray (reflect/refract), you could specify which BRDF subcontext you want.

Anything in the textureshading could be cached in the IPR for faster light parameter updates.

*shrug* Just rambling.
if(coffees<2,round(float),float)
Wouldn't it be possible to work around this problem temporarily (if it's so complicated, it would probably take some time to introduce) and incorporate Mental Ray into Houdini?

It would benefit from a ready architecture, and it should be as easy as translating the shaders and attaching MR to the Houdini package.

It is distributed with almost all serious software - why doesn't SESI take advantage of it?

It would be great to use vops to build MR shaders.

Mantra is great, but it's not a fast raytracer, and today's fast “wash and go” renderers are beginning to rule the market.
Don't you think that incorporating one of them into houdini would be a jump start popularity igniter? It's by far the only thing I really miss in this package.
Thanks for the detailed explanation AndrewC, I appreciate it. Now that you mention it, I remember reading about similar difficulties with fitting GI into Renderman, because the architecture was not well suited to it.

Knowing the devs are pondering these issues puts my mind at ease.
Actually, Mental Ray support has been in Houdini for quite a while. What is missing is support in VOPs. The problem is, making a shade-tree for Mental Ray is not easy due to the way “shaders” are created in MR. VEX is far superior to MR's shading language (though people can argue that MR's is more flexible) because it's easy and is specifically designed for shading.

Personally, I'd rather see the resources go into Mantra and have it become more powerful and optimized. Remember, MR costs $500 or more per CPU, Mantra costs $0 per CPU (with appropriate Houdini ownership) so even if you have a 100 CPU render farm (tiny by industry standards) you're shelling out $50,000.00 just for rendering.

Cheers,

Peter B
Hi Andrew, all,

I've been interested in this thread too - there is quite a lot of pressure to have renders with such GI considerations in production, especially from the world of commercials, where due to time constraints there is an allowance for brute-force rendering at video resolutions. It pans out to about the same time impact as an optimized film-resolution render, so the producers don't raise eyebrows.

The PBR approach sounds awfully exciting and I honestly hope that we could see something like that happening in Mantra sometime in the near future. You have our 100% enthusiasm!

That said though, to answer DaJuice's question: is there a reason why we can't perform multiple bounces in the irradiance function without this architecture change?
Jason Iversen, Technology Supervisor & FX Pipeline/R+D Lead @ Weta FX
also, http://www.odforce.net [www.odforce.net]
MR is expensive as a standalone product, but XSI Foundation is $500 with MR onboard.
It's a ready, full-blown product with a huge userbase and already-developed features like accelerated raytraced GI, SSS, great volume rendering, etc. - features that users could just use without thinking about how to fake them with point clouds or whatever.
Perhaps it would be faster and more profitable to incorporate MR (while developing Mantra anyway) than to spend all the resources recreating these features in Mantra.
I am no technician, nor am I interested in learning some hardcore programming stuff. I'm interested in Houdini because it's the most powerful software I've ever used and it doesn't force me to write stupid scripts to achieve basic effects (unlike any other soft); on the other hand, it's almost unusable for fast projects because rendering in it is just a tough, time-consuming task.
That said, instead of working in Houdini, I still use Max.

cheers

Peter