Wren
If I use an illumination model where the diffuse is constant, how will it respond to light and receive the projection?
Like a diffuse surface.
The “diffuse” model will, for every light source in the loop, do something like this:
result += Cl * diffuse_color * dot(normalize(N), normalize(L));
The dot product at the end means that even if the parameter “diffuse_color” is constant, the intensity of the reflected light (the result) won't be. That's not a mistake though; it's simply how the diffuse model works.
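To make that concrete, here's a minimal sketch of what such a diffuse loop might look like as a complete shader (the shader name and parameter are made up for illustration):

```vex
#include <math.h>

surface simple_diffuse (
    vector diffuse_color = {1, 1, 1};
)
{
    vector Nf = normalize(frontface(N, I));
    Cf = 0;
    // Lambertian shading: each light's contribution is scaled by the
    // cosine of the angle between the surface normal and the light
    // direction, so even a constant diffuse_color gives varying intensity.
    illuminance(P, Nf, M_PI_2) {
        Cf += Cl * diffuse_color * dot(Nf, normalize(L));
    }
}
```

A point facing a light directly gets the full dot() value of 1; at grazing angles the dot product approaches 0, which is the falloff you see on any diffuse surface.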
But when the intent is to collect a texture color (as projected from a light), then we don't care about the surface's reflectance properties; we just want to look up a (projected) color. In that case then, the illuminance loop simply accumulates incoming color (without a BRDF like “diffuse”).
Here's a simple example in VEX. It accumulates all the lights in “lmask” (or the object's lightmask by default) and gives you back exactly what came in – a “pass-through” or “constant” BRDF. Typically the object will be hit by a single projector (or non-overlapping ones), but it's possible to get a lot fancier and weight contributions from several lights according to how well they line up with the surface normal (instead of just adding them) – useful when re-projecting live footage taken from different angles onto a CG object, for example.
#include <shading.h>
#include <math.h>

surface Canvas (
    string lmask = "";
)
{
    Cf = 0;
    vector Nf = normalize(frontface(N, I));
    illuminance(P, Nf, M_PI_2, LIGHT_DIFFSPEC, "lightmask", lmask) {
        shadow(Cl); // optional: attenuate by the light's shadows
        Cf += Cl;
    }
}
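And as a rough sketch of the fancier weighted variant mentioned above (the shader name is made up; the idea is simply to blend projectors by the cosine of their incidence angle and then normalize by the total weight):

```vex
#include <shading.h>
#include <math.h>

surface CanvasWeighted (
    string lmask = "";
)
{
    float  wsum  = 0;
    vector accum = 0;
    vector Nf = normalize(frontface(N, I));
    illuminance(P, Nf, M_PI_2, LIGHT_DIFFSPEC, "lightmask", lmask) {
        shadow(Cl); // optional
        // Weight each projector by how squarely it faces the surface;
        // lights at grazing angles contribute less.
        float w = max(dot(Nf, normalize(L)), 0);
        accum += Cl * w;
        wsum  += w;
    }
    // Normalize so overlapping projectors blend instead of over-brightening
    Cf = (wsum > 0) ? accum / wsum : 0;
}
```

With a single projector this reduces to the pass-through behaviour of the Canvas shader; with several overlapping projections it cross-fades between them based on viewing geometry.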
There was a discussion at odForce that touched on these issues (and where I posted some VOP versions of things like the above code snippet). You might want to have a look here [odforce.net].
Hope that helps.