I'm trying to set up an animation pipeline for a projection mapping project. The requirements are as follows -
First scene, viewed by Camera 1
The resulting texture projected onto a surface in Scene 2
Scene 2 viewed by Camera 2, which is the output
How might I set this up in Houdini? Ideally I'd be able to control the rendering of both stages from one place, so I can render a portion of the timeline and watch the final output in one go.
Even better would be if I could use the OpenGL renderer. The output doesn't need to be high fidelity, as it will just be used as a template, and I'd like to be able to review changes quickly.
You can create a spot light, put the image from Camera 1 in its Projection Map parameter, set the Attenuation to 1, and then match the FOV to Camera 1's. You'll still get normal falloff, though. Note that you'll need to enable High Quality Lighting to see it in the viewport.
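For the FOV-matching step: Houdini cameras store a focal length and an aperture in the same units, and the horizontal field of view follows from a simple tangent relation. A quick sanity check in plain Python (the default values 50/41.4214 are Houdini's stock camera settings; no `hou` needed):

```python
import math

def camera_fov_degrees(focal, aperture):
    """Horizontal FOV for a Houdini-style camera, where focal
    length and aperture are expressed in the same units (mm)."""
    return 2.0 * math.degrees(math.atan(aperture / (2.0 * focal)))

# Houdini's default camera (focal 50, aperture 41.4214) works out to ~45 degrees.
print(camera_fov_degrees(50.0, 41.4214))
```

Set the spot light's cone angle to this value and the projection should line up with what Camera 1 sees.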
For performance reasons I would do it like this: start two Houdini sessions, render a flipbook from scene 1, and save it to disk. In scene 2, use a UV Texture SOP set to "Perspective From Camera", assign something like a uv quickshade, and use the saved flipbook from scene 1 as the texture.
If you really want to have this in one file that updates when you change the timeline, use a COP network with a Render node in it, and feed that in as the projection image via op:……/img/render — but you'll see that it slows down the workflow a little.
I don't think uvtexture is viable, because it only evaluates the projection per-point, which would cause a lot of warping that might not be acceptable (I don't know the original poster's intent.)
The idea to use a light to project is probably the best one, but you'll need to use Mantra, with the light set to isotropic (no shadows) rather than diffuse, so it behaves like a pure projector.
What do you mean when you say the uvtexture only evaluates per point?
The UV Texture node is a geometry operation: it only deals with discrete bits of geometry, such as points or vertices. That means that at shading time, to get the uv value between two points, i.e. at an arbitrary surface position, the per-point uv values are naively blended using linear interpolation. This may work fine for planar projections, but a perspective projection is not linear: the projected coordinates depend on 1/Z (the perspective divide), so linearly interpolated uvs drift away from the true projection. This leads to a 'wiggly' look when used as the texture space. You can work around it by dicing up the geometry very finely before projecting the coordinates, but you may end up with many gigabytes of geometry if you do so.

Projection handled by the shader, on the other hand, is computed per pixel/sample using the actual surface position rather than interpolated vertex data. That makes the projection accurate per sample, giving the smooth, clean results which are probably what you want.
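The interpolation error is easy to see numerically. A minimal sketch in plain Python (hypothetical points, pinhole camera with focal length 1): project the two endpoints of an edge, linearly blend their projected uvs the way point attributes are interpolated, and compare against projecting the true 3D midpoint of that edge:

```python
def project(p):
    """Pinhole projection: perspective divide by Z (focal length 1)."""
    x, y, z = p
    return (x / z, y / z)

def lerp(a, b, t):
    """Component-wise linear interpolation between two tuples."""
    return tuple(ai + t * (bi - ai) for ai, bi in zip(a, b))

# Two points on one edge, at very different depths from the camera.
p0 = (1.0, 0.0, 1.0)
p1 = (1.0, 0.0, 5.0)

mid_3d = lerp(p0, p1, 0.5)                        # true surface midpoint: (1, 0, 3)
uv_true = project(mid_3d)                         # correct projected uv: (1/3, 0)
uv_lerped = lerp(project(p0), project(p1), 0.5)   # per-point interpolation: (0.6, 0)

print(uv_true, uv_lerped)
```

The linearly interpolated uv lands well away from where the projection actually falls, and the error grows with the depth range across the edge — which is exactly why dicing (shrinking each edge's depth range) hides the artifact.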