2.5D Matte Painting (AKA Camera Projection Mapping)

Member
6 posts
Joined: Sept. 2021
Offline
Dear Houdini experts and fellow users,

I'm currently moving to Houdini for background work in the context of "traditional" hand-drawn animation, and one mission-critical procedure that remains to be worked out is so-called camera projection mapping, i.e., texture mapping using the projection defined by a camera (typically not the taking camera) -- essentially turning a camera into a slide projector. Our usual workflow is something like this:
  1. Create rough 3D "layout" scene (based on storyboard, 2D layout, or previz)
  2. Render out guide frames corresponding to extremes and/or intermediate camera positions (depending on range of camera motion required for the shot and other particulars). Each guide frame will have a corresponding static projector-camera for subsequent projection of our matte painting back into the scene.
  3. Create initial matte painting at best position and then project this painting onto guide geometry from corresponding projector
  4. Refine geometry as necessary to show off the "shape" of the painted background's subject, to make it seem like more than a flat painting. This might be only the most rudimentary of models, but with bumps, indentations, or overhangs in just the right locations to sell the illusion. Other times, the geometry might be a more detailed 3D representation of what's in the painting. The important thing is that the models get textured (and lit!) by the painting, from the POV of our projectors, so that only surfaces visible to the viewer are fully realized.
  5. Create secondary paintings from other camera positions to cover areas that might be hidden in initial painting, but revealed during the move
  6. Refine, refine, refine...

You can see an example of our technique here (in Japanese, sorry)
https://youtu.be/P9oRNa8z9Fs?t=491 [youtu.be]
From around 8m11s. We did many hundreds of shots like this on that particular show, to great effect IMHO. There are also some examples of this "2.5D" method in an earlier project I was involved in:
https://streamable.com/k9v6o [streamable.com]

Anyways, in Houdini, creating a UV Texture with type "Perspective From Camera" is a fine way to start "camera projecting" matte paintings onto our guide geometry. The workflow around this tool is wonderful, as far as it goes. But the resulting texture mapping is afflicted by warping distortion in areas of low mesh density, or where the incoming primary ray's incident angle deviates from head-on. The only way to surmount this is to subdivide the mesh, but this turns out to be an exercise in diminishing returns...

In olden times we'd have remedied this by resorting to an "implicit texture" (I guess that's Softimage/XSI nomenclature) to force the renderer to derive the texture projection at render time, in a shader, per sample, based on an ideal camera projection, rather than relying on per-vertex UV data interpolated across the polygon (a so-called "explicit" texture mapping). The problem with the explicit method, of course, is that the UV lookup gets less accurate in the interior of a polygon, as a function of the UV interpolation.
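For reference, the same explicit mapping can also be baked in a Point Wrangle; a minimal sketch (the projector path /obj/projector_cam is a placeholder, and the geometry's object transform is assumed to be identity):

    // Point Wrangle: bake per-point UVs through a projector camera.
    // This is still an "explicit" mapping -- the UVs get interpolated
    // linearly across each polygon, so low mesh density still warps.
    vector ndc = toNDC("/obj/projector_cam", @P); // x/y run 0..1 across the projector's view
    v@uv = set(ndc.x, ndc.y, 0);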

I've found some posts and wonderful tutorials regarding use of the To/From NDC operators to get in and out of camera space. In the context of a Point VOP, To/From NDC even allows us to specify which camera (as does the UV Texture tool). But the in-shader variants of the To/From NDC operators always assume that the camera in question is the taking camera and do not allow an arbitrary camera to be specified (to be fair, that would usually be a valid assumption).

Using a light as a slide projector seems to be one workaround, albeit a relatively cumbersome one, since it requires converting camera lens parameters (focal length and x/y window offset) to analogous light parameters, and managing the lights. And one of the key benefits of using actual cameras is that it is very easy to stop and render an additional frame at any point along the move, copy cameras to the new positions, tweak, re-render, and retain access to all the ancillary features particular to cameras and texture mapping; with lights, not so much.
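For what it's worth, the focal-length part of that conversion is just the standard pinhole relation; a VEX sketch (placeholder camera path; square pixels and zero window offset assumed):

    // Horizontal FOV of a camera, e.g. for matching a spot light's cone angle.
    float focal    = ch("/obj/projector_cam/focal");    // focal length (mm)
    float aperture = ch("/obj/projector_cam/aperture"); // horizontal aperture (mm)
    float fov_deg  = degrees(2.0 * atan(0.5 * aperture / focal));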

I've looked around a fair bit, here and in a couple of other posts, and have failed to find any good out-of-the-box solutions. Given the plethora of staggeringly complex tools on offer in Houdini, it's hard for me to accept that there isn't one hidden somewhere... Anything?

Please forgive the newbie nature of this post. Any helpful replies very much appreciated!

m
Edited by tekkonkinkreet - Oct. 23, 2021 04:23:52
Member
6 posts
Joined: Sept. 2021
Offline
Didn't imagine this would be such a sticky issue, but it has not yet yielded to my attempts and investigation...
Member
679 posts
Joined: Feb. 2017
Offline
Hey tekkonkinkreet,

I get your problem, and the only other thing that comes to mind is Redshift's camera projection at the shader level. You can specify a camera there other than the render cam. As I only use Redshift, I don't know whether there's an equivalent for Mantra.

Cheers
CYTE
Edited by CYTE - Nov. 2, 2021 08:04:40

Attachments:
Screenshot 2021-11-02 104746.jpg (273.7 KB)

Member
238 posts
Joined: Nov. 2013
Offline
To be clear: you want to render, through the main cam, a projection from an arbitrary camera, right?
What I don't understand is why the Perspective From Camera UV projection does not work for you.
You could set up different UV sets with as many arbitrary cameras as you like and mix those inside a shader to counter the distortion.
Edited by sekow - Nov. 2, 2021 06:50:46
http://www.sekowfx.com [www.sekowfx.com]
Member
679 posts
Joined: Feb. 2017
Offline
Hey sekow,

It's because the UV-from-camera projection creates jittery results on low-poly geo. You can see it in my screenshot.

Cheers
CYTE
Member
238 posts
Joined: Nov. 2013
Offline
OK, but that's a very low-poly n-gon; slightly more faces should be enough.
Edited by sekow - Nov. 2, 2021 07:44:05

Attachments:
camprj_remesh.png (271.9 KB)

http://www.sekowfx.com [www.sekowfx.com]
Member
679 posts
Joined: Feb. 2017
Offline
sekow
OK, but that's a very low-poly n-gon; slightly more faces should be enough.
Yup, but for his workflow he needs it as low-poly as possible.

tekkonkinkreet
The only way to surmount this is to subdivide the mesh, but this turns out to be an exercise in diminishing returns...

On the other hand, it's super easy to make that procedural in Houdini...
Edited by CYTE - Nov. 2, 2021 08:05:58
Member
1621 posts
Joined: March 2009
Offline
I would recommend doing this in a shader rather than subdividing the geometry further to get rid of the UV jiggle.
This used to be complicated, but is now relatively effortless.

- You grab P and convert that to normalized device coordinates looking through a projection camera (this was difficult in the past, because NDC would always be the render cam, but in the modern age you can select another one)

- feed that into your uv coordinates (NDC goes from 0 to 1) to drive your texture.

- profit

An example scene is attached (note: you need to render it to see it).

Attachments:
projection_shader.hip (469.7 KB)

Martin Winkler
money man at Alarmstart Germany
Member
238 posts
Joined: Nov. 2013
Offline
Weird, I tried this too and only had the toNDC VOP without a camera input available in the MAT context.
http://www.sekowfx.com [www.sekowfx.com]
Member
7762 posts
Joined: Sept. 2011
Offline
sekow
Weird, I tried this too and only had the toNDC VOP without a camera input available in the MAT context.

Yeah, I don't think Mantra can do projection from another camera.

I've always had to pass a projection matrix to the shader. This can be done with a detail attribute, and a bind in the shader.
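A minimal sketch of the SOP side (a Detail Wrangle; the /obj/projcam path and attribute names are placeholders, and here the world-to-camera transform plus lens parms stand in for a full projection matrix):

    // Detail Wrangle: stash the projector's world-to-camera matrix and
    // lens parameters as detail attributes for the shader to bind.
    matrix c2w = optransform("/obj/projcam");  // camera-to-world transform
    4@proj_w2c = invert(c2w);                  // world-to-camera
    f@proj_zoom   = ch("/obj/projcam/focal") / ch("/obj/projcam/aperture");
    f@proj_aspect = ch("/obj/projcam/resx")  / ch("/obj/projcam/resy");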
Member
1621 posts
Joined: March 2009
Offline
jsmack
Yeah, I don't think Mantra can do projection from another camera.

A look at the provided file may change your mind
(Both of you may not have Kim Davidson's portrait as a desktop background, and thus no access to the full node catalogue)
Martin Winkler
money man at Alarmstart Germany
Member
7762 posts
Joined: Sept. 2011
Offline
protozoan
jsmack
Yeah, I don't think Mantra can do projection from another camera.

A look at the provided file may change your mind
(Both of you may not have Kim Davidsons portrait as a desktop background and thus not access to the full node catalogue)

Your provided file doesn't work. The tondcgeo node is only for use in SOPs. It doesn't appear to raise an error, though; instead it simply passes through the NDC of the current space.
Member
6 posts
Joined: Sept. 2021
Offline
Many thanks to all who responded so generously with such illuminating advice. The topic seems to have been neglected, or is just plain obscure -- somewhat surprising, given the utility of the technique.

Just to clarify -- the reason I want to avoid subdividing meshes to overcome the inherent inaccuracy of vertex-level UV-coordinate interpolation is that I would have to subdivide down to micro-polygons to truly eliminate the jitter/warp problem. It's a valid workaround, yes -- indeed, it is what I am doing now in my early experiments -- but it imposes a kludgey workflow to get around a problem that has an elegant "ideal" solution. Presumably, it is already possible to do perfect (implicit) projections at render time for cylindrical, spherical, and planar projections and the like; it should therefore be possible for camera projections as well (after all, they are just a special case of planar projection).

Since it appears that Redshift offers a shader implementation of (arbitrary) camera projection, does anyone know about other Houdini-compatible renderers? Is there perhaps such a feature in Karma?

Thanks again. This is really very helpful!

m
Member
1621 posts
Joined: March 2009
Offline
jsmack
Your provided file doesn't work.

Yeah man, that was a dud. I only glanced over the result and misinterpreted it.
I also distinctly remember using such a tool on a production a couple of years back; in retrospect, that may have been an in-house tool...

Anyway, I tried it the manual way: transform P using the inverted camera matrix, then apply a projection matrix, and use the result of THAT to drive the UVs.
I think I got the first part right, but bungled the projection matrix, and it was too late last night to go on :/
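Roughly what I was going for: once P is in the projector's camera space, it's just a perspective divide. An untested shader-side sketch (Mantra VEX), reading detail attributes like the ones jsmack described via Bind VOPs, assuming square pixels and zero window offset:

    // Shader-side sketch: project the shaded point through the projector.
    // proj_w2c, proj_zoom and proj_aspect arrive as bound parameters.
    vector pw = ptransform("space:current", "space:world", P); // shading space -> world
    vector pc = pw * proj_w2c;                                  // world -> projector space
    vector uv = set(0.5 + proj_zoom * pc.x / -pc.z,             // perspective divide
                    0.5 + proj_zoom * proj_aspect * pc.y / -pc.z,
                    0.0);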
Martin Winkler
money man at Alarmstart Germany
Member
238 posts
Joined: Nov. 2013
Offline
Still don't understand why not just use a few more polygons to begin with. Unfortunately, that showcase above is not available.

We are sometimes a bit over-obsessive with clean/ideal solutions when the visual end product doesn't show that extra effort.
I mean, how often do we go the extra mile (or the overnight session) to get the last flicker out, when in the end it gets obscured by additional layers, shadows, motion blur, and defocus?

That said, I also cannot explain why the jitter happens in the first place, as this should be a linear interpolation between those vertices.
http://www.sekowfx.com [www.sekowfx.com]
Member
6 posts
Joined: Sept. 2021
Offline
sekow
Still don't understand why not just use a few more polygons to begin with. Unfortunately, that showcase above is not available.

We are sometimes a bit over-obsessive with clean/ideal solutions when the visual end product doesn't show that extra effort.

I'm always happy to go quick/dirty if the results hold up and don't cause too many glitches down the line!

Part of the issue is that, in my case, the geometry we are projecting onto is almost always created through multiple iterations. Successive approximation, I guess, would be the best way to describe the process. My knowledge of Houdini is as yet very limited, so I cannot say for sure how much of a hindrance or benefit Houdini's particular workflows will be here. I have my suspicions, based on experience with other packages, but I'm still getting the lay of the land...

As I attempted to explain before, we typically start with a 3D layout and render images of this rough "guide" geometry, which our background team then uses as the basis for a painting (usually multiple paintings). Next we texture-map our first-draft painting onto said geometry using the camera projection, then modify the geometry, usually adding detail, so that the model conforms better to the painting. Then we repeat the entire process from other camera angles, to cover areas of the model exposed by the change in position/angle of the taking camera at other points in its move, or to add detail to specific surfaces. And so on.

So, when I store multiple sets of UV coordinates (one for each "projector") on each vertex of my model, it stands to reason that the more finely subdivided my geometry is, the more cumbersome it becomes to update all the UVs to reflect changes in geometry, and to add more projectors (and hence more layers of UV coordinates).

Every time I add a vertex (either by globally subdividing the entire surface or through localized "manual" changes), does Houdini go back, re-project my camera mapping, and update all the UVs for me? Or does it interpolate new vertex UVs from the UVs already present on pre-existing vertices? If I move my camera, is my projection updated and the UVs recalculated auto-magically, or do I have to explicitly "bake" new UVs by reapplying the projection? When I move in close to my model, do I have to subdivide even more, to compensate for more noticeable warping as each polygon covers more screen space? Obviously, I don't know the answers yet, but I am testing the various cases. I'm pretty certain there's a way to automate all the subdivision and updating in Houdini, but all of this introduces a significant degree of uncertainty to what might otherwise be a fairly straightforward process (if the camera projection were calculated in the shader at each sample, rather than extrapolated from per-vertex UVs).

I guess this is a more esoteric issue than I imagined. But it does somehow seem like an area where Houdini would excel. Still wondering if Karma might feature a built-in shader for doing projection mapping from an arbitrary camera.

Again, thanks so much for the thoughtful ideas and advice on this topic!

m
Edited by tekkonkinkreet - Nov. 3, 2021 23:02:27
Member
7762 posts
Joined: Sept. 2011
Offline
tekkonkinkreet
I guess this is a more esoteric issue than I imagined. But it does somehow seem like an area where Houdini would excel. Still wondering if Karma might feature a built-in shader for doing projection mapping from an arbitrary camera.

Many renderers support arbitrary projection mapping with specified cameras; Mantra just isn't one of them. The whole notion of storing projected UVs on vertices is incredibly inefficient and error-prone, for all the reasons you describe. What is most accurate, efficient, and reliable is storing the projection matrix, which is what something like Maya will do when creating a projection. Houdini just makes storing a projection matrix overly obfuscated. There's a VEX function 'perspective' which returns a perspective matrix given camera parms, but unfortunately it doesn't take a camera.
Often you can get away with projecting onto verts, but models that are low-poly, or that have stretched triangles or other awkward topologies, will show wiggly wobbles no matter how much you chop them up. Also, with a subdivision-surface model, you don't know where the surface even is until it's rendered.
Edited by jsmack - Nov. 3, 2021 15:57:28
Member
6 posts
Joined: Sept. 2021
Offline
Enormously helpful.
Member
238 posts
Joined: Nov. 2013
Offline
I'm starting to understand (finally...) and yeah, that's a tough one.
I've managed to get this to work in Arnold with the camera_projection shader.
But only in the ROP context and not via LOPs

So I would think it should be possible to model against a SOP-level projection. That gives you a rough approximation; for the actual final render output, use the shader version.

Unfortunately this exists in neither Mantra nor Karma at the moment, but please submit an RFE to SideFX, as this should be in the toolbox.
Edited by sekow - Nov. 4, 2021 04:41:34
http://www.sekowfx.com [www.sekowfx.com]
Member
7762 posts
Joined: Sept. 2011
Offline
sekow
But only in the ROP context and not via LOPs

Linking shaders to cameras in USD was still in the discussion phase and not yet implemented, last I heard.