Project image to UV space texture via a camera?

raincole
Member, 684 posts, joined Aug. 2019
In Houdini, how do I project an image back onto a 3D model? Assume the mesh is already UV-unwrapped and I have a camera set up with the exact matching angle/FoV.
Edited by raincole - Feb. 29, 2024 02:20:59
CYTE
Member, 779 posts, joined Feb. 2017
In the UV Texture SOP you can choose "Perspective from camera" as a texture type.

Cheers
CYTE
Edited by CYTE - Feb. 29, 2024 02:53:34
raincole
Member, 684 posts, joined Aug. 2019
CYTE
In the UV Texture SOP you can choose "Perspective from camera" as a texture type.

Cheers
CYTE

Sorry, I didn't make my question clear.

The mesh has UV already. I'd like to project an image onto it, then bake it to a texture file according to its existing UV. Basically the opposite of rendering.

The UV Texture SOP generates a new set of UV coordinates, so I'm not sure how helpful it is in this case.

I suppose I need COPs? But I'm not familiar enough with COPs to figure out the whole thing myself.
Edited by raincole - Feb. 29, 2024 11:38:10
Member, 8173 posts, joined Sept. 2011
raincole
The mesh has UV already. I'd like to project an image onto it, then bake it to a texture file according to its existing UV. Basically the opposite of rendering.

I usually use COPs for this, but you can use mantra baking too.

Use a vopcop2generator with a Snippet VOP and write a shader that goes from the target uv space back to world space: uvdist finds the primitive/uv intersection for each texel, and primuv reads the value of P at that intersection point. If the image already has the camera matrix baked into it, such as an EXR produced from a 3D render, that camera matrix can be used to transform P into the uv space of the camera. That camera uv space then feeds a Texture VOP to read the color, and the resulting color is the output of the generator.

If you have a camera object, the toNDC function can maybe be used instead, but I'm not sure whether it works in a COP context. If not, there is a perspective function for building a camera matrix from the parameters of the camera.
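The world-to-camera-uv step described above (what toNDC or a hand-built perspective matrix gives you) can be sketched in plain Python. The function and parameter names below are illustrative assumptions following Houdini's camera conventions (focal length and aperture in millimeters, camera looking down its local -Z axis), not actual Houdini API calls:

```python
# Illustrative sketch (not Houdini code): project a world-space point into
# the camera's uv/NDC space, the same mapping toNDC() or a perspective
# matrix provides.
def world_to_camera_uv(P, cam_pos, cam_axes, focal=50.0, aperture=41.4214, aspect=1.0):
    """Return (u, v) with the frame spanning [0, 1], or None if P is behind.

    cam_axes holds the camera's local (x, y, z) axes as world-space unit
    vectors; the camera looks down its -z axis.
    """
    # World -> camera space: project the offset from the camera position
    # onto each camera axis (the inverse of a rigid camera transform).
    rel = [p - c for p, c in zip(P, cam_pos)]
    x, y, z = (sum(a * r for a, r in zip(axis, rel)) for axis in cam_axes)
    if z >= 0.0:                      # point is behind the camera plane
        return None
    zoom = focal / aperture           # focal/aperture sets the field of view
    u = 0.5 + zoom * x / -z           # perspective divide, recentered to 0.5
    v = 0.5 + zoom * aspect * y / -z
    return (u, v)

IDENTITY = ((1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0))

# A camera at the origin looking down -Z sees (0, 0, -10) dead center:
print(world_to_camera_uv((0.0, 0.0, -10.0), (0.0, 0.0, 0.0), IDENTITY))  # (0.5, 0.5)
```

In a real COP snippet the same math runs per texel, with P coming from the primuv lookup and the resulting (u, v) feeding the texture read.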

If you have Labs installed, there is the Labs Maps Baker, which uses some of these techniques internally. You can use it as a starting point to take apart and see how it computes its values.

Mantra baking should be a bit simpler since it does the uv unwrapping for you. You just have to get the camera coordinates into the texture.

Karma is another option; it even supports getting the coordinates from a camera via coordsys, unlike Mantra, where you would have to reconstruct the camera matrix yourself.

If your mesh is sufficiently dense with points, you could also try what CYTE suggested and just add a second uv set using the UV Texture SOP. It will always have a bit of distortion from vertex interpolation, though, unless the geometry is perfectly flat and parallel to the camera plane.
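That interpolation distortion can be shown numerically with a toy pinhole projection (plain Python, illustrative only, using a made-up camera at the origin with zoom factor 1): uvs projected at two endpoints and then linearly interpolated disagree with the true projection of the midpoint whenever the edge is tilted in depth relative to the camera plane.

```python
# Toy pinhole camera at the origin looking down -Z, zoom factor 1
# (an illustrative sketch, not Houdini code).
def project(P):
    x, y, z = P
    return (0.5 + x / -z, 0.5 + y / -z)

# An edge tilted in depth: one end twice as far from the camera as the other.
a = (-1.0, 0.0, -2.0)
b = (1.0, 0.0, -4.0)
mid = tuple((pa + pb) / 2 for pa, pb in zip(a, b))            # (0.0, 0.0, -3.0)

uv_a, uv_b = project(a), project(b)
uv_lerp = tuple((ua + ub) / 2 for ua, ub in zip(uv_a, uv_b))  # vertex interpolation
uv_true = project(mid)                                        # true projection
print(uv_lerp[0], uv_true[0])  # 0.375 0.5 -- the interpolated uv is off

# With an edge parallel to the camera plane (constant depth), they agree:
c, d = (-1.0, 0.0, -3.0), (1.0, 0.0, -3.0)
uv_mid_lerp = tuple((u1 + u2) / 2 for u1, u2 in zip(project(c), project(d)))
print(abs(uv_mid_lerp[0] - 0.5) < 1e-9)  # True
```

Denser meshes shrink each edge, so the per-edge error shrinks with it; that's why the UV Texture approach only works well on sufficiently subdivided geometry.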