Houdini Fisheye lens for Hemispherical Projection

User Avatar
Member
4 posts
Joined: July 2008
Offline
Hey everyone,

So, I'm working on a planetarium dome projection project and I'd really like to use Houdini. There is a Mental Ray lens shader that can emulate a 180-degree fisheye lens, and I'm wondering if there is a way to do the same in Houdini. There is a lens_curvature parameter that can be applied to cameras, but I've not been able to get the proper effect. I suspect it's the relationship between the focal length, aperture, and lens_curvature that will yield the proper result, but I'm unsure how to set it all up. Have any of you worked on a similar project, or do you have an idea about how to approach this? Any examples or tips would be greatly appreciated. Thanks!
User Avatar
Member
224 posts
Joined: June 2009
Offline
I would also be interested in this - any suggestions?
Patrick
User Avatar
Member
1390 posts
Joined: July 2005
Offline
Look for RenderMan shaders like the one below; they are usually easy to port to VEX. I did this years back, but the code was never actually tested, so it most probably doesn't work: http://www.forum3d.pl/f3dbb/index.php?md=post&nw=1&pid=123563#123563 [forum3d.pl]


hth,
skk.


http://terpsichore.stsci.edu/~summers/viz/fisheye/ [terpsichore.stsci.edu]
User Avatar
Staff
121 posts
Joined: Oct. 2010
Offline
Lens curvature can have a value of positive or negative, and simply distorts the film plane in a kind of parabolic shape (leaving the edge unchanged).

It's really only there for backward compatibility though, and it would be better to stick a grid in front of the camera with a surface shader to perform the distortion.
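
For reference, the distortion such a shader has to implement for a 180-degree dome is the angular (equidistant) fisheye mapping. Here is a minimal Python sketch of that mapping, assuming the camera looks down -Z as in Houdini (the function name is just for illustration; the same math ports straight to VEX):

import math

def fisheye_ray(x, y, fov_deg=180.0):
    # x, y are normalized screen coordinates in [-1, 1].
    r = math.sqrt(x * x + y * y)              # radial distance from the image centre
    if r > 1.0:
        return None                           # outside the fisheye image circle
    theta = r * math.radians(fov_deg) * 0.5   # angle from the view axis, linear in r
    phi = math.atan2(y, x)                    # azimuth around the view axis
    # Camera looks down -Z; +X is right, +Y is up.
    return (math.sin(theta) * math.cos(phi),
            math.sin(theta) * math.sin(phi),
            -math.cos(theta))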
Side Effects Technical Support
User Avatar
Member
12427 posts
Joined: July 2005
Offline
Unfortunately you are forced to raytrace if you're doing that trick, which in many cases (e.g. volume rendering) is untenable.

The common way to do this in production is to render oversized images (say 10% bigger), so you're rendering more image area outside the projection, and then run a warp & crop in compositing software. The warp is based on a vector field derived from mapping the lens by shooting checkerboards. The oversized images allow for the pincushioning effect, where new image area from outside the projection gets pulled into the frame.

The big advantage of this is that it allows a multitude of renderers to be used regardless of their varying lens-warping feature sets. The composite is performed entirely on the oversized images and the final step is the lens warp & crop. This preserves maximum fidelity and efficiency by not warping and filtering many, many layers.
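
To make the warp-and-crop step concrete, here is a rough NumPy/SciPy sketch of applying such a vector field to an oversized render. The assumption that the field stores absolute source pixel coordinates (an ST-map style remap) is mine, and real comp packages do this with better filtering than this sketch:

import numpy as np
from scipy.ndimage import map_coordinates

def warp_and_crop(oversized, src_xy):
    # oversized: (H, W, C) float image rendered with overscan.
    # src_xy: (outH, outW, 2) array giving, for every output pixel, the source
    # pixel (x, y) to sample in the oversized image -- the vector field measured
    # by shooting checkerboards through the real lens. The crop is implicit:
    # the output is the final frame size, indexed into the larger render.
    out = np.empty(src_xy.shape[:2] + (oversized.shape[2],), dtype=oversized.dtype)
    coords = [src_xy[..., 1], src_xy[..., 0]]   # map_coordinates wants (row, col)
    for c in range(oversized.shape[2]):
        out[..., c] = map_coordinates(oversized[..., c], coords, order=1, mode='nearest')
    return out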

Jason

PS. Many studios render oversized images anyway so that some camera shake can be applied in composite.

PPS. Oversized images mean an aperture and resolution change, not a focal length change. In effect, if you crop out the centre region of an oversized image you should end up with exactly the same resolution and projection as your original 2K.
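
As a quick sketch of the PPS arithmetic, assuming a hypothetical 2048x1556 plate, Houdini's default 41.4214 aperture, and 10% overscan (the numbers are only illustrative):

def overscan(res_x, res_y, aperture, pad=0.10):
    # Resolution and (horizontal) aperture scale by the same factor, so each pixel
    # still sees the same patch of the film plane; the focal length is untouched.
    s = 1.0 + pad
    new_res_x = int(round(res_x * s))
    new_res_y = int(round(res_y * s))
    new_aperture = aperture * new_res_x / float(res_x)
    return new_res_x, new_res_y, new_aperture

print(overscan(2048, 1556, 41.4214))   # -> (2253, 1712, ~45.57)
# Cropping the centre 2048x1556 region back out of that render recovers the original
# framing (in practice you'd round the padded resolution to even numbers so the
# centre crop lands on whole pixels).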
Jason Iversen, Technology Supervisor & FX Pipeline/R+D Lead @ Weta FX
also, http://www.odforce.net [www.odforce.net]
User Avatar
Staff
2540 posts
Joined: July 2005
Offline
Or brute-force it: build a physical lens and have it refract the rays. See also one of my answers on the 3D Buzz forum.
There's at least one school like the old school!
User Avatar
Member
321 posts
Joined: July 2005
Offline
Jenny
Lens curvature can have a value of positive or negative, and simply distorts the film plane in a kind of parabolic shape (leaving the edge unchanged).

It's really only there for backward compatibility though, and it would be better to stick a grid in front of the camera with a surface shader to perform the distortion.
I was wondering what the heck it was doing! That visual explains it nicely, thanks.
Antoine Durr
Floq FX
antoine@floqfx.com
User Avatar
Member
4 posts
Joined: July 2008
Offline
(copied post from alternate forum)

First off, thanks for the replies! I ended up approaching this from a different angle. Unfortunately, it requires Nuke to work. Basically, I wound up just rendering out a six-pack through Houdini's built-in cube-map generator, then plugging each of the resulting image planes into Nuke's SphericalTransform node and transforming from cube map to 180-degree fisheye.

We did some tests comparing the Mental Ray shader's distortion to what Nuke was generating and found it to be a 100% match (although we had to use mirror nodes to get the coordinate spaces to match; that step could also be handled during the icp processing). The beauty of this method is that it requires no raytracing. The drawbacks are having 6x the number of images per frame (although each cube-map image can be 1/3 the final resolution, which may help with memory in certain situations) and needing Nuke. Nevertheless, it works well if you're lucky enough to have access to both software packages.

One thing I'd like to look into over the summer is writing a Python script to assemble and distort the six-pack output automagically, without any help from Nuke. I'll post here and over on Odforce about my progress and share the code for feedback and testing as I go. Thanks again.
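
As a possible starting point for that script, here is a rough NumPy sketch of the cube-map-to-fisheye resampling step. The face names, their orientations, and the nearest-neighbour sampling are assumptions for illustration; Houdini's actual six-pack layout would need the same kind of mirror/flip fix-ups mentioned above, and file I/O is left out:

import numpy as np

# Assumed face frames as (forward, right, up) axes -- illustration only.
FACES = {
    'front':  (( 0,  0, -1), ( 1, 0,  0), (0, 1,  0)),
    'back':   (( 0,  0,  1), (-1, 0,  0), (0, 1,  0)),
    'left':   ((-1,  0,  0), ( 0, 0, -1), (0, 1,  0)),
    'right':  (( 1,  0,  0), ( 0, 0,  1), (0, 1,  0)),
    'top':    (( 0,  1,  0), ( 1, 0,  0), (0, 0,  1)),
    'bottom': (( 0, -1,  0), ( 1, 0,  0), (0, 0, -1)),
}

def cubemap_to_fisheye(faces, res):
    # faces: dict name -> (H, W, 3) float arrays (the six-pack).
    # Returns a res x res 180-degree angular fisheye image, camera looking down -Z.
    v, u = np.mgrid[0:res, 0:res].astype(np.float64)
    x = 2.0 * u / (res - 1) - 1.0              # image-plane coords, y up
    y = 1.0 - 2.0 * v / (res - 1)
    r = np.hypot(x, y)
    inside = r <= 1.0                          # the fisheye image circle
    theta = r * (np.pi / 2.0)                  # equidistant: 90 degrees at the edge
    phi = np.arctan2(y, x)
    d = np.stack([np.sin(theta) * np.cos(phi),
                  np.sin(theta) * np.sin(phi),
                  -np.cos(theta)], axis=-1)    # ray direction per output pixel

    out = np.zeros((res, res, 3))
    fwd = np.stack([np.asarray(f, float) for f, _, _ in FACES.values()])
    best = np.argmax(d @ fwd.T, axis=-1)       # face whose axis best matches the ray
    for i, (name, (f, rgt, up)) in enumerate(FACES.items()):
        sel = inside & (best == i)
        if not sel.any():
            continue
        df = d[sel] @ np.asarray(f, float)
        fu = (d[sel] @ np.asarray(rgt, float)) / df   # [-1, 1] across the face
        fv = (d[sel] @ np.asarray(up, float)) / df
        h, w = faces[name].shape[:2]
        px = np.clip(np.rint((fu + 1) * 0.5 * (w - 1)), 0, w - 1).astype(int)
        py = np.clip(np.rint((1 - (fv + 1) * 0.5) * (h - 1)), 0, h - 1).astype(int)
        out[sel] = faces[name][py, px]         # nearest-neighbour sample
    return out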

Example files are available for download here:
http://www.peterstratton.com/DLsOnTheDL/cubemap2fisheye.zip
User Avatar
Member
61 posts
Joined: Dec. 2005
Offline
The Nuke option sounds pretty good.
It would be great if you could share the file so I can see the cube-map generator you're talking about.

Wondering if anyone has tried using Navegar by Dome3D:
http://www.dome3d.com/products/navegar-software [dome3d.com]
It's a plugin for After Effects.

Also, I ran across this QuickStitcher:
http://pineappleware.com/sub/stitcher.html [pineappleware.com]
Not sure how recent it is.

~Shawn
Touch Designer User