stereoptics.. has anyone attempted it?

Member · 412 posts · Joined: July 2005
So I've been interested in stereo optics for some time now, and recently I've been doing some research on possibly doing it on my own in 3D.

From what I've found, the rigs they use are two small cameras mounted side by side, with the center of each lens separated by roughly the distance between our pupils (I think I read 8 cm somewhere, but don't hold me to that). Both cameras are angled slightly inward, converging at a specific distance away (much like our own eyes operate). And of course, depending on the showing, the image is processed for either the red/blue anaglyph method or the 45-degree polarized method.
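Just to sketch the geometry described above in plain Python (a minimal sketch, not Houdini code; the function and parameter names are my own, and the 8 cm figure is the one mentioned in the post, not a verified value):

```python
import math

def toe_in_setup(interocular, convergence_dist):
    """Given the lens separation and the distance to the convergence
    point, return each camera's sideways offset from the rig center
    and the inward rotation (in degrees) that aims it at that point."""
    half = interocular / 2.0
    angle = math.degrees(math.atan2(half, convergence_dist))
    # left camera sits at -half and rotates inward one way,
    # right camera sits at +half and rotates the opposite way
    return {"left":  (-half, +angle),
            "right": (+half, -angle)}

# 8 cm apart, converging on a point 2 m in front of the rig
rig = toe_in_setup(interocular=0.08, convergence_dist=2.0)
```

The two toe-in angles could then be typed into the rotate channels of the two cameras (or driven by an expression on the control null).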

I'm not so interested in the polarized method (even though it looks much better), mainly because you need expensive special projectors, and I didn't know if problems would occur due to interlacing if I end up putting something to tape. So I'm going with the red/blue anaglyph method to see if I can just get some test studies to work.

Basically, so far I have set up two cameras that are both being controlled by a control null. I also have a look-at null that is offset and locked at a specific distance (just a random distance at this point), and of course some dumb sample model of spheres, boxes, and circles to test on. I rendered that out, and it seems to be working camera-wise.

Now comes the compositing of the two cameras. Unfortunately I don't know COPs, so I went the After Effects route, since my background is in motion graphics. AE has a nice little 3D Glasses effect that can set up the layers to work in different modes, but I really don't know how exactly that plugin works, so I was kind of hoping to be able to do it on my own (probably through shifting of channels, I would expect). My results aren't great, but they seem to be getting there (I think).
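For what it's worth, the basic channel combination behind a red/blue anaglyph can be sketched in a few lines (plain Python over nested lists of RGB tuples; this is an assumed minimal version of what a plugin like AE's 3D Glasses effect does, not its actual implementation):

```python
def anaglyph_pixel(left_rgb, right_rgb):
    """Combine one pixel from each eye into a red/blue anaglyph pixel:
    the red channel comes from the left-eye image and the blue channel
    from the right-eye image. (A red/cyan variant would also keep the
    right eye's green channel instead of zeroing green.)"""
    lr, lg, lb = left_rgb
    rr, rg, rb = right_rgb
    return (lr, 0.0, rb)

def anaglyph(left_img, right_img):
    # images are rows of (r, g, b) tuples with values in 0..1
    return [[anaglyph_pixel(l, r) for l, r in zip(lrow, rrow)]
            for lrow, rrow in zip(left_img, right_img)]
```

So the "different modes" are mostly just different choices of which channels each eye contributes.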

So basically I was wondering if any of you have messed around with this at all and could give some insight, or if you haven't before but might be interested in figuring out a solution.

Any help is appreciated.

thanks,
dave

Oh, and by the way, if you want to see my setup, here's a file:

http://mywebpages.comcast.net/daquirus/stereoptics.hipnc
Dave Quirus
Member · 1631 posts · Joined: July 2005
Hi Dave,

Rhythm and Hues worked on a stereoscopic IMAX film, and there was a thread on the mailing list about it. I remember there were threads about rendering stereo images as well.

Please search through the mailing list archive for more information.

Cheers!
steven
Member · 412 posts · Joined: July 2005
Thanks Steven, found them.

I've done some more tests, and the two-camera setup seems to be working. But now I'm wondering if there might be a way to do this with one camera.

Basically, the way it works for the red/blue anaglyph method is that the closer an object is, the more the red and blue should converge (i.e. the image should have no red/blue offset at the 100% foreground), and the farther away the object is in space, the more the blue and red should offset from each other.
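That depth-to-offset relationship can be written down explicitly. For a parallel (non-toed-in) rig, the standard stereo-disparity relation says the horizontal offset of a point is proportional to interaxial times focal length divided by depth; shifting the images so the offset is zero at a chosen convergence distance gives exactly the behavior described above (a hedged sketch, with my own function names, assuming the image offsets are small enough to treat linearly):

```python
def parallax_px(depth, converge_dist, interaxial, focal_px):
    """Horizontal red/blue offset (in pixels) of a point at `depth`,
    for a parallel-camera rig whose images are shifted so that objects
    at `converge_dist` have zero offset. Points behind the convergence
    plane offset one way; points in front offset the other way."""
    return focal_px * interaxial * (1.0 / converge_dist - 1.0 / depth)
```

Note the offset grows toward a finite limit (`focal_px * interaxial / converge_dist`) as depth goes to infinity, rather than growing forever.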

So I was wondering if I could figure out some kind of setup where I can get Houdini to calculate the distance between the camera and an object (or even a point on that object). Once it figures that out, I thought maybe I could have it generate some kind of depth-map file that a compositor could read and offset channels based on that information (i.e. shift A amount at B distance). Or maybe I could send the distance information directly to COPs somehow and tell it to shift channels based on that.

I don't really know how to do that, though. I thought I could just take the world position of the camera and the world position of the object/point and subtract them to get the distance between them. But the problem is that I need to convert that information into a 0-1 range (like a distance of this value = 100% farthest away, or 1, and a distance of that value = 0% farthest away, or 0), so I'm getting kind of lost there. And then I would need to send that 0-1 information to a channel shifter in COPs, I suppose.
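The 0-1 remapping being described is just a linear fit between a chosen near distance and far distance, with a clamp. A minimal sketch in plain Python (function names and the `near`/`far`/`max_shift` parameters are my own illustration, not Houdini channels):

```python
import math

def normalized_depth(cam_pos, point_pos, near, far):
    """Distance from camera to point, remapped to the 0..1 range:
    0 at the `near` distance, 1 at the `far` distance, clamped outside."""
    d = math.dist(cam_pos, point_pos)      # Euclidean distance
    t = (d - near) / (far - near)
    return min(1.0, max(0.0, t))

def channel_shift_px(cam_pos, point_pos, near, far, max_shift=10.0):
    """Turn the normalized depth into a channel-shift amount in pixels."""
    return normalized_depth(cam_pos, point_pos, near, far) * max_shift
```

In an expression this is the usual `fit(d, near, far, 0, 1)` pattern; the resulting 0-1 value is what a channel-shift node would consume.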

any thoughts would be great.

thanks,
dave
Dave Quirus
Member · 7740 posts · Joined: July 2005
I don't know anything about stereo optics, but getting depth info per pixel sounds pretty easy to do.

In your Mantra ROP parameters, under the Deep Raster tab, choose Pz for your VEX variable. Now render your picture as a .pic (or .picnc if you're on Apprentice). Bring the pic into COPs using a File COP and append an Equalize COP. Now you have a depth plane that goes from 0 to 1.
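For anyone wondering what that Equalize step amounts to numerically, the idea can be sketched in plain Python (I'm assuming a simple linear min/max remap of the plane's values; the actual COP's behavior may differ in detail):

```python
def equalize(plane):
    """Linearly remap a 2D plane of values so its smallest value
    becomes 0.0 and its largest becomes 1.0."""
    flat = [v for row in plane for v in row]
    lo, hi = min(flat), max(flat)
    if hi == lo:
        # constant plane: nothing to stretch, return all zeros
        return [[0.0 for _ in row] for row in plane]
    return [[(v - lo) / (hi - lo) for v in row] for row in plane]
```

Applied to a Pz plane, this turns raw camera-space distances into exactly the 0-1 depth values asked about above.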