Newbie question - world/scene geometry deformation

Member
4 posts
Joined: July 2005
Hi all,

this is my first post on this forum so hello everybody!

I am working on a particular project right now that has some very specific 3D visualisation needs, and friends have advised me that Houdini is the tool most likely to achieve these. So I have downloaded Apprentice to explore and see if this would work.

The situation is this (warning: it gets a little technical here): I want to render a series of frames of a scene, but with the distortion caused by Einstein relativity factored in. So each point in the scene would keep (most of) its normal attributes, but would be displaced to a different position in the eye of the beholder (according to a quite straightforward formula depending purely on the angle between the centre of the view and the pixel).

I am looking for ways to implement this deformation before rendering, so that rendering the frame does not lose quality (the deformation can become quite extreme in some situations). I understand that Houdini supports some very flexible scripting and VEX work, but I am too new here to know whether what I am looking for is possible.

In summary - I'm not looking for a deformation of a single object, but a deformation of the whole scene (sorry for showing my Lightwave background here…) where the visible position of each point is already calculated and THEN needs to be modified.

Any advice - ideas, sections of the manual to read closely, etc - will be very gratefully received.

Regards,

Julian
Member
4140 posts
Joined: July 2005
What an interesting challenge! I'm still sucking back my Saturday morning coffee, so I doubt I can offer much that is tangibly useful, but to rephrase your need (to see that I understand it): you would like to project a cone from your camera, and when it hits a point, that point is translated towards or away from the centre of view based on a formula? Is all of this happening *strictly* in screen space, i.e. the x and y of the cam's view? In other words, it doesn't matter that a point lying on one pixel might be 5 units from the cam and the one next to it might be 100 - you want them translated in screen space, not 3D world space?
Like I said, I'm still waking up.
So apart from resolution issues, assuming you're always rendering from the observer's POV, it's really an image distortion…it's just that natty pixel sampling problem that keeps you from doing this in comp.

I'm going to go percolate on this a bit…heh…by the way, if you haven't already, try going over to www.odforce.net and pose the question there. There are some people that hang out there that don't come here very often and they love this sort of stuff. I never seem to have the time to get there myself…

Cheers,

J.C.
John Coldrick
Member
4 posts
Joined: July 2005
John,

Yes, that's exactly what I am after - you obviously have four-star coffee there.

It's funny, but the relativity thing does indeed result in a distortion of perceived position only - an object five times as far away is distorted by exactly the same angular amount as one closer.

If the 3rd or 4th cup produces that aha! moment, please post back!

Regards,

Julian
Member
4140 posts
Joined: July 2005
Well, it's more like three-star coffee, since I haven't had any epiphanies yet. I'm not near Houdini at the moment, but it seems to me that it could be useful to fiddle with some sort of pre-render pass - calculating for each pixel the angle from the centre of view. This could be output to a special layer of your own making (see the mantra output driver). It could be stored as a vector: a direction (away from centre in screen space) and a magnitude (whatever the formula determines). This should only have to be done once, since it's a relationship between pixels strictly in screen space, not tied to any animating data.

This render could then be projected from the cam, and wherever it hits a point, displacement happens - that is, the point picks up an attribute containing that direction and magnitude and is displaced by that amount.

One thing that occurs to me is how you would deal with resolution issues. The sampling rate of screen space is relatively coarse when you project outwards hundreds or thousands of units - if it were a big displacement I could see some sampling issues coming back to haunt you, and really you're only a little better off than doing it in 2D. Some sort of sample smoothing would be in order…
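To make the pre-render pass idea concrete, here is a minimal numpy sketch that computes, for each pixel of a pinhole camera, the unit direction away from screen centre and the angle from the view axis. The function name and the FOV parametrisation are my own assumptions for illustration, not anything mantra provides:

```python
import numpy as np

def angle_map(width, height, fov_x_deg):
    """Per-pixel direction (away from screen centre) and angle from the
    view axis, for a pinhole camera with horizontal FOV fov_x_deg."""
    aspect = height / width
    # normalised screen coords: [-1, 1] horizontally, scaled by aspect
    x = np.linspace(-1.0, 1.0, width)
    y = np.linspace(-aspect, aspect, height)
    xx, yy = np.meshgrid(x, y)
    # place the image plane so that x = 1 corresponds to half the FOV
    half_fov = np.radians(fov_x_deg) / 2.0
    plane_dist = 1.0 / np.tan(half_fov)   # camera-to-plane distance
    r = np.hypot(xx, yy)                  # distance from screen centre
    angle = np.arctan2(r, plane_dist)     # angle from the view axis
    # unit direction away from screen centre (zero at the exact centre)
    with np.errstate(invalid="ignore", divide="ignore"):
        dir_x = np.where(r > 0, xx / r, 0.0)
        dir_y = np.where(r > 0, yy / r, 0.0)
    return dir_x, dir_y, angle
```

As noted above, this map only needs computing once per camera, since it depends purely on screen position, not on the animation.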

All well and good, but missing the nitty-gritty - I'd probably start by poking around in that area first. Hopefully this gets you headed in the general direction; you're going to need some essential reading on how mantra deals with screen space, etc.

All off the top of my caffeine-deprived head - but since no-one else has thrown out an idea maybe this one is useful to tear apart…

Cheers,

J.C.
John Coldrick
Member
33 posts
Joined: July 2005
I decided to have a try at it, with some success I think. I've got a hip file I'll send you, but here's basically what it does. I did it all with default CHOPs.

The philosophy is to get every point's transform in global space, then convert it to camera space. Since you pointed out that the only factor weighting the deformation is the angle to the camera, we calculate just that number (to be exact, we get the tangent of that angle).

So with the x/y coords and Pythagoras we compute the distance of every point to the camera's Z axis, divide that by Z (that is, P.z - there's a discussion on the sidefx mailing list about this same thing) and we obtain the tangent.

This tangent expresses the deviation of every point from the direction the camera is looking: a point with a tangent of 0 is exactly in the middle of the screen, and from there it grows to infinity. Here I added an Expression CHOP with a custom expression just as an example. This tangent channel can be used in a Lookup CHOP or elsewhere to get the deformation you are after (I'm curious - what equation is it?).

The result of the deformation then multiplies (in my example) every point's x/y coords, so we move the points around parallel to the camera plane, and we reuse the original P.z coord, which we don't modify. All that remains is to convert these new transforms back to world space and get them into the SOP domain through a Channel SOP.
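The same pipeline can be sketched outside CHOPs as a sanity check. Below is a minimal numpy version: world to camera space, tangent = sqrt(x² + y²) / z, weight, scale x/y, back to world. The function names, the `weight` callback, and the convention that the camera looks down -Z (as in Houdini) are assumptions of this sketch:

```python
import numpy as np

def deform_points(points, cam_to_world, weight):
    """Displace world-space points radially in camera space.

    points       : (N, 3) world-space positions
    cam_to_world : (4, 4) camera transform (camera looks down -Z)
    weight       : f(tangent) -> per-point scale for the x/y coords
    """
    world_to_cam = np.linalg.inv(cam_to_world)
    # to homogeneous coords, then into camera space
    pts_h = np.hstack([points, np.ones((len(points), 1))])
    cam = pts_h @ world_to_cam.T
    x, y, z = cam[:, 0], cam[:, 1], -cam[:, 2]  # z > 0 in front of cam
    # tangent of the angle between the view axis and each point
    tangent = np.hypot(x, y) / z
    w = weight(tangent)
    # scale x/y by the weight, keep the original depth (P.z) untouched
    cam[:, 0] = x * w
    cam[:, 1] = y * w
    # back to world space
    out = cam @ cam_to_world.T
    return out[:, :3]
```

With an identity camera, a point at (1, 0, -1) has a tangent of exactly 1 (45 degrees off-axis), so a weight function can be checked against it directly.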
Member
33 posts
Joined: July 2005
Some comments:

The value of the tangent is 1 when the deviation is 45 degrees, which is already off camera with normal camera settings. So although the function grows to huge numbers, you're mostly working with roughly a 0-1 range to use as the weight of your deformation. Take that into account.

When getting every point's transform, you need to account for the points' locations in the SOP domain (stuff you'd change with bone deformations, etc.) and then add the transforms you may apply at OBJ level. I spent about half the time developing the scene dealing with this (silly me) and figuring out why it wasn't behaving as expected…

In my scene you can see both the original geo and the deformed one. I first animated the camera, then only the geo, so you can see that when the object passes near the edge of the screen the deformation is the same, whether it's because the camera is moving or because of object/SOP points moving around.
Member
33 posts
Joined: July 2005
I posted the scene here [tinyurl.com], juzliqan. I think it's similar to what you are after. It has given me ideas for some realtime application too, hmm
Member
4 posts
Joined: July 2005
I don't believe that this really is three years later!! :shock:

I have had my eye on other balls for quite a while and only now can I return to this problem. Thanks for the ideas Miguel, that sounds like just what I am looking for here.

For your information, the distortion is calculated on the tangent of HALF the angle (that's just how the relativity maths works out), so things don't get infinite at 90 degrees!
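For the curious, that half-angle relation can be sketched with the standard special-relativistic aberration formula, tan(θ'/2) = sqrt((1 - β) / (1 + β)) · tan(θ/2), where β = v/c. The function name and parametrisation here are my own, assumed for illustration:

```python
import math

def aberrated_angle(theta_deg, beta):
    """Relativistic aberration of a light ray at angle theta (degrees)
    from the direction of motion, for an observer moving at beta = v/c.
    The half-angle form stays finite even at theta = 90 degrees."""
    half = math.radians(theta_deg) / 2.0
    factor = math.sqrt((1.0 - beta) / (1.0 + beta))
    return math.degrees(2.0 * math.atan(factor * math.tan(half)))
```

At β = 0 the angle is unchanged, and at θ = 90° the result stays finite - the point of the half-angle form noted above.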

Erm, if you still have your scene available, could you post it via tinyurl again?

Regards,

Julian