Houdini and Machine Vision

Member
132 posts
Joined: July 2007
Offline
Here's an interesting, and I think challenging, question of how to get Houdini to do something:
How could I feed back a rendered frame into a SOP network so that I could, for example, control a virtual robot based on what its ‘eyes’ (the camera) see? Without building any custom plugins, would something like this (even crudely done) be possible? The limitation may come down to the image-analysis filters available in Houdini, right?
I actually read somewhere a mention of this idea with Houdini.

Can't find much discussion of the concept anywhere though.

Thought I'd ask for help.
Thanks.

-Len
Member
1390 posts
Joined: July 2005
Offline
Well, image analysis is a different story. There is no such tool in Houdini or any other 3D package (except Massive, of course). But why do you want to render an image? Imagine that your robot makes its decisions based on what the camera sees without rendering, just by evaluating object positions from its point of view (or in NDC).
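For illustration, that no-render approach might look like this outside Houdini: a plain-Python sketch (all names invented here) of a pinhole camera fixed at cam_pos looking down -Z, Houdini's camera convention. In a real scene you would transform the point by the inverse of the camera's world matrix first.

```python
import math

def in_view(point, cam_pos, fov_deg=45.0, aspect=1.0, near=0.1, far=1000.0):
    """Return (visible, ndc_x, ndc_y) for a pinhole camera at cam_pos
    looking down -Z with no rotation (a simplification for the sketch)."""
    # point in camera space (translation only, since the camera is unrotated)
    x = point[0] - cam_pos[0]
    y = point[1] - cam_pos[1]
    z = point[2] - cam_pos[2]
    dist = -z                         # positive distance in front of the lens
    if dist < near or dist > far:     # behind camera or past the clip planes
        return (False, None, None)
    # half-width of the view frustum at this depth
    half_w = math.tan(math.radians(fov_deg) / 2.0) * dist
    half_h = half_w / aspect
    ndc_x = x / half_w                # -1..1 inside the frustum
    ndc_y = y / half_h
    visible = abs(ndc_x) <= 1.0 and abs(ndc_y) <= 1.0
    return (visible, ndc_x, ndc_y)

# a point 10 units in front of a camera at the origin, slightly off-axis
print(in_view((1.0, 0.0, -10.0), (0.0, 0.0, 0.0)))
```

The robot's "brain" would then branch on `visible` and the NDC offsets instead of analysing pixels.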

Perfectly doable in Houdini, the same as obstacle avoidance and such. Look for topics about crowd simulations in H. Quite advanced scenes can be handled much more easily than in any other software.

Last summer I worked on a crowd sim based on AI - I reached the point of one agent finding a path through a small maze. Not very practical, since you can get much more from Houdini's particles. Something like this, for example:
http://www.youtube.com/watch?v=L1eDW2sINkw [youtube.com]

Perhaps the best thing in H. is that it is so easy to reference one part of the scene into another. Rendering a scene from an agent's point of view and processing that data in VOPs or Python can easily be done, but doing some optical-flow analysis… hmm… hard topic…

cheers,
sy.
Member
132 posts
Joined: July 2007
Offline
Aha! You're quite right. Going all the way to render and dealing with a 2d image may not be the way to go.
However, say you are trying to prototype robot algorithms in Houdini.
That is, you are ultimately going to build an actual robot, which for now you've mocked up from a few cones and cylinders in Houdini.
So that “actual” robot will only have a ‘rendered’ view, if you will, of the scene in front of it, through its on-board camera ‘eye’.

I think that's the key distinction of what I'm attempting here. I don't want to just create certain behavior (like flocking or your very cool YouTube video).
I need to simulate an actual robot interacting with an actual world by using virtual stand-ins.

Make sense?
So, if that's the case…how best to start? Is it even possible?
If not in Houdini, is custom code the way to go?
Member
1390 posts
Joined: July 2005
Offline
If you want a very basic example of robotics in Python, check out: http://pyrorobotics.org/ [pyrorobotics.org]

I think it wouldn't be too difficult to write a Houdini-based simulator for common robot models via this module, or to explore the topic you're interested in by studying their code.

good luck! sy.
Member
132 posts
Joined: July 2007
Offline
Hey, that's a fantastic link! Lots of relevant info. Thanks so much Symek!
Member
401 posts
Joined:
Offline
In “The Magic of Houdini” Caleb Howard writes about a species simulation:

“In the compositing context (COPs), I attached a GL renderer (ROPs) and some image analysis networks (CHOPs) to make them understand what they saw. By these simple applications of different facets of Houdini's toolset, the creatures could see the water they craved”
Howard, C. in Cunningham, W., The Magic of Houdini, Thomson Course Technology, Boston, 2006, p. 141

So it's possible in Houdini.

Georg
this is not a science fair.
Member
832 posts
Joined: July 2005
Offline
i did something very similar to this years ago in prisms. in brief:

- the artificial eye is a poly sphere which fires rays off into the scene
- groups the hit points & del the rest
- you can then use the hit points to act as +/- attractors (did some stuff with the prisms feedback sop so hits dissipated over time)
- then drove the ‘creature’ as a particle move towards/away from food or predator
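the steps above can be sketched in plain python (a 2D toy, not PRISMS or Houdini code; every name here is made up for illustration): fire a fan of rays from the eye, intersect them with tagged spheres, and let hits on ‘food’ attract while hits on a ‘foe’ repel.

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Distance along a unit-length 2D ray to the nearest sphere hit, or None."""
    ox = center[0] - origin[0]
    oy = center[1] - origin[1]
    # project the sphere centre onto the ray
    t = ox * direction[0] + oy * direction[1]
    if t < 0.0:
        return None                                   # sphere is behind the eye
    # squared distance from the centre to the ray at closest approach
    d2 = (ox - t * direction[0])**2 + (oy - t * direction[1])**2
    if d2 > radius * radius:
        return None                                   # ray misses the sphere
    return t - math.sqrt(radius * radius - d2)

def steer(eye, heading_deg, targets, fov_deg=120.0, rays=9):
    """Fire a fan of rays; hits on 'food' attract, hits on 'foe' repel.
    Returns a scalar turn amount (positive = turn left/counter-clockwise)."""
    turn = 0.0
    for i in range(rays):
        ang = math.radians(heading_deg - fov_deg / 2.0 + fov_deg * i / (rays - 1))
        d = (math.cos(ang), math.sin(ang))
        for kind, center, radius in targets:
            t = ray_sphere_hit(eye, d, center, radius)
            if t is not None:
                offset = ang - math.radians(heading_deg)  # which side the hit is on
                weight = 1.0 / (1.0 + t)                  # nearer hits matter more
                turn += offset * weight * (1.0 if kind == "food" else -1.0)
    return turn

# food off to the left of a creature at the origin facing +X
print(steer((0.0, 0.0), 0.0, [("food", (5.0, 2.0), 1.0)]))
```

the grouping and feedback-dissipation parts would layer on top of this: keep the hit list per frame and decay the weights over time.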

i believe you could do some pretty cool stuff with the DOPs sop/pop solver. also point clouds will be able to help massively (getting neighbour velocities/directions/states). then chuck in some craig reynolds boids logic (the basic flocking rules: separation, alignment, cohesion) and you may see some interesting emergent behaviours…
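those flocking rules are easy to sketch in plain python (a toy 2D euler step, not the DOPs solver; the rule weights are made-up defaults and at least two agents are assumed):

```python
def boids_step(positions, velocities, dt=0.1,
               sep_w=1.5, ali_w=1.0, coh_w=1.0, sep_dist=1.0):
    """One Euler step of the three classic boids rules over all agents.
    positions/velocities are lists of (x, y) tuples; returns new lists."""
    n = len(positions)
    new_pos, new_vel = [], []
    for i in range(n):
        px, py = positions[i]
        vx, vy = velocities[i]
        sep = [0.0, 0.0]; ali = [0.0, 0.0]; coh = [0.0, 0.0]
        for j in range(n):
            if j == i:
                continue
            dx = positions[j][0] - px
            dy = positions[j][1] - py
            dist = (dx * dx + dy * dy) ** 0.5 or 1e-9
            if dist < sep_dist:                  # separation: push away from crowding
                sep[0] -= dx / dist; sep[1] -= dy / dist
            ali[0] += velocities[j][0]; ali[1] += velocities[j][1]
            coh[0] += dx; coh[1] += dy           # cohesion: pull toward the centre
        m = n - 1
        # alignment steers toward the neighbours' average velocity
        ax = sep_w * sep[0] + ali_w * (ali[0] / m - vx) + coh_w * coh[0] / m
        ay = sep_w * sep[1] + ali_w * (ali[1] / m - vy) + coh_w * coh[1] / m
        nvx, nvy = vx + ax * dt, vy + ay * dt
        new_vel.append((nvx, nvy))
        new_pos.append((px + nvx * dt, py + nvy * dt))
    return new_pos, new_vel
```

run a few steps with two stationary boids and cohesion pulls them toward each other; adding the ray-vision term above as a fourth acceleration gives the food/predator behaviour.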

-paul

EDIT: of course, this is not real machine vision; rather, “simulated” vision within a virtual world.
Member
132 posts
Joined: July 2007
Offline
Thanks for the ideas!
If I had the virtual world to work with I think there are a huge number of interesting games one could play with rays, etc.
Though to truly simulate the robot I'm imagining (with one or more mounted cameras) I think I'd have to force myself to work with a render ROP feeding back in.
I read that section that Caleb wrote (I do have that book) and totally missed that bit where he said he did something like this inside his procedural ‘habitat’.
Very cool.
I should give him a call and hire him!

The whole idea brings up some interesting possibilities for accurately simulating robots. Technically, for a valid test of the robot's ability to survey the environment and make the right maneuver, you would need a scene as close to the detail and complexity of the real world as possible. Then comes the challenge of rendering each frame (full on, with textures, complex geometry, lighting, etc.) to get something as photo-realistic as possible, and sending it into the robot ‘brain’ to analyze and choose its next move. If your images are good enough, it seems like you'd start to approach a valid test scenario. That could be very useful if your actual testing environment is too difficult or expensive to work in (the deck of an aircraft carrier, say). I wonder if anyone has been doing that sort of robotic simulation?… Interesting.
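As a toy stand-in for that analyze-and-move step, here is a Braitenberg-style sketch in plain Python (not Houdini API; the function name and frame format are invented for illustration): treat the rendered frame as a grid of grayscale values and steer toward the brighter half, as if a light marked the goal.

```python
def choose_turn(frame):
    """Decide a move from a rendered frame.  'frame' is a list of rows of
    grayscale pixel values (0-255).  Compare total brightness of the left
    and right halves and steer toward the brighter one; this stands in for
    the real image analysis a robot brain would run on each frame."""
    h = len(frame)
    w = len(frame[0])
    left = sum(frame[y][x] for y in range(h) for x in range(w // 2))
    right = sum(frame[y][x] for y in range(h) for x in range(w // 2, w))
    if left > right:
        return "turn_left"
    if right > left:
        return "turn_right"
    return "straight"

# a tiny 2x4 'render' with a bright blob on the right
print(choose_turn([[0, 0, 200, 255],
                   [0, 0, 180, 255]]))   # → turn_right
```

In the feedback loop being discussed, each rendered frame from the ROP would be read back (e.g. via COPs or a Python SOP), fed through something like this, and the decision used to transform the robot geometry before the next render.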
Member
12482 posts
Joined: July 2005
Offline
I'd think that Paul's “eye” is something similar to a scanning laser range finder, which is common in robotics - just see how much they were used in the DARPA challenge. Robots have the advantage, I suppose, of being able to see in ways that we're not able to.

Here is one of those laser products: http://www.acroname.com/robotics/info/articles/laser/laser.html [acroname.com]
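A toy version of such a range finder is easy to simulate: cast a fan of beams against 2D wall segments and report one distance per beam, max range on a miss. Plain Python, with all names invented for illustration:

```python
import math

def ray_segment(ox, oy, dx, dy, x1, y1, x2, y2):
    """Distance along a ray (origin o, unit dir d) to segment p1-p2, or None."""
    ex, ey = x2 - x1, y2 - y1
    denom = dx * ey - dy * ex
    if abs(denom) < 1e-12:
        return None                                   # ray parallel to the wall
    t = ((x1 - ox) * ey - (y1 - oy) * ex) / denom     # distance along the ray
    u = ((x1 - ox) * dy - (y1 - oy) * dx) / denom     # parameter along the segment
    if t >= 0.0 and 0.0 <= u <= 1.0:
        return t
    return None

def lidar_scan(ox, oy, heading_deg, walls, fov_deg=180.0, beams=5, max_range=30.0):
    """Sweep 'beams' rays across fov_deg centred on the heading; each wall is
    an (x1, y1, x2, y2) segment.  Returns one distance per beam."""
    out = []
    for i in range(beams):
        ang = math.radians(heading_deg - fov_deg / 2.0 + fov_deg * i / (beams - 1))
        dx, dy = math.cos(ang), math.sin(ang)
        best = max_range
        for wall in walls:
            t = ray_segment(ox, oy, dx, dy, *wall)
            if t is not None and t < best:
                best = t
        out.append(best)
    return out

# robot at the origin facing +X, with a wall 5 units ahead
print(lidar_scan(0.0, 0.0, 0.0, [(5.0, -10.0, 5.0, 10.0)]))
```

The centre beam reads 5.0, the 45-degree beams read about 7.07, and the side beams miss; that distance profile is exactly the kind of data those scanning lasers return.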
Jason Iversen, Technology Supervisor & FX Pipeline/R+D Lead @ Weta FX
also, http://www.odforce.net [www.odforce.net]
Member
832 posts
Joined: July 2005
Offline
yes, but i cheated a lot. ie, i interrogated the object/group hit to identify whether it was friend/foe/wall. relying on distance data alone would mean a whole new level of work. btw, there are some pretty interesting robot simulators out there. check out player/stage:

http://playerstage.sourceforge.net/ [playerstage.sourceforge.net]

…two toolkits that mimic a few robots in detail vs swarms of lo-rez robots. these toolkits “talk” to the python robot library mentioned in an earlier post. it all looks really fascinating.
Member
132 posts
Joined: July 2007
Offline
Yes! That player/stage stuff is really cool.
Makes me want a robot of my very own!
Simulating one in Houdini will be a start!
Member
7730 posts
Joined: July 2005
Offline
Isn't there a Render COP?