Why not just use Houdini's own geometry output capabilities? Make a “Geometry” ROP in the Outputs menu. You can bake out frames in almost every standard format.
Why bother writing something?
-Craig
Houdini Lounge » Exporting Houdini Meshes
- craiglhoffman
- 252 posts
- Offline
Technical Discussion » RFE: Blend SOP HudSlider naming...
- craiglhoffman
- 252 posts
- Offline
It would be wonderful if we had an option for the HudSliders created from a Blend SOP to have names that reflect the input SOP each slider corresponds to.
For instance instead of always getting these names for the slider handles:
hudsliderobj/model/blend1/blend1
hudsliderobj/model/blend1/blend2
:
:
It would be great to have the slider names come up with labels like “Frown”, “Smile”, “EyeBlink” if that is what the relative input SOP node is named. Plus it would be great to have hudsliders created only for the number of input nodes. So if we are only wiring in 4 nodes, we should only get 4 sliders instead of 16, of which the last 12 do nothing.
Plus, how do we rename sliders? I thought I knew how to do this, but couldn't figure it out… I am running 6.1.200.
Cheers,
Craig
Technical Discussion » Another small rfe: Selected nodes auto-input to Blend SOP
- craiglhoffman
- 252 posts
- Offline
Actually, I figured out how to do it based on Jason's input, though not quite the same way, since the Operator pulldown only has “Merge” and doesn't change to “Blend” when you select a Blend SOP.
If you footprint all of your desired geometries, then select “Blend” while in the ViewPort, you can select all the footprinted geometry and right click and they all get wired in! This also works for “Sequence Blend”.
Quite a time saver! It also means I don't have to give my input SOPs the consistent naming suggested earlier, since I'd rather give my modeled SOPs descriptive names like “frown”, “Eye blink”, etc.
But it's too bad there isn't a straightforward way to do it in the Network Editor.
-Craig
Technical Discussion » Another small rfe: Selected nodes auto-input to Blend SOP
- craiglhoffman
- 252 posts
- Offline
It would be great in the SOP editor if we could just group select several source nodes and then plunk down a Blend SOP and have them all be wired up as inputs to the Blend SOP in the order in which they were selected. It gets to be a little cumbersome sometimes to wire up a whole mess of input nodes to the Blend SOP one by one.
Just an idea…
-Craig
Technical Discussion » translating a combined surface + displacement shader
- craiglhoffman
- 252 posts
- Offline
Renderman now allows you to do displacement in a surface shader, and as far as I know at this time Houdini does not. (Someone please correct me if I am wrong.)
You will have to separate the displacement code out into a different shader.
-Craig
Houdini Lounge » Disney shuts down production on ...
- craiglhoffman
- 252 posts
- Offline
Seen it? I lived it.
I am one of those folks. There may be something at the Orlando Studio come January, but we aren't holding our breath. I could probably go back to the Burbank Studio (8 years there and 1.5 here in Orlando), but my family and I are trying to avoid going back to Los Angeles.
I am hoping to find a decent Houdini opportunity because I am sick of Maya and tons of proprietary stuff. And as we all know Houdini is so much more fun!
-Craig
Technical Discussion » How can I convert PAL to NTSC?
- craiglhoffman
- 252 posts
- Offline
Here is a good site about deinterlacing with a lot of pictures:
http://www.100fps.com/ [100fps.com]
It doesn't cover PAL to NTSC, though.
-Craig
Technical Discussion » How can I convert PAL to NTSC?
- craiglhoffman
- 252 posts
- Offline
Well, there is no perfect way to do this since you are taking PAL 25 frame per second video (50 interlaced half frames) and trying to convert it to NTSC 30 frame per second (60 interlaced half frames).
There are ways to try to “deinterlace” them (if they are interlaced, but most likely they are) and then do motion estimation algorithms to try to put them in the right place for the different frame rate (which is what a program called “Twixtor” does), but that is difficult and complex and expensive and may be overboard for what you are trying to do.
If you don't care about it looking perfect to the biggest critical video geek (like me) and want it to stay cheap, there are a couple options.
Outside of Houdini I would suggest getting VirtualDub (www.virtualdub.org) which is a free program to process video (or use some other video tool) to do the PAL to NTSC conversion which may be decent or crappy, depending on how the program does it.
Inside of Houdini I would suggest treating your PAL frames as if they were 24 FPS film frames and just doing the 3:2 pulldown that is used to convert film to video (perhaps after deinterlacing first in COPs if it supports that, or doing a “smart deinterlace” in VirtualDub before bringing the footage into COPs). The footage will play back very slightly slower, but it should be imperceptible. A lot of independent film-makers are shooting their films on PAL 25 FPS DV camcorders, transferring to film frame for frame, and don't feel the slowdown is noticeable.
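For anyone curious, the 3:2 pulldown cadence I'm describing can be sketched in a few lines of Python. This is just an illustration of the field pattern (the function name `pulldown_32` is made up), not anything Houdini does for you:

```python
def pulldown_32(film_frames):
    """Expand groups of four film frames (A, B, C, D) into five
    interlaced video frames using the classic 3-2-3-2 field cadence."""
    fields = []
    for i in range(0, len(film_frames), 4):
        a, b, c, d = film_frames[i:i + 4]
        # A spans 3 fields, B spans 2, C spans 3, D spans 2 -> 10 fields
        fields += [a, a, a, b, b, c, c, c, d, d]
    # pair consecutive fields back up into interlaced video frames
    return [(fields[k], fields[k + 1]) for k in range(0, len(fields), 2)]

print(pulldown_32(["A", "B", "C", "D"]))
# [('A', 'A'), ('A', 'B'), ('B', 'C'), ('C', 'C'), ('D', 'D')]
```

Note that four film frames come out as five video frames, which is exactly the 24-to-30 FPS stretch; frames two and three are the “dirty” ones that mix fields from two different film frames.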
Also, if there isn't a lot of fast motion in your video clip, you can get away with much less effort in this conversion as dumb deinterlacing algorithms in almost any video package work well when there isn't much movement. With a lot of movement the dumb deinterlacing algorithms make the image a lot blurrier and “softer”.
Check out www.vcdhelp.com for articles on dealing with PAL to NTSC conversion.
-Craig
Houdini Lounge » Does any shader in SHOP like dent ?
- craiglhoffman
- 252 posts
- Offline
Well, UV is UV and doesn't have anything to do with the position in space. A shader using UV doesn't need rest position or anything piped in- it already sticks to the surface. In your example it seems like you are converting UV to position coordinates which the shader seems to be using.
So it seems that the shader was written as a 3D procedural that is using position for its calculations rather than UV coordinates from the surface. But why would the position P that the VEX shader sees be different from the position Mantra sees? VEX Shaded mode just evaluates the shader you hook up to the object at each CV or vertex. It should be the same calculation as a Mantra render, just at a different resolution and quality. Putting the Rest VOP down is intended to keep the texture stuck to the surface, so it doesn't “swim” through it when animated. I can't think of any reason why VEX Shaded mode would be in a different space than the Mantra render.
Anyway, I am having PC problems on my Houdini PC right now so I can't test it out.
Oh well. It just seemed weird and unexpected.
-Craig
Houdini Lounge » Does any shader in SHOP like dent ?
- craiglhoffman
- 252 posts
- Offline
Funny- I was just playing with this last night! I also found that it worked best to plop down a Rest Position VOP and pipe that into the “P” input of the Fire VOP. This made the VEX shaded display result match with the mantra result. I am not sure why they didn't match without that… I guess the shader space is not necessarily the same.
Also, be sure to pipe out “Offset” as a parameter so that you can animate the fire (I just put $F/10 or something like that in the Y “Offset” channel of the SHOP) and do this for the other parameters you may want to tweak in your SHOP.
But what is really cool is that VEX Shaded mode works really well for this!! I made a NURBS grid in XY facing the camera, gave it UV texture coordinates, slapped the Fire SHOP I just made onto it, turned on VEX Shading, and got a pretty darn good representation of the fire. I animated the offset like I mentioned above and did a Flipbook of 100 frames. It looked fantastic and let me see the timing and size/scale/detail of my fire in less than a minute, rather than having to render out a test animation. It was almost like having OpenGL hardware rendering, but not quite as fast.
I feel it was good enough that, scaled down, it could be composited into a scene without rendering, although I forgot to check whether it had alpha. Some flipbooks (like Maya's) don't preserve alpha.
This is utterly fantastic!!! I wish I had this ability to do quick preview flipbooks for animated procedural shaders for every movie I worked on before now…
(By the way, this topic probably should have been presented in the Technical Forum since it is more specific and technical than General, and there is more traffic there, meaning you are more likely to get a response.)
-Craig
Technical Discussion » video camra
- craiglhoffman
- 252 posts
- Offline
Yes it would be cool in this ever increasing DV (and digitally compressed movie) world of ours to have Houdini read these streams right in, but for now you have to convert it to a series of files (like tifs) with another program. I use Media Studio Pro, but Premiere, After Effects, Vegas, Final Cut Pro, etc. can all do it.
If you want something free, I think the shareware program VirtualDub (www.virtualdub.org) or perhaps AviSynth (www.avisynth.org) can do it. I am not sure if they can write out individual uncompressed frames, but they can do so many other things I wouldn't be surprised. They both also have tons of filters to clean up (temporal noise reduction for example) and improve your movies and deinterlace them in a nice way if you want.
-Craig
Houdini Lounge » New Wildcat VP drivers at 3dlabs
- craiglhoffman
- 252 posts
- Offline
If you had to disable the Houdini displayList environment variable to be able to work with your Wildcat VP, the new drivers seem to fix that. The parameter tab update problem is still there, however.
The added plus is that the new driver also allows you to play with their OpenGL 2.0 demos! (Can't wait until I can use OpenGL 2.0 shaders in Houdini!)
Here is where you can get the driver:
http://www.3dlabs.com/support/drivers/wildcatvp_drivers.htm [3dlabs.com] (Version 3.01-0621)
By the way, this isn't their “Certified” driver.
Cheers,
Craig
Technical Discussion » bones, follow curve ik and twist attr.
- craiglhoffman
- 252 posts
- Offline
I believe that creating a Curve at the Object level with the Path tool gives you this option. The handles from the path can be twisted and make the bones twist appropriately.
I haven't tried this yet (don't have Houdini at work) and I think this was covered earlier, so please check an earlier post if this isn't correct.
-Craig
Houdini Lounge » video footage in houdini
- craiglhoffman
- 252 posts
- Offline
You need to convert it to individual frames first. Houdini doesn't support any of the compressed movie formats (like DV, AVI, Quicktime, etc.).
Then you bring it into COPs or into your background or whatever as “image.$F.tif” or whatever.
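As a rough sketch of what that $F substitution amounts to (the helper `expand_sequence` here is made up for illustration; it is not a Houdini function):

```python
def expand_sequence(pattern, start, end):
    """Replace Houdini's $F token with each frame number in turn."""
    return [pattern.replace("$F", str(f)) for f in range(start, end + 1)]

print(expand_sequence("image.$F.tif", 1, 3))
# ['image.1.tif', 'image.2.tif', 'image.3.tif']
```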
-Craig
Technical Discussion » how can I write complex expressions
- craiglhoffman
- 252 posts
- Offline
Some of those things are much better to do in CHOPs, depending on what you are trying to do. And it saves a lot of the expression writing and looking up syntax and allows you to play around and try different things, as well as see visually what your channels are doing.
-Craig
Technical Discussion » how can I write complex expressions
- craiglhoffman
- 252 posts
- Offline
You put it right in the translate X channel. Just start typing in there.
You can do logic and anything else. You can keyframe animate from frame 1 to 10, then from 21 to 30 and put your expression on the curve between frames 10 and 21 using the Graph Editor if you want.
No crazy “Expression Editor” like in some other packages. You just put it where you want it.
Quick example: Try putting sin(10*$F) in the translate X channel. Your object will move back and forth.
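That expression can also be sketched outside Houdini. Assuming degree-based trig (which is how Houdini's expression sin() behaves), tx sweeps one full back-and-forth cycle every 36 frames:

```python
import math

def tx(frame):
    # sin(10*$F): Houdini's expression-language sin() takes degrees
    return math.sin(math.radians(10 * frame))

# peaks at frame 9 (90 degrees), bottoms out at frame 27 (270 degrees)
print(tx(0), tx(9), tx(27))
```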
Does that answer your question, or did you mean something else? Also open up the Textport and type “exhelp”. It will give you a listing of all expressions available to you. For more info on a particular expression type it again with the expression name: “exhelp sin”.
-Craig
Technical Discussion » Cg fragment/vertex shaders in Houdini
- craiglhoffman
- 252 posts
- Offline
You're right of course, but the majority of stuff being generated today isn't cutting edge.
GPUs are currently getting faster at a much higher rate than CPUs, and that points to a future where a majority of the rendering being done today can happen in real time, or at least much, much faster than on CPUs. People still watch “Toy Story”, and that could probably be completely hardware rendered with today's graphics cards (once an OpenGL 2.0 standard is finalized).
We will always need software rendering, especially in the visual effects world, but I am very excited at the prospect of doing acceptable-quality renderings for visual development, pre-vis, SHOP shader development, or even final images for a kids' straight-to-video animated movie on a laptop while sitting on the beach at Bali.
I realize this won't be of as much use to most Houdini folks, since they tend to be “cutting edge” types who like to do things others can't and are focused on high-quality film output, so mainstream hardware rendering doesn't appeal as much to them. But with the new character tools and pipeline work in Version 6, Houdini could be in a perfect position for some guy doing the next “Veggie Tales” in his garage (or a loosely knit web of folks all over the world passing OTLs back and forth). Real-time rendering would be a big boon to him.
We aren't there yet, but the future looks real rosy to me (a guy who dreams of doing his own “Veggie Tales” {in concept, not in quality} cheaply and quickly someday).
-Craig
Technical Discussion » Cg fragment/vertex shaders in Houdini
- craiglhoffman
- 252 posts
- Offline
But is it so hardwired? I was under the impression that the whole point of the new hardware was programmability, so that it isn't as hardwired as it has been in the past.
I mean, things like vector math (dot products, cross products), depth-sorting algorithms, texture filtering, and other operations that are so prevalent in CG rendering should, I would think, run much faster on graphics hardware than on a CPU.
Sure, there are non-standard things that won't be a lot faster perhaps, but isn't the point to move the standard complex vector things to optimized hardware that deals much better with vector math operations than a CPU that is designed to handle everything?
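To be concrete about the “standard complex vector things” I mean, here are the two staples in plain Python (illustration only; a GPU evaluates these in parallel per vertex or fragment, not one at a time like this):

```python
def dot(a, b):
    """Dot product: the workhorse of lighting calculations."""
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    """Cross product, e.g. for building surface normals."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

i, j = (1, 0, 0), (0, 1, 0)
print(dot(i, j), cross(i, j))  # 0 (0, 0, 1)
```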
The shaders that are being written for OGL 2.0 look remarkably similar to Renderman shaders, so I think this is really the future for our industry (at least certain parts of it).
http://www.extremetech.com/article2/0,3973,1154426,00.asp [extremetech.com]
-Craig
Technical Discussion » Cg fragment/vertex shaders in Houdini
- craiglhoffman
- 252 posts
- Offline
Good point. Perhaps the math is the same whether it goes through the chip or on the CPU… That would be a good question for Nvidia and for the MI folks.
But then again, who needs render farms if the card speeds your rendering up so darn much?
-Craig
Technical Discussion » Cg fragment/vertex shaders in Houdini
- craiglhoffman
- 252 posts
- Offline
Anyone know anything about this new software renderer NVidia is developing that will use the hardware in conjunction with software to render? I believe it was called “Galileo” and that was changed to “Gelato” or something like that.
I think it is designed to take advantage of Cg in your OGL display, but allow a higher quality render through software but sped up using the graphics hardware. I think they are shooting for winning people over from Renderman.
I wonder if VMantra can start taking advantage of graphics card hardware to speed up its rendering… Anyone have any theories on this?
-Craig