The File SOP supports .dxf files directly.
Remember that in Houdini, the File SOP can load all kinds of geometry formats, identified by extension and magic data.
For example, in the File SOP's path parameter, use the file browser to find your .dxf file.
To quickly see which extensions are supported, look in the lower right of the file browser, where there is a very long string of supported extensions.
Technical Discussion » Importing DWG CAD files into Houdini?
-
- old_school
- 2538 posts
- Offline
Houdini Indie and Apprentice » Pyro / Billowy Smoke Resolution
Yes, very low resolution volumes.
After vdbcombine1, insert a VDB Resample SOP
Define Transform: Using Voxel Scale Only
Voxel Scale: 0.5
That will double the voxel resolution.
Then append a VDB Smooth SOP
Operation: Median Value (seemed to look the best here)
Iterations: 1
That gives you a smoother result for sure.
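The resolution arithmetic above can be sketched in plain Python (illustrative numbers only, treating the voxel scale as the voxel edge length):

```python
def voxel_counts(bbox_size, voxel_size):
    # Per-axis voxel counts for a volume of the given bounding-box size,
    # sampled at the given voxel edge length.
    return tuple(int(round(s / voxel_size)) for s in bbox_size)

box = (2.0, 2.0, 2.0)
before = voxel_counts(box, 1.0)  # (2, 2, 2)
after = voxel_counts(box, 0.5)   # (4, 4, 4): double per axis, 8x total voxels
print(before, after)
```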
Houdini Indie and Apprentice » render pyro problem
Tanto is right.
In the Pyro material, density volume renders into color and alpha.
The Pyro contribution is emissive and, by design, it does not generate an alpha.
Yes, use comp to convert the pyro color to alpha and use that.
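As a sketch of that comp step, here is one simple, hypothetical way to derive an alpha from the emissive pyro color, using the max channel as a luma-style matte; a real comp might use a proper luminance weighting instead:

```python
def alpha_from_color(rgb):
    # Derive an alpha from an emissive color: here simply the max channel,
    # one common choice for a luma-style matte in comp.
    return max(rgb)

pixels = [(0.0, 0.0, 0.0), (1.2, 0.6, 0.1), (0.3, 0.3, 0.3)]
alphas = [alpha_from_color(p) for p in pixels]
print(alphas)  # [0.0, 1.2, 0.3]
```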
Technical Discussion » Multiple houdini.env files possible?
The short answer is that you need to work with Houdini's configuration source launch file, extending it to support variables beyond $HSITE, say $HSITE_SITE and $HSITE_CUSTOM.
This is all done in your houdini install directory: C:\Program Files\SideEffects Software\Houdini 19.0.513
This is where all the houdini configurations start. It's where the initial configuration is set.
The main use of houdini.env by users is to extend Houdini's own search path, which starts with $HOUDINI_PATH and then cascades down from there into the various components.
https://vfxbrain.wordpress.com/2019/11/20/hudini-env-file-on-windows/ [vfxbrain.wordpress.com]
Then figure out a way to push this out and maintain it. On Linux you use shells, and all of this is a typical day. On Windows? Nope, rarely done.
----
There is another way to extend Houdini and that is to directly reference locations and files using Houdini "packages".
Because of the difficulty of setting up good Houdini environments on Windows in a robust way, including the limitations of houdini.env files, we added packages:
https://www.sidefx.com/docs/houdini/ref/plugins.html [www.sidefx.com]
https://www.sidefx.com/tutorials/houdini-environment-setup/ [www.sidefx.com]
The new installer also allows you to manage and install packages, which is especially handy for Windows configurations:
https://www.sidefx.com/docs/houdini/ref/utils/launcher.html [www.sidefx.com]
I think we should get rid of the Beta label in the doc... I'll check on that, to see whether it indeed is still in beta.

Move to packages. Full stop. Your life becomes so much easier managing environments on Windows, imho.
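As a sketch of what a package looks like, the following Python writes out a minimal package JSON. The MY_STUDIO_TOOLS variable and the tools path are made-up examples; the full key set is in the SideFX packages docs linked above:

```python
import json

# A minimal Houdini package definition (hypothetical studio tools path).
# "env" sets environment variables and "path" appends to HOUDINI_PATH;
# see the SideFX packages documentation for the full schema.
package = {
    "env": [
        {"MY_STUDIO_TOOLS": "C:/pipeline/houdini_tools"},
    ],
    "path": "$MY_STUDIO_TOOLS",
}

# Drop this JSON into $HOUDINI_USER_PREF_DIR/packages/my_studio.json
text = json.dumps(package, indent=4)
print(text)
```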
Houdini Lounge » No multi selection in the Tree View
Nothing new that hasn't been mentioned above.
One of those annoying things if you use this to select multiple objects.
As mentioned above, SHIFT-T switches to list mode. It is per network type and is just odd UX... but if all you want is to select objects in a list, use that.
Again, submit RFEs any time. Duplicates are fine, especially for this one, which spans multiple releases.
Every release, part of Houdini's UI is improved, either within our own UI development or by being moved over to Qt. An RFE is the key to getting your ticket in.
Houdini Indie and Apprentice » Manually Capture Cloth for Rig
Have you tried using the Bone Capture Proximity SOP?
Just feed in the tube skirt geo into the left input and the KineFX skeleton into the right input and you should get a decent set of weights on your geometry.
You can follow up with a Capture Layer Paint with a large brush set to Smooth Final (in the viewport RMB menu, choose the option for LMB or MMB).
Technical Discussion » is there a way to do "bones from curve" but proceadurally?
You can use bones to drive KineFX and back quite easily. If you can wait a day, I have a file in the illume kinefx package where I demo the IK Solver VOP driving a T-rex leg rig; it includes a file that uses obj bones to drive the rig and back.
One key reason for KineFX is procedural rigging, full stop.
Curve, resample to segments, into a Rig Doctor SOP: those points are the locator joints and the segment prims are the bone segments. Quite simple.
If you need to, use the ReOrient Joints SOP to build a good rest_position, then feed into the IK Skeleton VOP if you want to drive it further with controls in local or world space, or whatever transform space you want.
You can use all the same rigging SOPs for Obj Bones to do weighting and bone deform SOP as with KineFX bones.
If they have the same name as the object bones, they will even drive the same skin weights.
Technical Discussion » Can't join Curves
A Polygon curve can't split into two branches. At least not in Houdini. You can only have a single non-forking backbone.
You can use a Fuse SOP to consolidate the points of all the curves. This gives you no gaps but also leaves you with three primitives. Many tools in Houdini will work on the curves with consolidated points though.
In general the PolyPath SOP is used to take several curve segments and join them into a single curve. The documentation for PolyPath also states that when it reaches a branch, it will stop that curve and continue down each branch to join. Because your example has three curve segments, the PolyPath SOP will not do anything, as it assumes the result from your input curves is valid.
What do you want to do with the curves?
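The fork rule described above can be sketched outside Houdini: model each curve as a list of fused point numbers and check endpoint valence. This is a simplified stand-in for open polylines only, not PolyPath's actual algorithm:

```python
from collections import Counter

def can_join_single_curve(curves):
    # Curves are lists of shared point numbers (as after a Fuse SOP).
    # They can merge into one non-forking open polyline only if no point
    # joins more than two segments and there are exactly two open ends.
    ends = Counter()
    for c in curves:
        ends[c[0]] += 1
        ends[c[-1]] += 1
    # A fork: three or more segments meeting at one fused point.
    if any(n > 2 for n in ends.values()):
        return False
    # A single open chain has exactly two endpoints of valence 1.
    return sum(1 for n in ends.values() if n == 1) == 2

# Two segments sharing point 2: joinable.
print(can_join_single_curve([[0, 1, 2], [2, 3, 4]]))          # True
# Three segments all meeting at point 2: a fork, not joinable.
print(can_join_single_curve([[0, 1, 2], [2, 3, 4], [2, 5]]))  # False
```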
Solaris » Editing parameters on USD file
Yes, use the Edit Material LOP: drag and drop the material you wish to edit and hit "Load" to reconstruct the VOP shader network from the USD shade nodes.
As for Lights, the common strategy is to use the Light mixer for all light adjustments.
In H18.5 we added the ability to not only modify the light brightness and exposure but also the position of the lights and some of the Light primitive vars.
The only current limitation of the Light Mixer is selecting multiple lights and transforming them; it works on only one at a time.
You can use the Edit LOP to move multiple lights.
When doing viewport selections, absolutely use the selection filter on the arrow select button and set it to Lights. If you are in the Edit LOP, you can get to the same options in the top-bar filters as well. If you hit the "S" key in the Light Mixer, you will also get the same selection-filter icon strip at the top of the viewport, to filter on light selections only.
PDG/TOPs » Making a blocking task that waits for input from a human
Note that both those functions return data so you can use this to make further decisions.
PDG/TOPs » Making a blocking task that waits for input from a human
Cheap and cheerful: you can use the hscript "message" command anywhere you evaluate a parameter or Houdini hscript (HDA scripts like the onupdate script, a shelf button, etc.).
In a textport, type:
help message
to see the help usage for the command.
Then try:
message -b "Press Me"
which will pop up a dialog that pauses EVERYTHING until you hit that button.
If Python, and you also want cheap and cheerful, you can use hou.ui.displayMessage().
In a Houdini Python Shell, type:
hou.ui.displayMessage("My Dialog", buttons=("OK",))
Beyond that you are writing your own Python + Qt interrupt.
Houdini Indie and Apprentice » Select N-Gons
PolyDoctor can mark any polys above quads.
Set 5+ Edges to Mark, then set all the other options below to None.
This creates a polygon attribute called valid_poly where all the invalid polygons are set to 0.
There is a visualizer option: in the Visualize tab, turn on Invalid Polys to see the offending n-gons.
You can follow with a Blast SOP and in the group field use:
@valid_poly==0
and you will see all the invalid polys isolated.
Or use viewport selectors by attribute to do all this in the viewport if you want.
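A minimal stand-in for the marking logic, outside Houdini (polygons as vertex lists; this mimics the valid_poly convention described above, not PolyDoctor's implementation):

```python
def mark_ngons(polys):
    # Emulate the "5+ Edges -> Mark" behavior: valid_poly is 0 for any
    # polygon with more than four vertices, i.e. the n-gons that a
    # Blast SOP with group @valid_poly==0 would then isolate.
    return [0 if len(p) > 4 else 1 for p in polys]

polys = [
    [0, 1, 2],          # triangle
    [0, 1, 2, 3],       # quad
    [0, 1, 2, 3, 4],    # pentagon -> n-gon
]
print(mark_ngons(polys))  # [1, 1, 0]
```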
Houdini Indie and Apprentice » Passing parameter inside a loop
If you want to use the Transform SOP as is, and you have no desire to make this into a compile block, the easier thing is to just replace the trivial @PR with:
prim("../transform2", 0, "PR", 0)
or
prim(opinputpath("."), 0, "PR", 0)
for tagging the input SOP generically.
The prim() expression is an old-school method to fetch an attribute at a specific index. Since all the prims have the same value inside the subnet, you can use 0, the first primitive's value.
To make it compile-safe, use an Attrib Wrangle as jsmack indicated above.
Edited by old_school - Nov. 23, 2020 13:29:55
Houdini Indie and Apprentice » Alembic camera transform data?
Alembic and USD only support raw values, not keyframes with segments, btw.
Not every application interpolates keyframe functions the same so that is why it is all raw values.
The fact you are adding keyframes and segments and perhaps fitting the animation data which FBX can certainly do is you manufacturing data that wasn't there.
All fine as long as you don't use this as Alembic/USD truth, cause it isn't. It would be an FBX bake on top of Alembic/USD.
Houdini Indie and Apprentice » Alembic camera transform data?
First let's talk /obj based Houdini:
To "see" the camera transform value at the current frame, add an Alembic Xform node pointed to the .abc file and point it at the camera in question. Feed in the correct frame and fps and it returns the transform data.
To see the actual values, you can use the following hscript commands:
vorigin("", "/obj/cam1")
vtorigin("", "/obj/cam1")
vrorigin("", "/obj/cam1")
or
origin("", "/obj/cam1", "TX") where "TX" can be "TX", "TY", "TZ", "RX", "RY", "RZ", "SX", "SY" or "SZ"
If you want, you can add a Null Object and feed it these expressions as a visualizer.
The fast reading of Alembic and the fact that there are no parameter updates are among the built-in features that make Alembic so fast.
Alembic is a baked format that does not support keyframes or any spline type animation curve. Anything time based is baked as samples. Camera transforms are captured at frame rate defined in the export as transform samples in the .abc file.
Houdini uses the Alembic Xform Object to fetch a specific Alembic Primitives transform sample at the given frame with the correct FPS set.
Alembic is a heavily optimized file format that is designed to read very fast from disk and Houdini's Alembic support is utilizing the best that Houdini has in /obj space to support the format as fast as possible w/o having parameter dependency updates, etc.
For Solaris, to see the camera transform values at the current frame, you can use the USD framework with its Alembic schema to bring in the Alembic camera and inspect the values in the primitive's properties.
You can use a SubLayer LOP to load in the .abc file and directly inspect the camera transform data at the current frame in the Scene Graph Details.
Look at the xformOp:transform data on the camera to see the values update as you step through the frame range.
See the attached hip file and don't forget to write out the Alembic from /out/alembic ROP.
Technical Discussion » how to make a Geo always face a certain direction?
Here is an example file of using a separate Object Null as the target geometry.
The file packs the Bee geometry.
It uses a Primitive SOP's object target to do the transform.
Follow with an Unpack SOP to get back to the geometry.
Moving the null will aim the geometry.
Houdini Indie and Apprentice » Simple jitter/sway animation on instances
Since each unique instance type only supports shader/displacement and render-property changes per instance, you will have to generate sequences of tree geometry with motion on them. Then use a modulo to cycle over the tree sequences when you instance them.
How you deform the trees is a different question. I am not familiar with SpeedTree these days, but if it spits out a skeleton (like I remember, I think…), then you can use the Edge Transport SOP to crawl up the trunk-branch skeleton curves (fuse the points so branches fuse to branches and branches fuse to the trunk).
Edge Transport SOP creates a distance attribute starting at the point you choose, which should be the base of the tree; that distance lets you add more movement the further out you go. If there is a width or thickness attribute, even better, to help shape the deformation.
Then drive the skinned tree with the skeleton.
Leaves should be instances with instance points. Hopefully they have a local frame of reference (N and up or an orient matrix) that is super easy to sway about with vex.
Then generate your sequence of deforming trees as a cycled animation to disk wedging as many as you want.
Generally 10 instance sequences per type of trees works well enough with a cycle length of 4-5 seconds.
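A sketch of the cycling logic when instancing baked sequences; the per-instance frame offset is an assumption, added to keep the copies from moving in lockstep:

```python
def pick_variant_frame(instance_id, frame, num_variants, cycle_frames):
    # Choose which baked tree sequence an instance uses and which frame
    # of that cycle to show, offsetting by instance id so copies of the
    # same variant don't all sway identically.
    variant = instance_id % num_variants
    local_frame = (frame + instance_id) % cycle_frames
    return variant, local_frame

# 10 variants, 120-frame cycles (5 s at 24 fps):
print(pick_variant_frame(instance_id=7, frame=200, num_variants=10, cycle_frames=120))
```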
Houdini Indie and Apprentice » Particle Lifetime measured in seconds? not for me!
Your approach is a good one. Using a volume to create a lattice of points is one way.
Another is to create a density field and scatter points by volume as well.
This second approach allows you to sculpt the density with noise to create a non-uniform distribution.
If you "want" a uniform distribution of points, you can also use a Box SOP set to lattice, add the internal divisions, then use an Add SOP to delete the geometry but keep the points.
I believe the offset may be due to the original bean's origin not aligning with 0,0,0 in the current object. Or if you are using a bean at the object level, there may be a transform present there that is causing the offset.
I whipped up a quick file to show one way to create floating coffee beans around a mug using Bullet Simulation to add motion and to solve for self-intersections.
Note:
- how to construct a proper convex decomposition for the mug to help Bullet collisions
- how to construct the volume to build points from, by subtracting the mug volume from the base volume so the beans don't interpenetrate the mug.
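A stand-in for the volume-subtraction idea: scatter in a bounding box and reject points inside a collider. A unit sphere plays the mug here; a real setup would subtract SDF volumes as described above:

```python
import random

def scatter_outside(n, box_min, box_max, inside_collider, seed=0):
    # Scatter n points uniformly in a box, rejecting any that fall inside
    # the collider volume -- a cheap stand-in for subtracting the mug SDF
    # from the base volume before scattering.
    rng = random.Random(seed)
    pts = []
    while len(pts) < n:
        p = tuple(rng.uniform(lo, hi) for lo, hi in zip(box_min, box_max))
        if not inside_collider(p):
            pts.append(p)
    return pts

# Hypothetical "mug": a unit sphere at the origin.
def in_sphere(p, radius=1.0):
    return sum(c * c for c in p) < radius * radius

pts = scatter_outside(100, (-2, -2, -2), (2, 2, 2), in_sphere)
print(len(pts))  # 100
```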
Technical Discussion » Quick question...
The VEX functions opfullpath() and relativepath() return paths to nodes.
http://www.sidefx.com/docs/houdini/vex/functions/opfullpath.html [www.sidefx.com]
http://www.sidefx.com/docs/houdini/vex/functions/relativepath.html [www.sidefx.com]
Here's one way to use opfullpath to return a string node name:
s@pathfull = opfullpath("../timeshift1");
s[]@pathsplit = split(s@pathfull, "/");
s@nodename = s[]@pathsplit[-1];
Just replace the s@pathfull and s@pathsplit attributes with internal variables to avoid attribute bloat; they are only exposed here for clarity.
Now I have to ask: what are you doing?
It isn't that efficient to fetch a single node name for all the data/points you are ripping through.
If you have unique nodes to apply to different items you are processing, sure. But strings aren't the most efficient type to push through VEX for lots of data processing.
It may be better to put a local spare parameter on the node and build out the string once as a set of parameters when the node cooks.
Python is very powerful at string manipulation, while in VEX it is somewhat limited…
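To the point about Python being stronger at strings, pulling a node name out of a full path is a one-liner there (inside Houdini, hou.node(path).name() would be the direct route; the path below is a made-up example):

```python
# Split a node path on "/" and keep the last component, the node name.
full_path = "/obj/geo1/timeshift1"
node_name = full_path.rsplit("/", 1)[-1]
print(node_name)  # timeshift1
```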
Houdini Indie and Apprentice » Particle Lifetime measured in seconds? not for me!
I would first check in DOPs to see how the attributes age and time relate. In DOPs this is the geometry spreadsheet in popobject/Geometry.
If you cut your lifetime to, say, 2 and then play forward, you should see age reach 2 and then the particles die.
In SOPs you can distort time. Time Shift and Time Warp SOPs come to mind. These would cause the age attribute to not reflect time…
How did you set up your particle sim?
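The seconds-based bookkeeping can be sketched directly: age advances by 1/fps per frame, and the particle dies once age reaches its life. This is a simplified model of the age/life attributes, not the POP solver itself:

```python
def particle_dies_on_frame(life_seconds, fps):
    # Frame at which a particle born on frame 0 reaches its lifetime:
    # age is measured in seconds, advancing 1/fps per frame.
    frame = 0
    age = 0.0
    while age < life_seconds:
        frame += 1
        age = frame / fps
    return frame

# At 24 fps, a particle with life = 2 s dies at frame 48.
print(particle_dies_on_frame(2.0, 24))  # 48
```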