Yes!
You can turn either the VOP network into a Houdini Digital Asset (HDA), or the SOP that contains the VOP network into an HDA, or both. When you have an HDA defined in Houdini, it immediately becomes available in the tab menu in the given contexts. You can also package up HDAs on the shelf with simple interactive behaviours.
Building tools and turning them into HDAs is an essential task.
Technical Discussion » deleting/killing particles by bounding box, but with threshold and/or falloff
- old_school
- 2540 posts
- Offline
See the attached file, which shows both using volumes to do particle culling and using nearest location, with a VOP network almost identical to the ICE graph. I had to flip the logic in one step from an AND to an OR, as I wanted to support a slightly larger subset of inputs.
I commented the nodes to make it self-documenting.
What's neat about this file is I have three conditions that showcase when you want to use volume sdf nearest point locations, nearest location to surface, and where it doesn't matter.
Nice to have both methods up our sleeves.
I liked this task.
Technical Discussion » deleting/killing particles by bounding box, but with threshold and/or falloff
You could also add probability depending on the weight if you want this to happen on a specific frame. A rand() function with a weight bias would work nicely to fuzzy up the setting of the dead == 1 line.
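As a sketch, a probabilistic kill in a POP Wrangle could look like this. The f@weight attribute and the kill_bias channel are assumed names for illustration, not from any attached file:

```vex
// POP Wrangle sketch: kill particles with a probability biased by a
// per-particle weight. f@weight and kill_bias are assumed names.
float bias = ch("kill_bias");      // 0..1 overall kill likelihood
if (rand(@id) < f@weight * bias)
    i@dead = 1;                    // POPs removes particles with dead == 1
```

Note that seeding rand() with @id alone gives the same roll every frame; mixing the frame number into the seed would re-roll the dice per frame.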
Technical Discussion » deleting/killing particles by bounding box, but with threshold and/or falloff
Just saw your post with the ramp weight snapshot in Softimage. Yes, you can add a ramp parameter to the density accumulator scalar as well. I just set it to 0.1, I think.
Nothing stopping you from querying the density from the volume and using a transfer function (ramp) fed by a fit01() to remap the accumulator multiplier.
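A sketch of that remap, assuming the density volume is wired into the wrangle's second input and the accumulator attribute is called f@accum (both names are my assumptions):

```vex
// POP Wrangle sketch: sample density from the volume on input 1,
// push it through a ramp parameter, and scale the accumulator with it.
float d    = volumesample(1, "density", @P);
// fit01(d, 0, 1) clamps the density into the ramp's 0..1 domain
float mult = chramp("density_ramp", fit01(d, 0, 1));
f@accum += mult * f@TimeInc;
```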
I chose to use a blur on the volume itself to soften the volume weights. It's the most lightweight method, and blurring volumes to use them as masks is done all the time. There are lots of volume tools for modifying the weights as well.
Nice to have Houdini support volumes natively.
Technical Discussion » deleting/killing particles by bounding box, but with threshold and/or falloff
But you could have an attribute that accumulates over time based on the density of the volume, then cull when that accumulated attribute reaches 1. Is that kinda what the delete-by-volume node does in Soft?
Does the attached file do what you want? I created a couple of POP Wrangle nodes that can be wrapped up into an HDA if you wish.
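One possible POP Wrangle for the accumulate-then-cull idea above. The attribute and parameter names here are my own assumptions, not necessarily what the attached file uses:

```vex
// POP Wrangle sketch: accumulate the sampled density each timestep and
// flag the particle dead once the accumulator reaches 1.
// Assumes a density volume wired into the wrangle's second input.
f@accum += volumesample(1, "density", @P) * f@TimeInc * ch("rate");
if (f@accum >= 1)
    i@dead = 1;
```

The ch("rate") multiplier controls how quickly particles die off in dense regions.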
Technical Discussion » Compile blocks within nested loops?
The compile block doesn't support what you are trying to do: you are forcing the compile block to re-compile every iteration.
Move the compile block so it wraps both foreach blocks. Both foreach blocks will then be compiled and will run inside the compile block for threading and memory savings.
Technical Discussion » What's the best way of getting rid of intersecting geo?
The Boolean SOP will both cut at intersections and remove interior faces. Just wire your merged geometry into the first input of the Boolean SOP.
Does that do what you want?
Houdini Learning Materials » Best way to learn VEX ?
Following from Marc who I agree with (learning by hacking and slashing at vex is a fabulous way to learn it)…
When we are talking about using VEX in the scope of using Houdini efficiently, we mean using VEX functions for almost all the work we do, to take advantage of the free threading of VEX evaluation inside Houdini.
We are not talking about writing functions, code design such as OOP, large or small application development, etc.: the things you learn when undertaking the daunting task of learning to write code.
We are talking about using functions. Scripting skills are in order here.
To say “learn programming” to learn vex is way way WAY overkill for most of what we do.
If you want to learn vex, learn vex. Do stuff with it. I would also learn hscript as well. Many hscript functions have been ported to vex making for a somewhat seamless cross-over.
Actually, learn VOPs first as Marc recommended. VOPs create VEX code behind the scenes in a way that makes it hard to produce code that doesn't compile. They offer everything you can do in wranglers, but in a node network. Once comfortable with VOPs, start inspecting the generated VEX code to learn how the code is constructed. Then wrangling VEX code is a logical progression. Just so you know, a Snippet VOP is how wranglers are exposed at the top level, so there you go!
Wrangling was added to offer TDs a convenient way to wrangle attributes on geometry. It has turned out to be so handy that every TD uses wranglers, even if there is a SOP to do that work. Your friend's advice was correct in that you ultimately need to learn to write VEX wranglers, but it isn't absolutely mandatory at first.
My tip is to start learning Houdini full stop. Run through some tutorials. When you hit a spot where wrangling is used, follow along and carefully inspect the results through the operator info and more importantly the spreadsheet to see what attributes were added and what values in general were assigned to those attributes. Then hack away at that example, make it fail, fix it, debug it. You're there! That is 90% of what wranglers are doing anyway.
VEX wrangling is all quite logical once you get the syntax out of the way and learn a few common VEX functions like:
set()
relbbox()
chs(), chf(), chv(), chramp()
length(), luminance()
if(), for() and other looping blocks
and more.
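A small Point Wrangle that exercises several of the functions listed above (the shade and radius parameter names are made up for the example):

```vex
// Point Wrangle sketch using the functions listed above.
vector rel = relbbox(0, @P);            // 0..1 position within the bbox
float  r   = chramp("shade", rel.y);    // remap height through a ramp
@Cd = set(r, r * 0.5, 1.0 - r);         // set() builds the color vector
if (length(@P) < chf("radius"))         // chf() reads a float parameter
    @Cd = {1, 0, 0};                    // mark points near the origin red
```

Hack at something like this in a Point Wrangle, check the geometry spreadsheet, and you are most of the way there.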
If learning vex, focus on point attributes at first as this is the sweet spot when learning. After you have command of point wrangling, expand to include primitive, vertex and detail wrangling of attributes. Mixing geometry types and attributes is the next step. Then create geometry after that.
I find the help to be excellent at finding a vex function for a given specific task.
Houdini Learning Materials » Exporting from Houdini
It can be confusing when to cache and what file formats to generate. And getting the correct file extension is critical to success.
Houdini is very different from other apps in that you invariably use a node of type ROP (Render Output Driver) to export geometry.
The only exception is the File > Export > Alembic or FBX options from the menu, but I would recommend not using those; learn to use either an Alembic ROP or an FBX ROP to export parts or all of your scene. The various SOPs used for exporting invariably are wrappers around various ROP types. The only exception is the File SOP, wrapped up in the very useful File Cache SOP.
Caching Simulations for re-working:
When you are doing a large simulation, it is common to cache the raw results of the simulation in geometry SOPs using the File Cache SOP. The File Cache SOP is used in many shelf tool setups. It can both write and read geometry to disk, or it can pass the geometry through. In Automatic mode it tries its best to make sure files on disk remain up to date and are not overwritten unnecessarily. But this node isn't generally intended to create geometry to transfer over to other applications. You “could” absolutely use a File Cache SOP to generate export geometry for use in other software packages, but that is not the intention: you have to be very careful to make sure only supported primitives are contained in the geometry. At least that is the strategy I try to use.
Caching Geometry/Simulations for export to other applications or to Houdini/Mantra:
It's up to you to determine where in the SOP geometry chain you wish to export from. That is why you have a Display and Render flag on SOPs. You can separate them if you wish using Ctrl-Click to set the Render flag.
You can have as many export locations as you want. You also have a choice as to whether you cache with nodes in the SOP geometry context or from the /out Output context. It's just packaging and approach. If you are not utilizing a farm to process scenes, you can safely export your scene data directly from SOPs. The only caveat is that FBX is only accessible from /out using the FBX Render Output node.
SOP level geometry exports
Geometry Exports:
Use the ROP Output Driver SOP for formally exporting geometry to 3rd party applications. Just wire it in the network, select the appropriate file extension and go.
Common file extensions:
.bgeo > Houdini's native geometry format. All primitive types are supported.
.vdb > OpenVDB volume format export extension. Note that this ONLY supports VDB grids/volumes. Please remove any and all other primitive types from the target SOP you wish to output from. If you do have a mixed bag of geometry in your SOP chain, use a Delete SOP and set Geometry Type to “VDB” and set Operation to “Delete Non-Selected” to keep any and all VDB type geometry.
.obj > Wavefront's ancient obj format but it is still widely supported. Only polygon data is supported so make sure you delete everything but polygons. You can use the Delete SOP as it has a menu to choose polygons only and then enable delete non-selected to keep the polygons but trash everything else.
.iges > an ancient CAD format that still keeps on going. It is a bastardized, extended format with hundreds of different variants. It's best to strip out everything and keep only polygons and NURBs surfaces for this export type.
.lw > Lightwave's polygon format. As with .obj, only polygons are supported.
.eps and .ai > Yes, you can save geometry as .eps 2D vector line work or .ai for Adobe Illustrator, but this variant has limited geometry support. Keep things to Bezier and NURBs closed faces and simple polygon shapes. It's best if you flatten everything to the XY plane as well. I personally don't use this export option. It is very limited.
.dxf > AutoCAD's default Data eXchange Format. This supports polygons, open and closed curves, NURBs curves and surfaces along with Bezier curves and surfaces.
.abc > Alembic geometry can be cached directly from a SOP. Careful, as this is NOT how you support round-tripping of alembic archives; better to use the Alembic Output Driver SOP imho. Use this to export your geometry as a single flat alembic archive. Use the Name SOP along with partitioning of the geometry to add archive information. This supports a limited subset of geometry: polygons and, even better, polysoup-type primitives, Catmull-Clark RenderMan-type subdivided meshes, points, and open curves such as hair wires. It does not support volumes, NURBs, Beziers or anything else for that matter.
Scene Level Exports:
Use the ROP Alembic Output Driver SOP.
Extension to use is .abc
As we all know, Alembic archives are a complete scene-level format used to capture the entire scene to be lit and rendered out. They can contain lights, cameras, and entire geometry hierarchies. You can create an alembic archive that only contains geometry. This is why it is a scene-level output driver inside SOPs, and also why, in the Hierarchy tab, you can specify a root object, which can be the object containing the geometry you wish to export.
Alembic format does not support all the available primitive types in Houdini. To repeat the above list:
.abc > Use this to export your geometry as a single flat alembic archive. Use the Name SOP along with partitioning of the geometry to add archive information.
This supports a limited subset of geometry: polygons and, even better, polysoup-type primitives, Catmull-Clark RenderMan-type subdivided meshes, points, and open curves such as hair wires. It does NOT support volumes, VDB grids, NURBs, Beziers or anything else for that matter.
Alembic archives are quite unique in that they support deforming geometry such as characters. As such it requires you to input a frame range to capture any deforming geometry. Just output a single frame if there is no deforming geometry.
The Alembic data format doesn't take too kindly to meshes that change topology, such as a flip fluid surface. Instead of using an efficient archive method where transform deltas on top of a mesh are kept, a copy of the mesh at every frame is inserted into the alembic archive, with no benefits of motion blur.
Oh well…
/out level geometry exports
For a more organized and formalized approach to exporting geometry to other applications, you can go to the /out context and add specialized geometry-type ROP render output drivers. They do exactly what is listed above in SOPs, but in /out you can chain several of these ROPs together to create a dependency output graph and render all of them out at the same time.
Geometry Export ROPs:
Use the Geometry ROP for formally exporting geometry to 3rd party applications.
Scene Export ROPs:
ROP Alembic Output Driver SOP > See the ROP Alembic Output Driver SOP above.
Filmbox FBX ROP:
File extension is .fbx
As with the Alembic Output Driver, this is a scene export filetype that contains scene data intended to be lit and rendered. It is also a file type that is good for containing performance motion captured rig data. Houdini uses .fbx as a standard transfer data format for character agents and as input for crowds. You can also use it to export scene data such as cameras, lights, materials and geometry.
FBX also has geometry limitations (surprise, surprise): it supports polygons, NURBs, Beziers, and that's pretty much it. It doesn't support geometry that changes in topology.
—-
Houdini's own native formats support all primitive types. Too bad all the other software evolved in a way that this simply doesn't exist and to make matters worse, many 3rd party file types were created to work around deficiencies around these other applications which never considered Houdini's vast array of data types. I'm looking at you Alembic! One fear is that USD may be evolving in a similar fashion pointing to the lowest common denominator…
OpenVDB on the other hand is a fantastic archive format for Volumetric data. No wonder that Arnold supports direct rendering of OpenVDB grids.
Because of all this you need to develop a strategy to export your geometry out to other applications. Trial and error. It's just the way it is. Or you can use the super open File SOP, Alembic SOP and FBX import from the File > Import menu and add Houdini geometry then keep everything in Houdini and render with Mantra.
Or there are for purchase 3rd party options as well to help you get data from app to app.
As well, it is worth a Google search to see if there are some enterprising users that have written small wrapper apps to import and export geometry about. You never know what is out there until you look.
Houdini Indie and Apprentice » dialog prompt when creating digital asset, what does it mean?
This dialog comes up when the new HDA you are trying to save the definition for has a change in the parameter interface from the existing HDA definition you are replacing.
The three options are there to let you decide what you want to do with the parameters on both the original definition and the new definition.
No Changes: keep all the parameters from the original definition and the added/changed parameters on the definition you are about to write. Use this if you want to sort out all the old and new parameters manually. You may have a custom spare parameter that you want to keep on the new definition.
Revert Layout: keep the old parameters and toss any new spare parameters you created with the new definition of the asset. Use this if you are not sure the new definition should really be altering the parameter interface. If you haven't made any changes to the parameters and you write the asset and still get this dialog, this option is a safe bet; chances are there is a node inside the asset that has itself seen a parameter change across releases, which you may or may not have to deal with. 99% of the time, if you are unlocking and locking up an HDA just to poke around and you get this dialog, safely choose this option and carry on.
Destroy All Spare Parameters: remove all old spare parameters and replace with new parameters from the new definition you are about to write. Out with the old and in with the new! If you are pushing an HDA forward making lots of developmental changes and you get this dialog, destroy and move forward is what you do. Fix as you go.
Technical Discussion » Modelling - detach polygon
You can use the Extract tool on the Modify Shelf to take your current selection, remove it and create a new object from your selection. This is a shortcut for smack's answer to do the same thing manually with an Object Merge SOP inside a new object.
Technical Discussion » Normal....point, vertex or primitive
Either point or vertex normals with default shaders will cause Mantra to use those normals for refraction and reflection. Houdini only supports either point or vertex N normals within a single object/Display SOP. Never both. It's a historic limitation and a good one imho.
If you don't want point/vertex N normals, append an Attribute Delete SOP and remove N from both vertex and point type attributes. The material will build them for you from the primitive normal direction.
—-
The answer ultimately depends on the shader/material you assign to your geometry. Any of the shipped materials will inherit a point or vertex normal vector N attribute present on the geometry. If there is no point or vertex vector N attribute, the shader is programmed to generate the surface normals from the primitive normals, interpolated across the polygon faces as the geometry is refined and sampled by rays.
The shader will also do a frontface() operation to ensure the surface normals face toward the eye/camera.
—-
When rendering transparent objects or using two-way shaders (textures on both sides of a face), this frontface() option can be disabled. It's the “Shade Both Sides As Front” parameter on the Principled Surface material. If you are rendering transparent objects, it is important to assure that your primitive normals face outward for refraction and reflection to work properly although not mandatory. It is good practice. Verify primitive normal outward orientation by turning on primitive normals in the viewport. If they face inward and you are rendering transparent dielectric type materials or using two-way shaders, use a Reverse SOP to reverse the vertex order winding to revers the primitive normals.
Note that Houdini uses left-handed winding, opposite to Maya/3DS Max/RenderMan and perhaps others. If you are importing geometry from another application, assume that it uses right-handed winding and apply a Reverse SOP if you want to render it with transparency or a two-sided shader, need Boolean operations to work properly, need collision SDF volumes or VDB grids to be computed properly, etc. Otherwise you can rely on the frontface() operation on your normals to ensure they face forward.
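As a rough diagnostic, a Point Wrangle along these lines can flip point normals that face inward (a sketch only, assuming roughly convex geometry centred on its bounding box; a Reverse SOP remains the proper fix for winding):

```vex
// Point Wrangle sketch: flip point normals that face the bounding-box centre.
// Assumes roughly convex geometry; use a Reverse SOP for true winding fixes.
vector centre  = getbbox_center(0);
vector outward = normalize(@P - centre);
if (dot(@N, outward) < 0)
    @N = -@N;
```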
Hope this helps.
Houdini Learning Materials » Cache particle points and Offset particle cache in time?
- old_school
- 2540 posts
- Offline
Yes. All ROP type output nodes, including the various SOPs and DOPs that contain ROP networks, will write out all the geometry they are given when exporting to disk.
If you just want points, you need to remove all the other primitives and just write out the points to the bgeo file that you intend to instance with.
The bgeo format supports all houdini primitives including points so it is up to you to make sure the bgeo's you write to disk only contain the points you want to instance with. You can always test with a Copy To Points SOP on a subset of the points.
Yes all the attributes on the points are kept when you write out the bgeo file.
For instancing, you can add instance attributes on to the points and in the instance file string, reference the geometry on disk that the point is to use. There are a bunch of other ways to instance as well.
Once you are instancing geometry on to points, other attributes are used on the points to manipulate the instanced geometry at render time in Mantra:
float pscale = global scale of the instance geometry
vector scale = per axis scale of the instance geometry
vector4 orient = orientation of the instance geometry
vector v or vector N = Z-lookat instance direction of the geometry (alternate to orient)
vector up = Y-lookat instance direction (alternate to orient)
There are others as well but these are the most common point attributes that affect the points themselves when instancing stuff.
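For example, a Point Wrangle can lay down these attributes on the scatter points before instancing (a sketch; the instance path and ranges are made-up examples):

```vex
// Point Wrangle sketch: common instancing attributes on scatter points.
@pscale    = fit01(rand(@ptnum), 0.5, 1.5);                      // global scale per copy
v@scale    = set(1.0, fit01(rand(@ptnum + 1), 0.8, 1.2), 1.0);   // per-axis scale
p@orient   = quaternion(radians(rand(@ptnum) * 360.0), {0, 1, 0}); // spin around Y
v@up       = {0, 1, 0};                                          // alternate lookat control
s@instance = "/obj/instance_geo";                                // hypothetical path
```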
Technical Discussion » Old vs. New Point SOP
- old_school
- 2540 posts
- Offline
I too have lamented the demise of the Point SOP, it being the cornerstone of scene files I've been crafting since 1993. Yes, I thought long and hard when it was deprecated in H16 along with the Copy SOP and the Group SOP. The Point SOP's use of local variables in simple expressions has been a really good tool for teaching trig and linear algebra over the years. Opening the door for artists to work directly with geometry is liberating, and not to be minimized by the programmers in the audience.
The Point SOP has serious issues with respect to performance. It works on three floats in turn. No parallel computation here. We tried to add vector computation to the Point SOP way back, but no one used it as it was a bit of an edge case.
What ultimately deprecated the Point SOP? It will not work with the new Compile SOP workflow, which offers substantial memory and performance gains in certain cases that are quickly becoming more general.
When we decided to add VEX as a general point expression language for POP wrangling, we quickly moved it into SOPs and other contexts. I rarely used the Point SOP after that, only for demos actually. I haven't used the Point SOP for quite some time in personal files. I go straight for wrangling in an Attribute Wrangle SOP and/or VOPs.
Reasons to use Wrangling in VEX type operators over Point SOP:
- common language applied to all supported contexts in Houdini, including H16 CHOPs!
You learn it once and use it for ever everywhere. Including Shading! You learn how to do a sin() in VEX and it's the same expression in all contexts with VEX.
- common variable names across all nodes. If you want P, it's @P. Everywhere! If you want the point number, it's @ptnum. Everywhere! With SOPs, the local variables differed from node to node, and you had to check the help to see if a node even supported them. Using the help was mandatory for operators you didn't use that often.
Having a common and logical naming schema for all variables removes all this ambiguity. And to find out the variable name and type, just pop up the node info with a MMB. H16 node info in advanced view shows you everything about an attribute, including its signature, etc. Very very handy! Thank you, R&D! Makes this even easier to teach.
- VEX gives you threading for free. Well you need to be processing more than 1000 points/prims but still you get threading for free. Never possible with any of the SOPs that use local variables.
- VEX is easy to learn and consistent to apply.
I stand by that, as do the many new users I've helped get up to speed with the new ways quickly. Especially the Softimage users, whom our illogical mapping of local variables to actual attribute names drove nutty: $VX for @v.x. Why capitalized when “v” isn't? And on and on…
And using point(opinputpath(".", 0), $PT, "my_var", 0) to fetch "my_var" is nutty when you just want f@my_var. Wrangling gives you that! So yes, the simple case is slightly more complicated, but everything else is the same. No seemingly random use of local variables to fetch common attributes by name to do your work.
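A side-by-side in a wrangle makes the point (a sketch; the attribute name “my_var” is just an example):

```vex
// Wrangle sketch: reading attributes directly by name.
float a = f@my_var;                    // on the current point, first input
float b = point(1, "my_var", @ptnum);  // same attribute from the second input
```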
- Wrangling = VOPs through the Snippet VOP. VOPs is always a visual alternative and fits in naturally with Wrangling.
There's always VOPs as an alternative to visually build up your VEX code. And wrangling slips in here quite nicely.
VEX wrangling is exposed at the top of the node through the use of a Snippet VOP. It is the Snippet VOP's code parameter that is raised to the top level of the asset. Wrangling really is deeply embedded VOPs so using it means you are indeed working in a VOP context down below. And I bet few of you have gone in to the VOP network to have a look. It's very cool what you can do around a Snippet VOP. Have a look at the various POP type DOPs that work with points directly. Very cool stuff Mr. Lait has added in there.
I debug VEX wranglers by inspecting the generated VEX code: RMB on the node and choose the View VEX Code option to see how each @variable_name is expanded and how your code is treated as a function in the body of the VEX code.
This is really important as it is a single unified way to do stuff.
- In H16, VEX type operators are supported inside Compile Blocks, but SOPs using local variables are not.
This is very important moving forward as we look for further ways to thread and parallelize workflows.
The ambiguity of expressions in parameters such as the old Point SOP makes it almost impossible to thread efficiently.
Learning and using Wrangling is mandatory moving forward for ad-hoc access to munging or wrangling attributes, hence the name.
Does VEX replace hscript?
I assume this question will come up at some point.
Yes and no is the answer. Mostly yes though.
We ported most of the hscript expression functions into VEX, I believe across H14, H14.5 and H15, and continue to follow up on RFEs for the more ambiguous hscript functions that make sense to port over. For example ch(), chs(), etc., to do channel referencing right in VEX. Be careful in H16 Compile Blocks, but ch() is still supported.
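For example, channel referencing right inside a wrangle (a sketch; the parameter name is arbitrary, and as noted, be careful with ch() inside H16 Compile Blocks):

```vex
// Point Wrangle sketch: ch() reads a spare parameter promoted onto the node.
float amp = ch("amplitude");   // "amplitude" is a made-up parameter name
@P.y += amp * sin(@P.x);
```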
There are some things that are just handier to do in hscript so that is always an option.
Then there are the functions that have not been implemented in VEX. Just yesterday I wanted to access the maximum value of an incoming volume. That is done with the hscript expression volumemax(), which has no VEX equivalent, while a bunch of other volume-type VEX functions were ported over from hscript. Since the maximum volume value is a constant for the current evaluated time, just create a Parameter VOP and, at the top level, put your hscript expression in there. Done!
We even added Python-like slicing to VEX string wrangling a couple of releases ago, so even text manipulation has gotten much better in VEX and in wrangling. For advanced string support, there is always Python, so use that where it makes sense, then feed VEX the string through a parameter.
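A quick sketch of that Python-like slicing on VEX strings:

```vex
// Wrangle sketch: slicing and negative indices on VEX strings.
string name = "teapot_0017";
string base = name[0:-5];   // drops the trailing "_0017"
string tail = name[-4:];    // the frame digits
```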
—-
For the minimal added complexity to a sin() function, the benefits of VEX wrangling and moving workflows to VOPs in H16 is hugely beneficial. Wrangling and VEX/VOPs are now the way forward for custom geometry manipulation.
Point SOP has been deprecated. Move on I say!
Technical Discussion » Sculpt tool not smoothing
- old_school
- 2540 posts
- Offline
Hi Detlef,
I can't recreate the bug where you set smooth to LMB in Sculpt SOP. It works for me. If you can recreate, please submit the bug to support.
But I am always setting smoothing to the MMB for some reason so I would not personally have caught this bug if it was there. I do set deform with smooth on LMB frequently though and that has worked throughout the H16 beta for me.
Houdini Learning Materials » Is there a way to offset the center of the wavetank container in a guided sim so that there are less wasted particles in front of a moving boat?
- old_school
- 2540 posts
- Offline
Yes, excellent question! It requires you to “Houdini finesse” a couple of parameters, which is common for the off-the-shelf tools.
There are centroid() expressions in there that determine the position of the container based on the centroid of the tracking object. I am always offsetting the container btw, so it's a common thing to do.
The Ocean Source SOP in question is called wave tank, and the channels to manipulate are on the parameter labeled Center; the two channels are called tx and tz.
You can simply put your own offset float values in front of the centroid() expressions:
0.2 + centroid("../mergefollow", D_X)
-0.5 + centroid("../mergefollow", D_Z)
which will offset the container 0.2 units in X and -0.5 units in Z from the centroid of the tracking object.
Tip: you can use the MMB (Middle Mouse Button) on top of the .2 or .5 in the expressions above and increase/decrease the values by manipulating the ladder handle, placing the container interactively while watching the viewport.
I use the MMB on values in expressions at least every few minutes, if not more frequently, when building up assets and Houdini networks, so this is a very common TD technique when wrangling expressions.
There are other approaches: incorporate Nulls at the object level, add your own point to locate the centroid of the container within the tracking object, or other techniques to fetch the tracking centre. But a simple offset on the expressions works just fine once you know about the MMB trick over values in expressions.
Houdini Learning Materials » H16 hair - thickness
- old_school
- 2540 posts
- Offline
The Hair Generate Object's Thickness tab and parameters are currently the way to do global hair thickness settings.
I don't see this feature, so you are correct that there is no way to set thickness via a texture map. No idea if it was planned or not. Please submit an RFE, if you will, with a few cases where you have used this in the past or need it in the future.
Digging in a bit…
If you use the tree to go inside the network (there is a dive target that takes you inside a subnet to do the groom btw) you can find the Attribute Wrangle SOP called “set_width” that creates the standard float “width” attribute. The set_width Attribute Wrangle SOP feeds in to a Null SOP target called “Hairs”.
Looking at this design set-up, the width attribute is set directly on the generated hairs. There is no surface involved, so a map would require additional logic to work on the hair.
This Null SOP called HAIRS feeds into the dive target subnet, where you can manipulate the width with any attribute-modifying SOP you wish, affecting the float width attribute directly on the guides or the hair. You could fetch the surface with UVs to reference your texture map and then apply the values to the hair curves if you want.
I think that the general shape defined at the top level is a good one. If you want, say, whiskers thicker than body hair, you could select those whisker curves and then multiply pscale up with an Attribute Wrangle. Selecting with a texture map value on a surface can be set up to do the same thing.
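As a sketch, a Point Wrangle dropped into that stream could thicken a selected set of curves (the “whiskers” group name is an assumption; adapt to whatever selection you build):

```vex
// Point Wrangle sketch: thicken whisker curves relative to body hair.
// Assumes a point group named "whiskers" exists on the hair stream.
if (inpointgroup(0, "whiskers", @ptnum))
    f@width *= 3.0;
```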
I would probably do a different groom for the whiskers myself and merge for sim if I needed to.
Remember every SOP you create inside the Hair_Generate object becomes a part of the hair generation “procedural” so you can make customizations as you go.
Houdini Indie and Apprentice » Distribute points (randomly) in a mesh with an increase in density towards the center (Points from Volume)
- old_school
- 2540 posts
- Offline
If you generate an SDF type volume with the Iso Offset SOP and negate the SDF with the toggle, then pass this into the Scatter SOP, you will naturally get more points the further you are inside and away from the surface.
See the attached hip file for a complete exploration of scattering points into SDF volumes. Please read the notes in the SOP network. When modifying SDF volumes, it is important that you let anyone working with the modified SDF volumes know that they are no longer representative of the incoming geometry.
I added a Ramp Parameter option in a Volume VOP to remap the SDF to whatever you want as well.
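The remap can also be sketched in a Volume Wrangle over a density volume (a sketch only; the volume names and input wiring are assumptions, and the attached file uses a Volume VOP with a Ramp Parameter instead):

```vex
// Volume Wrangle sketch: turn a negated SDF into a density field
// that grows toward the interior. Assumes an SDF volume named "surface"
// on the second input and a bound density volume on the first.
float sdf = volumesample(1, "surface", @P);
@density = max(-sdf, 0.0);   // deeper inside => higher density for Scatter
```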
Hope this helps.
Technical Discussion » are nested assets possible?
- old_school
- 2540 posts
- Offline
The recipe for creating the geometry is done by the HDA itself, and you end up with geometry. If you select the final geometry after the HDA SOPs have done their work, you are picking the result of the HDA. To make changes your way, you need to re-apply the construction rules by adding another HDA SOP set up to modify the final bookshelf input again.
The whole idea of the current SOP versus the Display SOP reinforces the opposite workflow: affect attributes and geometry upstream, then have the Display SOP cook the changes and give you the final result.
It may be better if you treat the shelf construction HDA as the last step and build proxy geometry that drives the final HDA that builds all the shelves at once. That proxy input could be the final shelf that is adjusted and then re-built again if you want. But the HDA has to be applied again to build a new result.
For example, if you are putting shelves in a warehouse, give the artist a tool that adds basic shelf stand-in assets, one at a time. Scale and position to suit using any transforms (Transform SOP or Edit SOP in tweak mode, for example). Creative use of UVs can give the impression of more or fewer books, etc. Give the stand-in geo identifiers for front, type, etc. Use Cd colour as the colour for shelves. Add attribute slots so that the artist can change the attributes at will. Put the text identifiers on the asset so the artist can read the current state if you want.
Then pass these stand-ins in to the Shelf construction asset and it spits out all the final shelf geometry using the attributes, even as Alembic Archives or Packed Primitives if you want. All this using the Current SOP upstream to the Display SOP.
Technical Discussion » How to create single multi-group obj for Unity, from instanced primitives? Urgent for art competition.
- old_school
- 2540 posts
- Offline
One way is to use the Copy to Points SOP's ability to pack on copy, creating your assemblies per copy (I don't know if you want different geometry per copy).
Then use the Assemble SOP to construct the groups per assembly/packed primitive.
Finally, use an Unpack SOP set to transfer the groups. This will unravel the packed primitives to get back to where you started, with each piece belonging to its own group.
See the attached Houdini scene file for a working example.
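If you need explicit per-piece names as well, a Primitive Wrangle on the packed copies can tag them before the Assemble/Unpack pair (a sketch; the naming scheme is arbitrary):

```vex
// Primitive Wrangle sketch: give each packed piece a unique name
// that downstream tools can key on.
s@name = sprintf("piece_%d", @primnum);
```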