Reference Image object node

Container for the Compositing operators (COP2) that define a picture.

Since 16.0

The object represents a compositing network in the scene, displayed as a card showing an image or image sequence. It contains compositing operators that define its look.

Parameters

COP Settings

Renderable

Controls whether this object is rendered in Mantra. If off, it is only drawn in the viewport.

Use External COP

Rather than using the internal COP network, another network may be specified.

COP Path

Path to a COP network to use. If it points to a specific COP, that COP is used. Otherwise, the render COP of the network is used.

File

The default COP network contains a file COP linked to this parameter. This allows you to change the file without having to dive into the network.

Alpha

Scales the image's alpha by this value, allowing it to be made semi-transparent so you can see through it.
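
Both the File and Alpha parameters can be driven from a script. A minimal Python sketch, assuming the node lives at /obj/reference_image1 and that the internal parameter names are file and alpha (verify both in your node's parameter interface):

    import hou

    # Hypothetical node path and parameter names -- verify them in your scene.
    ref = hou.node("/obj/reference_image1")

    # The File parameter is linked into the internal COP network, so changing
    # it here swaps the image without diving into the network.
    ref.parm("file").set("$HIP/ref/front_view.jpg")

    # Fade the card so scene geometry stays visible through it.
    ref.parm("alpha").set(0.5)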

Frame

Which frame of the COP sequence to render.

Orientation

Adjust the default orientation of the image to match the default orthographic viewports.

Transform

Note

The transform of an object is also determined by the presence of any added node properties.

Transform Order

The left menu chooses the order in which transforms are applied (for example, scale, then rotate, then translate). This can change the position and orientation of the object, in the same way that going a block and turning east takes you to a different place than turning east and then going a block.

The right menu chooses the order in which to rotate around the X, Y, and Z axes. Certain orders can make character joint transforms easier to use, depending on the character.
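
The effect of the two menus can be reproduced with hou.hmath.buildTransform, which takes the same transform order and rotate order strings. A minimal sketch with arbitrary values, showing that the same parameters produce different results under different orders:

    import hou

    parms = {"translate": (1, 0, 0), "rotate": (0, 90, 0), "scale": (1, 1, 1)}

    # Scale, then rotate, then translate ("srt", the default order).
    m_srt = hou.hmath.buildTransform(parms, transform_order="srt", rotate_order="xyz")

    # Translate first, then rotate, then scale ("trs").
    m_trs = hou.hmath.buildTransform(parms, transform_order="trs", rotate_order="xyz")

    # The same values place the local origin in different spots.
    print(hou.Vector3(0, 0, 0) * m_srt)   # approximately [1, 0, 0]
    print(hou.Vector3(0, 0, 0) * m_trs)   # approximately [0, 0, -1]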

Translate

Translation along XYZ axes.

Rotate

Degrees rotation about XYZ axes.

Scale

Non-uniform scaling about XYZ axes.

Pivot

Local origin of the object. See also setting the pivot point.

Uniform Scale

Scale the object uniformly along all three axes.

Modify Pre-Transform

This menu contains options for manipulating the pre-transform values. The pre-transform is an internal transform that is applied prior to the regular transform parameters. This allows you to change the frame of reference for the translate, rotate, scale parameter values below without changing the overall transform.

Clean Transform

This reverts the translate, rotate, scale parameters to their default values while maintaining the same overall transform.

Clean Translates

This sets the translate parameter to (0, 0, 0) while maintaining the same overall transform.

Clean Rotates

This sets the rotate parameter to (0, 0, 0) while maintaining the same overall transform.

Clean Scales

This sets the scale parameter to (1, 1, 1) while maintaining the same overall transform.

Extract Pre-transform

This removes the pre-transform by setting the translate, rotate, and scale parameters in order to maintain the same overall transform. Note that if the pre-transform contains shears, it cannot be completely removed.

Reset Pre-transform

This completely removes the pre-transform without changing any parameters. This will change the overall transform of the object if there are any non-default values in the translate, rotate, and scale parameters.
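
These menu options roughly correspond to pre-transform methods on hou.ObjNode, so the same cleanup can be scripted. A minimal sketch, using a hypothetical node path:

    import hou

    obj = hou.node("/obj/reference_image1")   # hypothetical path

    # "Clean Transform": push the current parameter values into the
    # pre-transform, reverting translate/rotate/scale to their defaults
    # without moving the object.
    obj.moveParmTransformIntoPreTransform()

    # "Extract Pre-transform": fold the pre-transform back into the
    # parameters (any shear component cannot be represented and is lost).
    obj.movePreTransformIntoParmTransform()

    # "Reset Pre-transform": discard the pre-transform entirely; the object
    # moves if the transform parameters hold non-default values.
    obj.setPreTransform(hou.hmath.identityTransform())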

Keep Position When Parenting

When the object is re-parented, maintain its current world position by changing the object’s transform parameters.

Child Compensation

When the object is being transformed, maintain the current world transforms of its children by changing their transform parameters.

Enable Constraints

Enable Constraints Network on the object.

Constraints

Path to a CHOP Constraints Network. See also creating constraints.

Tip

You can use the Constraints drop-down button to activate one of the Constraints shelf tools. If you do so, the first pick session is filled automatically with the nodes selected in the parameter panel.

Note

The Lookat and Follow Path parameters on object nodes are deprecated in favor of Look At and Follow Path constraints. The parameters are only hidden for now; you can make them visible again by editing the node's parameter interface.
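
The Enable Constraints and Constraints parameters can also be set from Python. A minimal sketch; the internal parameter names constraints_on and constraints_path are assumptions, so confirm them in the node's parameter interface:

    import hou

    obj = hou.node("/obj/reference_image1")    # hypothetical path

    # Assumed internal names for Enable Constraints and Constraints.
    obj.parm("constraints_on").set(True)
    obj.parm("constraints_path").set("constraints")   # CHOP network inside the object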

Render

Material

Path to the Material node.

Display

Whether or not this object is displayed in the viewport and rendered. Turn on the checkbox to have Houdini use this parameter, then set the value to 0 to hide the object in the viewport and not render it, or 1 to show and render the object. If the checkbox is off, Houdini ignores the value.

Phantom

When true, the object will not be rendered by primary rays. Only secondary rays will hit the object.

(See the Render Visibility property).

Renderable

If this option is turned off, then the instance will not be rendered. The object’s properties can still be queried from within VEX, but no geometry will be rendered. This is roughly equivalent to turning the object into a transform space object.

See Render Visibility (vm_rendervisibility property).

Display As

How to display your geometry in the viewport.

Shading

Categories

A space- or comma-separated list of categories to which this object belongs.

Currently not supported for per-primitive material assignment (material SOP).

Reflection mask

A list of patterns. Objects matching these patterns will reflect in this object. You can use wildcards (for example, key_*) and bundle references to specify objects.

You can also use the link editor pane to edit the relationships between lights and objects using a graphical interface.

The object:reflectmask property in Mantra is a computed property containing the results of combining reflection categories and reflection masks.

Refraction mask

A list of patterns. Objects matching these patterns will be visible in refraction rays. You can use wildcards (for example, key_*) and bundle references to specify objects.

You can also use the link editor pane to edit the relationships between lights and objects using a graphical interface.

The object:refractmask property in Mantra is a computed property containing the results of combining refraction categories and refraction masks.

Light mask

A list of patterns. Lights matching these patterns will illuminate this object. You can use wildcards (for example, key_*) and bundle references to specify lights.

You can also use the link editor pane to edit the relationships between lights and objects using a graphical interface.

The object:lightmask property in Mantra is a computed property containing the results of combining light categories and light masks.

Volume filter

Some volume primitives (Geometry Volumes, Image3D) can use a filter during evaluation of volume channels. This specifies the filter. The default box filter is fast to evaluate and produces sharp renders for most smooth fluid simulations. If your voxel data contains aliasing (stairstepping along edges), you may need to use a larger filter width or smoother filter to produce acceptable results. For aliased volume data, gauss is a good filter with a filter width of 1.5.

  • point

  • box

  • gauss

  • bartlett

  • blackman

  • catrom

  • hanning

  • mitchell

Volume filter width

This specifies the filter width for the object:filter property. The filter width is specified in number of voxels. Larger filter widths take longer to render and produce blurrier renders, but may be necessary to combat aliasing in some kinds of voxel data.

Matte shading

When enabled, the object’s surface shader will be replaced with a matte shader for primary rays. The default matte shader causes the object to render as fully opaque but with an alpha of 0 - effectively cutting a hole in the image where the object would have appeared. This setting is useful when manually splitting an image into passes, so that the background elements can be rendered separately from a foreground object. The default matte shader is the “Matte” VEX shader, though it is possible to set a different matte shader by adding the vm_matteshader render property and assigning another shader. Secondary rays will still use the object’s assigned surface shader, allowing it to appear in reflections and indirect lighting even though it will not render directly.

For correct matte shading of volumes:

  1. Add the vm_matteshader property to the object.

  2. Create a Volume Matte shader.

  3. Set the density on this shader to match the density on the geometry shader.

  4. Assign this shader to vm_matteshader.

Then, when the Matte Shading toggle is enabled, it uses your custom volume matte shader rather than the default (which just sets the density to 1). If you want a fully opaque matte, you can use the Matte shader rather than Volume Matte.
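
Adding the vm_matteshader render property can also be scripted by creating it as a spare parameter. A minimal sketch; the node and shader paths are placeholders:

    import hou

    obj = hou.node("/obj/reference_image1")    # hypothetical object

    # Add the vm_matteshader render property as a spare parameter if it is
    # not already present, then point it at a custom matte shader.
    if obj.parm("vm_matteshader") is None:
        ptg = obj.parmTemplateGroup()
        ptg.append(hou.StringParmTemplate(
            "vm_matteshader", "Matte Shader", 1,
            string_type=hou.stringParmType.NodeReference))
        obj.setParmTemplateGroup(ptg)

    obj.parm("vm_matteshader").set("/shop/volumematte1")   # placeholder shader path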

Raytrace shading

Shade every sample rather than shading micropolygon vertices. This setting enables raytrace rendering on a per-object basis.

When micropolygon rendering, shading normally occurs at micropolygon vertices at the beginning of the frame. To determine the color of a sample, the corner vertices are interpolated. Turning on object:rayshade invokes the ray-tracing shading algorithm instead, so each sample is shaded independently. This may significantly increase the shading cost, but each sample is shaded at the correct time and location.

Currently not supported for per-primitive material assignment (material SOP).

Sampling

Geometry velocity blur

This menu lets you choose what type of geometry velocity blur to do on an object, if any. Separate from transform blur and deformation blur, you can render motion blur based on point movement, using attributes stored on the points that record change over time. You should use this type of blur if the number of points in the geometry changes over time (for example, a particle simulation where points are born and die).

If your geometry changes topology from frame to frame, Mantra cannot interpolate the geometry to correctly calculate motion blur. In these cases, motion blur can use a v and/or accel attribute, which stays consistent even while the underlying geometry is changing. The surface of a fluid simulation is a good example of this. In this case, and for other types of simulation data, the solvers automatically create the velocity attribute.

No Velocity Blur

Do not render motion blur on this object, even if the renderer is set to allow motion blur.

Velocity Blur

To use velocity blur, you must compute and store point velocities in a point attribute v. The renderer uses this attribute, if it exists, to render velocity motion blur (assuming the renderer is set to allow motion blur). The v attribute may be created automatically by simulation nodes (such as particle DOPs), or you can compute and add it using the Point velocity SOP.

The v attribute value is measured in Houdini units per second.
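
Solvers and the Point velocity SOP normally create v for you, but it can also be authored directly. A minimal Python SOP sketch that gives every point a constant drift (the values are purely illustrative):

    import hou

    # Inside a Python SOP upstream of the rendered geometry.
    geo = hou.pwd().geometry()

    # v is read by the renderer as Houdini units per second.
    v_attrib = geo.addAttrib(hou.attribType.Point, "v", (0.0, 0.0, 0.0))
    for point in geo.points():
        point.setAttribValue(v_attrib, (0.0, 1.0, 0.0))   # drift 1 unit/sec in +Y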

Acceleration Blur

To use acceleration blur, you must compute and store point acceleration in a point attribute accel (you can change the acceleration attribute name using the geo_accelattribute property). The renderer uses this attribute, if it exists, to render multi-segment acceleration motion blur (assuming the renderer is set to allow motion blur). The accel attribute may be created automatically by simulation nodes, or you can compute and add it using the Point velocity SOP.

When Acceleration Blur is on, if the geometry has an angular velocity attribute (w), rapid rotation will also be blurred. This should be a vector attribute, where the components represent rotation speeds in radians per second around X, Y, and Z.

When this is set to “Velocity Blur” or “Acceleration Blur”, deformation blur is not applied to the object. When this is set to “Acceleration Blur”, you can use the geo_motionsamples property to set the number of acceleration samples.

Velocity motion blur uses the velocity attribute (v) to do linear motion blur.
Acceleration motion blur uses the change in velocity to more accurately blur objects turning at high speed.
Angular acceleration blur works with object spin, such as fast-spinning cubes.

Dicing

Shading quality

This parameter controls the geometric subdivision resolution for all rendering engines and additionally controls the shading resolution for micropolygon rendering. With all other parameters at their defaults, a value of 1 means that approximately 1 micropolygon is created per pixel. A higher value generates smaller micropolygons, meaning more shading occurs and the quality is higher.

In ray tracing engines, shading quality only affects the geometric subdivision quality for smooth surfaces (NURBS, render as subdivision) and for displacements - without changing the amount of surface shading. When using ray tracing, pixel samples and ray sampling parameters must be used to improve surface shading quality.

The effect of changing the shading quality is to increase or decrease the amount of shading by a factor of vm_shadingquality squared - so a shading quality of 2 performs 4 times as much shading and a shading quality of 0.5 performs 1/4 as much shading.

Dicing flatness

This property controls the tessellation levels for nearly flat primitives. By increasing the value, more primitives are considered flat and are subdivided less. Turn this option down for more accurate (less optimized) nearly-flat surfaces.

Ray predicing

This property will cause this object to generate all displaced and subdivided geometry before the render begins. Ray tracing can be significantly faster when this setting is enabled at the cost of potentially huge memory requirements.

Disable Predicing

Geometry is diced when it is hit by a ray.

Full Predicing

Generate and store all diced geometry at once.

Precompute Bounds

Generate all diced geometry just to compute accurate bounding boxes. This setting will discard the diced geometry as soon as the box has been computed, so it is very memory efficient. This can be useful to improve efficiency when using displacements with a large displacement bound without incurring the memory cost of full predicing.

When ray-tracing, if all polygons on the model are visible (either to primary or secondary rays) it can be more efficient to pre-dice all the geometry in that model rather than caching portions of the geometry and re-generating the geometry on the fly. This is especially true when global illumination is being computed (since there is less coherency among rays).

Currently not supported for per-primitive material assignment (material SOP).

Shade curves as surfaces

When rendering a curve, turns the curve into a surface and dices the surface, running the surface shader on multiple points across the surface. This may be useful when the curves become curved surfaces, but is less efficient. The default is to simply run the shader on the points of the curve and duplicate those shaded points across the created surface.

Geometry

Backface removal (Mantra)

If enabled, geometry facing away from the camera is not rendered.

Procedural shader

Geometry SHOP used by the renderer to generate render geometry for this object.

Force procedural geometry output

Enables output of geometry when a procedural shader is assigned. If you know that the procedural you have assigned does not rely on geometry being present for the procedural to operate correctly, you can disable this toggle.

Polygons as subdivision (Mantra)

Render polygons as a subdivision surface. The creaseweight attribute is used to perform linear creasing. This attribute may appear on points, vertices or primitives.

When rendering using OpenSubdiv, in addition to the creaseweight and cornerweight attributes and the subdivision_hole group, additional attributes are scanned to control the behaviour of refinement. These override any other settings:

  • int osd_scheme, string osd_scheme: Specifies the scheme for OSD subdivision (0 or “catmull-clark”; 1 or “loop”; 2 or “bilinear”). Note that for Loop subdivision, the geometry can only contain triangles.

  • int osd_vtxboundaryinterpolation: The Vertex Boundary Interpolation method (see vm_osd_vtxinterp for further details)

  • int osd_fvarlinearinterpolation: The Face-Varying Linear Interpolation method (see vm_osd_fvarinterp for further details)

  • int osd_creasingmethod: Specify the creasing method, 0 for Catmull-Clark, 1 for Chaikin

  • int osd_trianglesubdiv: Specifies the triangle weighting algorithm, 0 for Catmull-Clark weights, 1 for “smooth triangle” weights.
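
For example, linear creasing can be authored procedurally before the object is rendered as a subdivision surface. A minimal Python SOP sketch; the choice of which points to crease is arbitrary:

    import hou

    # Inside a Python SOP upstream of the rendered geometry.
    geo = hou.pwd().geometry()

    # creaseweight may live on points, vertices or primitives; points are used here.
    crease = geo.addAttrib(hou.attribType.Point, "creaseweight", 0.0)
    for point in geo.points():
        if point.number() < 4:                  # arbitrary example selection
            point.setAttribValue(crease, 2.0)   # non-zero weights sharpen the surface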

Render as points (Mantra)

Controls how points from geometry are rendered. At the default setting, No Point Rendering, only points from particle systems are rendered. Setting this value to Render Only Points renders the geometry using only the point attributes, ignoring all vertex and primitive information. Render Unconnected Points works in a similar way, but only for points not used by any of the geometry's primitives.

Two attributes control the point primitives if they exist.

orient

A vector which determines the normal of the point geometry. If the attribute doesn’t exist, points are oriented to face the incoming ray (the VEX I variable).

width

Determines the 3D size of the points (defaults to 0.05).
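
A minimal Python SOP sketch that adds both attributes so points render as fixed-size discs facing +Y instead of the camera (the values are illustrative):

    import hou

    geo = hou.pwd().geometry()   # inside a Python SOP

    # Without orient, points face the incoming ray; width overrides the 0.05 default.
    geo.addAttrib(hou.attribType.Point, "orient", (0.0, 1.0, 0.0))
    geo.addAttrib(hou.attribType.Point, "width", 0.02)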

Use N for point rendering

Mantra will initialize the N global from the N attribute when rendering point primitives. When disabled (the default), point normals will be initialized to face the camera.

Metaballs as volume

Render metaballs as volumes as opposed to surfaces. The volume quality for metaballs will be set based on the average size of all metaballs in the geometry, so increasing or decreasing the metaball size will automatically adjust the render quality to match.

Coving

Whether Mantra will try to prevent cracks.

Coving is the process of filling cracks in diced geometry at render time, where different levels of dicing side-by-side create gaps at T-junctions.

The default setting, Coving for displacement/sub-d, only does coving for surfaces with a displacement shader and subdivision surfaces, where the displacement of points can potentially create large cracks. This is sufficient for most rendering; however, you may want to use Coving for all primitives if you are using a very low shading rate or see cracks in the alpha of the rendered image.

Do not use Disable coving. It has no performance benefit, and may actually harm performance since Houdini has to render any geometry visible through the crack.

0

No coving.

1

Only displaced surfaces and sub-division surfaces will be coved.

2

All primitives will be coved.

Material Override

Controls how material overrides are evaluated and output to the IFD.

When set to Evaluate Once, any parameter on the material that uses channels or expressions is evaluated only once for the entire detail. This results in significantly faster IFD generation, because the material parameter assignment is handled entirely by Mantra rather than Houdini. Setting the parameter value to Evaluate for Each Primitive/Point evaluates those parameters for each primitive and/or point. It's also possible to skip material overrides entirely by setting the parameter value to Disabled.

Automatically Compute Normals (Old)

Whether Mantra should compute the N attribute automatically. If the N attribute exists, its value remains unchanged. However, if no N attribute exists, it is created. This allows polygon geometry that doesn't already have an N attribute to be smooth shaded.

Not supported for per-primitive material assignment (material SOP).

Ignore geometry attribute shaders

When geometry has shaders defined on a per-primitive basis, this parameter will override these shaders and use only the object’s shader. This is useful when performing matte shading on objects.

Not supported for per-primitive material assignment (material SOP).

Misc

Set Wireframe Color

Use the specified wireframe color.

Wireframe Color

The display color of the object.

Viewport Selecting Enabled

Object is capable of being picked in the viewport.

Select Script

Script to run when the object is picked in the viewport. See select scripts.

Cache Object Transform

Caches object transforms once Houdini calculates them. This is especially useful for objects whose world space position is expensive to calculate (such as Sticky objects), and objects at the end of long parenting chains (such as Bones). This option is turned on by default for Sticky and Bone objects.

See the OBJ Caching section of the Houdini Preferences window for how to control the size of the object transform cache.

Locals

IPT

This is typically -1. However, if the object is performing point instancing, then this variable will be set to the point number of the template geometry. For the IPT variable to be active, the Point Instancing parameter must be turned on in this object.

Note

This variable is deprecated. Use the instancepoint expression function instead.

Object nodes

  • Agent Cam

    Create and attach camera to a crowd agent.

  • Alembic Archive

    Loads the objects from an Alembic scene archive (.abc) file into the object level.

  • Alembic Xform

    Loads only the transform from an object or objects in an Alembic scene archive (.abc).

  • Ambient Light

    Adds a constant level of light to every surface in the scene (or in the light’s mask), coming from no specific direction.

  • Auto Bone Chain Interface

    The Auto Bone Chain Interface is created by the IK from Objects and IK from Bones tools on the Rigging shelf.

  • Blend

    Switches or blends between the transformations of several input objects.

  • Blend Sticky

    Computes its transform by blending between the transforms of two or more sticky objects, allowing you to blend a position across a polygonal surface.

  • Bone

    The Bone Object is used to create hierarchies of limb-like objects that form part of a hierarchy …

  • Camera

    You can view your scene through a camera, and render from its point of view.

  • Common object parameters

  • Dop Network

    The DOP Network Object contains a dynamic simulation.

  • Environment Light

    Environment Lights provide background illumination from outside the scene.

  • Extract Transform

    The Extract Transform Object gets its transform by comparing the points of two pieces of geometry.

  • Fetch

    The Fetch Object gets its transform by copying the transform of another object.

  • Formation Crowd Example

    Crowd example showing a changing formation setup

  • Fuzzy Logic Obstacle Avoidance Example

  • Fuzzy Logic State Transition Example

  • Geometry

    Container for the geometry operators (SOPs) that define a modeled object.

  • Groom Merge

    Merges groom data from multiple objects into one.

  • Guide Deform

    Moves the curves of a groom with animated skin.

  • Guide Groom

    Generates guide curves from a skin geometry and does further processing on these using an editable SOP network contained within the node.

  • Guide Simulate

    Runs a physics simulation on the input guides.

  • Hair Card Generate

    Converts dense hair curves to a polygon card, keeping the style and shape of the groom.

  • Hair Card Texture Example

    An example of how to create a texture for hair cards.

  • Hair Generate

    Generates hair from a skin geometry and guide curves.

  • Handle

    The Handle Object is an IK tool for manipulating bones.

  • Indirect Light

    Indirect lights produce illumination that has reflected from other objects in the scene.

  • Instance

    Instance Objects can instance other geometry, light, or even subnetworks of objects.

  • LOP Import

    Imports transform data from a USD primitive in a LOP node.

  • LOP Import Camera

    Imports a USD camera primitive from a LOP node.

  • Labs Fire Presets

    Quickly generate and render fire simulations using presets for size varying from torch to small to 1m high and low

  • Light

    Light Objects cast light on other objects in a scene.

  • Light template

    A very limited light object without any built-in render properties. Use this only if you want to build completely custom light with your choice of properties.

  • Microphone

    The Microphone object specifies a listening point for the SpatialAudio CHOP.

  • Mocap Acclaim

    Import Acclaim motion capture.

  • Mocap Biped 1

    A male character with motion captured animations.

  • Mocap Biped 2

    A male character with motion captured animations.

  • Mocap Biped 3

    A male character with motion captured animations.

  • Null

Serves as a place-holder in the scene, usually for parenting. This object does not render.

  • Path

    The Path object creates an oriented curve (path)

  • PathCV

    The PathCV object creates control vertices used by the Path object.

  • Python Script

    The Python Script object is a container for the geometry operators (SOPs) that define a modeled object.

  • Ragdoll Run Example

    Crowd example showing a simple ragdoll setup.

  • Reference Image

    Container for the Compositing operators (COP2) that define a picture.

  • Rivet

Creates a rivet on an object's surface, usually for parenting.

  • Simple Biped

    A simple and efficient animation rig with full controls.

  • Simple Female

    A simple and efficient female character animation rig with full controls.

  • Simple Male

    A simple and efficient male character animation rig with full controls.

  • Sound

The Sound object defines a sound emission point for the Spatial Audio CHOP.

  • Stadium Crowds Example

    Crowd example showing a stadium setup

  • Stereo Camera Rig

    Provides parameters to manipulate the interaxial lens distance as well as the zero parallax setting plane in the scene.

  • Stereo Camera Template

    Serves as a basis for constructing a more functional stereo camera rig as a digital asset.

  • Sticky

Creates a sticky object based on the UVs of a surface, usually for parenting.

  • Street Crowd Example

    Crowd example showing a street setup with two agent groups

  • Subnet

    Container for objects.

  • Switcher

    Acts as a camera but switches between the views from other cameras.

  • TOP Network

    The TOP Network operator contains object-level nodes for running tasks.

  • VR Camera

    Camera supporting VR image rendering.

  • Viewport Isolator

    A Python Script HDA providing per viewport isolation controls from selection.

  • glTF