Since 18.0
Overview ¶
Many renderers have custom settings that can be set on a per-geometry basis, such as sampling rates, motion blur control, and dicing quality. These properties are not generally applicable enough to be part of the USD standard, but can be very important for batch rendering targeted at a specific renderer.
These geometry settings are different from the settings on a Render Settings primitive because they do not affect the rendering of the entire scene. They only affect the targeted geometry primitives. Renderer-specific light settings are already provided on the Light LOP itself, and so will not appear on this node.
This node provides a way to set these renderer-specific settings on a set of geometry primitives.
Note
To find out how to add parameters for a third-party renderer, see “USD Hydra: Customizing for Houdini” in the HDK documentation.
Inheritance ¶
- Individual renderers can set values from these parameters as USD attributes, or as primvars. (Karma sets them as primvars.)
- Storing the settings in primvars allows them to be inherited down the scene graph hierarchy, so it is easier to apply a particular setting to large numbers of geometry primitives.
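For example, here is a minimal sketch using the USD Python API of how a setting authored as a primvar on an ancestor prim is inherited by the geometry below it (the stage layout is made up for illustration; karma:object:geosamples is one of the Karma properties this node can author):

```python
from pxr import Usd, UsdGeom, Sdf

stage = Usd.Stage.CreateInMemory()
parent = UsdGeom.Xform.Define(stage, "/geo")
mesh = UsdGeom.Mesh.Define(stage, "/geo/mesh")

# Author the setting once on the ancestor prim as a primvar.
pv = UsdGeom.PrimvarsAPI(parent.GetPrim()).CreatePrimvar(
    "karma:object:geosamples", Sdf.ValueTypeNames.Int)
pv.Set(4)

# Renderers that look up primvars with inheritance see the value on the mesh,
# even though nothing was authored on the mesh itself.
inherited = UsdGeom.PrimvarsAPI(mesh.GetPrim()).FindPrimvarWithInheritance(
    "karma:object:geosamples")
print(inherited.Get())  # 4
```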
How to ¶
- Specify the geometry primitives you want to apply settings to in the Primitives parameter.
- Use the pop-up menu to the left of the parameters to control how the node authors the opinion for that setting.
Pop-up menu item
Meaning
Set or Create
Sets the attribute to the given value, whether it previously existed or not.
Set If Exists
Only sets the attribute to the given value if it previously existed. Use this mode to make sure an attribute is only set on primitives of the correct type. For example, only UsdGeomSphere primitives are likely to have a radius attribute.
Block
Makes the attribute appear to not exist, so it takes on its default value. (If the attribute doesn't already exist on the prim, this does nothing.)
Disconnect Input
Deletes the attribute input connection to its source. Input connections take precedence over attribute values, so disconnecting an input allows the attribute value to take effect.
Do Nothing
Ignores this parameter; doesn't create or change the attribute in any way.
Parameters ¶
Primitives
The primitive(s) the node should operate on. You can drag primitives from the scene graph tree pane into this textbox to add their paths, or click the Reselect button beside the text box to select the primitives in the viewer, or ⌃ Ctrl-click the Reselect button to choose prims from a pop-up tree window. You can also use primitive patterns for advanced matching, including matching all prims in a collection (using /path/to/prim.collection:name).
Karma ¶
Enable Motion Blur
Whether to enable motion blur. Changing this in the display options will require a restart of the render.
Velocity Blur
This parameter lets you choose what type of geometry velocity blur to do on an object, if any. Separate from transform blur and deformation blur, you can render motion blur based on point movement, using attributes stored on the points that record change over time. You should use this type of blur if the number of points in the geometry changes over time (for example, a particle simulation where points are born and die).
If your geometry changes topology frame-to-frame, Karma will not be able to interpolate the geometry to correctly calculate motion blur. In these cases, motion blur can use a velocities and/or accelerations attribute, which stays consistent even while the underlying geometry is changing. The surface of a fluid simulation is a good example of this. For this and other types of simulation data, the solvers will automatically create the velocity attribute.
Note
In Solaris, the velocities, accelerations, and angularVelocities attributes are equivalent to v, accel, and w in SOPs, respectively.
No Velocity Blur
Do not render motion blur on this object, even if the renderer is set to allow motion blur.
Velocity Blur
To use velocity blur, you must compute and store point velocities in a point attribute named velocities. The renderer uses this attribute, if it exists, to render velocity motion blur (assuming the renderer is set to allow motion blur). The velocities attribute may be created automatically by simulation nodes (such as particle DOPs), or you can compute and add it using the Point Velocity SOP.
The velocities attribute value is measured in Houdini units per second.
Acceleration Blur
To use acceleration blur, you must compute and store point accelerations in a point attribute named accelerations. The renderer uses this attribute, if it exists, to render multi-segment acceleration motion blur (assuming the renderer is set to allow motion blur). The accelerations attribute may be created automatically by simulation nodes, or you can compute and add it using the Point Velocity SOP.
When Acceleration Blur is on, if the geometry has an angular velocity attribute (w), rapid rotation will also be blurred. This should be a vector attribute, where the components represent rotation speeds in radians per second around X, Y, and Z.
When this is set to “Velocity Blur” or “Acceleration Blur”, deformation blur is not applied to the object. When this is set to “Acceleration Blur”, use the karma:object:geosamples property to set the number of acceleration samples.
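As a rough illustration of the difference between the two modes (a sketch of the idea, not Karma's actual sampling code), velocity blur extrapolates each point linearly from velocities over the shutter, while acceleration blur adds a quadratic term from accelerations:

```python
import numpy as np

def blurred_position(P, velocities, accelerations, t):
    """Point position t seconds after the shutter opens.

    Velocity blur uses only the linear term; acceleration blur adds the
    quadratic term, which is why it needs extra geometry samples.
    """
    return P + velocities * t + 0.5 * accelerations * t * t

P = np.array([0.0, 0.0, 0.0])
v = np.array([0.0, 2.0, 0.0])    # velocities, in Houdini units per second
a = np.array([0.0, -9.8, 0.0])   # accelerations, in units per second squared
print(blurred_position(P, v, a, 1.0 / 48.0))  # sample 1/48 s into the shutter
```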

Geometry Time Samples
The number of sub-frame samples to compute when rendering deformation motion blur over the shutter open time. The default is 1 (sample only at the start of the shutter time), giving no deformation blur by default. If you want rapidly deforming geometry to blur properly, you must increase this value to 2 or more. Note that this value is limited by the number of sub-samples available in the USD file being rendered. An exception to this is the USD Skel deformer, which allows sampling at arbitrary sub-frame times.
“Deformation” may refer to simple transformations at the Geometry (SOP) level, or actual surface deformation, such as a character or object which changes shape rapidly over the course of a frame.

Objects whose deformations are quite complex within a single frame will require a higher number of Geo Time Samples.

Deformation blur also lets you blur attribute change over the shutter time. For example, if point colors are changing rapidly as the object moves, you can blur the Cd
attribute.
Increasing the number of Geo Time Samples can have an impact on the amount of memory Karma uses. For each additional Sample, Karma must retain a copy of the geometry in memory while it samples across the shutter time. When optimizing your renders, it is a good idea to find the minimum number of Geo Time Samples necessary to create a smooth motion trail.
Deformation blur is ignored for objects that have Velocity motion blur turned on.
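For reference, the sub-frame samples that limit this setting are ordinary USD time samples on the geometry attributes. A minimal sketch with the USD Python API (the prim path and motion are made up for illustration):

```python
from pxr import Usd, UsdGeom, Gf

stage = Usd.Stage.CreateInMemory()
mesh = UsdGeom.Mesh.Define(stage, "/geo/deforming")
pts = mesh.GetPointsAttr()

# Author points at sub-frame times across frame 1. With samples like these
# in the file, Geometry Time Samples values above 1 have data to work with.
for i in range(5):
    t = 1.0 + i * 0.25
    pts.Set([Gf.Vec3f(0.0, (t - 1.0) ** 2, 0.0)], t)
```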
Transform Time Samples
The number of samples to compute when rendering transformation motion blur over the shutter open time. The default is 2 samples (at the start and end of the shutter time), giving one blurred segment.

If you have an object moving and changing direction extremely quickly, you might want to increase the number of samples to capture the sub-frame direction changes.

In the above example, it requires 40 transformation samples to correctly render the complex motion that occurs within one frame. (This amount of change within a single frame is very unusual and only used as a demonstration.)
Transformation blur simulates blur by interpolating each object’s transformation between frames, so it’s cheap to compute but does not capture surface deformation. To enable blurring deforming geometry, increase karma:object:geosamples.
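As with deformation, the renderer can only interpolate transform samples that exist in the USD data. A small sketch with the USD Python API of authoring sub-frame transform samples (the prim and motion are hypothetical):

```python
from pxr import Usd, UsdGeom

stage = Usd.Stage.CreateInMemory()
xform = UsdGeom.Xform.Define(stage, "/geo/spinner")
rot = xform.AddRotateYOp()

# Non-linear rotation within frame 1; several Transform Time Samples are
# needed to capture the curved motion instead of one straight blur segment.
for i in range(5):
    t = 1.0 + i * 0.25
    rot.Set(90.0 * (t - 1.0) ** 2, t)
```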
Dicing Quality
This parameter controls the geometric subdivision resolution for smooth surfaces (subdivision surfaces and displaced surfaces). With all other parameters at their defaults, a value of 1 means that approximately 1 micropolygon will be created per pixel. A higher value generates smaller micropolygons, meaning more shading will occur, but the quality will be higher.
The effect of changing the dicing quality is to increase or decrease the amount of shading by a factor of karma:object:dicingquality squared, so a dicing quality of 2 will perform 4 times as much shading, and a dicing quality of 0.5 will perform 1/4 as much shading.
Diffuse Samples
Specifies the quality of indirect diffuse shading. A value of 1 translates to roughly one additional diffuse sample per shading computation. A value of 4 translates to roughly 4 additional diffuse samples per shading computation.
Reflect Samples
Specifies the quality of indirect reflection shading. A value of 1 translates to roughly one additional reflection sample per shading computation. A value of 4 translates to roughly 4 additional reflection samples per shading computation.
Refract Samples
Specifies the quality of indirect refraction shading. A value of 1 translates to roughly one additional refraction sample per shading computation. A value of 4 translates to roughly 4 additional refraction samples per shading computation.
Volume Samples
Specifies the quality of indirect volumetric shading. A value of 1 translates to roughly one additional volumetric sample per shading computation. A value of 4 translates to roughly 4 additional volumetric samples per shading computation.
SSS Samples
Specifies the quality of indirect sub-surface scattering shading. A value of 1 translates to roughly one additional sub-surface scattering sample per shading computation. A value of 4 translates to roughly 4 additional sub-surface scattering samples per shading computation.
Diffuse Limit
The number of times diffuse rays can propagate through your scene.

Unlike the Reflect and Refract Limits, this parameter will increase the overall amount of light in your scene and contribute to the majority of global illumination. With this parameter set above zero, diffuse surfaces will accumulate light from other objects in addition to direct light sources.

In this example, increasing the Diffuse Limit has a dramatic effect on the appearance of the final image. To replicate realistic lighting conditions, it is often necessary to increase the Diffuse Limit. However, since the amount of light contribution usually decreases with each diffuse bounce, increasing the Diffuse Limit beyond 4 does little to improve the visual fidelity of a scene. Additionally, increasing the Diffuse Limit can dramatically increase noise levels and render times.

This is a float because all limits are stochastically picked per-sample, so for example you can set the diffuse limit to 3.25 and have 25% of the rays with a diffuse limit of 4 and 75% of rays with a diffuse limit of 3.
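A minimal sketch of how such a fractional limit could be resolved per ray (an illustration of the stochastic pick described above, not Karma's actual code):

```python
import math
import random

def per_ray_limit(limit):
    """Resolve a fractional bounce limit to an integer for one ray.

    For limit = 3.25, about 25% of rays get a limit of 4 and 75% get 3,
    so the average over many rays is 3.25.
    """
    base = math.floor(limit)
    return base + (1 if random.random() < limit - base else 0)

limits = [per_ray_limit(3.25) for _ in range(100000)]
print(sum(limits) / len(limits))  # close to 3.25
```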
Reflection Limit

The number of times a ray can be reflected in your scene.

This example shows a classic “Hall of Mirrors” scenario with the subject placed between two mirrors.

This effectively creates an infinite series of reflections.

From this camera angle the reflection limits are very obvious and have a large impact on the accuracy of the final image. However, in most cases the reflection limit will be more subtle, allowing you to reduce the number of reflections in your scene and optimize the time it takes to render them.
Remember that the first time a light source is reflected in an object, it is considered a direct reflection. Therefore, even with Reflect Limit set to 0, you will still see specular reflections of light sources.
This is a float because all limits are stochastically picked per-sample, so for example you can set the diffuse limit to 3.25 and have 25% of the rays with a diffuse limit of 4 and 75% of rays with a diffuse limit of 3.
Refraction Limit

This parameter controls the number of times a ray can be refracted in your scene.

This example shows a simple scene with ten grids all in a row.

By applying a refractive shader, we will be able to see through the grids to an image of a sunset in the background.

From this camera angle, in order for the image to be accurate, the refraction limit must match the number of grids that are in the scene. However, most scenes will not have this number of refractive objects all in a row, and so it is possible to reduce the refraction limit without affecting the final image, while also reducing the time it takes to render.

Keep in mind that this Refract Limit refers to the number of surfaces that the ray must travel through, not the number of objects.
Remember that the first time a light source is refracted through a surface, it is considered a direct refraction. Therefore, even with Refract Limit set to 0, you will see refractions of light sources. However, since most objects in your scene will have at least two surfaces between them and the light source, direct refractions are often not evident in your final render.
This is a float because all limits are stochastically picked per-sample, so for example you can set the diffuse limit to 3.25 and have 25% of the rays with a diffuse limit of 4 and 75% of rays with a diffuse limit of 3.
Volume Limit
The number of times a volumetric ray can propagate through a scene. It functions in a similar fashion to the Diffuse Limit parameter.

Increasing the Volume Limit parameter will result in much more realistic volumetric effects. This is especially noticeable in situations where only part of a volume is receiving direct lighting. Also, in order for a volumetric object to receive indirect light from other objects, the Volume Limit parameter must be set above 0.

With the Volume Limit set to values above zero, the fog volume takes on the characteristic light scattering you would expect from light traveling through a volume. However, as with the Diffuse Limit, the light contribution generally decreases with each bounced ray and therefore using values above 4 does not necessarily result in a noticeably more realistic image.
Also, increasing the value of this parameter can dramatically increase the amount of time spent rendering volumetric images.
This is a float because all limits are stochastically picked per-sample, so for example you can set the diffuse limit to 3.25 and have 25% of the rays with a diffuse limit of 4 and 75% of rays with a diffuse limit of 3.
SSS Limit
The number of times an SSS ray can propagate through a scene. It functions in a similar fashion to the Diffuse Limit parameter.
This is a float because all limits are stochastically picked per-sample, so for example you can set the diffuse limit to 3.25 and have 25% of the rays with a diffuse limit of 4 and 75% of rays with a diffuse limit of 3.
Volume Step Rate

How finely or coarsely a volume is sampled as a ray travels through it. Volumetric objects are made up of 3D structures called voxels; the value of this parameter represents the number of samples taken per voxel as a ray travels through the volume.
The default value is 0.25, which means that one of every four voxels will be sampled. A value of 1 would mean that all voxels are sampled, and a value of 2 would mean that all voxels are sampled twice. This means that the volume step rate value behaves in a similar way to pixel samples, acting as a multiplier on the total number of samples for volumetric objects.
Keep in mind that increasing the volume step rate can dramatically increase render times, so it should only be adjusted when necessary. Also, while increasing the default from 0.25 can reduce volumetric noise, increasing the value beyond 1 will rarely see similar results.
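As a rough sketch of the relationship described above (the function is illustrative, not part of Karma):

```python
def volume_samples_along_ray(ray_length, voxel_size, step_rate):
    """Approximate number of volume samples taken along a ray.

    step_rate is samples per voxel: 0.25 samples one of every four voxels,
    1 samples every voxel, and 2 samples every voxel twice.
    """
    step_size = voxel_size / step_rate
    return max(1, int(ray_length / step_size))

print(volume_samples_along_ray(ray_length=10.0, voxel_size=0.1, step_rate=0.25))  # 25
print(volume_samples_along_ray(ray_length=10.0, voxel_size=0.1, step_rate=1.0))   # 100
```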
Uniform Volume
Whether to render this object as if it was a uniform-density volume. Using this property on surface geometry is more efficient than actually creating a volume object of uniform density, since the renderer can assume that the volume density is uniform and place samples more optimally. The surface normal of the surface is used to determine which side of the surface will render as a volume - the normal will point away from the interior. The surface need not be closed - if the surface is not closed, the volume will extend an infinite distance away from the surface. Non-closed surfaces may produce unexpected results near the edge of the surface, so try to keep the viewing camera away from the edges.
Uniform Volume Density
Determines how the samples are distributed when rendering a uniform volume (karma:object:volumeuniform is enabled). This parameter must match the density on the uniform volume shader in order to produce correct results.
Render Points As
When rendering point clouds, the points can be rendered as camera-oriented discs, spheres, or discs oriented to the normal attribute.
Render Curves As
When rendering curves, they can be rendered as ribbons oriented to face the camera, rounded tubes, or ribbons oriented to the normal attribute attached to the points.
Override Curves Basis
USD supports Curve Basis types that may not be supported directly in Houdini. In some cases, you may want to override the Houdini curve basis. For example, if you have linear curves in Houdini, you may want to render them with a Bezier, B-Spline or Catmull-Rom basis. This menu will force Karma to override the basis that’s tied to the USD primitives.
Note that the topology of the curves must match the target basis. For example, when selecting any cubic curve basis, every curve must have at least 4 vertices. For the Bezier basis, curves must have 4 + 3*N vertices.
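A minimal sketch of that vertex-count rule (a hypothetical helper, not part of Houdini or USD):

```python
def valid_for_basis(vertex_count, basis):
    """Check whether a curve's vertex count fits the target basis."""
    if basis == "bezier":
        # Bezier curves need 4 + 3*N vertices (4, 7, 10, ...).
        return vertex_count >= 4 and (vertex_count - 4) % 3 == 0
    if basis in ("bspline", "catmullRom"):
        # The other cubic bases only require at least 4 vertices.
        return vertex_count >= 4
    return True  # the linear basis has no extra requirement

print(valid_for_basis(7, "bezier"))   # True
print(valid_for_basis(6, "bezier"))   # False
print(valid_for_basis(5, "bspline"))  # True
```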
Treat As Light Source
Any object with an emissive material will generate light within the scene. If an object is significant enough (e.g. size, brightness), it is possible for Karma to treat that object as if it were an explicit light source (similar to regular lights), meaning the emitted light will be handled much more efficiently. However, doing so adds extra overhead elsewhere in the system (e.g. increased memory usage, slower update times).
There are three options. “No” will set the object as not being a light source. “Yes” will set the object as being a light source. “Auto” (default) means Karma will use an internal heuristic to decide if the object should be treated as a light source.
Light Sampling Quality
When an object is used as a geometry light source, this sets the per-light sampling quality. Increasing the quality will add additional samples for this light source, improving the sampling quality of this light relative to other light sources.
Note: This is not the quality of light received by an object.
Fix Shadow Terminator
Adjusts the shading position of shadow rays to avoid self-shadowing artifacts on low-poly meshes caused by the discrepancy between smooth normals and face normals.
Volume Filter
Some volume primitives can use a filter during evaluation of volume channels. This specifies the filter. The default box filter is fast to evaluate and produces sharp renders for most smooth fluid simulations. If your voxel data contains aliasing (stairstepping along edges), you may need to use a larger filter width or smoother filter to produce acceptable results. For aliased volume data, gauss is a good filter with a filter width of 1.5.
- point
- box
- gauss
- bartlett
- blackman
- catrom
- hanning
- mitchell
Volume Filter Width
This specifies the filter width for the “Volume Filter” property. The filter width is specified in number of voxels. Larger filter widths take longer to render and produce blurrier renders, but may be necessary to combat aliasing in some kinds of voxel data.