Houdini 20.0 Nodes Geometry nodes

Raw Import geometry node

Imports raw binary files as detail attributes, point attributes, or volumes.

Raw Import loads binary files and creates detail attributes, points, or volumes to represent the data within. This requires a precise specification of the file layout, including the bit-depth and endianness of the data.

Reading is done blockwise, and only the specified blocks are read. Some file formats may require multiple Raw Import nodes: the first reads the header information, and later ones use an Ignored block to skip past it to the data that needs to be read.
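As a sketch of what such a block layout might look like, the following Python snippet (independent of Houdini, using a hypothetical header-plus-points layout) writes and then re-reads a raw buffer blockwise:

```python
import io
import struct

# Hypothetical layout: a 4-byte little-endian int header holding the
# point count, followed by three little-endian float32s per point.
buf = io.BytesIO()
points = [(0.0, 1.0, 2.0), (3.0, 4.0, 5.0)]
buf.write(struct.pack("<i", len(points)))      # header block
for p in points:
    buf.write(struct.pack("<3f", *p))          # point data block

# Reading mirrors the two-block layout: header first, then points.
buf.seek(0)
(count,) = struct.unpack("<i", buf.read(4))
data = [struct.unpack("<3f", buf.read(12)) for _ in range(count)]
```

In Raw Import terms, a first node (or a detail block) would read the header, and the point block would then be read with the resulting count.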

For text data, use the Table Import SOP instead.


Raw File

The file to read.

Reload Geometry

Trigger re-loading and re-cooking of the file. Useful if the file has changed on disk.

File Layout


The byte ordering of the raw data. Unless all the data is single bytes, you need to set this based on how the data was created. There is no way to determine the correct setting other than knowing which endianness the program that created the data used; however, “Little” (the default) is common on modern systems.

Little (Intel)

The least significant byte in a multi-byte number is first. This matches the normal memory layout of modern architectures (Intel and ARM), so it is commonly used.

Big (Network)

The most significant byte in a multi-byte number is first. This is sometimes called “network ordering” and is common in, but not limited to, network protocols.
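The difference is easy to see by packing the same number both ways; this Python sketch uses the standard struct module:

```python
import struct

value = 1

little = struct.pack("<i", value)  # least significant byte first
big = struct.pack(">i", value)     # most significant byte first

print(little.hex())  # 01000000
print(big.hex())     # 00000001
```

Reading a little-endian file with the big-endian setting (or vice versa) silently produces scrambled values, which is why the setting must match the producing program.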

While many know the story of Gulliver’s Travels, in which the eponymous main character washes ashore on a land of little people, they may not recall why the nations of that island were at war: there was a schism over whether eggs should be cracked from the little end or the big end first.

A similar problem arose in the world of computer science. When storing a multi-byte value, do you store the big portion or the little portion first?

Point Count

If point attributes are read from the file, points must be created to hold the attributes. This controls how the number of points is determined.

No Points

No points are created. There should not be any point-attribute blocks in this case.

Specific Points

A specific number of points are created. All point-based blocks will have this size.

From Header

If a block creates a detail attribute, that detail attribute’s value can be used to set the number of points. This avoids the need for multiple passes when the point count is embedded in the file. The detail block must precede the first point block.

From File Size

Most file sources allow determination of the file size. When this is possible, the number of points can be inferred by computing the leftover space after all the other blocks are accounted for. This requires any header or trailer blocks to be properly defined, so the remaining bytes can be divided among the point attributes.
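As an illustration of this inference, here is a sketch with hypothetical block sizes: an 8-byte header, a 16-byte trailer, and one point block of 16 bytes per point.

```python
# "From File Size" inference: subtract the fixed blocks, then divide
# what remains by the per-point size. All sizes here are hypothetical.
file_size = 8 + 5 * 16 + 16        # e.g. a 104-byte file with 5 points

header_bytes = 8
trailer_bytes = 16
bytes_per_point = 3 * 4 + 4        # P (3 x float32) + one float32 attribute

remaining = file_size - header_bytes - trailer_bytes
assert remaining % bytes_per_point == 0, "layout does not divide evenly"
point_count = remaining // bytes_per_point
print(point_count)  # 5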

Number of Points

How many points to create.

Point Count Attrib

A detail attribute, created by a detail block, whose value determines the number of points to create.

Number of Blocks

The number of blocks to read from the file.

Block #

The name of the block. This becomes the attribute name for detail or point blocks, and the volume name for volume blocks. For an Ignored block, it is a useful way to document why the block exists.

Import Target

Where to put the data read from the block.


Ignored

The data will be ignored.

Detail Attribute

The data will be stored into a detail attribute.

Point Attribute

The data will be stored into a point attribute spread across all the points that were generated.


Volume

The data will be stored into a volume.

Tuple Size

The number of entries per data element. For point attributes, the “P” attribute must be at most size 3. Likewise, float volumes only support tuple sizes of 1 to 4.


Data Type

The type of data stored in this block.


Float

Floating point values.


Integer

Integer values.


Bit Size

How many bits to store for each value.


8 Bit

For integers, this is an unsigned value from 0..255. For float, the unsigned value 0..255 will be fit to the floating point range 0..1.


16 Bit

For integers, this is a signed value from -32768..32767. For float, this is either binary16 or bfloat16 (see Use BFloat16).


32 Bit

For integers, this is a signed 32-bit value. For float, it is a binary32 floating point representation.


64 Bit

For integers, this is a signed 64-bit value. For float, it is a binary64 floating point representation.
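For instance, the 8-bit “fit” to 0..1 described above amounts to dividing each unsigned byte by 255, as in this minimal Python sketch:

```python
# Unsigned bytes 0..255 map linearly onto the float range 0..1.
raw = bytes([0, 128, 255])
floats = [b / 255.0 for b in raw]
```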

Use BFloat16

The usual 16-bit float representation, often called half, is binary16. This is used by OpenEXR and OpenVDB and internally in Houdini as a way to store floats in less space by reducing both the range and the precision.

BFloat16 is a truncated version of binary32: the range remains unchanged, but the precision has been reduced to a mere 8 bits. This is commonly used in machine learning.
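A sketch of that truncation, assuming simple round-toward-zero rather than the round-to-nearest used by most real conversions:

```python
import struct

# bfloat16 keeps the top 16 bits of a binary32: the sign, the full
# 8-bit exponent (hence the unchanged range), and 7 explicit mantissa
# bits (hence the reduced precision).
def to_bfloat16_bits(x):
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    return (bits >> 16) & 0xFFFF

def from_bfloat16_bits(b):
    (x,) = struct.unpack("<f", struct.pack("<I", b << 16))
    return x

approx = from_bfloat16_bits(to_bfloat16_bits(3.14159265))
print(approx)  # 3.140625 -- close to pi, but visibly less precise
```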

Collate with Previous

When a point block is loaded from disk, it can either be interleaved with other point blocks or form its own contiguous block. If it is not marked as collated, it loads as its own contiguous block (possibly including any successive blocks that are marked as collated). If it is marked as collated, all the blocks collated together are read in turn for each point before moving to the next point.
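The two layouts can be sketched in Python for two hypothetical point attributes, id (int32) and mass (float32), over three points:

```python
import struct

ids = [1, 2, 3]
masses = [0.5, 1.0, 1.5]

# Not collated: each block is contiguous (all ids, then all masses).
contiguous = struct.pack("<3i", *ids) + struct.pack("<3f", *masses)

# Collated with previous: the blocks interleave per point
# (id0, mass0, id1, mass1, ...).
interleaved = b"".join(
    struct.pack("<if", i, m) for i, m in zip(ids, masses)
)

print(len(contiguous), len(interleaved))  # both 24 bytes, different order
```

Both buffers hold the same 24 bytes of data; only the order differs, which is why the collation flags must match how the file was written.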

Volume Resolution

The resolution of the volume to load. If an axis is -1, it is treated as dynamic. The dynamic axes are determined by taking the remainder of the total file size (if available) and dividing it equally among them. Thus, if more than one axis is dynamic, the remaining data must form a perfect square or cube.
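A sketch of resolving the dynamic axes, assuming hypothetical 4-byte voxels and no other blocks in the file:

```python
# Two dynamic axes (-1) and one fixed axis; the dynamic axes must
# split the leftover voxels as a perfect square.
file_size = 64 * 64 * 32 * 4       # hypothetical: 64 x 64 x 32 float32s
resolution = [-1, -1, 32]          # X and Y dynamic, Z fixed

voxels = file_size // 4            # 4 bytes per voxel
for r in resolution:
    if r != -1:
        voxels //= r

dynamic = resolution.count(-1)
side = round(voxels ** (1.0 / dynamic))
assert side ** dynamic == voxels, "dynamic axes must form a square/cube"
resolved = [side if r == -1 else r for r in resolution]
print(resolved)  # [64, 64, 32]
```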

Volume Order

Similar to endianness for binary numbers, there is disagreement about whether volume data should be stored X-axis first or Z-axis first. If you are reading volume data, you need to set this based on the convention used by the program that created the data.


The Z-axis is the outermost loop. Consecutive elements in the file will be consecutive X-values. This matches Houdini’s internal volume layout.


The X-axis is the outermost loop. Consecutive elements in the file will be consecutive Z-values. This matches OpenVDB’s internal volume layout.
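The two orderings correspond to different linear-index formulas; here is a Python sketch for a hypothetical volume resolution:

```python
# Linear index of voxel (x, y, z) in an nx-by-ny-by-nz volume under
# the two orderings described above.
def index_z_outer(x, y, z, nx, ny, nz):
    # Z outermost, X fastest (Houdini's layout).
    return (z * ny + y) * nx + x

def index_x_outer(x, y, z, nx, ny, nz):
    # X outermost, Z fastest (OpenVDB's layout).
    return (x * ny + y) * nz + z

nx, ny, nz = 4, 3, 2
print(index_z_outer(1, 2, 1, nx, ny, nz))  # 21
print(index_x_outer(1, 2, 1, nx, ny, nz))  # 11
```

The same byte offset lands on a different voxel under each convention, so choosing the wrong order transposes the volume rather than producing an obvious error.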

See also

Geometry nodes