HDK
GA Users Guide

GA Users Guide: Table Of Contents

Introduction to Using GA

One of the design constraints for the GA library was to minimize the effort required to port code from the previous GB incarnation. While this older interface can still be used, it does not always provide the most efficient means of working with geometry data.

This document gives guidance on best practices when rewriting code to use the new architecture.

Eliminating GEO_Point

The GA library doesn't store point (GEO_Point) or vertex (GEO_Vertex) objects. However, for easy porting, GA_Detail in older versions of Houdini (prior to 14.0) could create persistent GEO_Point objects on the fly.

GEO_Point *ppt = gdp->getGBPoint();

This was done any time you asked for a GEO_Point.

However, all operations can be done without creating GEO_Point objects (since GEO_Point objects really just store a back pointer to the point GA_IndexMap and a GA_Offset).

Rather than using GEO_Point objects, try to use the GA_Offset of the point.

This will:

  • Keep memory usage lower (no persistent objects)
  • Use the GA_Offset interface directly (rather than using indirection through the GEO_Point object).

The following table lists some of the common mappings from the old to new methods which work with GA_Offsets.

Old method | New method(s) | Remarks
GEO_Detail::appendPoint() | GEO_Detail::appendPointOffset() or GEO_Detail::appendPointCopy() |
GEO_Detail::deletePoint() | GA_Detail::destroyPointOffset() | Methods in GA_Detail consistently use the term "destroy" instead of "delete"
GEO_Vertex::getPt() | GA_GBVertex::getPointOffset() or GA_GBVertex::getPointIndex() |
UT_Vector4 pos4 = ppt->getPos() | UT_Vector4 pos4 = gdp->getPos4(point_offset) | For efficiency, use getPos3() where the w component should not be affected.
ppt->setPos(pos4) | gdp->setPos4(point_offset, pos4) | For efficiency, use setPos3() where the w component should not be affected.

GA_FOR_ALL_GPOINTS() macro

One common place that GEO_Point objects are created is in the GA_FOR_ALL_GPOINTS() macro. There is a special version of the macro which doesn't construct the GEO_Point objects called GA_FOR_ALL_GPOINTS_NC(). So, it should be relatively easy to sweep code to change macros.

However, there is a caveat. The NC macro creates a temporary object which gets re-used, so the underlying code should NOT hold onto references to the GEO_Point pointer. For example, if you are building an array of GEO_Point pointers and plan to sort them, you cannot use the NC macro.

For improved safety, the NC versions of the macros limit the scope of the loop variable to the loop itself. The loop variable is also an "iterator"-like object which provides operator->() and get() to retrieve the underlying element pointer.

// Old
GA_FOR_ALL_GPOINTS(gdp, ppt)
{
    UT_Vector3 pos = ppt->getPos();
    functionWhichDoesNotCachePointers(ppt);
}

// New
GA_FOR_ALL_GPOINTS_NC(gdp, GEO_Point, ppt)
{
    UT_Vector3 pos = ppt->getPos3(); // use getPos3() for efficiency
    functionWhichDoesNotCachePointers(ppt->get());
}

Handle Access to Attributes

Attributes in the GA library can provide their own AIF interfaces. For example, the numeric type provides an AIFTuple interface, as does the index pair attribute. To a user, both of these attributes can appear as a tuple of numeric data.

However, processing data using the AIF interface invokes at least one virtual thunk for each call (since the AIF classes are all virtual), and in practice often two virtual calls are required. Thus, GA provides an "efficient" attribute accessor for numeric (and string) data which is tightly coupled to the numeric interface.

In fact, there are multiple classes for accessing attribute data. Each has its advantages and disadvantages, so it's important to choose the appropriate interface.

GA_AttributeRefMap

The GA_AttributeRefMap class allows you to process all attributes on vertices, points, primitives (and detail) objects within one detail or even across different details.

The GA_AttributeRefMap will create a mapping between destination and source attributes (which may exist on different details).

When operating in an "object-centric" mode (i.e. blending two points as objects), you often want to process all the attributes. The GA_AttributeRefMap will use the AIFs provided by the attributes to perform operations between all attributes in the map.

When constructing the GA_AttributeRefMap, you must first bind the details, then add attributes, and then perform operations on elements.

There are several ways of adding attributes to a bound GA_AttributeRefMap:

// Append a single attribute
hmap.append(dest_attrib, src_attrib);
// Append attributes based on a GA_AttributeFilter
// All floating point point attributes
hmap.append(GA_AttributeFilter::selectFloatTuple(), GA_ATTRIB_POINT);
// All numeric (integer/floating point) primitive attributes
hmap.append(GA_AttributeFilter::selectNumeric(), GA_ATTRIB_PRIMITIVE);
// All "color" point attributes
hmap.append(GA_AttributeFilter::selectTypeInfo(GA_TYPE_COLOR), GA_ATTRIB_POINT);

For example, let's say you want to split an edge, creating a new vertex, but you want to interpolate the two vertices on either side of the edge:

// Get the first and last vertex of a face
GA_Size vcount = face->getVertexCount();
GA_Offset vtx0 = face->getVertexOffset(vcount-1);
GA_Offset vtx1 = face->getVertexOffset(0);
// Add a new vertex (and a new point)
face->appendVertex(gdp->appendPointOffset());
// Get the vertex offset for the new vertex
GA_Offset new_vtx = face->getVertexOffset(vcount);
// Now perform attribute interpolation
GA_AttributeRefMap hmap(*gdp);
// Append all point & vertex attributes (including "P")
hmap.append(GA_AttributeFilter::selectNumeric(), GA_ATTRIB_POINT);
hmap.append(GA_AttributeFilter::selectNumeric(), GA_ATTRIB_VERTEX);
// Perform linear interpolation, writing to new_vtx
hmap.lerpValue(GA_ATTRIB_VERTEX, new_vtx, GA_ATTRIB_VERTEX, vtx0, vtx1, blend_factor);

You can also avoid the necessity of specifying the destination attribute explicitly for every operation by using a GA_AttributeRefMapDestHandle:

// Assuming `h` is a GA_AttributeRefMapDestHandle bound to the map
// Set the element to write into
h.setVertex(new_vtx);
// Perform linear interpolation
h.lerpVertex(vtx0, vtx1, blend_factor);

GA_Handle

The GA_Handle template classes provide an efficient way to get/set numeric or string data.

You can create handles on the stack by binding them to an attribute. The type of access desired is part of the handle type: HandleI for integer, HandleF for float, HandleV3 for the very common vector triple, and HandleS for strings.

If the handle cannot be bound it won't be valid, and attempts to access it will crash. You can use isValid() to determine if it bound successfully. To bind, the attribute must exist, it must be of a type convertible to the desired handle, and the attribute must be large enough. For example, a HandleV3 will bind to a 3-float attribute, but not to a 2-float attribute. Note that HandleV3 will also bind to a 4-float attribute, ignoring the fourth float.

Because handles are typically tied to a specific named attribute, they are conventionally named attribname_h.

// Read only, from named attribute
GA_ROHandleV3 vel_h(const_gdp, GEO_POINT_DICT, "v");
// Read write, from a named attribute, but must be at least 2 floats
GA_RWHandleF life_h(gdp, GEO_POINT_DICT, "life", 2);
// From an existing const GA_Attribute *
GA_ROHandleI id_h(const_attribute);
// From a GA_AttributeRef, (common when interfacing with legacy code)
GA_RWAttributeRef scale_gah(gdp->findFloatTuple(GEO_POINT_DICT, "pscale"));
GA_RWHandleF scale_h(scale_gah.getAttribute());

These classes are tightly coupled to the numeric and string attribute types. They will not work with index pair or other attribute types. However, they minimize virtual calls and can access data very efficiently. When you are certain you will be working with the standard Houdini numeric/string attributes, you can use GA_Handle for the most efficient code.

For example, to add a timestep of the "v" attribute to the "P" attribute, the code might look like:

GA_ROHandleV3 v_h(gdp, GEO_POINT_DICT, "v");
GA_RWHandleV3 p_h(gdp->getP());
if (v_h.isValid() && p_h.isValid())
{
    for (GA_Iterator it(gdp->getPointRange()); !it.atEnd(); ++it)
        p_h.add(it.getOffset(), v_h.get(it.getOffset()) * timestep);
}

GA_PageHandle

GA_Attribute data is constructed using "paged" arrays. That is, each array is typically an array of pages of data. By working on pages of data, you can take advantage of data locality even more than GA_Handle. Instead of working on individual elements, you can work on blocks of the array. You can also avoid the virtual function call for each access as it will be amortized to once per new page.

This will be most efficient if your iterator proceeds in sequential order over the offsets. GA_Iterator::blockAdvance(), intended for this purpose, provides contiguous blocks of offsets constrained to occupy the same page. If you need random access, GA_Handle is usually a superior interface.

Wherever possible, the page handle acquires a direct pointer to the underlying data. However, this should not be relied upon: if the storage type differs from the handle type, the data is marshalled into a temporary buffer which is flushed when the handle is destroyed or changes pages. Take extreme care when more than one page handle points to the same attribute; code that works with native types may fail if the attribute is, say, fpreal16!
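As a rough mental model of why page-based access amortizes the per-element virtual call, consider the following standalone sketch. This is not HDK code; PagedAttrib, pageData() and sumPaged() are invented names for illustration only.

```cpp
#include <cstddef>
#include <vector>

// Toy model of a paged attribute: data lives in fixed-size pages, and a
// virtual call is needed to fetch each page's data pointer (once per
// page), after which elements on that page are read directly.
struct PagedAttrib
{
    static constexpr std::size_t thePageSize = 1024;

    std::vector<std::vector<float> > myPages;
    std::size_t myVirtualCalls = 0;

    virtual ~PagedAttrib() {}

    // Stands in for the virtual data access in the real library
    virtual const float *pageData(std::size_t page)
    {
        ++myVirtualCalls;
        return myPages[page].data();
    }
};

// Sum all n values: one virtual call per page, then a tight loop over a
// raw pointer for the elements within the page.
static float
sumPaged(PagedAttrib &a, std::size_t n)
{
    float total = 0;
    for (std::size_t start = 0; start < n; start += PagedAttrib::thePageSize)
    {
        const float *page = a.pageData(start / PagedAttrib::thePageSize);
        std::size_t end = start + PagedAttrib::thePageSize;
        if (end > n)
            end = n;
        for (std::size_t i = start; i < end; ++i)
            total += page[i - start];
    }
    return total;
}
```

This loosely mirrors what setPage() followed by value() accomplish with a GA_PageHandle: the per-element cost is a plain array access, and the virtual dispatch happens only once per page.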

Because page handles are typically tied to a specific named attribute, they are conventionally named attribname_ph.

For example, to add a timestep of the "v" attribute to the "P" attribute, the code might look like:

GA_ROPageHandleV3 v_ph(gdp, GEO_POINT_DICT, "v");
GA_RWPageHandleV3 p_ph(gdp->getP());
if (v_ph.isValid() && p_ph.isValid())
{
    GA_Offset start, end;
    for (GA_Iterator it(gdp->getPointRange()); it.blockAdvance(start, end); )
    {
        v_ph.setPage(start);
        p_ph.setPage(start);
#if 0
        // Use the vector math library
        VM_Math::madd((fpreal32 *)&p_ph.value(start),
                      (const fpreal32 *)&v_ph.value(start),
                      timestep, (end - start)*3);
#else
        for (GA_Offset pt = start; pt < end; ++pt)
            p_ph.value(pt) += v_ph.get(pt) * timestep;
#endif
    }
}

GA_OffsetArray vs. GA_OffsetList vs. GA_IndexArray vs. GA_Range

Writing Parallel Algorithms in GA

This section discusses best practices when writing parallel algorithms for GA.

The GA library works on pages of data rather than large contiguous arrays. When data is written to the array, pages may be allocated or cleared. For thread safety, the GA library allows only one thread to write to a given page. When writing threaded algorithms, each thread must iterate over pages using a GA_PageIterator. The page iterator can then provide a GA_Iterator which iterates over the elements in the page (either blocked or individually).

There are typically two ways to write threaded code for GA:

Using UTparallelFor (TBB paradigm)

Using UT_ThreadedAlgorithm (UT_JobInfo paradigm)

When writing threaded code, both of these methods require having a GA_SplittableRange object. The GA_SplittableRange should be constructed before splitting into threads. There is some cost (both memory and performance) in constructing a splittable range.

UTparallelFor(), UTparallelReduce()

A typical first attempt using the TBB paradigm might look something like:

class op_Normalize {
public:
    op_Normalize(GA_RWHandleV3 &v_h)
        : myV_h(v_h)
    {}
    void operator()(const GA_Range &r) const
    {
        for (GA_Iterator it(r.begin()); !it.atEnd(); ++it)
        {
            UT_Vector3 N = myV_h.get(*it);
            N.normalize();
            myV_h.set(*it, N);
        }
    }
private:
    GA_RWHandleV3 myV_h;
};

void
normalize(const GA_Range &range, GA_RWHandleV3 &v_h)
{
    UTparallelFor(range, op_Normalize(v_h));
}

However, not all GA_Range objects are splittable, so this code may end up single threaded, even though the intent was to have threaded code. Also, the above code is not thread safe since a GA_Range iterator is not guaranteed to work on single pages.

The correct way to ensure threading is to use GA_SplittableRange:

class op_Normalize {
public:
    op_Normalize(const GA_RWAttributeRef &v)
        : myV(v)
    {}
    // Take a GA_SplittableRange (not a GA_Range)
    void operator()(const GA_SplittableRange &r) const
    {
        GA_RWPageHandleV3 v_ph(myV.getAttribute());
        GA_Offset start, end;
        // Iterate over pages in the range
        for (GA_PageIterator pit = r.beginPages(); !pit.atEnd(); ++pit)
        {
            // Iterate over the elements in the page
            for (GA_Iterator it(pit.begin()); it.blockAdvance(start, end); )
            {
                // Perform any per-page setup required, then
                v_ph.setPage(start);
                for (GA_Offset i = start; i < end; ++i)
                {
                    UT_Vector3 N = v_ph.get(i);
                    N.normalize();
                    v_ph.set(i, N);
                }
            }
        }
    }
private:
    GA_RWAttributeRef myV;
};

void
normalize(const GA_Range &range, const GA_RWAttributeRef &v)
{
    // Create a GA_SplittableRange from the original range
    UTparallelFor(GA_SplittableRange(range), op_Normalize(v));
}

For foreach-style algorithms, the simplest approach is to use GAparallelForEachPage() which provides a higher-level API that is convenient to use with lambdas. It invokes the body approximately once per worker thread, and provides a GA_PageIterator with load-balanced iteration over the pages in the range (which is advantageous if some pages are more expensive to process than others).

void
normalize(const GA_Range &range, const GA_RWAttributeRef &v)
{
    GAparallelForEachPage(range, /* shouldthread */ true, [&](GA_PageIterator pit)
    {
        // Create any thread-local data structures etc here.
        GA_RWPageHandleV3 v_ph(v.getAttribute());
        GAforEachPageBlock(pit, [&](GA_Offset start, GA_Offset end)
        {
            // Perform any per-page setup required.
            v_ph.setPage(start);
            for (GA_Offset off = start; off < end; off++)
            {
                UT_Vector3 N = v_ph.get(off);
                N.normalize();
                v_ph.set(off, N);
            }
        });
    });
}

UT_ThreadedAlgorithm

Similar to the TBB paradigm, UT_ThreadedAlgorithm relies on having a GA_SplittableRange to operate correctly. The easiest way to ensure proper splittable ranges is to make the "partial" method take a GA_SplittableRange const reference (instead of a GA_Range). Then, in the partial method, use the page iterator to iterate over pages using the UT_JobInfo for load balancing. For example:

class MyClass {
    THREADED_METHOD2(MyClass, range.canMultiThread(), opNormalize,
                     const GA_SplittableRange &, range,
                     void *, method_data)
    // A GA_Range passed in will automatically be converted to
    // GA_SplittableRange
    void opNormalizePartial(const GA_SplittableRange &range,
                            void *method_data,
                            const UT_JobInfo &info);
};

void
MyClass::opNormalizePartial(const GA_SplittableRange &range,
                            void *method_data,
                            const UT_JobInfo &jobinfo)
{
    for (GA_PageIterator pit = range.beginPages(jobinfo); !pit.atEnd(); ++pit)
    {
        // Perform any per-page setup required
        for (GA_Iterator it(pit.begin()); !it.atEnd(); ++it)
        {
            ...
        }
    }
}

Threading writes across page boundaries

Some algorithms are not amenable to working within page boundaries. If your algorithm can guarantee that threads will write to unique offsets, you can use the GA_AutoHardenForThreading class to prepare the attribute for generic threaded write access. For paged attributes, instantiating this class will ensure all pages are "hardened" and can be written by multiple threads. Note that groups are stored as bitfields; since bitwise updates are not atomic, writing to group offsets from multiple threads is not threadsafe even when the offsets are unique.

Using GA_Range objects may still be problematic with this approach, since ranges adhere to division along page boundaries. You will likely have to roll your own iteration. For example:

class Functor
{
public:
    Functor(GA_Attribute &write, const GA_Attribute &read)
        : myWrite(write)
        , myRead(read)
    {
    }
    void operator()(const UT_BlockedRange<GA_Offset> &range) const
    {
        const GA_IndexMap &index = myWrite.getIndexMap();
        for (GA_Offset i = range.begin(); i != range.end(); ++i)
        {
            // Skip over holes in the index map
            if (index.isOffsetActive(i))
                process(i); // process() is left to the implementation
        }
    }
private:
    GA_Attribute &myWrite;
    const GA_Attribute &myRead;
};

static void
threadMe(GA_Attribute &write, const GA_Attribute &read)
{
    GA_Offset end = write.getIndexMap().getOffsetSize();
    // Creating a GA_AutoHardenForThreading will ensure that the
    // write attribute will allow write-access by multiple
    // threads, regardless of page boundaries. Since the read
    // attribute is read-only, there's no requirement to harden
    // for threading.
    GA_AutoHardenForThreading thread_write(write);
    UTparallelFor(UT_BlockedRange<GA_Offset>(0, end), Functor(write, read));
}

Note that there is a cost associated with hardening attributes for threading. Significant work may be required to prepare an attribute for this kind of threaded access, and further work is done in the destructor to re-optimize the attribute for paged access. Where possible, prefer page-based threading.

Adding Custom Primitives

Custom Primitives: GEO/GU

Until Houdini 14, it was necessary for a custom primitive class to derive from both GEO_Primitive and GU_Primitive. Standard practice was to have a GEO_CustomPrim class derive from GEO_Primitive, and then a GU_CustomPrim class deriving from both GEO_CustomPrim and GU_Primitive, with only this GU_CustomPrim class ever being instantiated. This is no longer the case, and now you just need one class deriving from GEO_Primitive.

There are several methods to be implemented for the GEO interface:

Other virtuals that need to be implemented, formerly inherited from GU_Primitive but now inherited from GEO_Primitive, are:

  • int64 getMemoryUsage() const
    Return an approximate memory usage for the primitive. You should at least return sizeof(*this).
  • const GA_PrimitiveDefinition &getTypeDef() const
    Return the definition of the primitive.
  • GEO_Primitive *convert(GU_ConvertParms &parms, GA_PointGroup *usedpts)
    Convert this primitive into one or more other primitives. If the conversion parameters have a deletion group, the primitive should add itself to that group; otherwise, it should delete itself from the detail. For example:
    if (GA_PrimitiveGroup *group = parms.getDeletePrimitives())
        group->add(this);
    else
        getParent()->deletePrimitive(*this, usedpts != NULL);
  • GEO_Primitive *convertNew(GU_ConvertParms &parms)
    Convert to a new primitive (or multiple primitives), keeping this primitive intact.
  • void *castTo(void) const
    Return the GU_Primitive pointer.
  • const GEO_Primitive *castToGeo(void) const
    Return the GEO_Primitive pointer.
  • void normal(NormalComp &output) const
    Add the primitive's normal to all points referenced by the primitive. For example, for a polygon, you might have code like:
    UT_Vector3 nml = computeNormal();
    for (GA_Size i = 0; i < getVertexCount(); ++i)
        output.add(getPointOffset(i), nml);
  • int intersectRay(const UT_Vector3 &o, const UT_Vector3 &d, float tmax, float tol, float *distance, UT_Vector3 *pos, UT_Vector3 *nml, int accurate, float *u, float *v, int ignoretrim) const
    Intersect a ray with the primitive.
  • GU_RayIntersect *createRayCache(int &persistent)
    Create a cache structure which can be used to accelerate ray-tracing.

Custom Primitives: Registration

Primitives are registered using the DSO/DLL hook newGeometryPrim(GA_PrimitiveFactory*).

This method should add the definition to the given GA_PrimitiveFactory.

Please see the HDK_Sample::GU_PrimTetra sample for an example of this code.

Custom Primitives: SOP creation

Once you have a custom primitive defined, you will likely want to give users a mechanism to create and use these primitives, most likely a SOP that creates and manipulates your new primitive type.

Custom Primitives: GT (tessellation)

When tessellating a GU_Detail using the GT library (as is done for GL viewport rendering), it's possible to provide Houdini with a tessellation of your primitive. This doesn't rely on the convert mechanism and can be much more lightweight than creating Houdini geometry.

The GT_GEOPrimCollector class lets you either generate a single GT primitive for each GEO primitive, or collect multiple GEO primitives to generate a single GT primitive. For example, a GEO_PrimNURBSurf might generate a single GT_PrimPatch, while multiple GEO_PrimPoly primitives might be collected to generate a single GT_PrimPolygonMesh.

When your subclass is constructed, it must pass a GA_PrimitiveTypeId. This tells the GT tessellator to pass any primitives of that type id to your collector.

There are three methods used in collecting:

  • GT_GEOPrimCollectData *beginCollecting(const GT_GEODetailListHandle &geometry, const GT_RefineParms *parms) const

    This is called when the geometry needs to generate GT primitives for the registered primitive type. You can return either NULL or an instance of a GT_GEOPrimCollectData subclass, which is then passed to the collect() and endCollecting() methods.
  • GT_PrimitiveHandle collect(const GT_GEODetailListHandle &geometry, const GEO_Primitive *const *prim_list, int nsegments, GT_GEOPrimCollectData *data) const

    This method is called for each primitive of the registered type in the detail. GT is capable of handling multiple "segments" of geometry; if there are multiple segments, a list of primitives will be passed to the collect() method. Usually, the number of segments will be one.

    The method can either return a GT_PrimitiveHandle referring to a GT primitive, or it can return an empty GT_PrimitiveHandle. For example, a GEO_PrimNURBSurf might return a single GT_PrimitiveHandle to a GT_PrimPatch, while GEO_PrimPoly's collector might store the list of all GEO_PrimPoly's to build a primitive in the endCollecting method.

    The data passed in is the data returned by beginCollecting.
  • GT_PrimitiveHandle endCollecting(const GT_GEODetailListHandle &geometry, GT_GEOPrimCollectData *data) const

    This method is called after the last primitive in the detail has been passed to the collect method. The method can either return a primitive or an empty GT_PrimitiveHandle.

    Note: The endCollecting() method should not delete the collecting data.

The .geo/.bgeo file format

Houdini 12 introduced a new .geo/.bgeo file format. The ASCII format is stored in vanilla JSON, so reading a .geo file in Python should be as simple as:

import simplejson
geo = simplejson.load(open(filename, 'r'))

However, for efficiency, Houdini also comes with a "binary" JSON implementation. There are several ways to use and understand the binary JSON implementation:

  • $HH/public/binary_json/binary_json.py
    A Python implementation of the binary JSON format. It's unlikely that the code is efficient enough to use in production; the intent is for it to serve as a reference implementation.
  • UT_JSONParser.h and other UT_JSON classes
    Provide a C++ interface to read/write JSON in both ASCII and binary form. Using these classes will not incur a license cost.
  • $HH/python*libs/hjson.so
    A Python module which uses the UT_JSON classes to implement a JSON reader/writer. The hjson module provides performance on par with the most efficient JSON modules available, but also supports the binary JSON extensions required for .bgeo. An example of its use might be:
    try:
        # Try for ASCII/binary support
        import hjson
        json = hjson
    except ImportError:
        # Fall back to only ASCII support
        import simplejson
        json = simplejson
    geo = json.load(open(filename, 'r'))
    print json.dumps(geo, binary=False, indent=4)
    Please see the module help for more details on named arguments.

JSON Schema

Loading/parsing a geo/bgeo file is relatively easy. However, interpreting the schema may be a little more difficult. A Python implementation that interprets the schema is provided in $HH/public/hgeo. It is intended as a reference implementation and may not be efficient enough to be used in production.

If you run the module as a program, the application will load any .geo/.bgeo files specified on the command line (or $HH/geo/defbgeo.bgeo if none are specified) and print out information about each file. For example:

% hython hgeo.py
+++ 0.001 ( 0.001): Loading /hfs/houdini/geo/defgeo.bgeo
+++ 0.002 ( 0.000): Done Loading /hfs/houdini/geo/defgeo.bgeo
+++ 0.002 ( 0.000): Loaded Topology
+++ 0.002 ( 0.000): Loaded Attributes
+++ 0.002 ( 0.000): Loaded Primitives
+++ 0.002 ( 0.000): Loaded Groups
========== /hfs/houdini/geo/defgeo.bgeo ==========
80 Points
96 Vertices
12 Primitives
----- Attributes -----
Point Attributes
numeric P[4]
12 Primitives
12 Poly
Primitive 0 is a Poly and has 4 vertices.
Vertex[0]->Point[3] P= [0.5, -0.5, -0.5, 1.0]
Vertex[1]->Point[0] P= [-0.5, -0.5, -0.5, 1.0]
Vertex[2]->Point[1] P= [-0.5, -0.5, 0.5, 1.0]
Vertex[3]->Point[2] P= [0.5, -0.5, 0.5, 1.0]

Note that by running hython instead of python, the hjson module becomes available.

Numeric Data

The schema supports various fields for storing numeric tuple data, regardless of whether you are dealing with an ASCII or binary file.

For the purposes of this description, we'll focus on a simple example with an attribute tuple of size 4 and 5 elements with values [X0,Y0,Z0,W0]..[X4,Y4,Z4,W4], respectively, and a tiny page size of 2.

The simplest representation, and that used by default for ASCII files, is as an array of structs.

"values",[
"size",4,
"storage","fpreal32",
"tuples",[[X0,Y0,Z0,W0],[X1,Y1,Z1,W1],[X2,Y2,Z2,W2],[X3,Y3,Z3,W3],[X4,Y4,Z4,W4]
]
]

This representation is both highly readable and easy to parse, but not particularly efficient. An alternative packed-page representation is preferred for binary files.

"values",[
"size",4,
"storage","fpreal32",
"packing",[3,1],
"pagesize",2,
"rawpagedata",[
X0,Y0,Z0, X1,Y1,Z1, W0, W1, # page 0 (subvector0, subvector1)
X2,Y2,Z2, X3,Y3,Z3, W2, W3, # page 1 (subvector0, subvector1)
X4,Y4,Z4, W4 # page 2 (subvector0, subvector1)
]
]

The pages are sequential in the flat "rawpagedata" array, with each page packed as per the "packing" field.

For example, a packing of [4] yields

"values",[
"size",4,
"storage","fpreal32",
"packing",[4],
"pagesize",2,
"rawpagedata",[
X0,Y0,Z0,W0, X1,Y1,Z1,W1, # page 0 (subvector0)
X2,Y2,Z2,W2, X3,Y3,Z3,W3, # page 1 (subvector0)
X4,Y4,Z4,W4 # page 2 (subvector0)
]
]

while a packing of [1,1,1,1] yields

"values",[
"size",4,
"storage","fpreal32",
"packing",[1,1,1,1],
"pagesize",2,
"rawpagedata",[
X0, X1, Y0, Y1, Z0, Z1, W0, W1 # page 0 (subvector0 .. subvector3)
X2, X3, Y2, Y3, Z2, Z3, W2, W3 # page 1 (subvector0 .. subvector3)
X4, Y4, Z4, W4 # page 2 (subvector0 .. subvector3)
]
]

The "packing" field may be omitted, and, if missing, is equivalent to [<size>].
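To make the packed layouts above concrete, here is a standalone C++ sketch (not HDK code; packPages() is an invented name for illustration) that packs an array of fixed-size tuples into "rawpagedata" order for a given "packing" and "pagesize":

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Pack tuples (array-of-structs order) into "rawpagedata" order.
// tuples: ntuples x size values, flattened.
// packing: subvector widths summing to size (e.g. {3,1}).
// Within each page, all of one subvector's components for every
// element on the page are written before the next subvector.
static std::vector<float>
packPages(const std::vector<float> &tuples, std::size_t size,
          const std::vector<std::size_t> &packing, std::size_t pagesize)
{
    std::size_t ntuples = tuples.size() / size;
    std::vector<float> raw;
    raw.reserve(tuples.size());
    for (std::size_t page = 0; page * pagesize < ntuples; ++page)
    {
        std::size_t start = page * pagesize;
        std::size_t end = std::min(start + pagesize, ntuples);
        std::size_t comp = 0; // first component of the current subvector
        for (std::size_t sub : packing)
        {
            for (std::size_t i = start; i < end; ++i)
                for (std::size_t c = 0; c < sub; ++c)
                    raw.push_back(tuples[i * size + comp + c]);
            comp += sub;
        }
    }
    return raw;
}
```

With size 4, packing [3,1] and pagesize 2, this reproduces the page 0/1/2 groupings shown above; with packing [4] it degenerates to whole tuples per page.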

An optional "constantpageflags" field may also be used to mark constant pages for each subvector.

Suppose, in our simple example, we have X0 == X1, Y0 == Y1 and Z0 == Z1. Then we could have

"values",[
"size",4,
"storage","fpreal32",
"packing",[3,1],
"pagesize",2,
"constantpageflags",[[1,0,0],[0,0,0]],
"rawpagedata",[
X0,Y0,Z0, W0, W1, # page 0 (subvector0, subvector1)
X2,Y2,Z2, X3,Y3,Z3, W2, W3, # page 1 (subvector0, subvector1)
X4,Y4,Z4, W4 # page 2 (subvector0, subvector1)
]
]

A subvector entry in the "constantpageflags" array with no constant pages may also be represented as an empty array. The following is equivalent to the previous example

"values",[
"size",4,
"storage","fpreal32",
"packing",[3,1],
"pagesize",2,
"constantpageflags",[[1,0,0],[]],
"rawpagedata",[
X0,Y0,Z0, W0, W1, # page 0 (subvector0, subvector1)
X2,Y2,Z2, X3,Y3,Z3, W2, W3, # page 1 (subvector0, subvector1)
X4,Y4,Z4, W4 # page 2 (subvector0, subvector1)
]
]
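A matching standalone sketch (again, not HDK code; unpackPages() is an invented name) that expands "rawpagedata" back into full tuples while honoring "constantpageflags" might look like:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Expand "rawpagedata" back to flat array-of-structs tuples.
// constflags is indexed [subvector][page]; when a page is flagged
// constant for a subvector, the raw data stores that subvector's
// components once, and they are replicated across the page.
static std::vector<float>
unpackPages(const std::vector<float> &raw, std::size_t size,
            const std::vector<std::size_t> &packing, std::size_t pagesize,
            const std::vector<std::vector<int> > &constflags,
            std::size_t ntuples)
{
    std::vector<float> tuples(ntuples * size, 0.0f);
    std::size_t pos = 0; // read cursor into raw
    std::size_t npages = (ntuples + pagesize - 1) / pagesize;
    for (std::size_t page = 0; page < npages; ++page)
    {
        std::size_t start = page * pagesize;
        std::size_t end = std::min(start + pagesize, ntuples);
        std::size_t comp = 0;
        for (std::size_t s = 0; s < packing.size(); ++s)
        {
            std::size_t sub = packing[s];
            // A missing or empty flag array means no constant pages
            bool constant = s < constflags.size() &&
                            page < constflags[s].size() &&
                            constflags[s][page];
            if (constant)
            {
                // One subvector stored; replicate over the page
                for (std::size_t i = start; i < end; ++i)
                    for (std::size_t c = 0; c < sub; ++c)
                        tuples[i * size + comp + c] = raw[pos + c];
                pos += sub;
            }
            else
            {
                for (std::size_t i = start; i < end; ++i)
                    for (std::size_t c = 0; c < sub; ++c)
                        tuples[i * size + comp + c] = raw[pos++];
            }
            comp += sub;
        }
    }
    return tuples;
}
```

Note how an empty flag array for a subvector behaves identically to an all-zero flag array, matching the equivalence described above.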