foam Is it a representation of 0 to the last vertex, instead of a prim:vert index (if that's what it is), in the details view?
Correct… for pretty much any use case inside Houdini.
To clarify, you probably don't need to worry about this distinction, but the order of the vertex indices may not correspond with the order you'd get by walking the primitives in order and taking each primitive's vertices in order. For example, if you do a Reverse SOP, the order of the vertices within each primitive changes, but the vertex indices usually don't get changed. It's still 0 to the last vertex; that numbering just may not be consistent with the primitive-by-primitive order.
You can check this by putting down a grid, then adding an AttribWrangle on vertices, with “i@vertex_index = @vtxnum;” as the VEXpression. Open up the spreadsheet on vertex attributes, and vertex_index will show 0 to 323. If you put down a reverse in between the two SOPs, all of the same numbers will still be there; they'll just be in a different order, since the spreadsheet for vertices sorts by primitive, then by vertex order within a primitive.
pbowmar Multi-input nodes (Merge, Switch, Copy) appear with the wrong number of inputs, and wiring can't be done until you jiggle the node to get it to redraw correctly.
It's not just multi-input nodes that suffer from this. Single-input nodes often appear with multiple inputs, and it occurs on all platforms, so far as we know. Sometimes you have to rewire too, if it's a multi-input node, because the wire will sometimes be on the wrong input after jiggling the node. I thought we had a reproducible test case for it at some point, but I don't know the bug number to check.
GEO_PRIMTETRAHEDRON refers to Houdini's built-in tetrahedron type, not the HDK_Sample one, so you can't do that typecast safely. Also, the built-in tetrahedron type starts with its 4 vertices as GA_INVALID_OFFSET, whereas the HDK_Sample one starts by allocating 4 vertices in its constructor. The README.TXT file in the tetprim directory should explain how to get the SOP going in Houdini, though there might be something wrong with it.
I don't know if any of this info is helpful, but hopefully! :wink:
I'm glad I could help. If people are also wondering what's up with polysoup primitives, there are now some tips in the docs about when to use them instead of polygon primitives:
That's rather scary. I wrote the current version of that function, and it gets called on every SOP cook, but I have no idea how it could be missing unless the entire GA library is missing, in which case, starting Houdini wouldn't work at all.
Could it be that you either have 1) a version of libGA.dll from Houdini 12.5 or earlier installed in the same path, or 2) a version of a custom HDK node or other library from Houdini 12.5 or earlier installed for 13.0? I added a parameter to GA_Detail::getMemoryUsage in Houdini 13, so anything referring to it in 12.5 would need to be rebuilt for Houdini 13.
I've only got images for the 1 trillion cubes example, though I tried out the forest example with some stand-in L-systems and it seemed to work okay. It took a while to wrap my head around what things to pack and what not to pack when mixing the separate copy stampings, but (I think) I eventually figured it out. I found the key for me was just thinking about when I would end up having packed primitives that each only contain one packed primitive, and not packing in that case.
Warning: In the trillion cubes scene, if you change it to do 1 trillion boxes where each box is a packed primitive referring to the same single box, DO NOT try to render it, since Mantra will try to make 1 trillion instances. That's why I have it as a polysoup up to 1 million boxes; then Mantra only has 1 million instances.
I put together a description of some possible examples, but it was too colloquial, incomplete, and way too last-minute to get into the docs in any acceptable form. In case you're interested, here's what I wrote:
A packed primitive provides a suite of mechanisms for encapsulating an entire detail that may or may not even exist in memory until the primitive is unpacked. To avoid having to load and display everything while using it, it provides options for displaying as just a bounding box, or just the points of the geometry. It also enables very significant memory sharing, to the point of providing SOP-level instancing and much more, while being much simpler to set up than object-level instancing. This is recommended as the way to do instancing in the future.
For example, suppose that you have 5 intricate oak tree models, stored as polygon soups in files, with which to make a forest of 10,000 trees. Simply copying the oak tree model 10,000 times would take 10,000 times as much memory, so unless you make lightweight sprites to stand in for most of the trees, copying the model is out of the question. Instancing the models onto points allows you to place them in the scene without loading the 5 trees, and when viewing, the tree models only need to be loaded once. However, it can be awkward visualizing the placement and transformations of the trees as just one point for each. Packed primitives would enable you to, in SOPs, have 10,000 primitives, each referring to one of the 5 files, and adjust the placement and transformation of each primitive, selectively viewing just one or a few of the trees as full models, or as point clouds, still only loading up to the 5 files, as needed. This can be easily set up using the pack options on the File SOP and then copy stamping.
Perhaps more interestingly, you could procedurally generate 100 models of branches and trunks for the trees, each using 3 polygon soups, and use the “Pack Geometry Before Copying” option on the Copy SOP to pack up each of these details before copy stamping to procedurally generate 50 tree models, each with 40 branches and a trunk, using packed primitives referring to the 100 details containing the branches and trunks. When a packed primitive is copied, it just refers to the same geometry data as the original, so there will only be 100 branch and trunk models and a set of packed primitives referring to these models. These 50 tree models can then also be packed up, and copy stamped to make the forest, and the result will then be:
• 1 detail, for the forest, containing 10,000 packed primitives, only referring to
• 50 unique details, one for each unique tree model, each containing 41 packed primitives, only referring to
• 100 unique details, one for each unique branch or trunk model, each containing 3 polygon soups, and the corresponding points and vertices.
The forest detail on its own takes up a bit more than 2 MB, the 50 tree details take up a total of a bit less than 1 MB, and supposing that each branch or trunk model takes up about 70 KB, the 100 branch and trunk details take up a total of about 7 MB, for a grand total of 10 MB for the entire forest. If you were to unpack just the first level of packing, you would have a forest containing 410,000 packed primitives referring to the branch and trunk models, which would take up about 89 MB, for a total of 96 MB. If you were to unpack just the second level of packing, you would have 50 tree models of 123 polygon soups each, which would take up about 144 MB, for a total of 146 MB. Unpacking both levels would result in a forest detail containing 1,230,000 polygon soups, taking up a staggering 28.7 GB.
This is why it’s important that packed primitives aren’t accidentally unpacked without a user specifically indicating to do so. This means that packed primitives don’t support most operations that one might want to perform on the underlying geometry. It would be a rude surprise if one put down a Divide SOP and everything was unpacked in the process.
If you want to perform some operations on the packed geometry, you have to unpack the part of the geometry that’s of interest, perform the operations, and then optionally repack that geometry. For example, supposing that a branch on one of the trees in the forest is encroaching on a path, you can unpack that tree, replacing its packed primitive with 41 branch and trunk packed primitives, translate, rotate, and scale the offending branch packed primitive, then repack the 41 packed primitives for the tree. There will then be 51 unique details for trees.
Mantra treats packed primitives as instanced, though it doesn’t yet support nesting instances, so it will create 410,000 instances internally for this example, which can get memory intensive, but is certainly feasible. This means that for more significant nesting of packed primitives, there can be a tradeoff between the size of the underlying geometry and the number of instances. For example, packing a single polygon and creating 1,000,000 packed primitives referring to it will take up significantly more memory than a single detail with 1,000,000 polygons, but packing 1,000 polygons and referring to it 1,000 times is better than either of those. With some careful balancing of that tradeoff, it is possible to render over 1 trillion cubes with vertex normals using less than 9 GB of memory.
Supposing that a branch needs to be broken as a solid object using a finite element simulation, you can unpack just that tree, then unpack just that branch, and set up the simulation. For RBD simulations, geometry often doesn’t even need to be unpacked. It can use packed fragment primitives to refer to just groups within a detail. It also means that using the Copy SOP with a group and packing doesn’t need to make a new detail with just the content of the group, unless the primitives are unpacked.
From the HDK, packed primitives are even more flexible, since you can provide your own packed implementation, which could procedurally generate geometry as you choose, using any data from your custom SOPs that created the packed primitives, as well as any data from the detail containing the primitives at render time. For example, you could create a packed primitive implementation that generates, at render time, snow packs and trees for mountain terrain in the detail.
Oh man, I long for the day that someone figures out how to fix Cookie, but this one's just sad, because operating on polylines (open polygons) should be the second-simplest case, after operating on disconnected points, which also don't seem to be supported. Unlike with closed polygons, there'd be no complicated reconnection needed after, or handling of holes, etc. Someday…
Yeah, then it's not the Segment Scale Compensate issue. They changed the exporter in Maya *because* Max couldn't support it unless it was baked in.
Not sure if this helps, but when I import the whole scene into Unity, all transforms are fine.
It's good to know there's another source validating the scene. If you can, it'd probably be worth submitting a bug. I hope that I can do something about it, but I can't make any promises.
If you can't submit a bug, is there any animation on the broken assets? How complicated is the transform hierarchy? Any unusual transform orders? Deformation?
This is just a wild guess, and probably not at all the issue, but do the assets that are broken use the “Segment Scale Compensate” option from Maya? Starting in Maya 2010, the exporter both marks the use of compensation *and* bakes in the compensation, so our support for it was broken, because we would add in the compensation and then compensate for the compensation, undoing it. That's fixed in H13, but there are many things that could be wrong with transformations in the FBX importing, so that's probably not it.
MegaLeon I've actually checked and in toolkit/include there's no GB folder at all. Why is that? Have they been deprecated?
GB was the base of the geometry library in Houdini 11.1 and before. It was replaced with GA in Houdini 12.0. Porting may be quite difficult if you're not familiar with the code. Also, we've deprecated some more in Houdini 13, so once you've gotten it working, you may still have a few hundred deprecation warnings when you try to build it for 13.
Also, there'll be a FindShortestPath SOP in Houdini 13, if you can wait until October 31st. :wink: If that HDK plugin is the one I think it might be, FindShortestPath should even have a similar workflow as an option, among many other options.
When you profile it with the Performance Monitor, what node(s) inside of the Point Replicate are taking up most of the time? It might (or might not) be easy to do a quick optimization if it's something simple that's taking up the time. (Edit: Actually, I have no idea if the Performance Monitor even supports profiling procedurals at all. Does it? I was talking about the Point Replicate SOP.)
stkeg It seems like the SIM node can write to the attribute, but the changes don't seem to be showing up.
When I print out the point attributes from within the SIM node, I can see the changes. The problem is, I don't see the changes in the details view. The other problem is that the SOP node only sees what's in the details view, so it doesn't detect the changes the SIM node makes to the point attributes.
As I said, the SIM node shouldn't modify the SOP's detail; only the SOP should modify its own detail.
Any ideas on how I can get the SOP node to see the changes the SIM node is making to the point attributes?
You'll need to get the SIM node to make the SOP recook and make the changes to its geometry.
I don't know exactly how to approach it, since I haven't interfaced with SIM before, but definitely, the SOP node should be the only one that modifies its own geometry. If it needs to be updated as a result of some change in SIM, you'll need the SOP to have a dependency on the change and have it recook.
When the SOP cooks, you can have it check for that specific change and only modify what needs to be modified if that's all that's changed. That sounds like it might be the behaviour you're looking for.
vilhelmo I just registered for the forum and noticed my password was included in clear text in the activation email.
Yeah, that's not good. I'll see if there's something simple we can do to fix that.
Does the Houdini Forum save all of our passwords unhashed in its database?
It seems that the passwords are all saved hashed in the database. The activation email is sent immediately, before throwing away the plaintext copy, which is why it has access to the plaintext password.
Given the number of websites and forums that have had their databases leaked/hacked, this seems like a bit of a security flaw to me.
I'd be more concerned about the fact that emails between different networks are pretty much all sent unencrypted, and that in the U.S., all of the points in between are pretty much required by the federal government to record everything and send the data around to tons of 3rd parties. It's also not good that the forum sends the plaintext password to Side Effects in the first place, since it should be hashed on the client side, but that's another matter, and fixing that won't really prevent man-in-the-middle attacks.
nicholas_yue I have found that enabling the “convex” parameter in the polysoup SOP eliminated the duplicate vertices but am unsure if that is the correct approach or proper solution.
Check the vertex count from before converting to PolySoup. If it's 24, you have quads that just happen to have the same point twice. PolySoups are supposed to be an exact representation of the original polygons. The polygons might not be in the same order, but the content should be identical.
Convexing probably isn't what you want to do to fix this, because you'll end up with 12 triangles (6 of them degenerate) instead of the 6 you want.
nicholas_yue With 12.5.427, there is an error with the line
UT_Vector3 P = detail->getPos3(ptoff);
Something about expecting long long int for the parameter, i.e., ptoff must be long long int.
Hmm… your ptoff is a GA_Offset, right? On Windows, GA_Offset is type __int64, on Mac, it's int64_t, on 64-bit Linux, it's long, and on 32-bit Linux, it's long long. GA_Detail::getPos3 should take a GA_Offset as its parameter, though, so it should be consistent with the GA_Offset in your file, regardless of how it's defined on the platform. All I can think of is that something is redefining GA_Offset somewhere.
nicholas_yue Should I revamp to the equivalent GU_ classes ?
It's not necessary unless you need to call methods from the GU class, which you aren't in this case.
If I should, how does one get the GU_ from opening a BGEO file ?
If you do ever need to call methods from the GU class, I think that you should be able to cast the GEO_Primitive* directly to GU_PrimPolySoup*. If you were casting from GEO_Primitive* to GU_Primitive*, the compiler wouldn't know whether/how it needs to change the address as a result of the multiple inheritance in GU_PrimPolySoup, but casting to GU_PrimPolySoup*, it should know. If you ever need to cast GEO_Primitive* to GU_Primitive* on something whose type you don't know, you can use (GU_Primitive*)prim->castTo(), which does the adjustment inside the GU class via the virtual call to castTo.