If you are colliding with static objects using a DOP setup, you can switch the objects to Use Volume Collisions. Then the collisionignore attribute will work. But this means you lose the edge/edge collisions that can be very useful for hairs.
For controlling pairwise collisions between the dynamic components of a Vellum solve, there is currently no attribute.
Technical Discussion » Disable Vellum collisions for specific primitives
- jlait
- 6187 posts
- Online
Technical Discussion » EdgeFracture Visualizer Error
- jlait
- 6187 posts
- Online
I couldn't reproduce this as it was fixed in 17.0.374! So when you next update it should work. Thank you for the reports!
Technical Discussion » Vellum Cloth Presets?
- jlait
- 6187 posts
- Online
A .hip file is a thousand pictures.
If your cloth is stretching too much, first increase substeps to 5 if you haven't already. Then increase Constraint Iterations to be at least the diameter of the cloth in edges. Higher res cloth needs more iterations.
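The iteration advice above comes down to how position-based constraint solvers propagate corrections: each Gauss-Seidel pass only pushes a fix one edge further along the mesh, so a chain (or cloth) that is many edges across needs roughly that many iterations for corrections to reach the far side. A toy sketch of this, assuming a simple 1D pinned chain (this is illustrative only, not Houdini's actual Vellum solver):

```python
# Toy 1D position-based-dynamics chain: points joined by distance
# constraints, pinned at one end, pulled down by gravity for one step.
# Illustrates why Constraint Iterations should be at least the mesh
# diameter in edges. Hypothetical sketch, not Houdini code.

def residual_stretch(n_points, iterations, gravity=-9.8, dt=1.0 / 24, rest=1.0):
    # predicted positions after the unconstrained step (point 0 is pinned)
    pred = [0.0] + [-rest * i + gravity * dt * dt for i in range(1, n_points)]
    for _ in range(iterations):
        for i in range(n_points - 1):
            corr = abs(pred[i] - pred[i + 1]) - rest  # this edge's violation
            if i == 0:
                pred[i + 1] += corr        # pinned point absorbs nothing
            else:
                pred[i] -= 0.5 * corr      # split the correction between
                pred[i + 1] += 0.5 * corr  # the two free points
    length = sum(abs(pred[i] - pred[i + 1]) for i in range(n_points - 1))
    return length - rest * (n_points - 1)  # leftover stretch past rest length

few = residual_stretch(20, iterations=2)
many = residual_stretch(20, iterations=40)  # iterations > diameter in edges
```

With too few iterations the residual stretch stays large; once iterations exceed the chain's length in edges, the error can actually drain out at the free end.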
Technical Discussion » Animate vellum rest length scale
- jlait
- 6187 posts
- Online
You can't animate the input to the vellumsolver, as this is only read for the first frame.
Instead, for the SOP-based Vellum Solver dive inside it and add a Vellum Constraints Property. This lets you modify Rest Length scale there. You may want to create constraint groups to ensure it modifies the ones you want.
For DOP-based vellum solver, it is a similar idea, wire in a vellum constraints property like you would any microsolver.
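Conceptually, the constraint stores its rest length once and the solver targets rest length times the scale, so animating the scale shrinks or grows the geometry without rewriting the stored rest lengths. A standalone sketch of a single distance-constraint projection (attribute names follow Vellum's conventions, but this is illustrative, not solver code):

```python
# One distance-constraint projection targeting restlength * restlengthscale.
# Hypothetical sketch; in Vellum the Vellum Constraints Property node
# modifies restlengthscale on the constraint geometry each frame.
def project(p0, p1, restlength, restlengthscale):
    target = restlength * restlengthscale
    d = p1 - p0
    corr = (abs(d) - target) * 0.5 * (1 if d > 0 else -1)
    return p0 + corr, p1 - corr

a, b = project(0.0, 2.0, restlength=2.0, restlengthscale=0.5)
# the points are pulled together until their distance is 2.0 * 0.5 = 1.0
```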
Technical Discussion » Vellum, grain-grain collision exclusion when explicitly constrained? (and non-disablable collisons!?)
- jlait
- 6187 posts
- Online
Your analysis seems quite right. Stripping those points from the neighbourlist is what the don't-collide-with-explicit does. I'm glad you were able to find the right code to add it.
As for speed, I'd expect a proper apples-to-apples comparison to match speed in both cases. One big difference is we've locked down the Vellum path to only support OpenCL and to have constraint averaging disabled. But we call the same kernel in the inner loop, so I'm not sure where the speed up occurs.
Another issue with Vellum is the auto-sleep likely won't be as effective in speeding up the sim. So I can see pure grains still being used. Which is also why we kept all those shelf tools.
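The neighbour-list stripping described above can be sketched in a few lines: pairs that share an explicit constraint are removed from each point's collision candidates before the grain collider runs. Names here are illustrative, not Houdini API:

```python
# Sketch of "don't collide with explicitly constrained": drop any
# neighbour pair that is joined by an explicit constraint from the
# grain collider's neighbour list. Hypothetical helper, not Houdini API.
def strip_constrained(neighbours, constraints):
    constrained = set()
    for a, b in constraints:
        constrained.add((a, b))
        constrained.add((b, a))   # constraints are symmetric
    return {pt: [n for n in nbrs if (pt, n) not in constrained]
            for pt, nbrs in neighbours.items()}

nbrs = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
out = strip_constrained(nbrs, [(0, 1)])
# → {0: [2], 1: [2], 2: [0, 1]}
```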
Technical Discussion » EdgeFracture Visualizer Error
- jlait
- 6187 posts
- Online
I'm unable to reproduce, it toggles the piece visualization for me?
Can you please submit a bug with a .hip file that shows how you have this set up?
Technical Discussion » attributes that drive vellum
- jlait
- 6187 posts
- Online
Not yet…. I'm making a list and checking it twice, so hopefully it will show up soon.
There is no per-point friction, however, so that attribute isn't on the list.
You can also look at the help for the Detangle SOP to see some of the collision attributes described.
Most dynamic attributes are covered with the POP Attributes docs as there is a lot of overlap.
Technical Discussion » atrribdataid array length
- jlait
- 6187 posts
- Online
It's a 64-bit session id broken into two 32-bit integers. So it will be the same for all data ids you read within a session.
Dive into the Vellum Solver SOP where we recompute the graph colouring to see attribute ids being used.
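As a rough sketch of the packing described above (the exact attribute layout is an assumption for illustration), splitting a 64-bit id into two 32-bit halves and recombining them looks like:

```python
# Pack a 64-bit session id into two 32-bit integers and recombine them.
# The hi/lo split shown here is an assumed layout for illustration.
def split_id(session_id):
    hi = (session_id >> 32) & 0xFFFFFFFF
    lo = session_id & 0xFFFFFFFF
    return hi, lo

def join_id(hi, lo):
    return (hi << 32) | lo

hi, lo = split_id(0x123456789ABCDEF0)
# → hi = 0x12345678, lo = 0x9ABCDEF0, and join_id(hi, lo) round-trips
```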
Technical Discussion » Vellum, grain-grain collision exclusion when explicitly constrained? (and non-disablable collisons!?)
- jlait
- 6187 posts
- Online
It might help to clarify that the detangle and the grains are two separate collision detectors in Vellum. The detangle uses overlap and disableself, but the grain colliders do not. The grain collider uses the isgrain attribute to include a point for grain collision.
The collide-mutually-constrained option isn't in Vellum. It was often used in grains to handle sheets or wires, where one has to have a very dense layer to avoid penetration. In Vellum, it is expected instead to just use triangles and lines for this. The main reason for its absence, though, is that the constraints live on a separate geometry in Vellum.
Technical Discussion » How to use $HIP?
- jlait
- 6187 posts
- Online
It is impossible to set $HIP because that would defeat its purpose as the path to the .hip file. If you load a .hip file from the correct path on the flash drive, the load should set $HIP to point to that correct path. I'm not sure why it isn't doing so in your case.
So say you have a
c:/path/to/my.hip
c:/path/to/texture/foo.jpg
Then in my.hip you have $HIP/texture/foo.jpg. If you copy all of this to your flash drive:
f:/my.hip
f:/texture/foo.jpg
When you load my.hip from the flash drive, it should set $HIP to f:/, so $HIP/texture will become f:/texture and you should find the path.
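The remapping above can be sketched as a plain path expansion: $HIP is derived from wherever the .hip file was loaded, so $HIP-relative paths follow the file. A minimal sketch (not Houdini code, and ignoring Houdini's other variables):

```python
# Minimal sketch of $HIP expansion: the variable resolves to the
# directory the .hip file was loaded from. Not Houdini's implementation.
import os.path

def expand_hip(path, hip_file):
    hip_dir = os.path.dirname(hip_file)
    return path.replace("$HIP", hip_dir)

original = expand_hip("$HIP/texture/foo.jpg", "c:/path/to/my.hip")
# → "c:/path/to/texture/foo.jpg"
moved = expand_hip("$HIP/texture/foo.jpg", "f:/my.hip")
# → "f:/texture/foo.jpg"
```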
Edited by jlait - Oct. 15, 2018 10:16:13
Technical Discussion » How to use $HIP?
- jlait
- 6187 posts
- Online
You can't set $HIP. $HIP always refers to the directory your .hip file is found in. Thus, if you move the .hip file and all its dependencies around, it will still work. This works best if your .hip files are in the root of your project structure, so you have
myprojects/projectA/foo.hip
myprojects/projectA/geo/lotsof.bgeo.sc
Then $HIP/geo will work as expected. This sort of layout is what most of the default paths point to.
Alternatively, you may want a specific job home and have .hip be in a subfolder:
myprojects/projectA/hip/foo.hip
myprojects/projectA/geo/lotsof.bgeo.sc
Here you can set JOB to myprojects/projectA, and then use
$JOB/geo/lotsof.bgeo.sc
However you must be careful to reset JOB if moving the project.
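The $JOB layout above amounts to the same substitution with a project root you set yourself: build paths from $JOB, and moving the project only requires resetting JOB. A sketch, assuming a simple variable table (not Houdini's expansion code):

```python
# Sketch of $JOB-style expansion from an explicit variable table.
# Illustrative only; Houdini's real expansion handles many more forms.
def expand(path, variables):
    for name, value in variables.items():
        path = path.replace("$" + name, value)
    return path

resolved = expand("$JOB/geo/lotsof.bgeo.sc", {"JOB": "myprojects/projectA"})
# → "myprojects/projectA/geo/lotsof.bgeo.sc"
```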
Technical Discussion » Vellum Cloth Presets?
- jlait
- 6187 posts
- Online
For leather you want to increase the Compression Stiffness, and maybe even turn it off. Likewise, boosting the bend stiffness might be needed depending on the density of the mesh. However, as it does depend on topology, I can't give you numbers to plug directly in.
Technical Discussion » Vellum grains emission overlap
- jlait
- 6187 posts
- Online
Not easily, I'm afraid…. You could use the same emission approach with a POP Source, but you'll have to be careful to update the ConstraintGeometry at the same time, so this is not for the faint-of-heart. I'll submit an RFE to this effect though!
Technical Discussion » OpenCL Voxel Space to World Space position?
- jlait
- 6187 posts
- Online
From attribvolume.hip of the OpenCL masterclass:
float4 voxelpos = pos.x * density_xformtovoxel.lo.lo +
                  pos.y * density_xformtovoxel.lo.hi +
                  pos.z * density_xformtovoxel.hi.lo +
                  density_xformtovoxel.hi.hi;
In this case you are going the other way, but it should be a matter of using gidx.xyz rather than pos.xyz.
So hi.hi is the translate component, not lo.lo. Note heightfields are always stored with an xy layout in memory, but usually displayed with a zx layout on screen. This would be why the 15s are not along a diagonal.
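The float16 above packs three axis vectors plus a translate. A plain-Python sketch of applying such a transform in both directions, with made-up identity axes (illustrative only, not the OpenCL kernel):

```python
# Sketch of the packed transform above: three axis rows plus a translate.
# to_voxel applies the world-to-voxel transform; to_world applies the
# inverse transform to a voxel index gidx. Axis values here are made up.
def to_voxel(pos, axes, translate):
    # voxelpos = pos.x*axes[0] + pos.y*axes[1] + pos.z*axes[2] + translate
    return tuple(pos[0] * axes[0][i] + pos[1] * axes[1][i]
                 + pos[2] * axes[2][i] + translate[i] for i in range(3))

def to_world(gidx, inv_axes, inv_translate):
    # identical form, just using the inverse transform on the voxel index
    return to_voxel(gidx, inv_axes, inv_translate)

identity = ((1, 0, 0), (0, 1, 0), (0, 0, 1))
v = to_voxel((5.0, 5.0, 5.0), identity, (1.0, 2.0, 3.0))
# → (6.0, 7.0, 8.0): the last component of the packed matrix is the translate
```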
Houdini Lounge » Will Houdini 17 have upgraded FLIP Solver
- jlait
- 6187 posts
- Online
Damn! You aren't supposed to break the NDA, jsmack!
Mind you, I had already presented the secret sauce of our new abstract slurry a few years back:
https://www.youtube.com/watch?v=K8dxc807R-4&t=21m13s [www.youtube.com]
Houdini Lounge » Houdini can only load 1.5 TB of Flip Data then crashes!
- jlait
- 6187 posts
- Online
madcat117
is over 1.5 tb of flip simulation ram cache fixed yet?
No. I have not reproduced it yet. I would appreciate more information about the nature of the crash.
We don't have any known 1.5TB limits, and it is a weird number for most of the address problems that would be directly our fault. 2GB/4GB are big flags, along with 4x and 0.25x of those. But by the time you get to a TB, most of our counters would either be 64-bit or long since overflowed.
madcat117
Hopefully SideFX software fixes this and they can easily test this out by trying to run a 3.8 tb ram instance from Amazon web services if they want to have access to that type of hardware quickly and in a affordable manner for testing and debugging.
I'd like some more guidance as to what I'm looking for before paying $32/hour to remotely debug the issue. My current theory is it is some ulimit-style restriction in Windows Server 2016. This is why I'm most interested in knowing if it hits a similar barrier on Linux.
Technical Discussion » flip distributed sim stuck
- jlait
- 6187 posts
- Online
nimnul
as Chris pointed out, stuck occurs when one peer jumps to the next frame without waiting for the others to finish the previous frame
This is a sort of “split-brain” problem. This usually isn't a networking issue, but a logic issue in the distributed simulation.
A very common cause of this is when the substepping isn't synced between machines.
This doesn't explain:
nimnul
turning Distributed Pressure Solve off seems to resolve the issue
however, as that shouldn't materially affect whether substepping stays synced.
Does your flip sim use variable substepping? Ie, Min Substep < Max Substep?
If you can try locking substeps by setting those equal, it may stop the issue?
Ideally if you have a case that reproduces that you can submit to support, we'd like to see it fail here as it can be hard to figure out what is causing computation to diverge.
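The failure mode being described can be sketched schematically: with variable substepping, each peer picks its own substep count from local conditions, and if the counts differ one peer reaches the frame boundary early and the handshake stalls. Locking Min = Max forces identical counts. This is purely illustrative logic, not the actual solver:

```python
# Schematic of variable-substep desync between distributed peers.
# The CFL-like rule here is made up for illustration.
def substeps_taken(min_sub, max_sub, local_velocity, cfl=1.0):
    # variable substepping: enough substeps to satisfy a CFL-like bound,
    # clamped to the [min_sub, max_sub] range
    need = max(min_sub, int(local_velocity / cfl) + 1)
    return min(need, max_sub)

peer_a = substeps_taken(1, 10, local_velocity=3.0)   # → 4
peer_b = substeps_taken(1, 10, local_velocity=7.5)   # → 8, peers diverge
locked_a = substeps_taken(5, 5, local_velocity=3.0)  # → 5
locked_b = substeps_taken(5, 5, local_velocity=7.5)  # → 5, peers agree
```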
Houdini Lounge » Houdini can only load 1.5 TB of Flip Data then crashes!
- jlait
- 6187 posts
- Online
Awaiting more information….
madcat117
it always crashes at 1.5 TB to 1.4 TB of loaded RAM cache use!
Is this referring to memory_use.hip? Or your own FLIP loading tests?
jlait
Does Windows report any interesting messages when it takes Houdini down?
Ideally, if you can also try a Linux distro (Can run apprentice off a thumb-stick Linux?) we can get a good idea if this is something in Houdini or the OS.
Thanks,
Houdini Lounge » Houdini can only load 1.5 TB of Flip Data then crashes!
- jlait
- 6187 posts
- Online
We don't have any known limits at that point, but there are always surprises….
The biggest Linux machine I've run on is 1.5TB, interestingly enough, so while that worked right up to 1.5TB it doesn't answer the question about beyond :>
“Crash” can be a rather vague term. Does Windows report any interesting messages when it takes Houdini down?
If you can try on Linux, that would help swiftly separate whether this is an OS issue or a Houdini issue. The closest I can think of for a Houdini issue would be someone using an int32 to store a memory size in KB. But that would overflow closer to 2TB.
I can't find anything around 1TB here: Server 2016 seems enabled right up to 24TB.
https://docs.microsoft.com/en-gb/windows/desktop/Memory/memory-limits-for-windows-releases [docs.microsoft.com]
Attached is a .hip file that uses 4GB per frame by initializing 1024^3 volumes (and making sure they aren't displayed so you don't use more memory…) It should be a lot faster for hitting the 1.5TB limit. It also might reveal if it is *how* we are allocating the memory that is failing.
A long while ago we had a 48gb limit on Linux's default allocator because NVidia reserved the 2GB address space, which caused sbrk() to fail and fall back to mmap(), which has a hardcoded limit of 64k handles…. There might be a similar thing we are hitting here…
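The arithmetic behind two of the numbers above is worth spelling out: an int32 KB counter tops out near 2TB (which is why it doesn't explain a 1.5TB crash), and a 1024³ float32 volume is 4GB per frame, so the attached test reaches 1.5TB in a few hundred frames:

```python
# Back-of-envelope checks for the limits discussed above.
INT32_MAX = 2**31 - 1
overflow_tb = (INT32_MAX * 1024) / 2**40   # int32 KB counter limit in TB
# → just under 2.0 TB, so that bug class wouldn't crash at 1.5TB

voxels = 1024**3                           # the attached test's volume size
frame_bytes = voxels * 4                   # one float32 volume
frame_gb = frame_bytes / 2**30             # → 4.0 GB per frame
frames_to_limit = (1.5 * 2**40) / frame_bytes
# → 384 frames to reach 1.5TB of cached volumes
```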
Technical Discussion » File size getting out of hand when creating terrain ?
- jlait
- 6187 posts
- Online
You can also cache out a height field with a “File Cache” SOP or File SOP as it can be saved as .bgeo.sc. This is a 3d file format, but it stores the heightfield as a 2d volume so will round-trip seamlessly.
The growing file size is probably due to heightfield paint. To avoid having to re-apply the strokes every time you load a file, it caches out the final painting you did as a layer. This will thus bloat the .hip file.