Technical Discussion » alembic ROP broken in 12.5.533?
- meeotch
- 33 posts
- Offline
Just switched versions from 12.5.427 to 12.5.533, and a file I'd previously saved with 427 is now exhibiting some weird behavior from the Alembic ROP: it writes what appears to be a single frame of object pivots, rather than the entire animated geometry.
Has anyone else noticed this? I've attached a hip file - works in 427, doesn't work in 533.
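In case anyone wants a quick way to reproduce/verify: I've been forcing the render from the Python shell with an explicit frame range, so the ROP's own frame-range settings can't be the thing at fault. The node path and frame numbers below are just placeholders for my setup:

import hou

# Force the Alembic ROP to cook an explicit frame range from the Python shell.
# '/out/alembic1' is a placeholder - substitute your ROP's actual path.
rop = hou.node('/out/alembic1')
rop.render(frame_range=(1, 100, 1), verbose=True)

If the .abc still only contains a single sample after that, it's not a frame-range setting on my end.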
Technical Discussion » rest_ratio and rest2_ratio weirdness
- meeotch
- 33 posts
- Offline
So I've been trying to retrofit the old billowysmoke shader to respect the rest & rest2 fields in order to render a white, puffy cloud of expanding smoke with noise. I inserted a blend into the restpos VOP, as suggested in a couple of older threads on here, but noticed there was still some weird jumping in the texture. Then I noticed the rest_ratio and rest2_ratio detail attributes, and thought, “Oh, even better - I don't have to debug my VOPs.”
Except the rest_ratio and rest2_ratio attributes don't seem to follow a simple back-and-forth interpolation between the two fields. They're never exactly 0,1 during the sim - and they don't change every frame:
frame   rest_ratio   rest2_ratio
1018    1            0
1019    0.24         0.76
1040    0.04         0.96
1051    0.1556       0.8444
1056    0.5196       0.4804
1065    0.8836       0.1164
1975    0.7528       0.2472
That could be roughly 25 frames peak to trough, but the values are *static* in between the given frames. Also, diving into the stock pyro shader, I don't see a simple blending between two rest positions. There are a bunch of nodes with weird types like “pyroNoiseSpace” that have no documentation.
Has something changed recently to make the rest space interpolation into a black box?
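For reference, this is the behavior I was *expecting* - a simple triangle-wave crossfade where rest_ratio ramps 1 -> 0 -> 1 over one rest-field recycle period and rest2_ratio is just its complement. This is purely a sketch of my assumption (the start frame and cycle length are made-up numbers), not what the solver is actually doing:

# Sketch of the interpolation I expected between the two rest fields.
# start_frame and cycle_frames are hypothetical - not read from the solver.
def expected_rest_weights(frame, start_frame=1018, cycle_frames=50.0):
    phase = ((frame - start_frame) / cycle_frames) % 1.0  # 0..1 over one cycle
    rest_ratio = abs(1.0 - 2.0 * phase)                   # triangle wave: 1 -> 0 -> 1
    return rest_ratio, 1.0 - rest_ratio

for f in (1018, 1019, 1040, 1051, 1056, 1065):
    print(f, expected_rest_weights(f))

The values the sim is writing clearly aren't following anything that simple.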
Technical Discussion » bug - cluster SOP unstable with small numbers of points?
- meeotch
- 33 posts
- Offline
I've noticed that the cluster SOP seems to be unstable with small numbers (< 10) of points.
See the attached file. If you scrub through toward frame 48, you'll notice that when almost all the points are deleted, the clusters “jump”. On my system (v 12.5.427), I'm seeing it at frames 45 & 46.
Can anyone else confirm this behavior?
(Also weird: the add SOP seems to be adding a default primitive that I can't get rid of. If you view either of the add SOPs in the network, you'll see a little blob of geo at the origin.)
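If it helps anyone reproduce this, here's roughly how I've been confirming the jump from the Python shell - stepping through frames and dumping each point's cluster assignment. It assumes the Cluster SOP is writing its default “cluster” point attribute, and the node path is a placeholder from my test file:

import hou

cluster_sop = hou.node('/obj/geo1/cluster1')   # placeholder path to the Cluster SOP
for f in range(40, 49):
    hou.setFrame(f)
    geo = cluster_sop.geometry()
    if geo.findPointAttrib('cluster') is None:
        print(f, 'no "cluster" point attribute found')
        continue
    ids = [pt.attribValue('cluster') for pt in geo.points()]
    print(f, len(ids), ids)

A stable result should just drop clusters as points get deleted, not reshuffle the assignments of the points that remain.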
Technical Discussion » Cloth arbitrary vel field
- meeotch
- 33 posts
- Offline
Has anyone successfully gotten the “External Velocity Field” in the Drag tab on a cloth object to work? The docs indicate: “You should not use the Wind Force DOP or any of the other forces in DOPs because they will generate inferior results compared with the Air Drag model.” So I've been trying to pass in a velocity volume, and dutifully using the Drag External Velocity Field slot as instructed, but no matter what I try, I can't get the cloth to respond.
At first, I thought all I had to do was reference the velocity volume directly from the EVF slot, with some sort of syntax: /obj/volumeObject/volumeVelocity1:vel or some such. Failing to get that to work, I searched for example files, and all I found was a Wire Solver example (the Wire setup has a similarly named EVF slot), wherein the velocity volume was loaded into the DOPSolver via a SOP Vector Field DOP and attached to the wire. The EVF field was left at the default “vel”, and the velocity volume was named appropriately (vel.x, vel.y, vel.z). But trying the same thing with the cloth solver still gives no response.
I've attached a simple file that indicates what I thought the correct setup should be. What am I doing wrong here?
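(For what it's worth, here's the sanity check I ran to make sure the volume primitives feeding the SOP Vector Field really are named vel.x / vel.y / vel.z. The path is a placeholder from my test file, and it assumes the naming was done with a Name or Primitive SOP:)

import hou

vel_sop = hou.node('/obj/volumeObject/volumeVelocity1')   # placeholder path
geo = vel_sop.geometry()
if geo.findPrimAttrib('name') is None:
    print('no "name" primitive attribute - the solver has nothing to match against')
else:
    for prim in geo.prims():
        if prim.type() == hou.primType.Volume:
            print(prim.number(), prim.attribValue('name'))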
Technical Discussion » dynamic fracture channel export
- meeotch
- 33 posts
- Offline
Ha! At the risk of jinxing myself, I think I found the answer… After many hours of banging my head against it, I guess all I needed to do was post the question!
You can get the pieces on the creation frame by digging into the “fractured_geo” SOPnet that lives inside the Voronoi Fracture Configure Object.
So far, it looks like a perfect translation, using that geo + the curves from chops. If anyone asks you… I'm a g*ddamn genius.
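In practice, grabbing the geo is just a couple of lines in the Python shell. The node path below is a placeholder - dig into your own Voronoi Fracture Configure Object to find the real fractured_geo SOPnet:

import hou

hou.setFrame(4)   # the creation frame in my file
# Placeholder path - the real node lives inside the Configure Object's fractured_geo SOPnet.
pieces = hou.node('/obj/AutoDopNetwork/voronoi_fracture_config/fractured_geo/OUT').geometry()
pieces.saveToFile(hou.expandString('$HIP/pieces_creation_frame.bgeo'))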
Technical Discussion » dynamic fracture channel export
- meeotch
- 33 posts
- Offline
So I've got some dynamically fractured RBDs, and I'm trying to export the motion (to Maya). I've done this successfully before with non-dynamically fractured RBDs (i.e. fractured outside the sim, not inside it via the Voronoi Fracture Configure Object). The process was: 1) export the fractured pieces as .obj files, 2) export the p, t, and r channels via CHOPs, 3) load both into Maya with a script.
The problem is, with the dynamically fractured RBDs, the pieces don't actually exist on the frame I need to export them! Take a look at the attached file. The sphere hits the cube at frame 4. The cube moves, but so far no pieces. Advance to frame 5, and now we have pieces.
Now go turn off display of the AutoDopNet, and turn on display of the “piece” object. This just singles out one piece, to make it easier to see. Go into the chopnet, and middle-click on the three export nodes at the bottom. (I'm not sure why, but this causes the chopnet to update. If you're < frame 4, the chopnet errors out because it's trying to extract channels from a piece that doesn't exist. If you go to frame > 5 and middle-click, it will update and correctly export those channels.) So now at the object level, there are three nulls: 1) translate, which is the txyz channels of the piece, 2) pivot, which is the pxyz channels, and 3) pivot + translate, which should correctly track the piece. Remember, if you scrub down to < frame 4, it will break, and you'll have to middle-click the chopnet export nodes again.
So now step slowly backward from frame 10 or so. Pivot + translate is tracking back with the piece, toward Pivot. When you get to frame 5, it's almost there… Then on frame 4, the piece disappears.
What this is demonstrating (I think), is that the pivot that the AutoDopNet is using is the piece's pivot (center) *at the frame of impact*. There's some sort of off-by-one problem where the pieces are getting created/calculated at frame 4, but not actually dumped into the sim until the next frame.
So the correct geo to export is the pieces in their position at frame 4. But I can't seem to figure out how to extract that from the AutoDopNet. Is there some way to re-order the solvers or something so that the pieces go into the sim on the same frame they're created?
I can get reasonable-looking results by manually centering the pivots in maya, then throwing away the P channel info - but it's not a perfect translation. I can also use FBX, instead of the obj/chop method - but I want curves, not point caches.
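For reference, here's the reconstruction the third null is doing, as I understand it: world position of the piece = pivot + translate. This is just a Python-shell sketch; the null names are placeholders, and it assumes the chop exports land on the nulls' tx/ty/tz channels:

import hou

translate_null = hou.node('/obj/translate1')   # carries the txyz channels (placeholder name)
pivot_null     = hou.node('/obj/pivot1')       # carries the pxyz channels (placeholder name)

def piece_world_pos(frame):
    t = hou.Vector3([translate_null.parm('t' + a).evalAtFrame(frame) for a in 'xyz'])
    p = hou.Vector3([pivot_null.parm('t' + a).evalAtFrame(frame) for a in 'xyz'])
    return p + t   # my working assumption: pivot + translate = piece position

for f in range(4, 11):
    print(f, piece_world_pos(f))

The off-by-one shows up if you compare this against where the piece actually sits at frames 4 and 5.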
Technical Discussion » parallel ROP rendering
- meeotch
- 33 posts
- Offline
Peter - thanks for the definitive reply. At least I can stop looking! My current thought is to run a python script that manages the threads (already written, actually - working as a command line solution), and call that from a shellROP, wired appropriately into the mantraROP as a dependency. That solution sucks since it loses “pause” functionality - but it's what I've got right now.
I'm still confused, however, about what the docs are referring to:
“A Merge ROP with the 3 inputs would allow all 3 to render in parallel, while a Pre Post ROP explicitly sets up the ordering of frames. ” and (from wedgeROP):
“Wait for Render to Complete - Sets the Block Until Render Completes temporarily on the Output Driver. This avoids all the wedges being started simultaneously, which can inconvenience a single-user machine. However, by not blocking, you can return control to Houdini quickly while waiting for the renders to run in background. ”
This really seems to imply that ROP networks incorporate render-manager-like functionality.
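For context, the thread-managing script I mentioned boils down to something like the sketch below - one hython process per geometry pass, wait for all of them, then kick mantra. Paths and node names are placeholders, and it assumes hython accepts -c the same way a regular Python interpreter does:

import subprocess

HIP = '/path/to/scene.hip'                                         # placeholder
GEO_ROPS = ['/out/geo_pass1', '/out/geo_pass2', '/out/geo_pass3']  # placeholders

procs = []
for rop_path in GEO_ROPS:
    cmd = ['hython', '-c',
           "import hou; hou.hipFile.load('%s'); hou.node('%s').render(verbose=True)"
           % (HIP, rop_path)]
    procs.append(subprocess.Popen(cmd))   # one Houdini process per geo pass

for p in procs:
    p.wait()                              # block until every pass has finished

# ...then launch the mantra ROP the same way once everything above is done.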
Technical Discussion » parallel ROP rendering
- meeotch
- 33 posts
- Offline
Is it possible to get Houdini to fire off separate threads in order to run several ROPs in parallel? The docs for the mergeROP seem to suggest that it is, but when I wire several geometryROPs into a mergeROP and render it, the inputs are processed one at a time.
The wedgeROP also seems to imply this sort of functionality, with its “Wait for Render to Complete” parameter. However, same deal: when I actually run a wedge with this option turned off, it still runs one slice at a time.
Context: trying to render out several geo passes in parallel, followed by a mantra render when they're all finished.
Technical Discussion » mplay viewport layout stuck in multi mode
- meeotch
- 33 posts
- Offline
After setting the layout in my mplay render viewer to multiple viewports (rows>1 and/or columns>1), it seems to get “stuck” that way, and can't be forced back into single viewport mode.
Hitting “b” or “T”, clicking the Maximize Viewport button, setting the layout back to 1x1, and toggling Hide Blank Viewports on/off all have no effect. The viewer stays in the highest viewport-count mode it was previously set to (so switching from 4x4 back down to 2x2 doesn't work, either).
Any ideas?
Windows 7, NVidia GFX 470, Houdini 11.
Technical Discussion » deep camera maps
- meeotch
- 33 posts
- Offline
I've been attempting to mess around with deep camera maps. The docs mention that the Deep Resolver parameter in the ROP should be set to “Deep Shadow Map” (rather than “Deep Camera Map”, which is in fact an option). When I do this, I get what appears to be a deep shadow map (i.e. no color info). Two files are output: 1) the Output Picture, which has either 1 or 0 in the color channel, and a sensible alpha, and 2) the DSM Filename, which has Pz only.
When I set Deep Resolver to “Deep Camera Map”, I get what looks like a normal Output Picture, and no secondary file.
Has anyone got DCMs working? Tips?
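In case the exact settings matter, this is what I'm putting on the Mantra ROP when I try the “Deep Camera Map” route - doing it via Python here just to be explicit. The parm names are what I see on my build, so treat them as assumptions rather than gospel:

import hou

mantra = hou.node('/out/mantra1')   # placeholder path
# Assumed parm names: 'vm_deepresolver' (null/shadow/camera) and 'vm_dcmfilename'.
mantra.parm('vm_deepresolver').set('camera')
mantra.parm('vm_dcmfilename').set('$HIP/render/deep.$F4.rat')
mantra.render(frame_range=(1, 1))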
Technical Discussion » sculpted/scripted forces (VOP force?)
- meeotch
- 33 posts
- Offline
I'm working on an SPH (particle fluid) splash that needs to be highly directable. I'd like to implement a “sculpted” or “scripted” force to drive the particle fluid, along the lines of a scripted daemon in Realflow. So basically, defining a force through straight-up math, or by math derived from the placement of some control objects/curves/etc. My guess is that one way to do this is through the VOP Force node, but my experience with VOPs is pretty limited, and the VOP Force doesn't seem to have any examples.
Can anyone point the way to some examples (or offer alternative suggestions)? I have a feeling I'm about to find out how deep the rabbit hole goes.
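To make “defining a force through straight-up math” concrete, this is the kind of thing I mean - plain Python, nothing Houdini-specific, just the per-particle math I'd eventually want a VOP Force (or whatever the right node turns out to be) to evaluate. The target point, strength, and falloff radius are made-up numbers:

# Toy version of a "scripted" force: pull each particle toward a control
# point, with the force fading out past a falloff radius.  Purely
# illustrative - the real thing would read control objects/curves.
import math

TARGET   = (0.0, 2.0, 0.0)   # hypothetical control point
STRENGTH = 5.0
RADIUS   = 3.0

def scripted_force(p):
    dx, dy, dz = (TARGET[0] - p[0], TARGET[1] - p[1], TARGET[2] - p[2])
    dist = math.sqrt(dx*dx + dy*dy + dz*dz)
    if dist < 1e-6:
        return (0.0, 0.0, 0.0)
    falloff = max(0.0, 1.0 - dist / RADIUS)   # linear falloff to zero at RADIUS
    scale = STRENGTH * falloff / dist          # normalize the direction
    return (dx * scale, dy * scale, dz * scale)

print(scripted_force((1.0, 0.0, 0.0)))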
Technical Discussion » DOP position when not active
- meeotch
- 33 posts
- Offline
Background: what I'm trying to do is to set the “active” status of RBDFracture pieces based on their world-space positions - i.e., so that I can keyframe an overall animation, then have the pieces turn dynamic when they get near the ground.
The test setup: I have a box that's copied to the points of a sphere. The whole thing is keyframed, and also made into an RBDFracture. The RBDFracture has its Active parameter keyframed to turn on at frame 10.
The problem: the Position “p” and “t” data on each fracture piece don't contain the actual position of the RBD piece when the piece is not active. Load up the attached sample file, and watch the position of “circle_object1”. After frame 10, it tracks properly, but before frame 10 it just follows the parent object.
The solution? I'm guessing it involves getting the world space position of each fracture group from *outside* the DOPnet, but I'm not quite sure how to do it. (Bit of a Houdini n00b.) Note that eventually, I want to access this info in an expression on the RBDKeyActive node (i.e. from *inside* the DOPnet), rather than from the circle object. Also note that I'll eventually be doing this with many, many pieces - which is why I don't want to create an RBDKeyActive node for each piece, or enter the keyframes by hand.
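The sort of test I have in mind (once I figure out where to actually read the positions from) looks like the sketch below - grab each piece's centroid in world space and decide active/inactive from its height. The node path, the assumption of one primitive group per piece, and the ground threshold are all placeholders:

import hou

obj = hou.node('/obj/fracture_geo')        # placeholder: object holding the fractured geo
geo = obj.displayNode().geometry()
xform = obj.worldTransform()               # the keyframed object-level animation
GROUND_Y = 0.5                             # made-up "turn dynamic" height

all_points = geo.points()
for group in geo.primGroups():             # assumes one primitive group per piece
    pt_nums = set()
    for prim in group.prims():
        for vtx in prim.vertices():
            pt_nums.add(vtx.point().number())
    if not pt_nums:
        continue
    centroid = hou.Vector3()
    for n in pt_nums:
        centroid += all_points[n].position() * xform   # into world space
    centroid = centroid * (1.0 / len(pt_nums))
    active = 1 if centroid[1] < GROUND_Y else 0
    print(group.name(), centroid, 'active =', active)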