Thanks for all your help! The UVs came from the ZBrush model.
The labs UV transfer tool worked really well, but the "harden UV edges" option in the remesh did the trick perfectly. Hadn't even noticed that before!
Aizatulin, I'll give your method a go during some downtime--there are a few tricks in there I'd like to have in my brain.
Search results — 100 posts found.
Technical Discussion » Clean transfer of UVs after remesh?
- bentway23
- 100 posts
- Offline
I have to remesh an object to do some mild boolean-ing and less mild fracturing. Is there a suggested way for a clean UV transfer from the original to the remeshed object? I realize the vertices aren't 100% in the same place, but it seems like they're closer than the new UVs would indicate.
In case it was an issue of point/vertex density, I tried this with inappropriately dense meshes, as well as doing a transfer by way of a very dense scatter, and the result is the same.
The weirdness, predictably, happens along the edges of the UV islands, so I assume the culprit and salvation lies within some sort of vertex split workflow.
Surprisingly, the boolean and fracture hold the UVs pretty nicely (post the remesh UV mangling).
Three pics attached--the original, after the remesh, and then after the boolean/fracture.
Thanks for any pointers!
Technical Discussion » Increase null display size?
Thanks! I'll have to give the custom node converter a spin, as well as the labs stickers (although if I ever accidentally hit "more information" on those, Houdini crashes, so I've avoided them). I am a frequent (ab)user of backdrops and sticky notes, but sometimes it still becomes a matter of finding a tiny node in a vast sea of backdrop.
Is it possible to increase the size of a null object in the Network viewer? I usually have one as a control with all my adjustable parameters on it, and even when bright red and a different shape it can get lost in the overall node sea.
Technical Discussion » Connecting adjacent points but of a different group
Addendum--a not-difficult way to do it (and hopefully I'm not gumming up the forum with the obvious)--use the "name" sop to create a name attribute from each point's group. Use connect points to make the prims, then in a primwrangle you just have to compare every prim's two points to see if they belong in the same group, and remove them if they do.
// primpoints() returns an array, so pts needs to be declared int[],
// and the two endpoints are pts[0] and pts[1].
int pts[] = primpoints(0, @primnum);
string name0 = point(0, "name", pts[0]);
string name1 = point(0, "name", pts[1]);
if (name0 == name1) {
    removeprim(0, @primnum, 1);
}
My fear with this is that it's probably pretty inefficient, since it's very heavy-handedly grinding through every primitive.
Technical Discussion » Connecting adjacent points but of a different group
Beautiful, thanks! I was tipped off to the "inters" on an Od/Force thread, but the intra is good to know too, definitely.
I dove into the vellum constraints node to see where the prim creation happens to see if I could just steal that small part to do this same setup but not for dynamics, simply for connecting points of separate groups. It looks like the magic in the glue constraint creation hinges on a "createGlueConstraints" function which appears to be not accessible to layfolk and possesses no context help info. However, I suppose for a quick hack to connect points from different groups, it's easy enough to build them using the vellum glue constraint and then delete all the dynamics attributes. (I'm pretty convinced there's a less ugly way, though, probably just doing a point-by-point group membership or attribute match lookup and blasting prims where both points are of the same group. The search continues!)
Technical Discussion » Connecting adjacent points but of a different group
I've probably overthought this and missed the obvious--I have multiple different groups of points (a changing number, this has to be scalable), and I want to create polylines between the border points--essentially connect adjacent pieces, but only pieces that are not in the current group, instead of only pieces that are.
(The end result is creating prims to use as glue constraints in a grain setup connecting clusters of points. Because there are gaps between the chunks (and those gaps are where I want to form my polylines) using the connect adjacent pieces doesn't work because all adjacent points are found before the necessary distance has been covered. It's for a softbody tearing sim using grains for point deformation to compensate for vellum welds not accommodating solid objects--project file in this post: https://forums.odforce.net/topic/50110-vellum-welding-fractured-solid-or-extruded-objects/)
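To make the goal concrete, here's a rough point-wrangle sketch of the kind of connection I'm after (the "name" piece attribute and the ad-hoc group pattern are assumptions on my part, and as written it would create each polyline twice, once from each end):

```vex
// Point wrangle sketch -- assumes each point carries a string
// attribute "name" identifying its piece (e.g. from a Name SOP).
// Find the nearest point that is NOT in this point's piece,
// within a maximum search distance.
int npt = nearpoint(0, "!@name=" + s@name, @P, chf("maxdist"));

// Draw a polyline to it. Each connection gets made twice (once
// from each endpoint), so a dedupe pass afterwards would be needed.
if (npt >= 0)
    addprim(0, "polyline", @ptnum, npt);
```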
Technical Discussion » Vellum welding fractured solid (or extruded) objects
I'm probably missing something basic, but I haven't been able to find the answer to this--
I want to tear a solid object (either a solid/watertight mesh or simply an extruded object with an output back) using vellum, with an object pushing out from inside and breaking the tear/fracture. I assume there would be a fracture (either RBD material, or voronoi, or boolean), and in the vellum setup, after the cloth and struts/pressure/tets/whatever, append a weld. The problem is that I can't get the weld to recognize the inside fractured edges as the parts that need to be attached. (I've also tried stitching points to a similar lack of avail.) Perhaps there's some sort of grouping I need to do? Any suggestions on how I can get a solid object with welded pieces so something can tear its way out?
Thanks for any help!
Technical Discussion » Question about sop solver substeps and changing results
So this is probably a very basic question . . .
I am toying with the Entagma yarn art tutorial (https://entagma.com/advanced-setups-06-yarn-art-using-metropolis-hastings-sampling/). It's almost all done inside a sop solver. Since it's geared towards an end goal (creating a still image) versus an animated one, the number of substeps on the solver is jacked up considerably, so 128 (or whatever) runs through the point wrangle inside the sop solver are done for each frame. Got it, easy enough.
My question: if I change the number of substeps, the whole result changes (the "yarn" distribution changes). Since all random numbers come from nrandom("mersenne") and are not re-initialized for each step, why would that be? Why would there be a difference between running 20 frames at 64 substeps and 10 frames at 128 substeps (and the differences are dramatic)? I could understand if something dynamically initialized on each step created a difference, such as a rand or noise. Is the difference just in nrandom("mersenne") and when it is invoked?
Thanks for helping me towards my goal of incomplete ignorance!
Technical Discussion » Instance node loses UVs
I have an animated alembic loop that I am copying to points from a POPnet, but I want each copy to be slightly randomly offset time-wise. I would love to be able to do this with my original alembic (I'll be posting that question separately), but the only method I've found is Tim van Helsdingen's (https://vimeo.com/344045106), which explicitly calls the cached frames, just adds an offset to the $F part of the path, and then uses the instance node to do the instancing. So I'm going that route, having recached my .abc as a .bgeo sequence. The problem is that the instance node doesn't retrieve the UVs for the objects, and I haven't been able to find a way to get them to carry through.
How can I get the UVs to be seen in the instanced objects?
Edited by bentway23 - Aug. 15, 2021 21:07:25
Technical Discussion » Pinning vellum to target when simulation moves from source
I have an alembic with a lovely bit of movement happening, lying down on a ground plane. I'd like to make parts of it floppy, which I can do with vellum and a pin to target with an attribute that scales the pin stiffness so it's lower on the floppier parts. No problem there.
The part I'm not able to figure out is this: I actually want to hang this up by one of the floppy bits, but still be able to use the shivering/shuddering from the target animation. The target animation is apparently evaluated as absolute coordinates, rather than as deltas from a rest mesh, so even with the floppy part hung up, the pinned parts are still lying on the "ground"/seeking the actual point locations of the target. Is there a way to get it to evaluate the target so that even if some of the animated bits drift (and move some of the not-quite-100%-pinned areas), the vellum will incorporate the source object's animation, but at its new simulated place?
Thanks for any help!
Technical Discussion » Animating input temperature for pyro source spread node?
Is it possible to animate input point temperature for the pyro source spread node--i.e. have one pocket of points that flares up (@temperature = 1) at frame 100, and another that flares up at frame 200? Or a group that is dynamically created (bounded by animated geometry), so that when the animated object flies through, it creates a new source for the pyro source spread? It seems like the node only accepts the values at frame one, which makes sense for a solver--is there a way, as with vellum, that I can dig in and animate those things?
I'm using this more as a growth solver, not generating any actual pyro on this--I'm just spreading attributes about.
Thanks for any help!
Technical Discussion » OpenCL Exception: Could not open OpenCL program
Hello! I'm trying to use an OpenCL node, and the console pops up with the error in the subject line as soon as I click on it. Googling didn't help--I updated my GPU drivers to no avail. I also added "HOUDINI_OCL_DEVICETYPE=GPU" to my .env file (per another forum), which didn't help. When I open the NVIDIA control panel to turn on OpenCL, it doesn't appear in any of the device manager settings, just OpenGL (and in the program-specific settings, Houdini doesn't appear at all).
I'm running Windows 10 on a computer with decent horsepower and an RTX 2070 SUPER graphics card, so computer beefiness or up-to-date-ness shouldn't be a problem. I'm also running Houdini Indie (the latest build--18.5.499).
Thanks for any help!
Technical Discussion » Trouble with alembic export--flickery geo, color sets inaccessible
Alrighty, I'm trying to export a very straightforward system to alembic for use in Maya. A very simplified version is attached--an emitter emits particles, there's a trail, those are added to create a line, and a polywire makes them into geo.
This should be easy (haha), but I'm getting inconsistent results (user error, no doubt), none of which are the one I want. For the most part, the color sets are inaccessible--Cd shows up in the color set editor, but does nothing when used as Cd in an Arnold user data color node for a material. Also, the geo tends to be "flickery", and when I get close to it in the viewport, the Cd-based colors (visible, at least, in the viewport) jump about, and it looks like there are occasional normals reversing.
If I reimport an .abc into Houdini it seems to work fine. The closest to success I got was to write it out as a bgeo sequence and use Houdini Engine to bring that into Maya and write an .abc from Maya--the geo appeared okay, but the color set didn't carry over (even though I had "write color sets" clicked).
What is wrong with my settings, or with the way my system is built?
Thanks for any help!
Technical Discussion » Alembic export to Maya--color set not read
This might be more of a Maya question, but I have yet to find a Maya forum where one can find an answer within a year . . . . (Also, I've Googled aplenty and this question doesn't appear to have been asked since 2014.)
I'm exporting an alembic of geo for use in Maya. I've made sure the Cd values are promoted to vertices so Maya can read them, but when I import it into Maya, on the first frame (1000.0) the colors are clearly there in the viewport, and in Mesh Display-->Color Set Editor Cd(RGBA) shows. The thing is, the minute I change frames it is gone, not appearing in the viewport, not showing up through Arnold's User Data Color, and not in the color set editor--even if I go back to frame 1000 it is gone. I've tried it both with and without "export vertex colors" checked in the Arnold tab on the shape node in Maya and even if it's just on frame 1000(.0) with the color set showing Arnold's user color node isn't seeing it (or piping it into the render, at least).
If I bring the import back into Houdini it works great. What am I doing wrong?
Technical Discussion » Gentle nudge/push particles away from surface (using SDF?)
Alrighty, I have a big wedge of stupid in my head that is keeping me from doing this. I have particles emitting into a container (a super-simplified version is attached). Instead of the usual bounce/slide off a wall, I would like the particles to slow down and gradually be pushed away from the containing walls. I'm assuming this would be done using an SDF/volume sample/volume gradient and a POP wrangle modifying the velocity based on the sampled values, but I just can't get it to work. I've seen lots of tutorials on using this approach to get particles to stay NEAR a surface, but my brain isn't adapting that to a nudge-away-from-a-surface instead of a nudge-towards. What am I doing wrong?
The POP wrangle code-in-progress (the collider is input 2, the VDB is input 3):
float samp = volumesample(2, 0, @P);
vector grad = volumegradient(2, 0, @P);
@v *= -grad;
SOMETHING is happening, just not what I want. (And ideally I could have a second SDF inside the container pushing out, thus gently keeping the particles within a certain radius of the container wall, not going to the center.)
Thanks for any help!
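For reference, here's the shape of the thing I'm trying to build -- a falloff band near the wall that blends in a push along the gradient. The channel names, and the assumption that the SDF gradient points away from the wall, are guesses that depend on how the SDF is set up:

```vex
// POP wrangle sketch -- assumes the collider SDF is available as
// VEX input 2, volume index 0, with the gradient pointing away
// from the wall.
float d    = volumesample(2, 0, @P);               // signed distance
vector dir = normalize(volumegradient(2, 0, @P));  // away-from-wall
float band = chf("band");       // influence distance, e.g. 0.25
float push = chf("strength");   // push strength, e.g. 5.0

if (d < band)
{
    // 1 at the wall, fading to 0 at the outer edge of the band
    float fall = fit(d, 0.0, band, 1.0, 0.0);
    v@force += dir * fall * push;      // gentle shove off the wall
    @v *= 1.0 - fall * chf("damp");    // slow down near the wall
}
```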
Technical Discussion » Procedurally set bend capture region?
Would there be a way to procedurally set the capture region for a bend deformer, for instance if you're using it in a for loop for the same object but with different sizes/rotations? I guess the main question would be figuring out its angle, as bounding box and centroid expressions could do most of the placing and sizing (except if the object was rotated the bounding box size wouldn't be accurate).
Houdini Engine for Maya » Recaching Houdini (Engine) particles in Maya and getting attributes to carry over
Alrighty, I think I've found it, in case anyone else has this issue: When you create the new nParticles to attach the cache to you must first create the PP attributes. Then when you attach the (pre-existing) nCache it appears to read them in automatically. (I saw this on a separate thread, it just took a little toying about before I could make it happen in my own life.)
Just hoping to be of service, because knowing is half the battle.
Edited by bentway23 - Feb. 2, 2021 11:24:39
Houdini Engine for Maya » Recaching Houdini (Engine) particles in Maya and getting attributes to carry over
Hullo--
This is an edit from earlier--
Using Engine I was able to import a particle cache into Maya just fine. Scale and color came through and were easily applied in Arnold. However, I need to recache the particles as a Maya nParticle cache to hand it off to someone else not using Houdini Engine. I can get it to cache the general position/lifespan, but it doesn't appear that any color/scale(radiusPP)/alpha(/transparency) information is in the re-cache. How can I make sure those attributes make it through to the nParticles cache?
Thanks!
Edited by bentway23 - Feb. 2, 2021 09:11:20