Purely personal POV, but I've been super impressed with my Threadrippers. My main workstation, which I've had for a few years, has always felt insanely fast, and that's a relatively old Threadripper now. The other day I bought another cheap old one (like, really cheap - mobo + TR 1920, 12 cores / 24 threads, for £300 total) to use as a render node, and it flies through cooks/sims. Their huge number of PCIe lanes makes multi-GPU setups quick and simple to put together, and I've found a lot of my work lends itself to multithreading (it forced me to get my head round Compile Blocks...). You get a good bit of bang for your buck with them. Not sure I'll ever want to go back to Intel; Apple Silicon and Threadrippers seem the way forward for my needs.
YMMV of course; depends on the sort of work you do (within both Houdini and the other apps you use) whether tons of cores will work better for you than single-core speed. In Houdini specifically I find multicore speed to be the most important factor as it defines how long I have to wait for cooks/sims; I sorta feel like that part of work has the most impact on my overall productivity - how long I have to wait before I can carry on clicking - but it'd be interesting to hear from others on this.
And at the risk of sounding like generic advice: even for a new machine, chances are you'll get significantly more value for your money if you buy the previous generation rather than the latest and greatest.
Found 146 posts.
Technical Discussion » Optimal CPU in a new $ 2-4k build
- howiem
- 146 posts
- Offline
Technical Discussion » Convert instances
There may be a better way, but you could make a new geo object, merge the base mesh in, merge the instance objects in - they'll come in as single points - and use a Copy to Points to copy the base mesh to the "instance" points.
Starting with this:
Create a fresh geo object, merge in the mesh and instances (in the Obj Merge node you can use a wildcard to grab all your instances in one go - in your case it'd be /obj/Light01_instance*), and copy the one onto t'other:
When you're merging in the instance objects, make sure Transform is set to Into This Object, so the points' locations come in correctly:
Edited by howiem - 2024年3月22日 07:30:33
Technical Discussion » Simplest way to sweep points into polys?
Thanks Konstantin: that approach still leaves me with lots of duplicate paths at the end, though (as it creates multiple connections to each point, each point turns into multiple vertices) - the very thing I'm trying to avoid. Your example starts with 12 points, but by the time they've been swept round the backbone there are 50 paths.
But it made me think of another way - if the points have to be connected up before Sweep can use them, just using Add > Polygons will connect them into a single polyline without making duplicate vertices. Works perfectly - after Sweep -> Ends I end up with the same number of paths as the points I started with.
Thanks for the nudge ^_^
Edited by howiem - 2024年3月22日 07:16:10
Technical Discussion » Scale Geometry Inwards
CYTE
I guess a super solid solution isn't that trivial.
That's it in a nutshell. Concave n-gons like the one in your pic can trip up any attempt to inset them, depending on how they're triangulated. The PolyExtrude node does a spectacular job of handling insetting, but it sure ain't trivial.
Technical Discussion » H20 Viewport refresh bug..
I'm finding this too, in H20; I'd put it down to having two viewport panes open so I can have the camera view in one and an ad-hoc perspective "working" view in the other.
Work in one node for a bit, jump out, and find half my scene objects stop animating correctly in the viewport* unless I hop into their node and touch something.
I'm spending a lot of time closing the Scene View pane and opening a new one. Even in older projects from a few years back, this viewport issue crops up more often now.
Linux (Ubuntu... uhh... 22, I think), latest Nvidia drivers.
Didn't know about the Labs 'reset viewport' thing tho, will have a look - thanks Jonathan
* one or other of the viewports - one will carry on behaving and the other one will have half the obj's frozen
Edited by howiem - 2024年3月22日 06:23:41
Technical Discussion » Simplest way to sweep points into polys?
Having a bit of a brainfart. Been a while since I've been in Houdini, and I'm trying to do something seemingly simple but can only think of relatively complicated ways to pull it off: I want to create a bunch of polylines by sweeping a bunch of points down another path / backbone.
Sweep (2.0) doesn't seem to want to do it - it'll only sweep polys / surfaces, not points, it seems.
This is a simple example of the kinda thing I'm trying to do - bunch of points (in the centre) swept round a circle.
To get Sweep to do this I had to use Tri2D to turn the points into a mesh, then I could sweep it, then unroll the faces, but it's untidy and you end up with a bunch of duplicate stuff to clean up later. And it feels like I'm missing a simpler way.
I could animate my collection of points round the path and then Trail it, but that seems overly complicated, especially if I want to have the points themselves animated too.
Could probably do it in VEX, which seems hugely attractive but I need to focus on getting the result I need, rather than having fun mucking about with code. Houdini has such a wonderful way of providing lots of rabbitholes to explore and play in... I honestly don't know how you guys get real work done when there are so many exciting distractions
Is there a simple and stoopidly obvious way that I'm missing?
Edited by howiem - 2024年3月22日 06:13:53
Technical Discussion » HQ/hbatch - Where should env vars be set? (not houdini.env!)
You should only need to init the vars that are present in your houdini.env file. That licensing issue is new to me - I'm guessing you're using the new Maxon installer. You do need to make sure that you're logged in as the same user when you install Redshift as the one that you run hclient from. What happens if you try running a job manually from that machine (ie with hbatch)?
(just to add - you don't necessarily need to do all this variable set up with scripting; if it's just you using this network you may as well set up an HQueue ROP node with whatever vars you like, then save it as the default. Then whenever you drop an HQ ROP in it'll already have things the way you like 'em)
(another thought: I haven't used this new Maxon installer, but I seem to remember it's not node-locked - so you may have to login manually on the client machine before you can expect it to service Redshift jobs)
Edited by howiem - 2021年11月16日 12:16:23
Technical Discussion » The smooth(...) vex function - what's the maths behind it?
I need to be able to calculate where the midpoint of a smooth function will land for a given power. Or rather, the reverse - what power/rolloff to put in if I want the midpoint to land in a certain place.
I'm mucking about with CHOPs, generating my own channel data with Channel Wrangles and VEX. Lots of fun. I'm using the smooth() function to ease between values. It looks like smooth(start, end, amount, xxx) works a bit like the easep(xxx) function in the Animation Editor, with the power/rolloff parameter moving the transition forwards or backwards in time - values over 1 seem to be... uhh... reciprocalated and inverted. Or something.
Ultimately I'd just like to work out how to "invert" the function, so if I want the midpoint of the smooth transition to land, say, exactly a quarter of the way through the transition, or exactly 2/3 of the way through, what power/rolloff I would need to use.
Any thoughts/pointers appreciated. Thanks
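If the rolloff acts as a simple exponent on the underlying smoothstep curve (an assumption - I don't know exactly how VEX maps its rolloff parameter internally), the inversion actually has a closed form. A sketch in Python, where smoothstep(), eased() and power_for_midpoint() are made-up helper names for the model, not real VEX functions:

```python
import math

def smoothstep(t):
    """Classic cubic ease-in/ease-out: 3t^2 - 2t^3 on [0, 1]."""
    t = max(0.0, min(1.0, t))
    return t * t * (3.0 - 2.0 * t)

def eased(t, power):
    """Stand-in model for smooth() with a rolloff: smoothstep raised
    to a power. The real VEX mapping may well differ."""
    return smoothstep(t) ** power

def power_for_midpoint(t0):
    """Power that makes the eased curve cross 0.5 exactly at t = t0.
    Solve smoothstep(t0)^p = 0.5  =>  p = ln(0.5) / ln(smoothstep(t0))."""
    return math.log(0.5) / math.log(smoothstep(t0))

# midpoint landing a quarter of the way through the transition:
p = power_for_midpoint(0.25)
print(round(eased(0.25, p), 6))  # 0.5
```

Sanity check: power_for_midpoint(0.5) gives exactly 1, since plain smoothstep already crosses 0.5 at its halfway point. If the real rolloff mapping turns out not to be a plain exponent, the same idea still works numerically - bisect on the power until the curve crosses 0.5 where you want it.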
Technical Discussion » How do I write to a specific voxel in a volume?
Logged an RFE and got a very swift and comprehensive response. I used the "postman checking every address in the country in order to deliver one letter" analogy:
Developer response:
"It actually may be faster to send the postman to every address than it is to try to send to specific addresses in serial. VEX and multithreading get a bit weird about what is 'fast'.
Note that if VEX had a setvoxel() it would actually be much slower than you might envision. Because we don't know the order, we'd have to record all voxel writes into a large list, sort that list, then play it back a second time.
You can actually get a first-order approximation of this by just creating a point for each 'setvoxel' and using volumerasterizeparticles to copy all those points into voxels. As crazy as it sounds, this might not even be that much slower than if we added it directly to VEX, as we'd pretty much have to do exactly this internally anyway."
Interesting, huh - makes me curious about how the data is stored internally, and what layers of obfuscation sit between it and us.
So the fastest way to play with things at the voxel level will probably come down to the specifics of what you want to do. If it's a single voxel you want to access, then a volume wrangle will probably be fast enough. If you're trying to write to lots of single voxels, then creating particles and rasterising them may be fastest.
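The record-sort-playback scheme the developer describes can be sketched outside Houdini. This is just an illustrative pure-Python stand-in - playback_voxel_writes() is a made-up name, and a dict stands in for a real sparse volume:

```python
from collections import defaultdict

def playback_voxel_writes(writes, combine=max):
    """Sketch of the 'record, sort, play back' scheme: writes is a list of
    ((i, j, k), value) pairs produced in any (possibly threaded) order.
    We sort them into a deterministic voxel order, then apply them serially,
    resolving collisions with a combine function (max here, loosely like a
    density rasterise)."""
    ordered = sorted(writes, key=lambda w: w[0])  # deterministic order
    volume = defaultdict(float)                   # sparse volume stand-in
    for idx, value in ordered:
        volume[idx] = combine(volume[idx], value)
    return dict(volume)

vol = playback_voxel_writes([
    ((0, 0, 0), 1.0),
    ((2, 1, 0), 0.5),
    ((0, 0, 0), 0.25),  # second write to the same voxel -> combined
])
```

The sort is what makes the result order-independent, which is presumably why a hypothetical setvoxel() would carry the extra cost the developer mentions.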
Technical Discussion » How do I write to a specific voxel in a volume?
Sorry cman - never got round to it. Naughty. But I have now. I'll report back if there's any news... just typing out the RFE got my imagination going with things I wanna play with if/when it gets implemented
Technical Discussion » HQueue distribute simulation is too slow
Well done on getting HQueue set up!
Re your sim: it'll be a number of factors.
Assuming your sim is multithreaded, it's understandable that 3 × 8-core machines will take longer than one 36-core machine even if all else were equal. On top of that, though, there's the network overhead as all the machines try to read from and write to the server, plus the overhead of spinning up the Houdini instances.
Network overhead: if you can afford it, it's worth running a 10GbE network (it cost me around £300 for the switch, then £50 each for second-hand network cards). It's made a huge difference to renders; I can comp footage directly from the server, with no need to copy it locally. Sims and geo processing are significantly faster too. *Everything* is faster.
Houdini spin-up time can be significant as well. I started with plain old HDDs in the render nodes, and when I eventually replaced them with cheap, small SSDs (after all, you don't need much space), Houdini started firing up incredibly fast. With rendering (my main farm task), picking the right batch size is a matter of balance: small batches mean lots of time spent loading and unloading Houdini, while big batches can leave you waiting for the last one to render while the other machines sit around doing nothing.
I'm digressing.
That 20% CPU usage thing - it could be that it's averaging out the usage, so it may be counting the time the machine's waiting for data to arrive or files to be written. Does the sim max out the cores on your main machine?
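The batch-size balance can be put into rough numbers. This is a crude back-of-envelope model, not a measurement - wall_time() is a made-up helper and every figure in it is invented for illustration:

```python
import math

def wall_time(total_frames, batch_size, machines, sec_per_frame, startup_sec):
    """Crude wall-clock estimate: batches are handed out in waves across
    identical machines, and each batch pays a fixed Houdini startup cost
    plus its render time. Ignores network overhead entirely."""
    batches = math.ceil(total_frames / batch_size)
    per_batch = startup_sec + batch_size * sec_per_frame
    rounds = math.ceil(batches / machines)  # waves of parallel batches
    return rounds * per_batch

# 1000 frames, 4 nodes, 30 s/frame, 60 s to spin up Houdini:
for bs in (5, 25, 100):
    print(bs, wall_time(1000, bs, 4, 30, 60))
```

With these made-up numbers, the middle batch size wins: tiny batches drown in startup cost, huge batches leave three machines idling while the last batch finishes.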
Technical Discussion » What's the easiest way to un-HDA / disassociate a node?
I've an unlocked HDA that I'm diving into and making changes to specifically for a hero object. As long as it stays unlocked none of these changes will affect the actual HDA asset, which is good.
It'd be even safer, though, if I could disassociate this unlocked node from the HDA asset on disk - get rid of that little red unlocked padlock - and get rid of the "Save Node Type" and "Match Current Definition" context menu items, both of which would spoil my day somewhat.
I don't want to create a duplicate HDA, as this is a single-use situation (and zomg I need to stop feeding my HDA library as it's got kinda obese). And there's nothing to stop me copying everything into a new geo node, but then there's all the parameters, of which there are... many.
Is there a quick way?
Edited by howiem - 2021年3月30日 06:35:45
Technical Discussion » Extract rotations for an alembic as a channel
Hopefully someone more experienced than me will chip in, but I still can't quite put my finger on what it is you're trying to do, and despite that, this smells like the wrong approach (!)
You have an object moving and spinning, and it's sourcing smoke. There are no collisions set up, and no velocities are being read into the sim, so the smoke's only motion is upward (from the temperature differential) plus dissipation.
What are you hoping to see?
As an aside: if you want to uniformly scale down the speed of an animated transformation - ie reduce rotational and translational speeds by the same factor - it's the same as scaling time, which you can do by adjusting the frame / frame-rate parms on the Alembic SOP. No need to extract components.
If you absolutely have to "scale down" how much an object rotates over a frame, you could slerp between the current and next frame's orients, but I'm not sure what that'd achieve here that couldn't be done more simply.
It'd make the scene easier to understand and debug if you only import the alembic once (make a dedicated "get_the_alembic" geo, say, with an Alembic SOP wired straight to a null called "OUT") then object merge that null into the various places you need it.
Technical Discussion » How to expand/eval string in VEX?
Use chs() rather than chsraw() if you want Houdini to expand aliases. Is there some particular reason you're favouring chsraw()? You can still perform string manipulation shenanigans - this works, for example:
s@myname = chs("my_parm_with_job_alias_in_it") + "/test_folder/test_file.ext";
Edited by howiem - 2021年3月29日 01:57:43
Technical Discussion » Filling water tank
The scene is very strangely set up (there's some unnecessary ocean stuff mixed in there making it a little hard to debug). It'd be worth watching a few tutorials to get the basics of FLIP sorted out (the applied houdini ones are well worth the money, but there are free ones out there too).
Technical Discussion » Extract rotations for an alembic as a channel
I'm not clear on exactly what you're trying to do (and I've only limited experience with Alembics). When you say you're trying to reduce the inherited velocity, what exactly do you mean?
Scaling rots and translates... it sounds like you're not trying to slow the animation down (ie run it longer) but rather just scale down motion blur, or scale down the velocity that's getting passed into a sim. In which case you may as well just deal with the v attribute directly. Unpack the alembic, add point velocities with a Trail SOP (which has a velocity scale parm right there too, saving you a wrangle).
Or have I misinterpreted what you're after?
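What a Trail SOP's velocity computation does can be sketched roughly as central differences over per-frame positions, with the velocity scale folded in. An illustrative stand-in only - velocities() is a made-up helper, not Houdini's actual implementation:

```python
def velocities(positions, fps=24.0, scale=1.0):
    """Rough central-difference point velocities, loosely what Trail's
    'Calculate Velocity' produces, with a velocity-scale factor applied.
    positions: list of (x, y, z) tuples, one per frame, for one point."""
    dt = 1.0 / fps
    v = []
    for f in range(len(positions)):
        lo = max(f - 1, 0)                    # clamp at the first frame
        hi = min(f + 1, len(positions) - 1)   # clamp at the last frame
        span = (hi - lo) * dt
        v.append(tuple(scale * (b - a) / span
                       for a, b in zip(positions[lo], positions[hi])))
    return v

# a point moving 1 unit in x per frame at 24 fps, with the scale halved:
vs = velocities([(0, 0, 0), (1, 0, 0), (2, 0, 0)], fps=24.0, scale=0.5)
```

The point being: because v is just an attribute, scaling it (here via the scale argument; on the real Trail SOP via its velocity scale parm) only changes what downstream consumers like a sim or motion blur see - it doesn't touch the animation itself.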
Technical Discussion » Batch rendering thousands of videos
Intriguing problem! It's hard to give anything but very general advice without knowing more specifics.
30 secs a frame is fairly heavy; if practical, Redshift (or similar) may help knock that down significantly. You mention GPU but not which renderer you're using - if it's Mantra, the GPU isn't helping.
Then it's about batching stuff. 4000 separate jobs is a lot of winding up and winding down of Houdini instances, so if there's a way to consolidate batches of videos together that'd help. Can you set up your scene so, say, 50 films are treated together? Every 75 frames, reset the camera position, set up the next objects. Best to allow a frame or two of "reset" time between consecutive shots to allow motion blurs to reset as well. But that way you have 80 renders instead of 4000. Chunking them into output videos is then just an ffmpeg / shell script task.
Just thoughts, but do colour in some more detail if you want better-quality suggestions.
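The final "chunk one long rendered sequence into separate videos with ffmpeg" step can be scripted. A sketch - chunk_commands() is a made-up helper, and the paths, frame counts, and encoder settings are placeholders, not values from the original post:

```python
def chunk_commands(total_videos, frames_per_video, fps=25,
                   src="render/frame_%06d.exr",
                   out="out/video_{:04d}.mp4"):
    """Emit one ffmpeg command per video, each starting at its own offset
    into a single long rendered image sequence (placeholder paths/settings).
    -start_number picks the first frame of the chunk; -frames:v caps its
    length."""
    cmds = []
    for i in range(total_videos):
        start = i * frames_per_video
        cmds.append(
            f"ffmpeg -start_number {start} -framerate {fps} -i {src} "
            f"-frames:v {frames_per_video} {out.format(i)}"
        )
    return cmds

# e.g. 3 videos of 75 frames each, carved out of one 225-frame render:
cmds = chunk_commands(3, 75)
```

In practice you'd also want to skip over any "reset" padding frames between shots (adjust the start offsets accordingly), and run the commands with subprocess or dump them into a shell script.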
Technical Discussion » Redshift not loading what should be a basic scene
The scene you posted seems to hang at the extraction (conversion to RS/GPU) point, because there's a mix of packed prims and normal geo within a single object.
For RS to instance the packed prims properly they need to be in their own object, then you can go to Redshift OBJ > Settings > Instancing and enable "Instance SOP Level Packed Primitives".
In your scene I've split off the packed prims and Object Merged them into a separate geo; it renders nice and quickly
Edited by howiem - 2021年3月26日 09:05:25
Technical Discussion » Tell Houdini/HQ to ignore unknown nodes (redshift) on load?
Trying to set up an HQueue client node purely to cook stuff.
But because my scenes contain Redshift nodes, Houdini errors out on load, and the job fails, with lots of “Bad node type found: redshift_vopnet” and “Error: Bad parent found (parent is not a network): mat/redshift_material”.
I'm not trying to render, and the Geometry ROP I'm trying to cook isn't dependent on anything Redshifty.
Any thoughts? There must be other people who use Redshift in their scenes but need to cook stuff without tying up a Redshift License, surely?
Technical Discussion » For reference only: License Manager Timeout error solution
Putting this here for reference. Could be that my problems were an edge case (like the rest of my existence, ha) but just in case it helps someone…
Unlike previous upgrades, the move to 18.5 gave me real problems with licensing on my linux machines.
Sesictrl kept timing out, saying:
Licence Manager - machinename:1715 Timeout was reached
ERROR: You do not have read access to server machinename
After many hours, I sussed it: Sesictrl has started (?) trying to contact the server using the machine's name, rather than internally looping back, or using the machine's IP address. So the solution was simple in the end:
Add the machine's own name to /etc/hosts. There's probably already a line resolving localhost to 127.0.0.1, so just add the machine's name to the end of that line:
127.0.0.1 localhost machinename
All is well and my various machines are working again. But that cost a fair bit of time to sort out…
tags: sesictrl license problem timing out