That's fukn brilliant. No clue why SESI hasn't implemented this in the network editor - I get that for modelling it makes sense having this in the viewport, but as I work in SOPs, this makes way more sense for a lot of stuff than having it in the viewport.
Well, I had no clue about this thread, the OP obviously didn't have SQUIRRELS enough to take it up with me personally, but my side of this is really simple. If I record a quicktip and link to it in the Facebook group and you come in and post “This is simpler to do with my plugin” - that makes you an immense UNICORN. Now, I have no issues with getting suggestions on how to do my stuff better - I even rerecorded that specific quicktip just because I got constructive feedback on it - but you promoting your stuff on my back, in the way you did it, no, that's amazingly FUNTIME rude, and if you don't understand that, well, as I said, you're just an APRICOT.
As for whining on my Patreon - dude, you are developing MOPs with Moritz of Entagma who posts a shitton of Patreon only stuff - I don't, I post stuff with 3-7 days delay, then I post everything openly - so if you have issues about Patreon, you should take that SHOE up with Moritz way before you whine about me - and just generally stay the FUNTIME away from me and what I do, I don't want anything to do with you, whatsoever.
Edited by forum admin. Thread locked. Thanks for playing.
Someone posted a question about this in the Houdini Artists group on Facebook and I found this thread looking for a better solution - but here's one way…
After hours of troubleshooting I did a reinstall - it was a new install anyway, and it did the trick.
I must have done something along the way, but not being able to remember what it might have been (did the install 2 months ago), a reinstall was just the fastest way to solve it.
Tnx for the feedback, guys.
Edit: Still pretty sure this was related to the Wacom settings, I remember as much as that was the only thing I had messed with in the system, else it was a completely fresh install.
Yeah. I suspect it's the kernel, though, so messing with it now. I was hoping someone would recognize the specific issue, but no luck with that so far…
If you want render power and aren't bothered by bad single-thread performance, you can build monster machines for nothing buying server parts off eBay. $200 for 128Gb of ECC registered DIMMs, $300-400 for two 10-12 core Xeon V2's, and so on… Just crazy.
And staff, does that mean SESI staff or forum staff? I'd love to know if Ryzen optimization is in the cards for Houdini..?
Oh, it's not so much that it's hidden as the fact AMD has done a lot of “new” trickery, designing Ryzen. They got to a point where they just couldn't compete with Intel continuing on their current path, so they combined rethinking a lot of stuff with fighting dirty - kinda. I mean, there's a reason for stuff like the 64 Gb RAM limit, they did stuff Intel never would - huge risk, but man did it pay off. What an immense success they've had - and that's great news for end users as that'll force Intel to innovate and lower prices as well.
Hehe, no 10% loads updating the viewport, not on a GTX1070… Would love to hear one of the devs fill in the blanks on how the OpenCL requests are handled - though I'll do some research and see if it's possibly Ryzen or the AM4 chipset that's using some dark magic trickery…
And yeah, but Linux is just better at handling all this stuff - memory handling, the hardware abstraction, all the APIs - everything runs 5-15% faster. I really should take the time to go back to running Linux; the only reason I haven't is that I'm lazy, been pushing it ahead of me. And it's friggin tricky to choose a distro, hehe…
(not gonna run Ubuntu, though, I was involved with it as TL for the Swedish LoCo back in 2011, so not touching that $#!{, but mint perhaps, or Centos, less package hassle with some software, I've heard)
Created a torus at two units up, used the FLIP from object shelf tool, changed it to 0.01 resolution, added a ground plane. 1.15 minutes on the Ryzen 1800X @ 4.1 GHz, 1.15 minutes on the GTX1070, running the first 20 frames.
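For anyone who'd rather script that source object than click through it, here's a rough sketch using Houdini's Python API - the node and parameter names are the standard Torus SOP ones, but `build_flip_source` is just a name I picked, and the FLIP shelf-tool step itself is easiest done by hand afterwards:

```python
def build_flip_source():
    # hou is only available inside a Houdini session
    import hou
    geo = hou.node('/obj').createNode('geo', 'flip_source')
    torus = geo.createNode('torus')
    torus.parmTuple('t').set((0, 2, 0))  # two units up, as in the test
    return torus
```

Run that in Houdini's Python shell, then apply the FLIP shelf tool to the torus, drop the resolution to 0.01 and add the ground plane as described above.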
Did a similar setup using grains, exactly the same time for that as well, exact same frame number after 30 seconds.
And both setups were approximately 25% slower with OpenCL off - which kinda surprised me; that's up a bit from Houdini 15-15.5, where OpenCL usually gained me 10-15% max. Kinda weird, though, that it makes little to no difference whether OpenCL runs on the CPU or the GPU.
I'm also noticing really weird loads. Running the grain setup with it set to GPU, I get 50% loads on my CPU and a continuous load of 12-15% on my primary GPU. Setting it to CPU I get 70% loads on my CPU cores, but the GPU loads pulses from 1 to 12 to 1 to 16 to 1, etc, at every frame… So there is a difference how Houdini calls on the hardware, for sure, but no clue why my GPU gets loads when Houdini is set to use the CPU's - perhaps Ryzen pipes the OpenCL requests to the GPU's itself? AMD is doing sneaky stuff with Ryzen, so perhaps it's something like that…
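One way to take Houdini's own device picking out of the equation when chasing loads like this: Houdini reads a few environment variables to choose its OpenCL device - set them in the shell you launch Houdini from (check the environment variable list in your build's docs, the accepted values may differ between versions):

```shell
# Force Houdini's OpenCL work onto the CPU (or GPU) - set before
# launching Houdini from this same shell so it inherits the values
export HOUDINI_OCL_DEVICETYPE=CPU    # CPU or GPU
export HOUDINI_OCL_DEVICENUMBER=0   # pin a device index when there are several
```

That at least rules out the application silently falling back to a different device than the one picked in the preferences.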
I don't think AMD is a disaster in Houdini, but it's a bad choice overall, looking at pro graphics and video. And Nvidia also gives you CUDA - and I don't care about the discussion around CUDA/OpenCL, CUDA is used NOW and thus nice to have support for in the hardware.
As for the 64Gb limit, it's gonna be interesting to see if I hit that ceiling. I wouldn't have dropped my dual Xeon's and 128Gb ECC RAM if I were still comping for a living, that's for sure, but so far I haven't had any RAM issues with Houdini except when I've done stupid things like set the remesh node to 0.000001 or alike…
I run rainmeter with listings of processes by memory and RAM, core loads, etc, as it's so easy to find where the bottlenecks are - way faster than messing with the performance metering in Houdini… It's usually some single threaded node that drags you down, my recurring example, the remesh node. Remeshing per timestep isn't the smartest move.
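If you do want to stay inside Houdini rather than watch an external monitor, a minimal sketch of catching a slow node with the performance monitor's Python hooks - the node path argument is whatever node you suspect, `profile_node_cook` is my own name, and double-check `hou.perfMon` against your build's docs:

```python
def profile_node_cook(node_path):
    # hou is only available inside a Houdini session
    import hou
    profile = hou.perfMon.startProfile("bottleneck check")
    hou.node(node_path).cook(force=True)  # force a recook under the profiler
    profile.stop()
    return profile
```

Per-node cook times in the resulting profile make a single-threaded offender like a per-timestep remesh stand out immediately.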
Just set up a quick FEM scene and it runs really well. FEM is heavy to sim at higher resolutions, but you rarely need to go very high res as it uses smooth deformations, so you sim lower and deform a higher res object…
On a side note, though, I'd love to hear if the SESI devs are optimizing for Ryzen - there's supposedly a lot of performance to be gained from doing that - but I'm not starting a specific thread for it either way; perhaps one of them will run into this post.
Ryzen is amazing, running an 1800x and it is rock stable at 4.1 GHz… But AMD GPUs are never a good choice for computer graphics work - and this is not about AMD vs Nvidia, AMD GPUs are great value for gaming, etc - but they cause a lot of issues in a lot of different applications, just check across application support forums, this is just as it is.
This script works perfectly - great thanks to Galagast & MrScienceOfficer at OD Force forums!
panes = hou.ui.currentPaneTabs()
for p in panes:
    if p.type().name() == 'SceneViewer':
        guide = hou.viewportGuide.NodeGuides
        val = p.curViewport().settings().guideEnabled(guide)
        p.curViewport().settings().enableGuide(guide, not val)
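To actually get this on a key, one approach (my own assumption, not from the thread): wrap the snippet in a function, paste it into a new shelf tool's script, and assign a hotkey to that tool from its right-click Edit Tool dialog - `toggle_node_guides` is just a name I picked:

```python
def toggle_node_guides():
    # hou is only available inside a Houdini session
    import hou
    for pane in hou.ui.currentPaneTabs():
        if pane.type().name() == 'SceneViewer':
            settings = pane.curViewport().settings()
            guide = hou.viewportGuide.NodeGuides
            # flip the current visibility state of node guides in this viewport
            settings.enableGuide(guide, not settings.guideEnabled(guide))
```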
Hehe, yeah, I'm well aware how the display options work, but as the title clearly states I'm looking for a shortcut key or a way to create one - or if SESI wants to put a tickbox in the nodes that use them, or add a button on the viewport side panel for it, that'll be fine too… I'm just friggin tired of the standard workaround of placing nulls just to be able to see the results of a Copy SOP or alike because the guide thingie is in the way. Not offering an easy way to show/hide guides is an oversight - most apps offer that without needing to dig into some settings pane.
And sure, it's not a big deal, which is why I've used Houdini for years without whining about it, but it's just started to annoy the sh!t out of me lately as I've been using workflows with a lot of nodes using guides, and it finally got to me for real, so I'm more than mildly annoyed about this right now…
Is there a shortcut for hiding/unhiding the node guides in the viewport?
If not, how do I create one?
And also, if not, why not? Seems like something that should exist. Like ice cream. Just imagine if that didn't exist - so yeah, we need that. And I mean the shortcut, I already have ice cream.
A GTX780 should be more than enough to drive Houdini. As for the GTX1070s being expensive, well, I'm running Redshift, so I make that back in saved time - I could just as well have spent that cash on Gridmarkets; it's really a question of how you approach optimizing your workflow.
Interesting, though, that I had issues <530 and you have issues >530, that does indicate it could actually be connected to the Nvidia hardware and/or driver API something something…