habernir Soothsayer, if you have an AMD CPU then OpenCL on the CPU should work out of the box in Houdini. I think the problem is not in Houdini; maybe the libraries are not installed in Ubuntu and you need to install them.
Any idea what might be missing? I ran clinfo in a terminal and the CPU is not listed as an OpenCL device, only the GPU. I have a Ryzen 3950X, so that should normally work fine.
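For what it's worth, you can run the same check from Python with pyopencl (a minimal sketch, assuming the package installs cleanly on your setup); it reads the same ICD registry clinfo does:

```python
import pyopencl as cl  # third-party: pip install pyopencl

# List every OpenCL platform/device the ICD loader can see.
# If the CPU is missing here too, it's not a Houdini problem.
for platform in cl.get_platforms():
    print(platform.name, platform.version)
    for device in platform.get_devices():
        print("  ", cl.device_type.to_string(device.type), device.name)
```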
I've recently had to reinstall my OS (Ubuntu), courtesy of the NVIDIA drivers. Now I'm finding that OpenCL no longer works in Houdini. It still shows up under Edit > Preferences in Houdini, but inspecting from a shell, the CPU doesn't seem to be listed as an OpenCL device at all!
Some internet sleuthing reveals the shocking news that AMD has abandoned OpenCL support on their CPUs, so people fall back to pocl or other implementations, with mixed results.
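On Linux, every OpenCL implementation registers itself with the ICD loader via a small text file, so listing those is a quick way to see whether any CPU runtime (pocl or otherwise) is actually installed. A small sketch, assuming the standard vendors path:

```python
from pathlib import Path

# Each installed OpenCL implementation drops an .icd file here that points
# at its driver library. No CPU entry -> nothing for clinfo/Houdini to find.
for icd in sorted(Path("/etc/OpenCL/vendors").glob("*.icd")):
    print(icd.name, "->", icd.read_text().strip())
```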
How do folks handle this? Some SOP nodes require OpenCL (Attribute Blur, I think?) and it's a common option on DOP nodes. OpenCL on the GPU can be tricky because of limited GPU memory.
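Once a CPU implementation is installed, you can steer Houdini toward it with the HOUDINI_OCL_DEVICETYPE environment variable (the variable is documented by SideFX; the launcher wrapper below is just my own sketch, and setting it in houdini.env works just as well):

```python
import os
import subprocess

# Launch Houdini with OpenCL forced onto a CPU device.
# HOUDINI_OCL_DEVICETYPE accepts CPU or GPU.
env = dict(os.environ)
env["HOUDINI_OCL_DEVICETYPE"] = "CPU"
subprocess.run(["houdini"], env=env)
```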
Personally I love the documentation; it's better than that of any other software I've used, though maybe that's because I've used it for so long. Sometimes it's a bit inconsistent, but I can live with that. Can somebody point me to some examples that illustrate the problem?
It may not necessarily be the NVIDIA drivers' fault! I've had terrible problems with the latest Linux kernel updates breaking the (530) NVIDIA drivers. I still have OpenCL problems on the CPU, but at least it runs on the GPU now.
I find that sim workflows with checkpoints are very easy to screw up. It's a bit like takes: it sounds great at first, then things go wrong and you don't notice until it's too late. That's just my personal experience; it may work well for others.
On Linux you can also put your machine to sleep and just wake it at a later point, and things will continue where they stopped. Not 100% sure that works with OpenCL on GPUs, though.
I found out that if I change HOUDINI_TEMP_DIR in houdini.env, I can't log into the Houdini Launcher anymore. It will start, but I'm unable to log in or start the license server.
I'd say build your own and run Linux on it. That way you get it at a good price and have full control over what goes in and how/what to upgrade.
I'd put the emphasis on RAM capacity and CPU core count; I don't find that high RAM or CPU clock speeds matter much.
With GPUs it's a mixed bag for me. They are cool but more limited, at least in Houdini (19.5). Again, memory beats GPU speed. In the end I go CPU almost always.
It's useful to think about bottlenecks in your setup. All the GPU power in the world doesn't help you much if the sim silently crashes halfway through because it ran out of memory, or if you find that the cool thing you want to do only works on a CPU. Similarly, what's the point of overclocking by 10% if most of your time is spent reading geometry from disk, or if all it buys you is waiting 16 hours instead of 16 hours and 11 minutes? (Rough numbers on that below.)
RAM, disk space, I/O: those things can kill efficiency very quickly, much more so than the minute gains from extra clock frequency and such.
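To put rough numbers on that, here's a back-of-the-envelope Amdahl-style estimate (the 30% CPU-bound share is just an assumed figure for illustration):

```python
# A 10% faster CPU only speeds up the CPU-bound slice of the pipeline.
cpu_fraction = 0.30                  # assumed share of wall time doing compute
other_fraction = 1.0 - cpu_fraction  # I/O, disk reads, memory stalls, ...
overclock = 1.10                     # 10% clock bump

new_runtime = other_fraction + cpu_fraction / overclock
print(f"new runtime: {new_runtime:.1%} of the old one")  # ~97.3%
```

On a 16-hour job that saves you well under half an hour, which is the whole point.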
Thanks all. So, my mistake was assuming it would take USD files, when it only works with bgeo (not even Alembic). It would be useful if SideFX mentioned this in the docs or threw a warning/error. I can unlock the node and fix it myself, but since everything in Solaris revolves around USD files, it would be nice if this worked with USD out of the box.
I'm trying to understand the Component Geometry LOP, and I'm having trouble figuring out what happens when its source is set to file mode. If I read USD files, they don't show up in the viewport or the scene graph. If I inspect the code in the layer it generates, I see no indication of the file being loaded or referenced.
So, what should I expect to see when loading USD files? I was imagining the loaded model appearing in the scene, but perhaps my understanding is lacking.
Old thread, but I have a similar problem with Arnold and material variants. Karma accepts the variant selections and everything is lovely and beautiful, but Arnold complains about "time varying variant selections". A Cache LOP removes the error, but then Arnold ignores the variant settings.
matsnyman I hope we can get a camera composition grid overlay for rule of thirds, Fibonacci spirals and diagonal lines. C4D has this, and it makes blocking out scenes so much easier.
You have Video safe areas and Field guides in the display options. Maybe there's already an easy hack to modify those?
I think XPU sampling in 19.5 is just a uniform sample count, identical for every pixel. Nothing clever about it; it just runs whatever sample count you specify over everything. I expect that to change in future versions.
- I don't trust the gradient calculation; I never have. Something's not right about it, but I don't know what. You can easily get weird results.
- That aside, I don't trust the vector calculations (advect) or the viewport either, so I did a little test where I calculate the dot product (a standalone version is sketched at the end of this post). The values hover neatly around 0, as they should. The vectors, however, don't always look right.
- Maybe face vs. center sampling interferes somehow, but sometimes I can't get a gradient out of it at all.
So, I don't know, but I have a feeling Houdini isn't telling us the truth.
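If anyone wants to reproduce the dot product test outside Houdini, here's a standalone numpy sketch of the same idea, assuming the test was "the gradient of a scalar field should be perpendicular to anything tangent to its isosurfaces" (the radial field and swirl here are my own toy stand-ins):

```python
import numpy as np

# Radial scalar field: its isosurfaces are spheres, so the gradient
# should be exactly perpendicular to a swirl around the z axis.
n = 64
ax = np.linspace(-1.0, 1.0, n)
x, y, z = np.meshgrid(ax, ax, ax, indexing="ij")

density = x**2 + y**2 + z**2
gx, gy, gz = np.gradient(density, ax, ax, ax)  # central differences

tx, ty, tz = -y, x, np.zeros_like(z)           # tangential swirl field
dot = gx * tx + gy * ty + gz * tz
print(abs(dot).max())                          # hovers around 0, as it should
```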
GPU render engines are relatively new, so they have to fight their way into big studios. GPUs are very expensive, draw a lot of power, are memory constrained, and are much less versatile than CPUs (farms also do a lot more than just rendering). In a big production I find it doesn't matter all that much: as long as renders finish overnight and don't crash or waste time on weird problems, it's fine. At home or in smaller studios it's a different story, and it's easier to get a fast GPU and do everything on it. Perhaps because of all this, code that runs on the GPU gets less development priority.