Yet another GPU question.

Member · 1 post · Joined Feb. 2020
Hi everyone!

I'm thinking of jumping in and trying to learn some Houdini, but I need to upgrade my workstation bit by bit, starting with my GPU. I'm looking at getting an RTX 2070 SUPER or a GTX 1080 Ti (second hand but never used, for the same price as the 2070). So my question is: which one would you recommend if I want to play around with sims?

Thankful for any advice I can get on this.
Member · 7741 posts · Joined Sept. 2011
I don't think it matters that much which GPU you get for doing sims. CPU and RAM matter more.
Member · 146 posts · Joined Jan. 2018
And I can tell you that even a poor man's 1070 like mine is considerably quicker on OpenCL than my 16-core CPU… The issue is usually the card's memory…
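For what it's worth, checking what a card actually has (and has left) is easy. A minimal sketch, assuming an NVIDIA card, the plain CUDA runtime API, and device 0:

```cpp
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    cudaDeviceProp prop;
    if (cudaGetDeviceProperties(&prop, 0) != cudaSuccess) {
        fprintf(stderr, "no CUDA device found\n");
        return 1;
    }
    // totalGlobalMem is in bytes; an 8 GB 2070 SUPER and an 11 GB 1080 Ti
    // give noticeably different answers here.
    printf("%s: %.1f GiB device memory\n",
           prop.name, prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
    // cudaMemGetInfo reports how much of that is currently free.
    size_t free_b = 0, total_b = 0;
    cudaMemGetInfo(&free_b, &total_b);
    printf("free right now: %.1f GiB\n", free_b / (1024.0 * 1024.0 * 1024.0));
    return 0;
}
```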
Member · 806 posts · Joined Oct. 2016
Although it's probably useless to answer yet another one-post-user question (no hard feelings, it's just that my experience with those has been exclusively bad) … my purely analogue two cents would be:

It depends.

When I write CUDA kernels, I often run out of memory quicker than I can say “stop, wait, hold on a second” (which may be because that phrase is way too long anyway) on my RTX 2080. Therefore, I would always suggest getting as much GPU memory as you can, for RAM can only be replaced by more RAM. (And yes, I admit that sometimes those memory overflows are thanks to me having forgotten to actually FREE the memory after using it :-P )
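A minimal sketch of that “check the error and actually FREE it” discipline, using the plain CUDA runtime API; the 4 GiB buffer size is a made-up illustration value:

```cpp
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    const size_t bytes = size_t(4) << 30;   // 4 GiB, illustration value
    float* buf = nullptr;
    cudaError_t err = cudaMalloc((void**)&buf, bytes);
    if (err != cudaSuccess) {
        // cudaErrorMemoryAllocation is what you get when the card is full.
        fprintf(stderr, "cudaMalloc failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    // ... launch kernels that work on buf ...
    cudaFree(buf);   // the step that is embarrassingly easy to forget :-P
    return 0;
}
```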
Most of the time spent on GPU work goes into transferring data back and forth anyway. So if your sim has to shuffle data a lot, more memory on the GPU might be, again, a better investment than a few more ticks on the clock.

That said, it also depends on whether the simulation you intend to do actually USES the GPU in the first place. Most of Houdini's user interaction is very single-core-ish (synonymous with “incredibly laggy”); what good is a fast sim if you can't get to clicking the button to start it because a few billion particles are already bogging your CPU down …
I do understand that some simulations in Houdini can and do utilize the GPU; however, I am not sure how much data transfer is done between steps. I would HOPE that “every vertex that might play a role further down the line” gets transferred in one huge chunk of just-about-everything. If not, memory bandwidth would be the bottleneck, and you might want to go for the fastest PCIe route instead of looking at memory.
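To put a number on the chunk-versus-trickle point, here is a minimal sketch with the plain CUDA runtime API that times one big host-to-device copy against the same data split into thousands of small ones; buffer size and chunk count are arbitrary illustration values:

```cpp
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

// Copy `bytes` of host data to the device in `chunks` pieces and
// return the elapsed time in milliseconds.
static float timed_copies(char* dev, const char* host, size_t bytes, int chunks) {
    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);
    const size_t per = bytes / chunks;
    cudaEventRecord(start);
    for (int i = 0; i < chunks; ++i)
        cudaMemcpy(dev + i * per, host + i * per, per, cudaMemcpyHostToDevice);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);
    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    return ms;
}

int main() {
    const size_t bytes = size_t(256) << 20;   // 256 MiB, illustration value
    std::vector<char> host(bytes, 1);
    char* dev = nullptr;
    if (cudaMalloc((void**)&dev, bytes) != cudaSuccess) return 1;
    printf("1 chunk:     %6.2f ms\n", timed_copies(dev, host.data(), bytes, 1));
    printf("4096 chunks: %6.2f ms\n", timed_copies(dev, host.data(), bytes, 4096));
    cudaFree(dev);
    return 0;
}
```

On a typical PCIe link the single big copy should win comfortably, since every small cudaMemcpy pays a fixed driver overhead before any bytes actually move.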

TL;DR: My personal suggestion, without knowing precisely what you want to do, is to go for more memory over higher clock speed. That comes with the caveat that MAYBE Houdini spends more time transferring data back and forth than on the actual (GPU-side) simulation, in which case a faster (memory) clock COULD be more important.

I realize that I am not helpful. Again.


Marc
---
Out of here. Being called a dick after having supported Houdini users for years is above my pay grade.
I will work for money, but NOT for "you have to provide people with free products" indie artists.
Goodbye.
https://www.marc-albrecht.de