PC for Houdini and 5000 euros max.

Member
1 posts
Joined: Feb. 2017
Hello everyone,

After two years of learning Houdini on an i7-4930K (3.40 GHz) with 32 GB of RAM,
I have decided to invest in a good computer to simulate complex effects and render with Mantra and Arnold.

My budget is 5000 euros maximum, and I do not know whether to go for a dual-Xeon configuration or a powerful i7 with 128 GB of RAM.

Can you help me create an ideal setup?

I have a 750 W power supply that I think I can reuse, as well as a 128 GB SSD, a 2 TB HDD, and a 4 GB GTX 770 (though I was thinking of buying a 1080).

Thank you so much for everything!
Member
806 posts
Joined: Oct. 2016
Moin,

I am not pretending to have the perfect answer, but I would make a few assumptions:

- there is a considerable trend towards leveraging the GPU for number crunching. Depending on the kind of simulation you are doing, investing in a 1080 or the like might make sense.
- RAM can only be replaced by more RAM, ideally MUCH MORE RAM. When in doubt, use even more RAM.
- writing huge simulation caches to disk (or reading them back) can suffer from mechanical-HDD bottlenecks, so a few TB of SSD would be next on my list
- more cores (Xeon) are only useful if your simulations actually leverage multi-threading. Personally (this is certainly debatable) I would favor higher clock frequencies over more cores.

But, seriously: for specific requirements I would opt for using Azure or AWS resources when needed instead of investing in a single machine. I do know the feeling of having “it all at your command”, but it's very much possible that just renting horsepower is a lot cheaper and more productive - that probably depends on how your licensing works, though.
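
To make that rent-vs-buy point concrete, a toy Python calculation (all numbers invented; real Azure/AWS prices and licensing costs will differ):

WORKSTATION_COST = 5000.0   # euros, one-time (this thread's budget)
CLOUD_RATE = 1.50           # euros per node-hour, hypothetical instance price

breakeven_hours = WORKSTATION_COST / CLOUD_RATE
print(f"break-even at {breakeven_hours:.0f} rented node-hours")
# -> break-even at 3333 node-hours; below that, renting is cheaper
#    (ignoring licensing, storage, data transfer and resale value).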

Marc
Member
28 posts
Joined: July 2015
Do NOT buy anything until AMD's Ryzen is out…

There are very strong indications that the architecture performs like Intel's Broadwell but at half the price.

With that said, I would personally invest the bulk of the budget in multiple GPUs instead and wait for Arnold GPU, or just buy Redshift.
Member
19 posts
Joined: June 2016
malbrecht
- more cores (Xeon) are only useful if your simulations actually leverage multi-threading. Personally (…) I would favor higher clock frequencies over more cores.

Please do discuss. I was under the impression that almost everything in programs like Maya and Houdini was threaded. I picked up a 22-core Xeon just for large-scale sims and render times - I think the cores run at 2.2 GHz each - and I gave up a quad-core 4790K (4.0 GHz per core). What are some of the major cons of a system like this?
Member
806 posts
Joined: Oct. 2016
The multi-core discussion sometimes tends to involve religious beliefs … It is certainly wrong to assume that “everything” in programs like Maya or Houdini is “threaded” - that would not make much sense, since all multi-threaded evaluation comes with additional coordination costs.

Imagine you have a specific task - like folding a single sheet of paper once - and you have to do it leveraging your crew of 120 men and women; everyone *has* to be involved. It will take considerably longer to execute that task actively involving 120 people than if you did it yourself, single-threaded.

Multi-threading makes sense if:
a) you have the data volume to actually make use of more than one processor digging through it (per the example: 120 sheets of paper are a better fit than a single one),
b) the data can be worked on separately (if every one of those 120 people has to wait for the person before her to finish, 120 people don't give you much of a boost; only if each task is “exactly identical to all others” - possibly exaggerating here, but you get the picture - can you distribute the tasks to arbitrary cores), and
c) you have the protocol and tech to combine the results without losing the speed advantage you gained by distributing the jobs (example: if piling up the paper takes you longer than folding and piling it yourself, because every one of those 120 people finishes at a different time, you have to wait, maybe drive to their homes to pick stuff up, have a drink … you had better do it yourself after all).
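
A minimal Python sketch of point a) - my own illustration, not a benchmark: distributing a trivially small job across worker processes costs more than doing it single-threaded, because process startup and result collection dominate.

import time
from multiprocessing import Pool

def fold(sheet):
    # a trivially small "job": folding one sheet of paper
    return sheet + 1

if __name__ == "__main__":
    sheets = list(range(120))

    t0 = time.perf_counter()
    serial = [fold(s) for s in sheets]      # one person folds all 120 sheets
    t1 = time.perf_counter()

    with Pool(processes=8) as pool:         # hire a "crew"
        parallel = pool.map(fold, sheets)   # distribute, then collect results
    t2 = time.perf_counter()

    print(f"serial:   {t1 - t0:.6f} s")
    print(f"parallel: {t2 - t1:.6f} s  (usually much slower for tiny jobs)")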

In heavy simulations, especially grid/net-based ones and FLIP, you can often leverage multi-threading quite well. But if you want every particle's velocity to influence other particles within the same evaluation step, multi-threading won't work that well.

These are just basic examples of where “multi-threading is always better” doesn't hold. Developers have to weigh the pros and cons of every single job to maximize performance and minimize the risk of errors.

Another issue with multi-threading is data access. If you need to access large amounts of shared data from every single thread that is running in parallel, you need a data pipeline that allows for that kind of parallel random access. If your data is stored on a mechanical HDD and every thread constantly pushes the read/write head back and forth, you will definitely lose more speed than you would gain over a well-balanced sequential access model.
Again, this is just an idea with a big “if” inside.
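
A hedged micro-benchmark of that access-pattern point (my own sketch; on an SSD, or when the OS still has the file cached in RAM, the gap mostly disappears - on a mechanical HDD the random pass is dramatically slower):

import os, random, tempfile, time

path = os.path.join(tempfile.gettempdir(), "simcache_test.bin")
with open(path, "wb") as f:                 # stand-in for a simulation cache
    f.write(os.urandom(256 * 1024 * 1024))  # 256 MB of dummy data

CHUNK = 4096
offsets = list(range(0, 256 * 1024 * 1024, CHUNK))

with open(path, "rb") as f:
    t0 = time.perf_counter()
    for off in offsets:                     # one well-behaved sequential reader
        f.seek(off)
        f.read(CHUNK)
    t1 = time.perf_counter()

    random.shuffle(offsets)                 # simulate many threads hammering
    for off in offsets:                     # the same disk at random offsets
        f.seek(off)
        f.read(CHUNK)
    t2 = time.perf_counter()

print(f"sequential: {t1 - t0:.2f} s   random: {t2 - t1:.2f} s")
os.remove(path)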

In sum: for certain tasks a higher single-core clock rate may well outperform a multi-threaded low-frequency CPU solution. In other situations multi-threading may give you the better outcome. In theory - and unfortunately it does not really work that way, this is just to illustrate the point - a 4-core/8-thread CPU at 4.4 GHz should outperform an 8-core/16-thread CPU at 2.2 GHz, because less multi-threading overhead is involved.
If your data, your data-access paths, the jobs being distributed AND the thread management all allow for multi-threading, more threads quite often perform slightly better, because you can keep more data in first-level CPU cache, giving you faster access than with larger data sets in lower-level cache or main RAM.
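
To put rough numbers on that in-theory comparison, a back-of-the-envelope Python sketch using Amdahl's law (the 80% parallel fraction is an invented figure, and cache effects are ignored):

def throughput_proxy(cores, ghz, parallel_fraction=0.8):
    # Amdahl's law: speedup over a single core of the same chip
    speedup = 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)
    return ghz * speedup    # clock-weighted proxy for overall throughput

print(throughput_proxy(4, 4.4))   # 4-core 4.4 GHz -> 11.0
print(throughput_proxy(8, 2.2))   # 8-core 2.2 GHz -> ~7.3

With these (invented) numbers the high-clock 4-core wins, even though both chips have the same aggregate GHz.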

Before I write a book on this, I'll stop. I am not saying that “single-threading” is faster, don't get me wrong. I am simply trying to underline that it isn't always about having the most cores; it's about the right combination of task and power.

A lower clock rate on your main CPU will always slow down your UXP, though. And since most programs, including the ones you picked, have largely single-threaded GUIs, a 2.2 GHz model would not do the job for me if I could go for 4.4 GHz instead. This, again, is an over-simplification, just to make the point.

Marc
Member
9 posts
Joined: Feb. 2017
malbrecht
The multi-core discussion sometimes tends to involve religious beliefs … (…)

Hi there!

Just a quick question: I wanted to buy a pair of used Xeons to build a multi-CPU budget PC - I was aiming for that magical cheap 36 threads. But now I don't know what to buy. I want to do small-scale and large-scale fluid sims.
Member
806 posts
Joined: Oct. 2016
Hi,

I was wondering why someone would honestly quote the whole textwall of mine - maybe to make a point or something - but …

> Just a quick question (…)

… there isn't even a *question* in your comment :-D

OK, seriously: like I tried to point out, it depends. It is NOT about the number of cores, but about how the software you use makes use of them, what cache advantage you get, how much data you juggle, and how you access the data for reading and writing.
The best information you could gather would be from people doing the kind of work you do. Maybe someone chimes in and can give some numbers here, because I, personally, biased as I am, doubt that there is a “perfect answer”.

Marc
Member
9 posts
Joined: Feb. 2017
Sorry for the whole quote, I'm new to this forum.
So, to be exact: I used RealFlow for quite a long time and I'm currently learning Houdini. I want to do small-scale “abstract” FLIP simulations and also large-scale fluid sims (oceans, explosions, etc.). So I'm curious what your recommendation would be.
Member
806 posts
Joined: Oct. 2016
Hi, unknown,

> I want to do small-scale “abstract” FLIP simulations and also large-scale fluid sims (oceans, explosions, etc.) (…)

… I cannot give a recommendation for that, because those examples are pretty much opposites of one another. “Small abstract” smells like a high-clock 4-core; “large scale fluid” smells like a network-distributed server farm … And since, in the end, simulation is always *time* *consuming*, for me, personally, UXP comes first, second and fifth. That means: I want the software to respond to my input. When it's time to simulate, I will definitely waste my own lifetime somewhere else, not in front of the computer, so 12 hours versus 15 hours doesn't matter.
To me.
I am not talking about big time-pressure pipeline scenarios here.

For me, performance when *I* use the system is of the highest value, so I'd go for fast single-thread performance (why “would” - I did exactly that).
Also, my bet would be on Houdini and other tools leveraging GPUs more and more, so I'd probably make sure my system can easily handle a bunch of 1080s or whatever comes next (memory pipelines etc.).

Personally - and this is by no means a suggestion you should base your life insurance on - I only see heavy multi-core performance paying off in CPU rendering. That's something I, again personally, would outsource anyway; I don't need that noise and energy consumption in my closet. Read: I don't see much use in multi-multi-core tech. 6 cores: fine. 8 cores: fine. But that's already scratching at the “data management bottleneck”; keeping 16 threads fed with data will put heavy strain on your data IO. Without the IO to match (PCIe SSD), I consider that somewhat academic …

Marc
Member
9 posts
Joined: Feb. 2017
Thank you so much for taking the time to answer my questions; it's really educational for me.

So what I'm getting, based on your opinion and my little experience, is that the ideal setup would be a low-core (4-8), high-frequency workstation in front of me, and a high-core (32), somewhat lower-frequency node in the closet. I'm working a full-time job, so I only have limited time to work at home on my personal and “learning” projects.
Member
19 posts
Joined: June 2016
malbrecht, thanks for all that information. One more question though…

I mainly do large-scale FLIP simulations that push a lot of particles, but with GPU rendering gaining a lot of traction, here's something I don't fully understand: when you use the GPU to render out particle sims, do the CPU and GPU work together to simulate these scenes, or is it purely the GPU? Is there a limitation with OpenCL that would make it better not to use it? Or should you almost always use OpenCL when doing large-scale FLIP sims? I have a Titan X and I never know if I should be using it or not.
Member
806 posts
Joined: Oct. 2016
Hi,

Preface: please remember that what I am saying here is personal opinion and very much open to discussion …

> I mainly do large-scale FLIP simulations that push a lot of particles, but with GPU rendering gaining a lot of traction, here's something I don't fully understand.

Mind you: simulation and rendering are two different beasts. Example: if you simulate a fluid, you often deal with particles that you “remesh” after the simulation is done; you only *render* the surface of the generated mesh. In rendering, therefore, you don't deal with particles (in this specific case) at all.

> When you use the GPU to render out particle sims, do the CPU and GPU work together to simulate these scenes, or is it purely the GPU?

That depends on your renderer (can it utilize system RAM to hold data that it cannot keep in GPU memory?) and on the way you set up the shader (are you actually rendering particles or are you meshing?) … For example (note, I am on thin ice here), as far as I know Redshift, a GPU renderer, can push data into system memory and still incorporate it when building the scene for rendering. That may not be as fast as fully on-board GPU rendering, but it might still be faster than CPU rendering - depends on scene, setup and, probably, the phase of the moon.

> Is there a limitation with OpenCL that would make it better not to use it?

This question, I think, is not really related to the above: CUDA, OpenCL etc. are just ways to address the GPU and make it “do something” - WHAT you actually do depends on the software package you use. For simulation in Houdini, for example, it depends on which calculations the developers have “translated” into OpenCL.
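
To illustrate “just a way to address the GPU”: a minimal Python sketch using the pyopencl package (assumes pyopencl and a working OpenCL driver are installed; the kernel is a made-up example that merely doubles an array - it is NOT what Houdini does internally):

import numpy as np
import pyopencl as cl

ctx = cl.create_some_context()              # pick an OpenCL device
queue = cl.CommandQueue(ctx)

host_data = np.arange(1_000_000, dtype=np.float32)
mf = cl.mem_flags
buf = cl.Buffer(ctx, mf.READ_WRITE | mf.COPY_HOST_PTR, hostbuf=host_data)

program = cl.Program(ctx, """
__kernel void double_it(__global float *a) {
    int gid = get_global_id(0);
    a[gid] = 2.0f * a[gid];
}
""").build()

program.double_it(queue, host_data.shape, None, buf)
cl.enqueue_copy(queue, host_data, buf)      # pipe results back to system RAM
print(host_data[:4])                        # [0. 2. 4. 6.]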

> Or should you almost always use OpenCL when doing large-scale FLIP sims?

See above: it depends on what your software package or its developers actually push onto the GPU. I don't think there is any “general” answer to the question “should I always use OpenCL or CUDA for FLIP simulations”. I would guess that this kind of simulation is almost always *faster* on modern high-end GPUs, but that's an educated guess, nothing more. Who is to say that the next generation of CPUs, or the one after that, won't come with a built-in DSP chip for simulations?

Sorry, no really definite answers here; I think it's really up to people *testing* specific scenes and coming up with numbers.
For example: in Fabric, the core developers tend to multi-thread geometry manipulation only if more than 1000 points are to be handled; otherwise they keep it all on one thread. This is, of course, just a ballpark number that might not fit your specific geometry that well. Now, considering that piping data back and forth to the GPU might be another bottleneck (maybe data has to be sorted/prepped in a specific way), you might only start considering GPU usage at 20k points - this is a wild guess, not a real number.
Or, if the actual calculation takes only half a second on the CPU and only 100 ms on the GPU, but preparing the data and piping it back and forth takes more than 2 seconds … using the GPU wouldn't help.
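
That last example as literal arithmetic (same numbers as above):

cpu_total   = 0.5   # seconds to compute on the CPU
gpu_compute = 0.1   # seconds to compute on the GPU
transfer    = 2.0   # data prep + upload + download overhead

gpu_total = gpu_compute + transfer
print(f"CPU: {cpu_total:.1f} s   GPU incl. transfer: {gpu_total:.1f} s")
# -> CPU: 0.5 s   GPU incl. transfer: 2.1 s - the faster chip still loses.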

Marc
Member
19 posts
Joined: June 2016
I'm sorry, I really worded that wrong. When I say “render out” I just mean simulating particles - not meshing or rendering. Sorry! I'm very home-taught and sometimes make up or use the wrong terms!
Member
806 posts
Joined: Oct. 2016
> I'm very home-taught and sometimes make up or use the wrong terms!

hehe, all good - so am I. Homeschool gives you the most bang for the boom. Or so …

That's why I am very much interested in additional opinions …

Marc
Member
155 posts
Joined: Nov. 2015
Very interesting discussion you have here.

I was wondering what Houdini 16 brings to the table concerning OpenCL.
Pyro is now (with H16) fully OpenCL-accelerated. I was not aware that fluid simulations were.

Also:
As far as I understand it, NVIDIA still doesn't support OpenCL 2 and above.
AMD cards, on the other hand, do. This will get interesting as they are about to come out with new ones.

Concerning CPUs and multi-threading … I kind of regret my purchase of dual Xeon E5-2670 @ 2.60 GHz CPUs.
I write “kind of” because I do a lot of rendering in Arnold, which is quite happy with all the cores, but still … for most situations, fewer cores with a much higher frequency would be better. Even prosumer CPUs like Intel's 8-core i7.

I am also quite interested in the new AMD CPUs they just released.
Even if they don't quite hold up to their promises - though they seem to - the benefit of having competition for Intel's chips in price and speed is very welcome. The 8-core i7 is just way too expensive in my opinion. AMD's 8-core Ryzen 1800X seems to be on par at not even half the price, as I understand it.

As it stands now I need a new workstation, but I will wait a couple of months until:
- we see what the deal is with Ryzen, in performance and also in software and hardware support
- Intel answers Ryzen with new chips or price cuts
- the just-announced GTX 1080 Ti is reviewed (it seems there won't be any reason to buy the Titan instead)
- Solidangle releases a GPU-accelerated Arnold (and we learn whether it will be CUDA or OpenCL - this will most likely decide my GPU vendor)
- maybe, just maybe, please, Apple updates the Mac Pro
Member
806 posts
Joined: Oct. 2016
If I have frozen out others from participating in this discussion, I apologize. I was hoping for more discussion, since I believe the OP got exactly ZERO out of this thread (at least they never came back to say anything).

AMD:
I am keeping more than an eye on the Ryzen discussion. Unfortunately it, again, circles around game performance, which is of almost zero interest to me, because for number crunching and UXP I do not need the fastest data transfer from CPU to GPU and back.

First tests I have read about seem to indicate that single-core performance on the 1800X is not as good as hoped for. I don't find the tests too convincing, because they are based on some game-engine benchmarks, and I would definitely prefer trying to manually move more than a few dozen items in a scene in modo to watching some multi-colored polygon monsters doing BVH interpretations. But at least these hints seem to be in line with what I tried to discuss above: single-core performance should not be underestimated for a “work machine”, while multi-core performance may be of more importance for a render slave.
In the end I guess it will be about “considerably lower price for slightly less performance”, after the dust has settled and street prices have balanced out. With the topic of this thread being a 5k machine, I do think there are more considerations than the CPU alone (RAM, SSD etc.).

GPU:
The 1080 Ti looks promising. I fear that NVIDIA is pulling stunts again - they are not exactly known for being upright and honest about their specs - but the price/performance ratio really looks intriguing. I'm definitely holding back on buying a 1080 until there's more trustworthy information about those cards.

Apple:
hehe. Censored everything I wrote.

Marc
Member
155 posts
Joined: Nov. 2015
Concerning the topic itself:

the short answer is: wait, if you can, for a couple of months before deciding on hardware

GPU:
Solidangle is about to come out with Arnold GPU support (at least that's what I've been hearing).
If they pull off feature parity with the CPU renderer, which I am led to believe they will, it makes the decision already a lot easier.

CPU: also wait, like Marc said, at least until the dust has settled and/or Intel has answered (if at all?) in some way.

For me, I am almost certain I will build my next workstation as a GPU beast with at least 2 GPUs (most likely the aforementioned 1080 Ti) and whatever prosumer CPU with 8 or more cores has the best Cinebench score.

I would also consider a mainboard with support for a PCIe M.2 SSD. Even if you don't buy an M.2 drive right away, having the option future-proofs your build.
Member
72 posts
Joined: Jan. 2017
Personally I'm debating whether to go with the Ryzen 1700X and the recently price-slashed GTX 1080, or the Ryzen 1800X with a GTX 1070. I think I'm sold on Ryzen, as rendering and video encoding are on par with Intel's $1000 processor. I won't be using any render farms, as this isn't my primary profession and I only (for now) do small indie stuff. So for my rendering needs, Ryzen looks like a real winner.

I'm just wondering if it's worth going with the GTX 1080 over a slightly higher CPU clock speed. I haven't played with Arnold, which, from the discussion above, looks like it is about to get GPU rendering. So that is something to consider if I try Arnold out and like it… Opinions on the above two choices? Not that big of a choice; both systems are pretty close to each other.
Member
2038 posts
Joined: Sept. 2015
…Aside from potential “bugs/issues” with any new chip … I would go for the Ryzen on savings alone, which could be put toward more RAM. Of course, that's if there is a budget concern.
Member
260 posts
Joined: Nov. 2014
HappehLemons
I picked up a 22-core Xeon just for large-scale sims and render times (…)
We have a similar setup: 2x 10-core Xeons at 2.3 GHz.
They are fast for rendering and sims, but it's not a good choice. Since most of the stuff in Houdini is not multi-threaded, it's better to go with something with a higher max clock and fewer cores. I think I was looking at some 6-8 core Xeons at 3.2 GHz, which would give you the same rendering power but more punch for working in Houdini.
Switching between single-CPU 3.2 GHz machines and my machine, I can feel a significant difference while working in Houdini.