Getting into Houdini, question about computer hardware

Member
5 posts
Joined: July 2013
Sorry if this has been asked before, but I couldn't find a detailed answer. I am just starting to learn Houdini through the Apprentice edition and I am wondering how Houdini calculates simulations and renders. Are they both mainly reliant on CPU power or do they both also use GPU acceleration through OpenCL?

I thought I read somewhere in the documentation that simulations are OpenCL accelerated, but do renders also benefit from that?

I'm coming from the Cinema 4D world and I have a workstation that is geared toward CPU power, but I'm wondering, if I get into Houdini enough, whether I should invest in a GPU with strong OpenCL performance.
Member
36 posts
Joined: Sept. 2008
No, Mantra renders only use the CPU.
Simulations can use OpenCL, but you will only see real benefits with Tesla cards. Since it runs on the GPU, there are several pitfalls:
6 to 8 GB of RAM will not be enough for gigantic simulations.
Box resizing is not efficient on the GPU.
FLIP is only partially accelerated on the GPU.
Tesla GPU cards are very expensive.

You will see benefits mostly when doing smoke and fire.

If you have money to invest, I suspect a motherboard with several processors will do the job better than the card.

Also, SSDs to help with swap are a good call.
Member
5 posts
Joined: July 2013
Thanks for the information. I'll stay away from spending money on a Tesla card since it sounds like it's not going to give me a big performance boost.

Does the amount of system memory I have help simulations, and will 32 GB be enough?

I already sunk quite a bit of money into processors (a quad AMD Opteron setup) and SSDs when building my workstation for Cinema 4D, so it sounds like it will suit Houdini well too.

Since Houdini is able to run natively on Linux, are there performance gains over running on Windows Server?
Member
7720 posts
Joined: July 2005
Offhand, I'm not sure if there's any situation where Houdini is faster on Windows. If you want speed, use Houdini on Linux.
Member
5 posts
Joined: July 2013
edward
Offhand, I'm not sure if there's any situation where Houdini is faster on Windows. If you want speed, use Houdini on Linux.

Thanks, that's what I figured, since most software with a native Linux version seems to be faster than its Windows counterpart, but I figured I'd ask anyone with experience before spending the time to set it up.
Member
258 posts
Joined: July 2006
I have a 32 GB i7 2600K 4.5 GHz rig with an AMD 7950 3 GB. I would be extremely happy with it if only there were a viewport performance patch for AMD video cards. This card spins 20M+ polygons inside Maya like a tornado (~40 fps), yet 2M+ polygons inside Houdini move at broken-carousel speed (~2 fps).

Apart from that: I am an FX TD, and I find SSDs totally unusable because of the cache sizes. Think about it:

A 400-frame FLIP simulation:

400 frames of FLIP sim cache, in my example 27 million particles at 850 MB/frame;
on top of that, the foam/whitewater cache, 25 million points at 900 MB/frame.
The particle caches alone total almost 2 GB per frame * 400 frames = 800 GB.

So I got myself 2x 2 TB Black Caviars in RAID 0. Maybe not as fast as an SSD, but what good is the speed if you cannot use the drive?
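The storage math above can be sketched quickly; the per-frame sizes are the example figures from this post, not general values:

```python
# Rough storage estimate for the FLIP cache described above.
# Per-frame sizes are the example figures from this post (assumptions).
frames = 400
flip_mb = 850        # ~27M FLIP particles per frame
whitewater_mb = 900  # ~25M foam/whitewater points per frame

total_gb = frames * (flip_mb + whitewater_mb) / 1024
print(f"~{total_gb:.0f} GB of particle caches")
```

Even the exact figure (~684 GB) would fill most consumer SSDs from a single shot.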

So for Houdini I suggest buying a good gaming GPU from NVIDIA.
Head of CG @ MPC
CG Supervisor/ Sr. FX TD /
https://gumroad.com/timvfx [gumroad.com]
www.timucinozger.com
Member
166 posts
Joined: March 2013
tricecold
Apart from that: I am an FX TD, and I find SSDs totally unusable because of the cache sizes. Think about it:
Obviously you wouldn't store the caches on the SSD. I have the whole system, including Houdini, running off a 120 GB SSD, and I have separate, larger normal HDDs to store stuff. Works wonderfully.
Staff
5158 posts
Joined: July 2005
tricecold
I have a 32 GB i7 2600K 4.5 GHz rig with an AMD 7950 3 GB. I would be extremely happy with it if only there were a viewport performance patch for AMD video cards.

There's a good comparison of the latest consumer cards and workstation cards here:

http://www.tomshardware.com/reviews/best-workstation-graphics-card,3493.html [tomshardware.com]

No Houdini benches, but it does illustrate that in some cases consumer gaming cards are well behind their workstation brethren. Oddly, though, consumer cards are generally better than or on par with workstation cards for compute.

In the meantime, I'll take a look and see if I can track down any inefficiencies. For example, there was a case of extreme sluggishness on OSX (AMD 7950) that was fixed a while back merely by switching the data format of the normals. AMD cards do seem to be a bit pickier than Nvidia cards about the format of the data they are passed.
Member
4189 posts
Joined: June 2012
tricecold
I have a 32 GB i7 2600K 4.5 GHz rig with an AMD 7950 3 GB. I would be extremely happy with it if only there were a viewport performance patch for AMD video cards. This card spins 20M+ polygons inside Maya like a tornado (~40 fps), yet 2M+ polygons inside Houdini move at broken-carousel speed (~2 fps).


Not sure why the AMD 7950 is running like that. The same card on OS X does 2M+ polys at ~50 fps.

There is a bug in OS X 10.8.4 (fixed in 10.9) when selecting components in that test scene: it takes 45+ seconds to box-select primitives; it looks single-threaded.

EDIT: 21M+ poly spin test = ~10 fps, High Quality Lighting mode, Smooth Shaded, GL 2.1 viewport, OS X 10.8.4, Houdini 12.5.475.
Member
5 posts
Joined: July 2013
twod
There's a good comparison of the latest consumer cards and workstation cards here:

http://www.tomshardware.com/reviews/best-workstation-graphics-card,3493.html [tomshardware.com]

No Houdini benches, but it does illustrate that in some cases, consumer gaming cards are well behind their workstation brethren. Oddly, consumer cards are generally better or on-par with workstation cards for compute, though.

Thanks, that was a helpful read. I wonder if comparisons like that will also help drive Nvidia and AMD to push development on workstation cards more, to justify their cost over their gaming counterparts.
Member
269 posts
Joined: July 2010
Interesting, I didn't know SSDs were not good for caching.

As for the GPU, I went with a Quadro K5000 recently as an upgrade, mainly, I have to say, after all the help and info on these forums, and I can confirm that it's awesome. Houdini is running brilliantly and fast.
Director @ Valkyrie Beowulf
www.vwulf.com
https://linktr.ee/neilrognvaldrscholes [linktr.ee]
Member
166 posts
Joined: March 2013
Neil78
Interesting, I didn't know SSDs were not good for caching.
That isn't the case. They are as good as anything, really. The thing is that SSDs are quite expensive; you'd get a lot more space buying normal HDDs instead, for less money. Also, caching wouldn't exactly benefit from the speed of SSDs; depending on what you are doing, of course, the thing that takes time is the simulation itself, not writing it to disk.
Staff
5158 posts
Joined: July 2005
tricecold
I have a 32 GB i7 2600K 4.5 GHz rig with an AMD 7950 3 GB. I would be extremely happy with it if only there were a viewport performance patch for AMD video cards.

I made a small tweak to the vertex arrays which should improve performance by up to 4x on large meshes on AMD cards in 12.5.482. This was using a FirePro W8000, which is the workstation equivalent of the Radeon 7950.
Member
4189 posts
Joined: June 2012
twod
I made a small tweak to the vertex arrays which should improve performance by up to 4x on large meshes on AMD cards in 12.5.482. This was using a FirePro W8000, which is the workstation equivalent of the Radeon 7950.

Awesome!
Member
5 posts
Joined: July 2013
Is there a good way for me to benchmark the performance of Houdini? I found an old thread here: http://www.sidefx.com/index.php?option=com_forum&Itemid=172&page=viewtopic&p=98351&sid=d6ca7d7d9e7aa006ffeb9472a6e10699 [sidefx.com] but is it still relevant?
Member
4189 posts
Joined: June 2012
twod
tricecold
I have a 32 GB i7 2600K 4.5 GHz rig with an AMD 7950 3 GB. I would be extremely happy with it if only there were a viewport performance patch for AMD video cards.

I made a small tweak to the vertex arrays which should improve performance by up to 4x on large meshes on AMD cards in 12.5.482. This was using a FirePro W8000, which is the workstation equivalent of the Radeon 7950.

This 25-million-poly scene spins at ~7.2 fps at scene level on AMD 7950 / OS X 10.8.4 / H12.5.483. I'd be interested to know whether the FirePro W8000 and GL 3.2+ improve it.

Attachments:
25mil_test.hip (143.7 KB)
25milTest.png (118.5 KB)

Staff
5158 posts
Joined: July 2005
This 25-million-poly scene spins at ~7.2 fps at scene level on AMD 7950 / OS X 10.8.4 / H12.5.483. I'd be interested to know whether the FirePro W8000 and GL 3.2+ improve it.

Currently, AMD cards can't handle large vertex arrays, it seems. This causes a massive slowdown in rendering which isn't present on Nvidia cards (a GeForce 670 tumbles at ~40 ms, or 25 fps). Even rendering 1/4 of the polys (6.25M) on a FirePro W8000 takes 3.3 seconds (similar results for the FirePro V4900 and AMD 6950). Your scene takes over a minute to render. Oddly though, GL 2.1 with no lighting renders quite fast (46 ms, 21 fps); GL 2.1 with any lighting again slows to a crawl. Somewhat disappointing.

In H13 there is an additional fix that divides large meshes into smaller chunks for AMD cards; with it, the FirePro W8000 draws the scene in 80 ms (12 fps), and the consumer 6950 comes in around 100 ms (10 fps). However, I'm not sure if this can be backported to 12.5.
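For reference, the fps figures quoted in this thread are just the reciprocal of the draw time; a tiny helper makes the conversion explicit (the card labels below are simply the ones mentioned in this post):

```python
# Convert a per-frame draw time in milliseconds to frames per second.
def ms_to_fps(draw_ms):
    return 1000.0 / draw_ms

# Draw times quoted in this thread (labels/timings taken from the post above).
for card, ms in [("GeForce 670", 40),
                 ("FirePro W8000, H13 chunking", 80),
                 ("AMD 6950, H13 chunking", 100)]:
    print(f"{card}: {ms_to_fps(ms):.1f} fps")
```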
Member
166 posts
Joined: March 2013
twod
(a GEForce 670 tumbles at ~40ms, or 25fps)
A question on that: I have a 670 and I also get around 40 ms / 25 fps. However, that is only when the object isn't selected and I rotate it through the parameters. If I select it and rotate with the viewport handles, I get 270 ms / 4 fps. Is this normal?

I guess it has to do with the wireframe being shown, because if I turn on Smooth Wire Shaded and rotate through the parameters again, I get 270 ms / 4 fps there as well.
Member
4189 posts
Joined: June 2012
twod
Even rendering 1/4 of the polys (6.25M) on a FirePro W8000 takes 3.3 seconds (similar results for the FirePro V4900 and AMD 6950). Your scene takes over a minute to render.

Hmm, bummer. I was hoping AMD was the way to go with Houdini on OS X, as the new Mac Pro is going to ship only with AMD. Hoping someone can run this test scene on OS X Mavericks 10.9; when I had it on the system, the tests I ran in DP1/2 were very promising.
Staff
5158 posts
Joined: July 2005
I should note that this slowdown on AMD occurs only if the 25 million polygons are in one large mesh. If a scene totals 25M polys across 100 different objects (250K/object), then the AMD card can draw it at ~25 fps.

However, I was testing on Win7 64-bit, not OSX. I don't have an OSX/FirePro config (yet?).

If I select it and rotate with the viewport handles, I get 270ms/4fps - is this normal?

Yes, the wire-over-shaded shader does cause a performance hit, which is more noticeable on large models (I get a touch over 4 fps). Still, it beats GL2 + wire-over-shaded + smooth lines (nearly 1 s draw time), though GL2 + wire-over-shaded without smooth lines draws at around 7 fps. Either way, there is a fairly large cost to enabling wire-over shading.