Houdini 12 GPU acceleration details

Member
35 posts
Joined:
twod
Nvidia's newer drivers automatically route compute tasks to the Tesla card and graphics tasks to the Quadro, if you have both in your system. This doesn't work with GeForce cards, however; just Tesla+Quadro.

Good to know.
Kind of silly if the new GTX cards won't work, though; Quadros are far too overpriced. Has anyone come across proper documentation stating exactly why Quadro cards are preferred over Fermi GTX cards for CG applications in general?
Staff
5199 posts
Joined: July 2005
Has anyone come across proper documentation stating exactly why Quadro cards are preferred over Fermi GTX cards for CG applications in general?

I haven't read anything like an FAQ on the matter, but I have been able to piece together the following bits from reading articles & posts over the years:

- GeForce cards have their geometry rates capped, while Quadro cards do not. Games are often more shader-heavy than geometry-heavy, while digital content creation (DCC) is usually the opposite. This allows the game-oriented GeForces to be clocked higher than the Quadros, boosting their pixel-shading output while still remaining within thermal and power limits. In other words, GeForces are firmware-optimized for games.

- Quadros have more memory than their equivalent GeForce counterparts, usually on the order of 2-4x, simply because games impose rigid limits on the textures and geometry they draw, whereas DCC packages cannot constrain their scenes that way.

- Drivers for Quadros are optimized differently than GeForce drivers. DCC apps mostly draw to multiple windowed viewports at a time, while games often draw to a single fullscreen window. Quadro drivers optimize workstation-app GL features such as smooth lines, and tend to be more stable over time for workstation apps like Houdini.

- Marketing often dictates a certain feature set for workstation vs. gaming cards. Just as Houdini Escape is less expensive than Master but has fewer features, the GeForce is cheaper than the Quadro. Examples of features that Quadros have and GeForces do not: ECC memory, dual DMA copy engines, quad-buffered stereo, half-rate FP64 math (vs. 1/4 or 1/12 rate), and faster geometry processing. Many of these are crucial to Quadro users.

Most workstation applications will still run on a GeForce card, just not as well. Here are some benchmarks: hothardware [hothardware.com], Tom's Hardware [tomshardware.com]. I suppose it's up to you as a consumer to judge whether the performance and feature benefits are worth the extra expense.
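Those FP64 rates translate directly into theoretical peak throughput. A back-of-the-envelope sketch of the arithmetic (the core counts and clock speeds below are illustrative placeholders, not exact product specs):

```python
def peak_gflops(cores, clock_ghz, fp64_ratio=1.0):
    """Theoretical peak: cores x 2 ops/cycle (fused multiply-add) x clock.

    fp64_ratio scales double-precision throughput relative to single
    precision (e.g. 1/2 for a workstation part, 1/12 for a gaming part).
    """
    fp32 = cores * 2 * clock_ghz  # GFLOPS, single precision
    return fp32, fp32 * fp64_ratio

# Illustrative numbers only -- not exact product specs.
cards = [
    ("workstation card (1/2-rate FP64)", 512, 1.15, 1 / 2),
    ("gaming card (1/12-rate FP64)",     512, 1.40, 1 / 12),
]
for name, cores, clock, ratio in cards:
    fp32, fp64 = peak_gflops(cores, clock, ratio)
    print(f"{name}: {fp32:.0f} GFLOPS FP32, {fp64:.0f} GFLOPS FP64")
```

With these made-up numbers, the higher-clocked gaming card wins on single precision but falls far behind on double precision, which is the trade-off described above.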
Member
54 posts
Joined: Oct. 2011
I have a question about multiple GPUs in one computer, specifically the Quadro/Tesla setup. Nvidia has come out with their Maximus technology, which makes “better” use of Quadro/Tesla setups. An earlier reply indicates that Houdini will be able to use a Quadro and a Tesla together regardless; is this true?

This is encouraging if it is true. Maximus technology, as far as I can tell, is only available if you buy an HP or Dell workstation configured with the Quadro/Tesla combo and special Maximus drivers. It kills me to buy a machine that I could build myself for $2,000-$5,000 less than what Dell/HP would charge.
Member
96 posts
Joined: May 2008
Seems like most people are using Nvidia GPUs (me included), but since I'm in the process of planning out a new rig, I want to at least consider the new ATI cards; any experiences with OpenCL? I hear the main difference in speed between ATI and Nvidia is single- vs. double-precision floating point. From the looks of this https://en.bitcoin.it/wiki/Mining_hardware_comparison [en.bitcoin.it], ATI appears to be quite a bit faster on the top end, but at the same time most seem to go for Nvidia nonetheless.
So, what's the situation with H12?
Staff
5199 posts
Joined: July 2005
AMD cards with the VLIW architecture, which includes all AMD OpenCL-capable cards below the new 7000 series, are not as efficient at compute in Houdini 12 as comparable Nvidia cards. AMD cards do seem to be a bit more tolerant when handling larger sims in the same amount of memory, compared to an Nvidia card with the same VRAM size, though. We have not tested an AMD 7000-series card (7970, 7950, 7770, 7750) as of yet.
Member
96 posts
Joined: May 2008
Thanks for the info; that's a good reason to stick with Nvidia (which I prefer anyway). Cheers.
Member
68 posts
Joined: Oct. 2011
Quick question: which of the two will perform better, 2 GB DDR3 or 1 GB GDDR5, assuming similar clock speeds? Thank you.
Member
96 posts
Joined: May 2008
I would expect the difference in speed to be minimal, but the limitation of having only 1 GB to be severe. It's really easy to hit 1 GB of memory usage, so I wouldn't even consider it, regardless of a potential few-percent speed increase (which is the most I'd expect, if any).
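To put rough numbers on how quickly sim data eats VRAM, here is a minimal sketch; the field counts, float32 storage, and double-buffering assumption are illustrative guesses, not Houdini's actual internal layout:

```python
def sim_vram_bytes(res, scalar_fields=3, vector_fields=1,
                   bytes_per_val=4, buffers=2):
    """Estimate VRAM for a cubic sim grid of res^3 voxels.

    Assumptions (illustrative, not Houdini's real layout): a few
    float32 scalar fields (e.g. density, temperature, fuel), one
    3-component vector field (velocity), and double-buffering.
    """
    voxels = res ** 3
    values_per_voxel = scalar_fields + 3 * vector_fields
    return voxels * values_per_voxel * bytes_per_val * buffers

GiB = 1024 ** 3
for res in (128, 192, 256, 320):
    gib = sim_vram_bytes(res) / GiB
    flag = "  <-- exceeds a 1 GB card" if gib > 1 else ""
    print(f"{res}^3 grid: {gib:.2f} GiB{flag}")
```

Under these assumptions a 256^3 grid already sits around 0.75 GiB, and 320^3 blows past 1 GiB, which is why the extra gigabyte matters more than a small bandwidth difference.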