Herbert123

Recent Forum Posts

Houdini 12 and AMD 7970 - disappointment July 14, 2012, 4:16 p.m.

Yes, I thought so myself - thank you very much for looking into this.

AMD has its work cut out for it, since CUDA is also better supported in most applications.

Houdini 12 and AMD 7970 - disappointment July 14, 2012, 1:56 a.m.

Good news: the latest version works and does not crash.

Bad news: testing with a semi-complex object downloaded from
http://graphics.cs.williams.edu/data/meshes.xml
(furry ball object, 1.4 million vertices, 2.8 million faces)

I get about 2/3 fps in the viewport in object mode. That can't be right: LW11, Cinema 4D, Blender and Softimage all handle this object without any problem (Blender: 100 fps). Softimage even keeps a smooth viewport with 22 copies of it, and Blender's viewport still runs at 10 fps with 12 of them.
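(Incidentally, the vertex/face ratio of that mesh is exactly what you'd expect for a closed triangle mesh: combining 3F = 2E with Euler's formula V - E + F = 2 - 2g gives F ≈ 2V. A quick sanity check, treating the rounded counts quoted above as exact for illustration:)

```python
# For a closed triangulated surface, every face has 3 edges and every
# edge is shared by 2 faces, so 3F = 2E; with Euler's formula
# V - E + F = 2 - 2g this gives F = 2V - 4 + 4g, i.e. F ~= 2V.
V = 1_400_000                 # vertex count quoted above
F = 2_800_000                 # face count quoted above
E = 3 * F // 2                # 4,200,000 edges implied by the triangles

print(V - E + F)              # Euler characteristic: 0 for these counts
assert F == 2 * V             # matches the F ~= 2V rule of thumb
```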

Also, selection is extremely laggy: in object mode it takes 7-8 seconds before the furry ball gets selected. None of the other apps have this issue.

And to make matters worse: entering edit mode and selecting a vertex causes a pause of a minute or two, after which Houdini crashes the AMD drivers, which takes Houdini down with them (obviously).

So, what can I do to improve viewport performance? I regularly have to deal with complex objects, and I wonder if there is a setting I am missing. I looked for a VBO (vertex buffer object) setting, and tried all the different view modes, but all of them run at the same 2/3 fps - this must be CPU-limited, not GPU-limited.
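To put some numbers on the "CPU-limited" hunch, here is a rough back-of-the-envelope sketch; the bytes-per-vertex figure and the bus bandwidth are assumptions I'm picking purely for illustration, not measured values:

```python
# Why re-sending geometry every frame is bus/CPU-bound, while a VBO
# (geometry uploaded once, then drawn from GPU memory) is not.
VERTS = 1_400_000
BYTES_PER_VERT = 32          # assumed: 3 floats position + 3 normal + 2 UV

upload_per_frame = VERTS * BYTES_PER_VERT
print(f"{upload_per_frame / 1e6:.1f} MB pushed per frame")   # 44.8 MB

# At an assumed 1 GB/s effective CPU->GPU transfer rate, that upload
# alone caps the frame rate, no matter how fast the GPU is:
BANDWIDTH = 1e9              # bytes/s, illustrative only
fps_ceiling = BANDWIDTH / upload_per_frame
print(f"~{fps_ceiling:.0f} fps ceiling without VBO caching")  # ~22 fps
# With a VBO, those 44.8 MB are uploaded once; the per-frame cost
# drops to a single draw call against the cached buffer.
```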

Or perhaps Houdini and AMD GPUs just don't play well together at all? Next week I will check Houdini's performance on the Xeon workstations at work (Nvidia Quadro 2000).

Too bad - but at least I am able to test some of Houdini's functionality, which is promising.

Houdini 12 and AMD 7970 - disappointment June 21, 2012, 3:06 a.m.

No, vsync was turned off in those tests.

Anyway, as far as I am concerned, AMD's consumer cards currently outperform Nvidia's in most 3D apps with OpenGL-based viewports - unless we take the Quadro 6000 with its app-specific drivers into account. This, at least, is my conclusion after extensive testing.

I do agree that the situation was very different 4-5 years ago. But now - with Nvidia disabling hardware acceleration for certain OpenGL features in their consumer cards (4xx-6xx), such as double-sided lighting for polygons, while the current AMD drivers for the 5xxx-7xxx series seem unproblematic in most OpenGL viewports - the boundaries are much blurrier. Things are not made more transparent by most applications still using deprecated OpenGL functions, or, as in Houdini's case, actually using more modern features. My experience is that mileage varies quite a bit depending on the 3D app: it's a veritable minefield, with one app working better with AMD and the next with Nvidia. And let's not even talk about Mac OS X and its OpenGL.

And with GPU rendering, things are currently geared more towards CUDA than OpenCL (AMD's drivers still ‘suck’ in that regard) - but again, depending on the render engine this may not be the case (Luxrender, for example, works better with AMD's OpenCL).

Of course, you could argue that for production work one should buy a Quadro - but not all of us have that option financially.