Yes, I thought so myself - thank you very much for looking into this.
AMD has its work cut out for itself, since CUDA is also better supported in most applications.
Technical Discussion » Houdini 12 and AMD 7970 - disappointment
- Herbert123
Good news: the latest version works and does not crash.
Bad news: testing with a semi-complex object downloaded from
http://graphics.cs.williams.edu/data/meshes.xml [graphics.cs.williams.edu]
(furry ball object, 1.4 million vertices, 2.8 million faces)
I get about 2-3 fps in the viewport in object mode. That can't be right: LW11, Cinema 4D, Blender and Softimage all handle this object without any problem (Blender: 100 fps). Softimage is almost silly: with 22 copies of the object it still runs a smooth viewport. Even Blender's viewport runs at 10 fps with 12 copies.
Selection is also extremely laggy: in object mode it takes 7-8 seconds before the furry ball gets selected. None of the other apps have this issue either.
And to make matters worse: entering edit mode and selecting a vertex causes a pause of a minute or two, after which Houdini crashes the AMD drivers, which takes Houdini down with them (obviously).
So, what can I do to improve viewport performance? I tend to deal with complex objects, and I wonder if there is a setting I am missing. I looked for a VBO (vertex buffer object) setting and tried all the different view modes, but all of them run at the same 2-3 fps - this must be CPU-limited, not GPU-limited.
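For what it's worth, a quick back-of-the-envelope calculation supports the CPU-limited hunch (the GPU throughput figure below is an assumed order of magnitude for a 2012-era high-end card, not a measured spec):

```python
# Rough sanity check: is 2-3 fps on a 1.4M-vertex mesh plausibly
# GPU-limited? Numbers for the mesh and framerate are from the post;
# the GPU vertex rate is an assumption.

verts = 1_400_000          # furry ball vertex count
fps_seen = 2.5             # roughly the 2-3 fps observed in Houdini

# Effective throughput if the mesh is processed every frame:
observed = verts * fps_seen                       # vertices per second
print(f"observed: {observed / 1e6:.1f} M verts/s")

# A high-end GPU of that era transforms on the order of 1e9 simple
# vertices per second (assumed), so the card itself is nowhere near
# saturated - the bottleneck must be elsewhere (CPU/driver side):
assumed_gpu_rate = 1e9
print(f"assumed GPU utilisation: {observed / assumed_gpu_rate:.2%}")
```

At well under one percent of the assumed vertex rate, the card is essentially idle, which is consistent with a CPU- or driver-side bottleneck rather than a GPU one.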
Or perhaps Houdini and AMD GPUs simply don't play well together? Next week I will check Houdini's performance on the Xeon workstations at work (Nvidia Quadro 2000).
Too bad - but at least I am able to test some of the functionality of Houdini, which is promising.
Technical Discussion » Houdini 12 and AMD 7970 - disappointment
- Herbert123
No, vsync was turned off in those tests.
Anyway, as far as I am concerned, AMD's consumer cards currently outperform Nvidia's in most 3D apps with OpenGL-based viewports - unless we take the Quadro 6000 with app-specific drivers into account. This, at least, is my conclusion after extensive testing.
I do agree that the situation was very different 4-5 years ago - but now, with Nvidia disabling hardware acceleration for certain OpenGL features in their consumer cards (4xx-6xx), such as double-sided lighting for polygons, and the current AMD drivers for the 5xxx-7xxx series seemingly unproblematic in most OpenGL viewports, the boundaries are much blurrier. Things are not made more transparent by most applications still using deprecated OpenGL functions or, as in Houdini's case, actually using more modern features. My experience is that mileage may vary quite a bit depending on the 3D app used. It's a veritable minefield, with one app working better with AMD and the next with Nvidia. And let's not talk about Mac OS X and its OpenGL.
And with GPU rendering, things are currently more geared towards CUDA than OpenCL (AMD drivers still ‘suck’ in that regard) - but again, depending on the render engine that may not be the case (for example, LuxRender works better with AMD's OpenCL).
Of course, you could argue that for production work one should buy a Quadro - but not all of us have that option financially.
Technical Discussion » Houdini 12 and AMD 7970 - disappointment
- Herbert123
So, explain this to me:
In Softimage I get 160 fps in shaded mode with the benchmark scene script found here (~4 million verts/polys):
http://www.xsibase.com/forum/index.php?board=9;action=display;threadid=45465;start=240 [xsibase.com]
The Quadro 4000 and GTX 580 achieve a paltry 60 fps. Softimage runs rock-solid on my card; in fact, all the 7xxx cards run wonderfully fast in Softimage.
Blender: again, great performance. I can go up to 34 million faces and still have a workable viewport. On consumer Nvidia cards (4xx up to 6xx), double-sided lighting is no longer hardware-accelerated (thanks, Nvidia), so unless you turn off double-sided lighting for all meshes, performance is on par with, or worse than, an ancient 8800 GT. And even with it turned off, Nvidia consumer cards cannot compete with ATI cards in solid and wireframe mode (though textured mode is faster on Nvidia cards in Blender).
At work I tested Quadro 2000/4000 cards in Blender, and their performance was lackluster (granted, rather “low-end” models), though they functioned well for both Maya and Max (special drivers).
Cinebench: I get 70 fps, which is faster than a Quadro 6000:
http://www.cgchannel.com/2011/10/review-professional-gpus-nvidia-vs-amd-2011/ [cgchannel.com]
Actually, Cinema 4D has always favoured ATI/AMD cards in terms of viewport performance, so C4D users recommend them over Nvidia.
Lightwave 11: good performance, and no issues at all.
Modo: same.
I have been working with Cinema 4D, Blender and Lightwave, as well as other 3D apps, for the last five years, and found no issues with ATI/AMD cards at all (except for a selection lag in Blender that was quickly resolved by a patch: object selection is now fastest in Blender, even compared to Maya, Max, Modo and C4D).
Anyway, I am ranting here. I have been working professionally with most of these apps for some time and found no major issues with AMD cards: quite the contrary, the current “consumer” 7970 outperforms all the graphics cards I have worked with so far - including several Quadros at work.
Unless special drivers exist (such as the ones for max and maya), Quadros may not be the best answer.
Which brings me back to my original question: I tried the tip to downgrade the viewport OpenGL mode, and for a couple of minutes it seemed to work. However, I experience ATI driver crashes all the time, so it is not really a solution.
Hopefully this will be resolved when the next drivers come out (in a couple of months?). I do understand Houdini is using more modern OpenGL functions, so it might be worth the wait. It is just a shame that one of the most powerful graphics cards on the market (with OpenCL performance blowing Nvidia cards out of the water as well) cannot be used with Houdini at this point.
pbowmar
I'd recommend not blaming Houdini. I know that SESI have done a lot of work to make Houdini play better with ATI (AMD) graphics. The reality is: professionals use Nvidia, and Houdini is a professional application. The reason is that Nvidia's drivers are designed for production use, while ATI (AMD) have consistently, for years and years, issued bad drivers intended for home use.
I'm sorry you're having trouble. Are you able to return the card and get an Nvidia card? You'll be a lot happier.
Cheers,
Peter B
Technical Discussion » Houdini 12 and AMD 7970 - disappointment
- Herbert123
Since I read great things about Houdini 12 and its improved viewport, I installed both the latest build and the stable production one. They both crash on my system (Fatal error: Segmentation Fault).
From some research, it seems Houdini 12 does not play nicely with the later drivers, and vice versa. I tried 12.4 and 12.6. Downgrading to an older driver version is not an option for me.
What is it these days with all the driver/OpenGL issues in 3D software? Are we forced to buy overpriced Quadros, which in my experience do not improve viewport performance at all unless special drivers are written (such as for Max and Maya)?
I downloaded Houdini with great expectations, and I have been downloading trials/free educational versions of all commercial 3D applications to decide which one to purchase for a complex project. Houdini is the *only* one that does not work. Please fix this, Side Effects - you are losing customers.