AMD GPUs? Experiences?

tardigrad3
Member
19 posts
Joined: Oct. 2018
Hello Wizards!

I'm facing a real dilemma: should I buy the just-announced Radeon VII or an RTX 2070/2080? Within my budget the Radeon VII looks like a great opportunity to grab 16 GB of VRAM, and I've read decent articles about the Vega 64 and its OpenGL capability (the Radeon VII is an upgraded Vega, if I understand correctly).
Do any of you have experience with Vega GPUs? I'm planning to stick with Houdini as my main 3D platform; since version 17 I've come to see it as a great modeling tool too.
My software pack:
- Houdini
- Maya, 3ds Max, Blender
- ZBrush, Mudbox
- Substance (Designer, Painter)
- PS
- DaVinci Resolve

What I want to focus on:
- ZBrush, Substance (Designer, Painter), HOUDINI, Blender (plus PS and DaVinci Resolve, of course)
But mostly I want to focus on Houdini: modeling, simulations, UVing, retopo, rigging and animating, all in Houdini.

Mentioning my software pack may seem a bit off topic, but I can imagine it influencing your suggestions.

- I know I could benefit from the Radeon VII's memory
- I don't know how much I would lose by having no CUDA cores at all
- I have no idea how well AMD's drivers work with Houdini
- Looking at SideFX's officially supported GPU list, I see an R9 Fury, a 390, a 480, a bunch of FirePros and a Radeon Pro, but no Vega GPUs. Is there a reason for that, or have they just not been tested officially?

Thanks in advance to anyone who can answer from personal experience.

(edited: grammar)
Edited by tardigrad3 - Jan. 15, 2019 05:32:19
Member
135 posts
Joined: March 2018
I use two Vega 64 cards in eGPUs on my MacBook Pro. The viewport in Blender and Houdini works great as far as I'm concerned, and it's a massive difference from the built-in graphics, at least.
It should be even better on Windows, where the drivers allow more features.
I don't have actual performance numbers, or any idea whether a 2080 would be better, so it would be interesting to hear about that.
Potentially the Radeon VII will be very good for simulations, and it's probably awesome for video work. But who knows? Buy one and report back, please!
nAgApAvAn
Member
59 posts
Joined: Nov. 2014
tardigrad3
I suggest you ask the same question through the Report a Bug / RFE form to get support from the developers or the customer support team.
They are quick to help.
Houdini Fx Artist (Build)
fuos
Member
4189 posts
Joined: June 2012
tardigrad3
- I don't know how much I would lose by having no CUDA cores at all

You mainly lose in rendering. Houdini is for the most part graphics-card agnostic, though H17 Mantra has OptiX denoising that only runs on CUDA. Most people use Redshift to speed up rendering, and that also won't run on AMD hardware. Not being able to speed up your rendering workflow is a pretty big issue when Mantra is simply too slow.

Everything points to H17.5 not speeding up Mantra, with the possible exception of Intel's Embree tech, which uses the CPU to speed up raytracing.
tardigrad3
Member
19 posts
Joined: Oct. 2018
nAgApAvAn
Thanks so much for your suggestion, I'll do that!
tardigrad3
Member
19 posts
Joined: Oct. 2018
fuos

Hello Fuos!
Thanks for your answer! I had no clue that Mantra uses CUDA cores for denoising - thanks for the info! In that case it seems not that logical to go for the Radeon (I thought Mantra used just the CPU). I know about Redshift and Octane (Octane was my favorite GPU renderer, so yeah, if I take the green road again I'll end up there).
It seems a lot points in the direction of an RTX card. I'm a bit sad about it; I doubt I'll be able to get 16 GB of VRAM in the near future other than with the Radeon VII.
Unfortunately AMD ProRender can't do much with Houdini simulations; I'm not sure it can even handle volumes or particles (I'm not even sure there is a ProRender-to-Houdini plugin). The same goes for LuxRender/LuxCore, I doubt they even have a Houdini plugin. I'll google for info on whether it's possible to render Houdini simulations in LuxCore standalone.

I wanted to build a full AMD workstation, but the more info I gather about my software pack, the more it looks like I'll end up buying a 9900K with an RTX card. To be honest, I'm starting to feel sorry for AMD.
sanostol
Member
575 posts
Joined: Nov. 2005
Just the denoising runs on the GPU, and as far as I know only in the render view; it doesn't apply when rendering with Mantra proper. Not sure what would happen if you ran it on an animation.
fuos
Member
4189 posts
Joined: June 2012
@sanostol - luckily you can use OptiX in Mantra: in the Pixel Samples pull-down there is an option for the OptiX denoiser.
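If you want to push the same setting to several ROPs from Python, something like the sketch below should work inside a Houdini session. A sketch only: the node paths are hypothetical, and I'm assuming the pull-down writes into the Mantra ROP's pixel filter parameter (vm_pfilter), so pick the OptiX option in the UI once and read the string back rather than guessing it.

    import hou  # only available inside a running Houdini session

    # Hypothetical ROP where the OptiX option was picked in the UI:
    src = hou.node("/out/mantra1")
    token = src.parm("vm_pfilter").eval()  # read back what the pull-down wrote
    # Copy the same filter string onto other (hypothetical) Mantra ROPs:
    for path in ("/out/mantra2", "/out/mantra3"):
        hou.node(path).parm("vm_pfilter").set(token)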
sanostol
Member
575 posts
Joined: Nov. 2005
Aha, that's cool to know, thanks for the tip.
How does it work on animation? And what happens on pure CPU-based render nodes?
Edited by sanostol - Jan. 23, 2019 06:35:01
habernir
Member
83 posts
tardigrad3
Seems not that logical to go for the Radeon?

Look at this: https://www.youtube.com/watch?v=xVCE09flu94

Just a reminder: Mantra isn't a GPU-based renderer, and it's only a matter of time until Houdini gets an OpenCL renderer (I hope).
fuos
Member
4189 posts
Joined: June 2012
sanostol
Aha, that's cool to know, thanks for the tip.
How does it work on animation? And what happens on pure CPU-based render nodes?

OptiX works really well; it looks like an organic JPEG. CPU-only render nodes just skip the filter. Interestingly, we still find Neatnoise invaluable, as OptiX won't fix all noise, and it has amazing interframe technology that OptiX lacks. It's a good combination.
tardigrad3
Member
19 posts
Joined: Oct. 2018
habernir
Yes, I know Mantra is a CPU-based renderer.
Thanks for the YouTube link; yes, I saw that test before. Funny how the phrase “nomen est omen” works not just for names but also for colors: pyro and Threadripper, pyro and Radeon.

I wish you were right about an OpenCL-based Mantra, but I haven't heard even rumors about it… and the Radeon VII arrives in a week, so if I want to buy it, I need to see a clear, viable way to use it in my pipeline with some benefit over an Nvidia GPU. Houdini has Redshift (Nvidia), Octane (Nvidia) and Arnold (most likely CUDA-only, judging by their last demo). So basically there is no GPU-based render plugin for Houdini that can utilize OpenCL.
And now even Apple has muddled the scene with their “Metal” API.
And what's up with Radeon ProRender, we should ask… well, that's a mystery. It would be AMD's chance to compete in the GPU market in the content-creator segment, but so far they've messed it up.
tardigrad3
Member
19 posts
Joined: Oct. 2018
fuos
OptiX denoiser

And another Nvidia tech… poor AMD.
Also, I just read a rather hidden note about what was implemented in v17:
“Miscellaneous changes: improved Unified Noise, new Python API for custom viewport interaction
Other changes include updates to the Unified Noise VOP, which gets a new Periodic Noise system, plus initial 64-bit support and support for AVX.”
Well, AVX can make a huge difference on the Threadripper vs. Core X playing field:
https://www.anandtech.com/bench/product/2265?vs=2283
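If you want to check whether your own CPU actually exposes AVX/AVX2 before counting on any speedup, here's a minimal sketch (Linux-only, since it reads /proc/cpuinfo):

    # Minimal Linux-only sketch: look for the AVX flags in /proc/cpuinfo.
    with open("/proc/cpuinfo") as f:
        flags = next(line for line in f if line.startswith("flags")).split()
    print("AVX: ", "avx" in flags)
    print("AVX2:", "avx2" in flags)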

It seems AMD does well on raw horsepower, but when they need to back it up with tech, they can't counter Intel or Nvidia.
habernir
Member
83 posts
I think SideFX will never commit to one company (Nvidia), so if they develop a GPU renderer or GPU-based dynamics nodes, chances are it will be OpenCL and not CUDA.

AMD is promoting the Radeon VII as not just for gaming but also for DCC software, and they claim about 30% better performance in DCC apps; I don't know how that translates to Houdini, or whether it's true.

Don't forget you can export your project to other software (if you're planning to do that) and render there.

And Redshift is also developing an OpenCL version (that's what they said, but it will take time).

But if you want to use Redshift today and you have the money, go for a 2080 Ti or a 2080.
Edited by habernir - Jan. 24, 2019 09:20:53
fuos
Member
4189 posts
Joined: June 2012
Yep - GPUs are critical and worth considering. This setup has a Ryzen 2700X that gets all cores to ~4 GHz, and even in OpenCL-CPU mode it can't come close to a 1080 Ti for Pyro sims.
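If you want to run that comparison yourself, Houdini lets you steer its OpenCL sims between devices with an environment variable. A minimal sketch; HOUDINI_OCL_DEVICETYPE is a documented Houdini variable, and the .hip file name is just a placeholder:

    import os, subprocess

    # Launch the same pyro scene twice, once with OpenCL on the GPU and once
    # forced onto the CPU, then compare the sim cook times between sessions.
    for device in ("GPU", "CPU"):
        env = dict(os.environ, HOUDINI_OCL_DEVICETYPE=device)
        subprocess.run(["houdini", "pyro_test.hip"], env=env)  # placeholder scene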
penboack
Member
37 posts
Joined: Aug. 2011
tardigrad3
https://www.anandtech.com/bench/product/2265?vs=2283

I am considering building an AMD TR2 workstation based on the 2950X and have done quite a bit of research into this.

In your Intel vs. AMD comparison the AMD Threadripper CPU is almost half the price of the Intel CPU, so it is not a fair comparison, at least when considering the price/performance ratio.

A better comparison would be:
Anandtech CPU 2019 Benchmarks [www.anandtech.com]

Overall the AMD Threadripper CPUs have a much better price performance ratio than the Intel CPUs.

On the GPU side, the current generation of AMD GPUs (Polaris and Vega) is much less power-efficient and generates a lot more heat and noise than the Nvidia GTX 10xx and RTX 20xx series.
They do, however, come in somewhat cheaper.

There is currently a glut of GPUs resulting from the collapse in demand for crypto-mining and the launch of the new Nvidia RTX series, so it is possible to get GTX 10xx-series GPUs at very attractive prices.
By the time software supports the ray-tracing features of the Nvidia RTX series, Nvidia will probably have released upgraded GPUs, which may well have 16 GB.
Edited by Charles Kirk - Jan. 25, 2019 09:35:09
tardigrad3
Member
19 posts
Joined: Oct. 2018
penboack

Yes, you're right about the power and heat of AMD's Vega. But still… it's such an opportunity to buy a GPU with 16 GB of 1 TB/s memory at that price, with such great compute power! I keep reading up on AMD's ProRender and trying to convince myself that it actually works well and is ready to use (though of course I know it's not even close to Octane or Redshift).

I agree 100% that Threadrippers are the best price/performance option in a workstation! No doubt about it, given the surreally crazy price range of Intel's X-series CPUs.
But one thing is important, and I'm honestly crossing my fingers that AMD will handle it by the 3000 series: they perform way below Intel in single-core workflows. Every extrude, move, rotate, scale, bevel, chamfer… first goes through a single CPU core, and only then does it reach the GPU that sends it to the display. So for active modeling we still need good single-core performance. And I'm not even mentioning other software such as ZBrush (which I use a lot), where, despite superb multithreading support, a TR 1950X only reached the same ZBench score as a six-year-old i5.
That said, I should also share that I'm absolutely aiming for a TR 3000. I probably won't have the money to buy one (at launch they'll be priced close to Intel's X series, IMO: more cores, but a $1500+ range), so I'll wait just for its new generation of motherboards and drop a 1920X in, which I can upgrade over time.

Thanks so much for sending that link. You're right, if we compare these CPUs within the same price range, that's what we get, but be careful there: most of AnandTech's benchmark suite is multithread-based, so it will give you a skewed result when comparing a 12-core CPU to a 16-core one (24 threads vs. 32 threads). Anyway, your decision is a good one, the 2950X is a superior CPU! By the way, is it urgent? Can you wait for the new generation of motherboards? Even if you drop a 2950X into it, I hear the new TR motherboards will be more than a simple PCIe 4.0 upgrade - worth waiting for, IMO.
penboack
Member
37 posts
Joined: Aug. 2011
Moving from my current systems, two laptops (one based on an Intel 2nd-generation i7 4C/8T CPU running Windows 10, the other a MacBook Pro based on an Intel 3rd-generation i7 4C/8T), should give me around 5x better multi-core performance.

I don't find that single-core performance has much impact on my workflows, so that's not a main consideration for me.

The main bottlenecks in my current workflow are test renders in Planetside Terragen, where better multi-core performance would speed things up greatly by cutting test-render times by around a factor of 5, and working in Unreal Engine.

I don't use ZBrush, so I can't comment on ZBrush performance on TR; I evaluated it in the past and didn't get on with it. I'd probably look at 3DCoat if I wanted to do sculpting.

In general I only make decisions based on hardware that has been released and for which test results are available with software I use (or software I'd expect to perform similarly), so I tend to ignore all the rumours!
Brian
Member
8 posts
Joined: June 2018
Hi Tardigrad (and others).

In full disclosure, I should mention that I work for AMD on the ProRender software. The fact that we're watching this forum should give you a hint. There is no Houdini-to-ProRender plugin currently, but we are looking to enable the workflow through USD. We've had a USD-to-ProRender plugin for a while: https://www.amd.com/en/support/kb/release-notes/rn-prorender-usd-v0-8-5
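For anyone curious what that route looks like in practice: recent USD builds let usdview load an alternate Hydra render delegate, so a hypothetical invocation might look like the sketch below. "ProRender" is a placeholder for whatever name the plugin actually registers, and scene.usd is a placeholder stage.

    import subprocess

    # Hypothetical: open a USD stage in usdview with a non-default Hydra
    # render delegate selected via --renderer. Names are placeholders.
    subprocess.run(["usdview", "--renderer", "ProRender", "scene.usd"])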

Also, with regard to particles and volumes: we can actually do those quite well.

For users on the USD workflow in Houdini who are interested in testing the GPU renderer, we'd love to talk and get testers!

Brian
Member
636 posts
Joined: June 2006
Hi Brian,

Nice to hear this information.
Have you also been in contact with SESI, so that once Houdini has native USD your plugin will work when they ship the software?