Nvidia RTX

User Avatar
Member
1755 posts
Joined: March 2014
Offline
Preface: I'm not an RTX card owner (not even possible, considering the date of this post) and not a GPU render engine user (Redshift, Octane, etc.).

The purpose here is to speculate (because that's almost the same as imagining, which is always fun) before the numbers are in. Once they are, early adopters of these cards are encouraged to post their personal experiences, and hopefully comparisons with the previous Nvidia card generation, as graphs/numbers.

Most, if not all, YT review channels and “tech” websites are testing against games and irrelevant benchmarks, so I think we need our own corner here to make some waves.

Sacrificial lambs (early adopters) are going to be drafted for the afterlife lottery*.

*there's no guaranteed minimum number of winners
Edited by anon_user_89151269 - Sept. 19, 2018 12:27:03
User Avatar
Member
833 posts
Joined: Jan. 2018
Offline
Ok, I'll play.

I'll speculate that people like me who are already heavily invested in 10xx GPUs aren't particularly eager to eBay all of our cards for a fraction of what we paid and spend another small fortune on 20xx GPUs without some hard data and compelling evidence. A 20-30% decrease in render times, as recently stated by both Redshift and Chaos Group devs, might not be compelling enough for some of us.

I think early adopters might also be frustrated by how long it takes before apps actually make full use of this new technology. Supposedly Redshift 3.0 will begin to take advantage of RTX, but judging by the pace of the latest updates, I wouldn't expect the new version to be out and usable before March or April of next year; and even then, it might be years before the real advantages of this technology are fully leveraged.

I think gamers will be equally frustrated waiting for games that use the new technology in these cards, and when they finally get them, they may decide that physically accurate shadows and reflections aren't all they're cracked up to be.

From a non-GPU-renderer user's point of view, I'm not sure what the advantages will be. Perhaps some acceleration for computing simulations? But there is plenty of untapped potential for that in 10xx-gen GPUs already, and Houdini's implementation of it is limited at the moment. Maybe next week will bring some interesting new developments with the H17 announcement?
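(For anyone who does want to point Houdini's OpenCL sims at a particular GPU today, here's a minimal sketch using Houdini's documented OpenCL environment overrides; the device number is a hypothetical second card in the machine, not a recommendation.)

```python
# Minimal sketch: steer Houdini's OpenCL nodes to a specific GPU before launch.
# HOUDINI_OCL_DEVICETYPE / HOUDINI_OCL_DEVICENUMBER are Houdini's OpenCL
# overrides; device 1 is a hypothetical second GPU.
import os
import subprocess

env = os.environ.copy()
env["HOUDINI_OCL_DEVICETYPE"] = "GPU"   # run OpenCL work on a GPU, not the CPU
env["HOUDINI_OCL_DEVICENUMBER"] = "1"   # hypothetical: pick the second GPU

# Launch Houdini with the overrides (assumes 'houdini' is on the PATH).
subprocess.run(["houdini"], env=env)
```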

All in all, I feel that the benefit to early-adopters will be limited for quite some time.

For me, it's a wait-and-see game, but I really can't imagine investing in the new RTX cards until at least this time next year. I'm pretty damn happy with the performance of my 1080 Tis at the moment, and for the price of a single 2080 Ti I can buy two 1080 Tis, which, based on what I've read, would yield a greater overall increase in render speed.
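As a back-of-the-envelope sketch of that math (the prices and per-card speedup below are placeholder assumptions, not quotes or benchmarks, and multi-GPU scaling is assumed near-linear):

```python
# Price/performance sketch: 1x 2080 Ti vs 2x 1080 Ti.
# All numbers are illustrative assumptions, not measurements.
price_2080ti = 1200.0   # assumed price for one 2080 Ti, USD
price_1080ti = 600.0    # assumed price for one 1080 Ti, USD

speed_1080ti = 1.0      # baseline render throughput for one 1080 Ti
speed_2080ti = 1.3      # assumes the ~30% per-card gain quoted above

throughput_one_2080ti = speed_2080ti                 # ~1.3x baseline
throughput_two_1080ti = 2 * speed_1080ti * 0.95      # assumes ~5% multi-GPU overhead

print(f"1x 2080 Ti: {throughput_one_2080ti / price_2080ti * 1000:.2f} throughput per $1000")
print(f"2x 1080 Ti: {throughput_two_1080ti / (2 * price_1080ti) * 1000:.2f} throughput per $1000")
# Under these assumptions, two 1080 Tis give ~1.9x baseline vs ~1.3x,
# at roughly the same total cost.
```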
>>Kays
For my Houdini tutorials and more visit:
https://www.youtube.com/c/RightBrainedTutorials [www.youtube.com]
User Avatar
Member
2036 posts
Joined: Sept. 2015
Offline
Your topic got me interested in doing some more reading on these new cards.

I was watching Entagma's video last night on their render engine discussion, and Manuel was giving his reasons for preferring GPU rendering.

And today I noticed this ‘new’ feature of the 2080s: ‘live’ raytracing (although I can't pretend to know if this has any significance).

I guess if Octane/Redshift can make use of the live raytracing feature, then along with what I remember Manuel saying, it seems that if someone isn't already using GPU rendering but was considering jumping in, now might be the time.
User Avatar
Member
833 posts
Joined: Jan. 2018
Offline
BabaJ
And today I noticed this ‘new’ feature of the 2080s: ‘live’ raytracing (although I can't pretend to know if this has any significance).


Most of that is marketing-speak, largely geared toward gaming applications, Unreal Engine, etc.

No doubt the new GPUs will be faster than the current generation, but at this point nobody knows exactly by how much, except a handful of devs who have been working with pre-release hardware and SDKs.
>>Kays
For my Houdini tutorials and more visit:
https://www.youtube.com/c/RightBrainedTutorials [www.youtube.com]
User Avatar
Member
7737 posts
Joined: Sept. 2011
Offline
The RTX series has hardware acceleration for BVH generation and intersection. Nvidia claims 6x the speed of the software approach required on previous generations. Whether production renderers achieve this remains to be seen. Raytracing is likely to see more than the incremental “generation speedbump” increase seen in compute/shading/gaming use. Or perhaps the bottleneck just moves elsewhere and we only see marginal (less than 2x) gains.
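To make the “bottleneck moves elsewhere” point concrete, here's a minimal Amdahl's-law sketch: a 6x speedup on intersection only helps in proportion to the fraction of frame time intersection actually occupies. The fractions below are illustrative guesses, not measurements.

```python
# Amdahl's-law sketch: overall speedup when only the ray/BVH intersection
# portion of a render is accelerated. The 'p' values are guesses.
def overall_speedup(p_intersect: float, accel: float = 6.0) -> float:
    """Overall speedup if fraction p_intersect of frame time is sped up by accel."""
    return 1.0 / ((1.0 - p_intersect) + p_intersect / accel)

for p in (0.3, 0.5, 0.8):
    print(f"intersection = {p:.0%} of frame time -> {overall_speedup(p):.2f}x overall")
# 30% -> 1.33x, 50% -> 1.71x, 80% -> 3.00x: if shading dominates,
# total gains stay well under the headline 6x.
```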
Edited by jsmack - Sept. 19, 2018 21:55:13
User Avatar
Staff
5156 posts
Joined: July 2005
Offline
Good high-level summary of the RTX “cores”, if you're interested:

https://techreport.com/review/34095/popping-the-hood-on-nvidia-turing-architecture/2 [techreport.com]
User Avatar
Member
2036 posts
Joined: Sept. 2015
Offline
In reference to twod's article link, there's an emphasis on the gamer end-user for this:

Nvidia pre-trains NGX models in its own data centers and provides them to end users as “neural services” by way of GeForce Experience.

I wonder how large an end-user population they would need to do something like training for specific areas like point/particle sims or volumetric (voxel) rendering, rather than just polygon-based rendering.


Edit addendum: Wait… there's a Quadro RTX 8000 too, with 48GB. I don't think I have to check the price on that, other than I think I'd need to buy a lotto ticket.
Edited by BabaJ - Sept. 20, 2018 11:23:09
User Avatar
Member
1755 posts
Joined: March 2014
Offline
The Quadro RTX 8000 is $10k.
Thanks, Mark, for the link.
User Avatar
Member
47 posts
Joined: July 2017
Offline
With the new NVLink, two GPUs can share their memory. This sounds very interesting in theory: more or less double the available VRAM. Will OpenCL simulations support this?
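No authoritative answer here, but one quick way to see what the driver actually exposes is to list OpenCL devices and the memory each one reports; a minimal sketch assuming the third-party pyopencl package is installed. Whether NVLink-pooled VRAM would ever show up in these numbers depends entirely on the driver, not on this code.

```python
# Minimal sketch (assumes 'pyopencl' is installed): list every OpenCL
# device and the global memory it reports to applications.
import pyopencl as cl

for platform in cl.get_platforms():
    for device in platform.get_devices():
        gib = device.global_mem_size / (1024 ** 3)
        print(f"{platform.name} / {device.name}: {gib:.1f} GiB global memory")
```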
User Avatar
Member
2199 posts
Joined: July 2005
Online
Two interesting announcements at CES:
1. Autodesk will have Arnold rendering accelerated by RTX cards.
2. RTX laptops are coming Jan. 29th.

https://www.nvidia.com/en-gb/geforce/gaming-laptops/20-series/?url=https://www.nvidia.com/en-gb/geforce/gaming-laptops/20-series/&ncid=pa-pai-70739#cid=Paid_GEFORCE_UK_20190107_Google_Search_2060_Laptop
The trick is finding just the right hammer for every screw
User Avatar
Member
1 posts
Joined: Nov. 2015
Offline
Hi guys! Could you recommend which graphics card to choose for Houdini? I'm considering the GTX 1080 Ti and the RTX 2070 SUPER; they cost the same. I render with Mantra and Redshift. I've been advised to get the RTX 2070 SUPER for Redshift, but will it also accelerate Pyro and FLIP simulations? Thanks!
User Avatar
Member
833 posts
Joined: Jan. 2018
Offline
Get the 2070 Super; if the price is the same, you're getting newer technology. I think the Houdini benefits will be about the same either way. The pyro acceleration will be OK, but not in the same way that Redshift accelerates renders.
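For reference, that pyro acceleration is just the OpenCL toggle on the solver. A minimal sketch from Houdini's Python shell, where the node path is hypothetical and ‘opencl’ is the toggle's parm name as on the stock smoke/pyro solvers:

```python
# Minimal sketch: enable OpenCL on a pyro sim via Houdini's hou module.
import hou

solver = hou.node("/obj/pyro_sim/pyrosolver1")  # hypothetical node path
if solver is not None and solver.parm("opencl") is not None:
    solver.parm("opencl").set(1)  # run the solver's field operations on the GPU
    print("OpenCL enabled on", solver.path())
```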
>>Kays
For my Houdini tutorials and more visit:
https://www.youtube.com/c/RightBrainedTutorials [www.youtube.com]