arnold or redshift? h18

stephan6
Member
12 posts
Joined: Jan. 2018
goat
@stephan6 thank you for rejoining the discussion. Unfortunately, it feels like you are going down the same path as @Daryl.

And what exactly do you mean by that?
Actually, I consider myself a vocal critic of RS on the RS forums; I disagree with plenty of things there, so I'm by far no RS evangelist, in case that's what you meant. I still use Arnold on an (almost) daily basis, and have since its beta days, and I have used many renderers in production over my career (PRMan, Air, MR, even Entropy, to name some old farts).

I'm just a strong believer that what you guys call realism is mostly achieved in compositing, even today. You can single out bad CG rendered with various renderers all day long, including Arnold and RenderMan. That doesn't make one more “plastic looking” than another.

goat
A renderer is far more than its MCPT kernel; it is the sum of all its parts, as you have pointed out.

As an analogy, you have been saying that running an Intel processor makes the platform of choice the same, because the heart of the system is the same. macOS, Linux and Windows users would like to have a word with you.

Well, would you say an email written on Windows isn't sent the same as one written on Linux? Or that a copied file is somehow different? Or that a frame renders differently? Look, I don't think this analogy works to make your point at all; even in your example the end result is the same, though the time and the way you get there certainly differ.
goat
Member
4189 posts
Joined: June 2012
@stephan6 So I'm curious then: your premise is that the final output is all that counts. Would you define Houdini and Maya as the same thing? After all, all they do is manipulate 3D meshes.

You appear to ignore the toolsets and process, and only look at outcomes.
stephan6
Member
12 posts
Joined: Jan. 2018
goat
@stephan6 So I'm curious then: your premise is that the final output is all that counts. Would you define Houdini and Maya as the same thing? After all, all they do is manipulate 3D meshes.

You appear to ignore the toolsets and process, and only look at outcomes.

Please don't strawman me. You claimed everything out of Redshift looks like plastic / looks bad; I begged to differ. That's what this discussion was all about. Would you claim everything created in Maya looks bad and everything out of Houdini looks great? The process is, as you may know, quite different, no? But again, that thinking doesn't even apply here, because renderers are far more similar to each other these days than Maya is to Houdini. That's my whole point.

And yes, output is all that counts. Nobody in the real world (clients, directors) cares about the process, in case you hadn't noticed.
Midphase
Member
833 posts
Joined: Jan. 2018
stephan6
The simple problem with your comparison is that we are not talking about sensors and lenses capturing light (analog); we are simply talking about math. Renderers work linearly, use the same base rendering algorithm (path tracing: http://graphics.stanford.edu/courses/cs348b-10/lectures/path/path.pdf), mostly the same shading models (GGX), the same pixel filters (Gaussian, for example) and the same light falloffs (inverse square).

Well, that's a romantic thought.

The truth is that the results do differ, which is why final renders from Arnold, Mantra, V-Ray and other unbiased renderers look different even with pipelines kept as similar as possible.

Want to see just how much 0s and 1s can differ? Take a .wav file, play it from Pro Tools, play it from Logic Pro, play it from Cubase, play it in Nuendo, play it in QuickTime Player, play it in VLC, play it in iTunes, and hear the difference for yourself. Same data, different audible results. If you're bored during the quarantine, you can spend hours reading people arguing about the differences (perceptual or otherwise) in many threads at https://www.gearslutz.com
>>Kays
For my Houdini tutorials and more visit:
https://www.youtube.com/c/RightBrainedTutorials
stephan6
Member
12 posts
Joined: Jan. 2018
Midphase
stephan6
The simple problem with your comparison is that we are not talking about sensors and lenses capturing light (analog); we are simply talking about math. Renderers work linearly, use the same base rendering algorithm (path tracing: http://graphics.stanford.edu/courses/cs348b-10/lectures/path/path.pdf), mostly the same shading models (GGX), the same pixel filters (Gaussian, for example) and the same light falloffs (inverse square).

Well, that's a romantic thought.

The truth is that the results do differ, which is why final renders from Arnold, Mantra, V-Ray and other unbiased renderers look different even with pipelines kept as similar as possible.

Want to see just how much 0s and 1s can differ? Take a .wav file, play it from Pro Tools, play it from Logic Pro, play it from Cubase, play it in Nuendo, play it in QuickTime Player, play it in VLC, play it in iTunes, and hear the difference for yourself. Same data, different audible results. If you're bored during the quarantine, you can spend hours reading people arguing about the differences (perceptual or otherwise) in many threads at https://www.gearslutz.com

Again, a poor comparison. A digital signal gets translated into an analog waveform. Would you say a standard image looks different when opened in Photoshop, an image viewer, or another paint program?

/edit: just to bring this to a close: this back and forth won't lead to any conclusion. You can choose your renderer by the “look” it produces; more power to you if that works for you. I'll continue to choose the most fitting renderer for the task based on project demands. We'll simply have to agree to disagree here.

best, –s
Edited by stephan6 - March 21, 2020 13:38:48
BabaJ
Member
2038 posts
Joined: Sept. 2015
Again, a poor comparison. A digital signal gets translated into an analog waveform. Would you say a standard image looks different when opened in Photoshop, an image viewer, or another paint program?

I think you're missing the point of what others are trying to convey to you, because even a digital signal translated to another digital signal, let alone to analogue output, does change.

The same *.mp4 or *.avi file looks different whether it's viewed in my Switch, VLC, Windows Media Player or iTunes.

And don't say, “yeah, well, maybe it's just different codecs being employed and different default resolutions.”

Because that's the point: just because all renderers may be using the same underlying math, that doesn't mean they are all using the same written code to convert that math into something that can be utilized.

Same math does not mean same code - nor does it mean same end results.

In this case the math is analogue, the code is digital. At some point (as one example), decisions have to be made on how and where to handle potential floating point errors when writing the code for those rendering principles you are referring to. It's not all going to be done in the same way, nor give the same results.
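
To make that concrete, here's a toy example in plain Python (nothing renderer-specific, just ordinary floating point) showing that two mathematically identical sums, evaluated in different orders, already disagree:

    # Floating point addition is not associative, so the order in which
    # values are accumulated changes the result. Same math, different code.
    samples = [1e16, 1.0, -1e16, 0.25] * 1000

    forward = 0.0
    for s in samples:
        forward += s

    backward = 0.0
    for s in reversed(samples):
        backward += s

    print(forward)   # 0.25
    print(backward)  # 0.0
    # The exact sum is 1250.0; both orders are wrong, and wrong differently.

Now scale that up to the thousands of accumulation and rounding decisions inside a renderer, and identical pixels are off the table.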
Wren
Member
527 posts
Joined: July 2005
As with a lot of CG, the principles are out there for everyone, floating around in math/CG papers. It's really the application of them, and specifically the workflows people come up with, that has value.

E.g., look at 3Delight's easy-breezy creation and management of AOVs.

P.S. We have been using the new 3Delight in production and it's pretty awesome, FYI.
soho vfx
stephan6
Member
12 posts
Joined: Jan. 2018
BabaJ
Again, a poor comparison. A digital signal gets translated into an analog waveform. Would you say a standard image looks different when opened in Photoshop, an image viewer, or another paint program?

I think you're missing the point of what others are trying to convey to you, because even a digital signal translated to another digital signal, let alone to analogue output, does change.

The same *.mp4 or *.avi file looks different whether it's viewed in my Switch, VLC, Windows Media Player or iTunes.

And don't say, “yeah, well, maybe it's just different codecs being employed and different default resolutions.”

Because that's the point: just because all renderers may be using the same underlying math, that doesn't mean they are all using the same written code to convert that math into something that can be utilized.

Same math does not mean same code - nor does it mean same end results.

In this case the math is analogue, the code is digital. At some point (as one example), decisions have to be made on how and where to handle potential floating point errors when writing the code for those rendering principles you are referring to. It's not all going to be done in the same way, nor give the same results.

Jesus. I'm simply not accepting the claim that one modern path tracer renders “real” or “awesome”, as one user put it, while another with a very similar feature set renders bad or “plastic”, as someone else phrased it. That's it; I hope I've made my point clear. Several studios I've worked for have switched from Arnold (with 20-40 existing licenses) to Redshift, or added RS to their arsenal (doing commercials, I might add); by your logic they would all be stupid, because now everything suddenly looks bad.
stephan6
Member
12 posts
Joined: Jan. 2018
Wren
As with a lot of CG, the principles are out there for everyone, floating around in math/CG papers. It's really the application of them, and specifically the workflows people come up with, that has value.

E.g., look at 3Delight's easy-breezy creation and management of AOVs.

P.S. We have been using the new 3Delight in production and it's pretty awesome, FYI.

I've been testing it in Houdini for the last two weeks, and I love it too! A limited but awesome experience so far.
tinyhawkus
Member
236 posts
Joined: March 2013
In regards to the OG question: I'd choose Arnold over RS for your volumes and FLIP fluid meshes. It's much more mature and has all the control you want, scattering, etc. You also won't hit memory limits with your volumes.

What is the spec of your machine? If you are looking at pyro stuff, well, 3Delight is around 5x faster than Arnold on average, but the plugin (not the core renderer) is beta and has a couple of gotchas here and there. As far as volumes go, you are ready right now, and you can always pop onto the Discord forum and we will be happy to guide you through any troubles you may have. The other benefit is that there is a free 12-thread license, with no watermark, that can be used for commercial work, and an unlimited-core license with 12 months of updates and support for $360.

Arnold volumes are great, and you won't have many issues with them at all. Give them both a try.

Daryl, GPU is not the future of rendering. It is great for what it is targeted at, but in no way is it coming to any decent-sized production any time soon. As time goes by, audience requirements go up and on-screen complexity goes up. It's just that simple.

Mantra is on life support, but that's fine; she's well featured and more than does the job rendering any garbage you send her. She just takes her sweet time doing it.

Karma? No, that is years away from even being thought of as a production renderer by anyone I know.

Daryl, regarding Hydra delegates: the notion of using Hydra as a path to final frame renders is insane, and not something anyone is seriously contemplating at all. Treat Hydra delegates as a nice viewport IPR to get a feel for your scene and for general look dev. Then it's off to final frame rendering via the usual path.

Peace.

Lewis
I'm not lying, I'm writing fiction with my mouth.
pickled
Member
1755 posts
Joined: March 2014
So the take-home is that, even though one can devise benchmarks by which these engines can be objectively compared, it comes down to what your needs are and how satisfying the engines are to work with.

Absent concrete examples, like the one posted previously, which can be analyzed in a manner conducive to better decision-making about which engine to adopt based on needs, this discussion is likely to be of no practical use to anyone looking for useful info. That is, if they prefer seeing numbers instead of anecdotes.

Therefore, IMHO, if this thread is to be of any use, it should morph into something more data-focused (scene comparisons across renderers, render times, scene file sharing, etc.) rather than experience-focused blabber.
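
The timing side of that could be as simple as a few lines of hython. A rough sketch (the scene file and ROP paths are hypothetical placeholders for whatever shared test setup people agree on):

    import time
    import hou  # Houdini's Python module; run this under hython

    hou.hipFile.load("benchmark_scene.hip")  # hypothetical shared test scene

    # Hypothetical ROP paths, one per engine being compared.
    for rop_path in ["/out/mantra1", "/out/arnold1", "/out/Redshift_ROP1"]:
        rop = hou.node(rop_path)
        start = time.time()
        rop.render()  # blocks until the render finishes
        print(rop_path, "took %.1f s" % (time.time() - start))

That, plus the shared scene file and the resulting images, would give people actual numbers to argue about.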
Midphase
Member
833 posts
Joined: Jan. 2018
pickled
it should morph into something more data-focused (scene comparisons across renderers, render times, scene file sharing, etc.)


For anyone interested, Yuichiro Yama has been doing exactly that, with some very interesting results. Check out his channel here:
https://www.youtube.com/user/yu1roh2009/videos
Edited by Midphase - March 22, 2020 20:10:51
pickled
Member
1755 posts
Joined: March 2014
Cool! This makes me think (and this could be construed as an indictment of the OP, now that we've accrued sufficient info about how interested he is in learning as much as possible before pulling the trigger, which is apparently not very) that most people asking for advice are actually looking for confirmation rather than actual data-driven info.
Unfortunately, no surprise, TBH…
Member
644 posts
Joined: Aug. 2013
For me, the most important thing I look for to help things appear more “photographic” is a lot of diffuse light bounces. Not many engines render true caustics, but they really make anything with metal, glass or water in the scene much more convincing. My other pet hate is render engines not being able to split the color spectrum properly in caustics. I do a lot of swimming (or did, until all the pools were closed in England a few days ago), and looking at the bottom of the water I can clearly see the color spectrum splitting in the caustic patterns on the floor of the pool. I guess I am talking about future generations of renderers for all these features. As far as I know, Manuka (Weta's renderer) is the only one that can render spectrally on the CPU (I think there are some commercial products that can do this on the GPU).
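
That spectrum splitting is just wavelength-dependent refraction. A back-of-the-envelope sketch in Python (the water dispersion values are rough approximations, not from any renderer):

    import math

    # Approximate refractive indices of water across the visible range
    # (rough values; real dispersion data varies with temperature and fit).
    n_water = {"red (650nm)": 1.331, "green (550nm)": 1.333, "blue (450nm)": 1.337}

    incidence = math.radians(45.0)  # incoming ray angle, in air
    for color, n in n_water.items():
        refracted = math.asin(math.sin(incidence) / n)  # Snell's law
        print(color, "refracts at %.2f degrees" % math.degrees(refracted))

Each wavelength bends by a slightly different amount, which is exactly the split you see on the pool floor; a renderer tracing one averaged RGB ray can't reproduce it, while a spectral renderer gets it essentially for free.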

Best

Mark
TwinSnakes007
Member
603 posts
Joined: July 2013
tinyhawkus
In regards to the OG question: I'd choose Arnold over RS for your volumes and FLIP fluid meshes. It's much more mature and has all the control you want, scattering, etc. You also won't hit memory limits with your volumes.

RS volume tech lacks multiple scattering, so it can't get that depth look right now - fair point.

tinyhawkus
If you are looking at pyro stuff, well, 3Delight is around 5x faster

Yes, ever since 3Delight dropped, FB has been flooded with VDB renders, but that was until EmberGen stole the spotlight. The next EmberGen drop in the next few weeks will cement its place in the lead for VDB sims, at least. They just added HDR and other realtime render improvements, but I dunno how viable those are, since I haven't tested them.

tinyhawkus
Daryl, GPU is not the future of rendering. It is great for what it is targeted at, but in no way is it coming to any decent-sized production any time soon.

Could not disagree with you more: the whole industry is moving that way, to the GPU. The biggest hurdle is VRAM, but the next PCIe generation will lower that barrier, and you'll begin to see synergy between RAM and VRAM.

tinyhawkus
Daryl, regarding Hydra delegates: the notion of using Hydra as a path to final frame renders is insane, and not something anyone is seriously contemplating at all. Treat Hydra delegates as a nice viewport IPR to get a feel for your scene and for general look dev. Then it's off to final frame rendering via the usual path.

That's not my understanding of what USD represents in its totality to the industry, or of the reason SideFX invested so heavily in creating this new context called LOPs.

That is EXACTLY what SideFX (and several other Hydra-compliant engines) are contemplating: final frame rendering via USD (more specifically, Hydra). If, as you say, production GPU is not the end goal, then why create Karma at all? SideFX already has a CPU engine!

The entire point of USD/Hydra is the ‘U’ in USD, and it's the same reason we have a crap-ton of render engine options in Houdini right now, more than we've ever had: Hydra!

It's so that you can author fully described 3D scenes in USD, preview them in the Hydra viewport (along with all your AOVs, right there in the same viewport), and render them on the command line through the Hydra interface for final frame rendering. The Hydra interface I preview the scene with is the exact same Hydra interface used to render the final frame on the command line. That's the point, and that's what justifies all the upheaval that LOPs brought.
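
To see how delegate-agnostic that interface is, here's a small sketch (run under hython, or any Python build with Pixar's pxr modules on the path; details can shift between USD versions) that simply asks Hydra which delegates it can see:

    from pxr import UsdImagingGL

    # Ask Hydra for every render delegate plugin it can find. Storm,
    # Karma, and any third-party delegates on the plugin path all show
    # up through the exact same interface.
    for plugin_id in UsdImagingGL.Engine.GetRendererPlugins():
        name = UsdImagingGL.Engine.GetRendererDisplayName(plugin_id)
        print(plugin_id, "->", name)

Final frames then go out through that same interface from the command line, e.g. via usdrecord (or Houdini's husk); the exact flags vary by build, so check your docs.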

The best indicator of what's to come on GPU is Storm. It's the only Hydra delegate that supports GPU USD (directly resolving the USD stage on the GPU); it is lightning fast, and it recently got volume support (bye-bye 3Delight, even more so). I asked SideFX when they'd update to the new USD spec, and they said the next major Houdini version drop, so we'll have to wait on that to play with it in Houdini.

In the end, use whatever you want to create your pixels.
Edited by TwinSnakes007 - March 23, 2020 10:43:54
Houdini Indie
Karma/Redshift 3D
TwinSnakes007
Member
603 posts
Joined: July 2013
I mean, think about it: why is Mantra abandoned? Why is MPlay abandoned? Simple: Hydra!
Midphase
Member
833 posts
Joined: Jan. 2018
Daryl Dunlap
The best indicator of what's to come on GPU is Storm


I can't find any info on Storm; are you referring to FStorm?
TwinSnakes007
Member
603 posts
Joined: July 2013
Midphase
Daryl Dunlap
The best indicator of what's to come on GPU is Storm


I can't find any info on Storm; are you referring to FStorm?

No, the “Storm” that's listed on the Solaris desktop as an option, right under Karma. It's the reference delegate from the USD spec; you'll have to comb the USD release notes to find out what features they added in the last drop, and for anything beyond that you'd have to read the USD API.

But it's basically a reference GPU USD delegate you get for free from the spec, the only one that I know of. Load up a scene on the stage, switch to Karma, tumble the viewport, switch to Storm, tumble the viewport.

The responsiveness you see there from Storm is because Storm is resolving the stage on the GPU.

The USD spec says that the stage is always in a state where it can be consumed without much traversal, and it's also dirty-aware. That's why Storm flies: the stage is resolved on the GPU.
asm
Member
31 posts
Joined: May 2018
I used to be a realism snob, still am tbh, but as a hobbyist/tinkerer I've kind of fallen in love with Redshift. It's super fast and (someone correct me if I'm wrong!) has the best Houdini integration.

But man, it crashes a lot; in fact, my sessions more often than not end with a Redshift crash.
Edited by asm - May 26, 2020 07:59:31
tinyhawkus
Member
236 posts
Joined: March 2013
Daryl, no. Storm is just the USD Hydra delegate that has been kicking around in USD from the get-go; Pixar just renamed it. Storm is OpenGL, and has been since the start.

No again regarding EmberGen, and also regarding GPU being the future of offline. Maybe in your environment GPU will deliver everything you need, and that is defo a reality. But not at feature film level; not even close to being a reality. You're more than welcome to disagree, but if you're not coming from feature-level production, then that aspect of the conversation is moot.

Mantra is abandoned, as you put it, simply because the code base is so old that getting her competitive would require too much effort, and without Karma they would have no USD Hydra reference to showcase LOPs.

We have been using USD for final frame offline rendering for 5 years or so; that's not what I'm talking about. I am talking about using Hydra as the pathway to do that; I think you're confusing the two. A renderer munching USD vs going through Hydra are two different things. Again, there are indeed renderers evaluating final frames through it, but there are many, many pitfalls, so I won't hold my breath for the first renderer to really pull it off.

EmberGen does indeed look really cool for certain types of sims, but as far as production-level requirements go, it's not really at the level we would require. Also, 3Delight didn't “drop”; it's been in use for years. You only got to see posts from people finally stumbling across it, dude.