Questions on Mantra materials and megascans

Member · 14 posts · Joined: May 2016
Hi everyone,
I have a few questions about shading with Mantra. I've been playing around with displacements and Megascans materials, trying to understand the various setups (I'm a long-time V-Ray user), and I would be very grateful for some help from the experts here.

1. Is there a reason why the principled shader gives me a different render when it's inside a material builder versus directly in the general “mat” context? I thought the material builder was just a way to package the material for a tidier workspace (attaching images to show the issue). The issue seems to be linked to the object's Render tab > Dicing > Full Predicing, but I don't understand why it's only a problem inside a material builder…

2. Using Megascans, I'm trying to set up the shader manually (Quixel Triplanar; I couldn't get Bridge to work). Which maps are usually needed for export: albedo, roughness, specular, displacement and normal?
What's the difference between normal and normalbump?

3. Looking inside the Quixel Triplanar, I can see all the maps are fed through a gamma adjustment (down to 0.454). Is this something I should be doing with Mantra in general (when working with gamma 2.2)?

4. For normals to work, I should pipe the normal texture through a Displace Along Normal. I thought this was also needed for geometry displacement using the Displace node, but just using the texture as-is seems to work?

5. Can I use a Multiply Constant node to adjust the strength of certain maps? I find it strange that these parameters aren't built into the main Quixel node…

Sorry for the long post; hopefully I can get some answers to ease my mind. Overall it's looking good, but I'd like to know I'm not missing anything important.
Thanks!

Attachments:
Capture1.JPG (148.1 KB)
Capture2.JPG (129.7 KB)
quixelTri_inside.JPG (64.0 KB)
displace.JPG (164.2 KB)

Member · 14 posts · Joined: May 2016
Almost forgot: when inside a material builder, why can't I render only a specific texture by dragging it into the IPR render view? There's no issue doing this when the same material isn't packed into a material builder.
Is it a bug, or is my understanding of the material builder flawed?

Attachments:
noise1.JPG (82.6 KB)
noise2.JPG (133.4 KB)

Member · 6470 posts · Joined: Sept. 2011
Guy Tourgeman
1. Is there a reason why the principled shader gives me a different render when it's inside a material builder versus directly in the general “mat” context? I thought the material builder was just a way to package the material for a tidier workspace (attaching images to show the issue). The issue seems to be linked to the object's Render tab > Dicing > Full Predicing, but I don't understand why it's only a problem inside a material builder…

The principled shader comes with some render properties built in, such as disabling displacement when it isn't used, and setting the displacement bounds when appropriate. Mantra finds these properties by looking for them on the node that is assigned as the material. When the principled shader is inside a matnet, it is no longer the node used for the assignment, and all the properties it carries are ignored. To fix the issue, the properties must be manually added either to the material builder itself or to a Properties node connected to the collect node. I'm not sure why dragging isn't working from within a material builder, since I don't use that feature; I prefer to connect the ‘test’ channel to the Ce output of the material.

Guy Tourgeman
2. Using Megascans, I'm trying to set up the shader manually (Quixel Triplanar; I couldn't get Bridge to work). Which maps are usually needed for export: albedo, roughness, specular, displacement and normal?
What's the difference between normal and normalbump?

I'm not sure about normal vs. normalbump; maybe it's object or world space vs. tangent space. Normal maps are somewhat superfluous when you have a displacement map, since modern renderers compute the normal per sample from the displaced mesh. The normal map would be used by a game engine.

Guy Tourgeman
3. Looking inside the Quixel Triplanar, I can see all the maps are fed through a gamma adjustment (down to 0.454). Is this something I should be doing with Mantra in general (when working with gamma 2.2)?
It depends: are the maps meant to be gamma corrected, and are they in a format that is automatically gamma corrected? Non-color maps are usually saved without gamma correction even in 8-bit, so no gamma correction should be applied when reading them into a shader. But I can't say that every source of textures does the right thing.
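To make that rule concrete, here is a minimal sketch in Python (not Houdini VOP code; the 2.2 baked-in gamma for 8-bit colour maps is an assumption, matching the 0.454 ≈ 1/2.2 adjustment discussed above):

```python
def to_linear(value, is_color_map, baked_gamma=2.2):
    """Convert a texture sample to linear space before shading.

    8-bit colour maps (albedo/diffuse) usually carry a ~2.2 gamma,
    so we undo it; non-colour data maps (roughness, displacement,
    normals) are stored linear and must pass through untouched.
    """
    if is_color_map:
        return value ** baked_gamma  # undo the baked-in encoding
    return value

# A mid-grey albedo sample gets darker in linear space,
# while a 0.5 roughness value stays exactly 0.5.
```

The point is only that the correction is per-map, not global: it depends on what the file actually has baked in.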

Guy Tourgeman
4. For normals to work, I should pipe the normal texture through a Displace Along Normal. I thought this was also needed for geometry displacement using the Displace node, but just using the texture as-is seems to work?
When using displacement, the normal map would usually be left disconnected and ignored. But if you want to use it instead of the displacement, it should be connected to a Displace node set to normal-map mode, not a Displace Along Normal, which is for displacement maps. To use both, the shader would have to be set up either to apply the normal map on top of the normal computed from the displacement, which is probably wrong, or to apply the normal map to the pre-displaced normal, discarding the displacement shader's normal, which is somewhat complicated.
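The two operations being distinguished here can be sketched as plain vector math (illustrative Python, not Houdini node code; the function names are made up):

```python
def decode_normal_map(rgb):
    """Tangent-space normal map decode: remap each channel [0, 1] -> [-1, 1]."""
    return tuple(2.0 * c - 1.0 for c in rgb)

def displace_along_normal(P, N, height, scale=1.0):
    """Displacement map: move the point along the surface normal by a distance."""
    return tuple(p + n * height * scale for p, n in zip(P, N))

# A "flat" normal-map pixel (0.5, 0.5, 1.0) decodes to (0, 0, 1),
# i.e. the unperturbed surface normal.
```

A normal map perturbs shading normals without moving points; a displacement map actually moves points, which is why the two need different nodes.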

Guy Tourgeman
5. Can I use a Multiply Constant node to adjust the strength of certain maps? I find it strange that these parameters aren't built into the main Quixel node…
This would be controlled differently for every kind of map, depending on what the value means numerically.
Member · 14 posts · Joined: May 2016
Thanks for the detailed answer!
I didn't know that a normal map needs to connect to a Displace node; that fixed it for me. With further testing, when using displacement there is no need for the normal map.

I noticed that the Megascans displacement maps have a strange range (it depends on the asset), so a fit range can recover most of the detail. But perhaps it's simply better to use the high-poly source model, since it already has all the details in it.
Thanks for the help
Member · 6470 posts · Joined: Sept. 2011
Guy Tourgeman
I noticed that the Megascans displacement maps have a strange range (it depends on the asset), so a fit range can recover most of the detail. But perhaps it's simply better to use the high-poly source model, since it already has all the details in it.
Thanks for the help

The range of a displacement texture is -∞ to ∞ (or whatever the largest floating-point values are), since displacement is a distance value. A meteor that is hundreds of meters wide will have displacement values in meters or tens of meters; an ant head will have displacement in micrometers. No range fitting should be required, but make sure the scale on the displacement is 1, or whatever unit conversion has been applied to the model.
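In other words, a displacement sample is already a distance, so the only multiplier it should ever need is a unit conversion. A hypothetical sketch (`unit_scale` is an assumed name, not a Houdini parameter):

```python
def displaced_distance(texture_value, unit_scale=1.0):
    """Offset in scene units for one displacement sample.

    texture_value: signed distance stored in the float displacement map.
    unit_scale:    1.0 when map and scene share units; e.g. 0.01 when
                   the map is in centimetres and the scene in metres.
    """
    return texture_value * unit_scale
```

No fit-range step appears anywhere: remapping the values would change the physical distances the map encodes.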
Member · 14 posts · Joined: May 2016
Thanks jsmack,
You're right: the displace node gives the same results as when using a fit node (meaning there is no need to fit).
I was thrown off by the default values and thought it would look correct out of the box.
In some of the early tests I was using a JPG image for the displacement; switching over to the EXR version fixed the problems.
Still, it looks like a certain amount of tweaking is always needed on our part, especially if I adjust the scale of the model, etc.

Attachments:
displace.JPG (194.8 KB)
fit_displace.JPG (221.8 KB)

Member · 806 posts · Joined: Oct. 2016
Hi, Guy,

if I may join in: regarding your comment about “having to tweak, especially when scaling the model”, jsmack answered that in the previous comment:

A displacement “map” (texture file) is usually treated as a scaling FACTOR. A JPEG won't work well here, since it's only 8-bit and would therefore only give you 256 different “distance” values. Even EXR, when used with half precision, will often give you “jaggies” due to the lack of intermediate values. In any case, the value read from the displacement map will usually be something between 0 and 1 (grey scale), which is mapped to -1 to +1 (halving its precision in the conversion). If you apply this -1 to +1 range directly as a displacement (without any scale), you end up with geometry that can be offset by at most 1 unit.
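The precision argument can be made concrete: at a given integer bit depth, a [0, 1] value snaps to one of a fixed number of levels, and those steps are what show up as jaggies (illustrative Python sketch):

```python
def quantize(value, bits):
    """Snap a [0, 1] value to the nearest level representable at an integer bit depth."""
    levels = 2 ** bits - 1            # e.g. 255 steps between 256 levels at 8 bits
    return round(value * levels) / levels

# An 8-bit map cannot represent any displacement between two adjacent
# 1/255 steps, so smooth gradients become staircases.
```

Float EXR avoids this because it stores continuous distance values rather than a fixed ladder of grey levels.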

Now, if you SCALE your model, this displacement does not get scaled with it (unless you have set the displacement up to respect the model scale). If your model was, say, one meter in size and your displacement “correctly” had a maximum offset of 1 meter, scaling the model to 10 m will obviously make that displacement far too small.

That's where the value read from a displacement map is usually treated as a factor: it is multiplied by your (model-dependent) “size factor” (or “max value”, depending on your naming scheme). A map value of 0.5 times your model scale will give you a consistent displacement outcome.
If you set up your pipeline accordingly, any scaling of the model will be respected by the displacement and it will always yield physically correct results. Which, unfortunately, may not be what you need for the job at hand, since LOOKS often don't go along with physical correctness :-)
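This scale-respecting setup can be sketched like so (hypothetical parameter names; the remap-then-multiply is the point, not any real API):

```python
def displacement_offset(map_value, size_factor, model_scale=1.0):
    """Offset in scene units that follows the model's scale.

    map_value:   raw [0, 1] grey value read from the displacement map.
    size_factor: model-dependent maximum offset, in scene units.
    model_scale: current uniform scale applied to the model.
    """
    signed = 2.0 * map_value - 1.0          # remap [0, 1] -> [-1, 1]
    return signed * size_factor * model_scale

# Scaling the model 10x scales the displacement 10x as well,
# keeping the result consistent with the model's size.
```

With `model_scale` wired to the object's transform, rescaling the asset no longer requires retuning the displacement by hand.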

As for gamma: a proper colour pipeline setup defines whether you have to correct a non-colour-corrected input map (diffuse, albedo, etc.) or not. Applying a 0.4545 gamma curve to a texture means the texture is expected to have a 2.2 gamma “baked in” (which shouldn't be the case in a professional setup). “Linear” textures are textures that do NOT have a gamma curve baked in, but can still be represented as per-channel colour values from 0 to 1. Those linear values do not have any specific “colour space” assigned: the colour space is a conversion that creates human-readable colour output from purely mathematical linear values (which hints at a brute-force 0.4545 gamma curve often being insufficient, since real colour space transfer curves are not pure power functions).
In short: it makes a lot of sense to take an hour and read about colour spaces, linear colour, and the different colour-correction models. Even if you don't need this knowledge constantly in your job, a basic understanding of what colour actually IS will help you tremendously when setting up render pipelines.
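As a concrete example of why a single gamma exponent is only an approximation: the actual sRGB decode is a piecewise curve with a linear toe near black, not a pure power function (standard formulas, shown in illustrative Python):

```python
def srgb_to_linear(c):
    """Piecewise sRGB decode: linear segment near black, 2.4-power above."""
    if c <= 0.04045:
        return c / 12.92
    return ((c + 0.055) / 1.055) ** 2.4

def gamma22_to_linear(c):
    """Brute-force pure 2.2 power curve."""
    return c ** 2.2

# The two agree roughly at mid-grey but diverge noticeably,
# especially in the shadows.
```

This is the kind of detail a real colour-managed pipeline handles for you, rather than a single hand-applied gamma node.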


Marc
---
https://www.marc-albrecht.de