How to do face weighted normals in Houdini?
- HenrikVilhelmBerglund
- Member
- 23 posts
- Joined: April 2016
- Offline
Hi. I'm a newbie and I'm looking for a way to do face weighted normals in Houdini for game assets (http://wiki.polycount.com/wiki/Face_weighted_normals).
The left box is a box made in Houdini with a polybevel and nothing else. The right box is a box made in Blender, beveled, then the normals were edited with a face weighted normal script.
- malexander
- Staff
- 5201 posts
- Joined: July 2005
- Offline
- HenrikVilhelmBerglund
- Member
- 23 posts
- Joined: April 2016
- Offline
That method gives me hard edges, which I don't really want, but it could be useful for some other things, I guess. What I'm really looking for is smooth edges that kind of look like a high-poly (subdivision) mesh when it's really just a simple mesh.
Here's another example that maybe makes more sense. The left mesh uses face weighted normals and the mesh to the right uses a Normal node with a cusp angle of 30. The geometry is the same.
If there's no such thing built in, I guess I could write a script for it instead.
- derrick
- Staff
- 329 posts
- Joined: July 2005
- Offline
HenrikVilhelmBerglund
That method gives me hard edges, which I don't really want, but it could be useful for some other things, I guess. What I'm really looking for is smooth edges that kind of look like a high-poly (subdivision) mesh when it's really just a simple mesh.
I don't think any built-in node provides the behaviour you described, but a custom tool could be built using the Attribute Wrangle SOP. Attached is an example implementation.
- HenrikVilhelmBerglund
- Member
- 23 posts
- Joined: April 2016
- Offline
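That looks great, thanks! Now I just need to find out what makes that work… :shock: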
- derrick
- Staff
- 329 posts
- Joined: July 2005
- Offline
HenrikVilhelmBerglund
That looks great, thanks! Now I just need to find out what makes that work… :shock:
Wrangle nodes let you run a snippet of VEX code on your data. The Attribute Wrangle SOP in this example runs over points, so the code is evaluated once for each point. Also, the "v@N" syntax means a vector attribute called "N" on our data (in this case, points). Here is an easier-to-read version of the code:
vector nml = { 0, 0, 0 };
// iterate over each primitive that references the current point
foreach (int pr; pointprims(@OpInput1, @ptnum))
{
    // use the primitive's area as its weight in the weighted sum
    float w = primintrinsic(@OpInput1, "measuredarea", pr);
    // accumulate the primitive's normal at the primitive's center, scaled by the weight
    nml += w * prim_normal(@OpInput1, pr, 0.5, 0.5);
}
// normalize the accumulated vector
v@N = normalize(nml);
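To try it, drop an Attribute Wrangle set to run over Points after your bevel and paste the snippet in. For comparison, here is a minimal sketch with the area weight removed, which just averages the surrounding face normals (every face contributing equally, similar to what a plain facet-style average gives you):
vector nml = { 0, 0, 0 };
// average the surrounding primitive normals without any weighting
foreach (int pr; pointprims(@OpInput1, @ptnum))
    nml += prim_normal(@OpInput1, pr, 0.5, 0.5);
v@N = normalize(nml);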
- Doudini
- Member
- 333 posts
- Joined: Oct. 2012
- Offline
- anon_user_37409885
- Member
- 4189 posts
- Joined: June 2012
- Offline
- HenrikVilhelmBerglund
- Member
- 23 posts
- Joined: April 2016
- Offline
derrick
Wrangle nodes let you run a snippet of VEX code on your data. The Attribute Wrangle SOP in this example runs over points, so the code is evaluated once for each point. Also, the "v@N" syntax means a vector attribute called "N" on our data (in this case, points). Here is an easier-to-read version of the code:
Thank you. Makes a lot of sense now!
- neil_math_comp
- Member
- 1743 posts
- Joined: March 2012
- Offline
It won't be in Houdini 15.5, but in the major release of Houdini after that, there'll be an option in the Normal SOP for this.
There'll be a new parameter, "Weighting Method", which you can set to "By Face Area" instead of the default "By Vertex Angle", or to "Each Vertex Equally", the near-equivalent of what the Facet SOP does. It can't be backported, because there are a bunch of changes in that code, e.g. parallelizing it, that make it produce very slightly different results from previous versions of Houdini. Slightly changing normals tends to significantly change results for some types of simulations.
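In the meantime, the angle-weighted behaviour can be approximated in a point-mode Attribute Wrangle. Here is a rough sketch of the "By Vertex Angle" idea (my own approximation, not the Normal SOP's actual code):
// weight each face's normal by the corner angle it makes at the current point
vector nml = { 0, 0, 0 };
foreach (int vtx; pointvertices(@OpInput1, @ptnum))
{
    int pr = vertexprim(@OpInput1, vtx);
    int n = primvertexcount(@OpInput1, pr);
    int idx = vertexprimindex(@OpInput1, vtx);
    // positions of the previous, current, and next points around this face
    vector prev = point(@OpInput1, "P", vertexpoint(@OpInput1, primvertex(@OpInput1, pr, (idx + n - 1) % n)));
    vector curr = point(@OpInput1, "P", vertexpoint(@OpInput1, vtx));
    vector next = point(@OpInput1, "P", vertexpoint(@OpInput1, primvertex(@OpInput1, pr, (idx + 1) % n)));
    // the corner angle at this point is the face's weight
    float angle = acos(clamp(dot(normalize(prev - curr), normalize(next - curr)), -1.0, 1.0));
    nml += angle * prim_normal(@OpInput1, pr, 0.5, 0.5);
}
v@N = normalize(nml);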
Writing code for fun and profit since... 2005? Wow, I'm getting old.
https://www.youtube.com/channel/UC_HFmdvpe9U2G3OMNViKMEQ
- Anthony_Gregory
- Member
- 3 posts
- Joined: July 2018
- Offline
- jflynnxyz
- Member
- 1 posts
- Joined: Sept. 2021
- Offline
The calculations provided don't quite give the desired result: vertex normals facing the direction of the largest face.
Taking the largest face isn't always what you want, though; corner pieces in particular should take all the faces into account, since the faces there are the same size.
A better approach is to take the weighted normals of all the faces but only use the ones whose area is above a certain threshold relative to the largest face (I use 0.666), so that as you approach a more even shape the normals revert to a plain face average.
Here is my code for that:
vector nml = { 0, 0, 0 };
vector offsets[];
float lengths[];
// iterate over each primitive that references the current point
foreach (int pr; pointprims(@OpInput1, @ptnum))
{
    // use the primitive's area as its weight in the weighted sum
    float w = primintrinsic(@OpInput1, "measuredarea", pr);
    // store the primitive's normal at its center, scaled by the weight
    vector offset = w * prim_normal(@OpInput1, pr, 0.5, 0.5);
    append(offsets, offset);
    append(lengths, w);
}
// only sum contributions from faces close in area to the largest face
float mx = max(lengths);
for (int i = 0; i < len(offsets); ++i)
{
    float ratio = lengths[i] / mx;
    if (ch("snap") < 1 || ratio >= ch("tolerance"))
    {
        nml += offsets[i];
    }
}
vector N = normalize(nml);
v@N = N;
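A usage note: ch("snap") and ch("tolerance") read parameters on the wrangle node itself. The Attribute Wrangle has a button beside the code field that creates spare parameters for each unique ch() call; after clicking it, set the tolerance to something like the 0.666 mentioned above (treat the exact threshold as a per-asset tweak).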
- Anthony_Gregory
- Member
- 3 posts
- Joined: July 2018
- Offline
- psychoboy852
- Member
- 12 posts
- Joined: March 2022
- Offline
jflynnxyz
The calculations provided don't quite give the desired result: vertex normals facing the direction of the largest face.
Amazing, thanks a lot for this!