Subdivision Workflow

Member
5 posts
Joined: Dec. 2018
Hello everyone,

I am curious how other people approach the subdivision workflow in Solaris, and what SideFX thinks the workflow should look like.

There are other posts [www.sidefx.com] explaining the basics, but the workflow doesn't seem very user friendly and I ran into some issues.

Here's my very simple workflow:
  1. Create a model in Maya. The model is made of many objects, each with its own subdivision settings. On some objects I simply set the subdivision level to 0 so they don't get subdivided at all.
  2. Export the selected model with its multiple objects as an Alembic file. In the export settings I include the attributes for the subdivision levels and the other settings I need, so every individual object gets primitive attributes when importing into Houdini (see the sketch after this list).
  3. Create a Component Builder setup and load the Alembic in as render geometry.
  4. Enable Treat Polygons as Subdivision Surfaces, which turns on subdivision for the whole model.
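
(To sanity-check step 2, something like the snippet below can list which primvars each mesh actually picked up. This is only a rough sketch: the LOP path is a placeholder, while LopNode.stage() and the UsdGeom primvar API are standard Houdini/USD calls.)

# Minimal sketch: list the primvars each mesh picked up from the Alembic.
# The LOP path below is a placeholder; point it at your own import/component node.
import hou
from pxr import UsdGeom

stage = hou.node("/stage/componentgeometry1").stage()

for prim in stage.Traverse():
    if prim.IsA(UsdGeom.Mesh):
        primvars = [pv.GetPrimvarName() for pv in UsdGeom.PrimvarsAPI(prim).GetPrimvars()]
        print(prim.GetPath(), primvars)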


Here are some issues I have with this workflow:
  • To enable subdivision in the viewport I go to the Display Options under Geometry and set the Level of Detail to 2. I then have to switch to Karma and back to get it to work. None of this is documented.
  • This shows a preview of the subdivided geometry in the Houdini GL viewport, but it doesn't recalculate the normals. (If I enable subd in Maya, I get the equivalent of a Subdivide node followed by a Normal node. In the Houdini GL viewport I get something that looks like the Subdivide node alone, without the Normal node after it.)
  • Now we have a subdivided surface in Karma, except that the mesh has tiny sliver-shaped holes between the faces. Let's come back to that later.
  • Once I have the mesh subdivided in Karma, every time I change a setting in my material Karma recalculates the whole subdivision again.
  • To speed this up I try to lower the quality, but I don't know how to control this at all. Can I control the quality per mesh? I have the subdivision levels per mesh from Maya; can I somehow use those levels to control the quality in Houdini?
  • To set a mesh to render as a subdivision surface one can use the checkbox on the import, but this doesn't expose any other settings. Then there is the mesh_edit node with more settings. But to adjust the quality I have to use yet a third node, the render geometry settings node, where I control how much subdivision is used with a setting that has a completely different name, "dicing", and a parameter that has no explanation at all.
  • To make lookdev feasible, I tried lowering the subd quality. But lowering the quality across the whole object (since individual meshes can't be controlled) brings back even bigger holes between the faces.
  • I did end up downloading the ALab [animallogic.com] to take a look at their setup and they do use subdivision surfaces. For production renders, are we just supposed to crank this dicing quality until the holes disappear?
  • Even if I split up my model into all the individual meshes, how do I set the settings on the mesh_edit node based on attributes in my alembic?


All this being said, I'm really just wondering, what is the proper workflow for this basic subd setup?
Are all the issues I'm running into just bugs, or are they only happening to me?
Are the points I raised not issues in other people's eyes?


Thanks,
Beat

Attachments:
subd_karma.png (1011.4 KB)
subd_maya.png (467.0 KB)

Member
273 posts
Joined: Nov. 2013
I'm not sure if this is true for all workflows, but in general production renders don't attempt to manually manage subdivision levels on a per-object basis. There are typically too many objects, and the 'ideal' quality changes depending on the camera and how large the object is on screen. The dicing control tries to refine the subdivision surface into quads/triangles until a particular criterion is met; it can be measured in pixels or in world/local units. The cracking can occur when the achieved dicing level changes between faces. Usually the cracking is minimal when the dicing level is sufficiently small (around a pixel). Some renderers have a "watertight" feature that fills the cracks, but I'm not sure about Karma.
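
Just to make the "around a pixel" criterion concrete, here's a purely illustrative back-of-the-envelope sketch (nothing to do with Karma's actual algorithm) of choosing a dicing rate so each diced segment projects to roughly one pixel:

# Illustrative only: pick a dicing rate so diced segments project to ~1 pixel,
# using a simple pinhole-camera approximation for an edge facing the camera.
import math

def dicing_rate(edge_len_world, distance, focal_mm, aperture_mm, image_width_px, target_px=1.0):
    pixels_per_world_unit = (focal_mm / aperture_mm) * image_width_px / distance
    edge_len_px = edge_len_world * pixels_per_world_unit
    return max(1, math.ceil(edge_len_px / target_px))

# A 0.1-unit edge, 5 units from a 50mm camera (41.4mm aperture), rendered 1920px wide:
print(dicing_rate(0.1, 5.0, 50.0, 41.4, 1920))   # ~47 segments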
Edited by antc - March 16, 2022 16:16:31
Member
273 posts
Joined: Nov. 2013
Oh and I guess if you did want to target specific subdivision levels in final renders you could always pull the meshes into SOPs via a sopimport lop and use a subdivide sop set to whatever your attribute value is.

On the way back into lops you would need to set the ‘subdivisionScheme’ attribute on each mesh to ‘none’ in order to avoid further refinement at render time.
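
A Python LOP could do that scheme flip in bulk, something along these lines (untested sketch; subdivisionScheme and the 'none' value are standard UsdGeom, the traversal is just one way to do it):

# Sketch for a Python LOP: mark every mesh so render delegates won't refine it further.
import hou
from pxr import UsdGeom

node = hou.pwd()                 # the Python LOP itself
stage = node.editableStage()

for prim in stage.Traverse():
    if prim.IsA(UsdGeom.Mesh):
        UsdGeom.Mesh(prim).CreateSubdivisionSchemeAttr().Set("none")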
Member
236 posts
Joined: March 2013
You should be able to do this though. Arnold in OUT context will respect the incoming subD tag on Alembic objects.
They are not continuous polygon meshes, so the subdivision algorithm should be able to work without a problem.

You are now in USD, so the tagging of subdivision might indeed be different; it may be a case of renaming those primitive attributes to match the correct schema for USD.


L
I'm not lying, I'm writing fiction with my mouth.
Member
5 posts
Joined: Dec. 2018
antc
I'm not sure if this is true for all workflows, but in general production renders don't attempt to manually manage subdivision levels on a per object basis.
Okay, that sounds great in theory, but my 750k-polygon car now goes from rendering in 16 seconds with a level-2 Subdivide node in SOPs to a 1m50s load time.
Additionally, every time I adjust the shader, it has to do that calculation phase again.

antc
On the way back into lops you would need to set the ‘subdivisionScheme’ attribute on each mesh to ‘none’ in order to avoid further refinement at render time.
That works on a per-object level but not per "primitive"; all of those are detail attributes. Would I have every screw as a "Mesh" primitive type in USD? Or would this be the time to use "GeomSubset"?

lewis_T
You should be able to do this though. Arnold in OUT context will respect the incoming subD tag on Alembic objects.
They are not continuous polygon meshes, so the subdivision algorithm should be able to work without a problem.
Yes, Arnold works the same way as Maya with subdivision iterations; that would make sense to me, since I'd just control each mesh the same way I was controlling it in Maya with different subdivision levels. Unfortunately, I'm asking about Karma.
Edited by beatreichenbach - March 18, 2022 22:40:28
Member
273 posts
Joined: Nov. 2013
Hmm, those times do seem super slow, assuming the bulk of the time is the subdivision. Obviously I can't repro your car scene exactly, but I tried with 750K polys spread across a ton of spheres and got the following subdivide/dicing times:

Subdivide Sop, 2 levels = 4 seconds
Karma, dicing quality scale 1.0 (default) = 13 seconds
Karma, dicing quality scale 0.1 = 5 seconds

Those are more in line with what I'd expect, as it should be possible to run subdivision on tens or even hundreds of millions of polygons without waiting hours. I attached the scene I was using for reference.
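
If you want to reproduce the SOP-side number, a forced cook can be timed with something like this (the node path is just a placeholder; point it at your Subdivide SOP):

# Rough timing of a Subdivide SOP cook; the path below is a placeholder.
import time
import hou

subdiv = hou.node("/obj/geo1/subdivide1")
start = time.time()
subdiv.cook(force=True)
print("Subdivide cook: %.1f s" % (time.time() - start))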

As for recooking the subdivision when changing material properties, is the subdivision in a time-dependent bit of the network by any chance?

Edited by antc - March 20, 2022 19:27:39

Attachments:
dicing.hipnc (171.0 KB)

Member
273 posts
Joined: Nov. 2013
Also, I'm not sure if Karma dicing is taking the camera into account. Distant spheres in the example hip seem to be getting diced as finely as ones close to the camera. Maybe SideFX can weigh in here, as the dicing section of the docs is just a stub :/
Member
53 posts
Joined: March 2014
Hi beat,

Here is my take on this. I am not fully familiar with how it works in Karma, but I too come from Arnold and was disoriented when I arrived in Solaris.

So, as I understand it, in USD there is no subdivision level. It's either on or off. You can control the dicing quality in the Karma render settings, but you can also control it per mesh in the RenderGeometrySettings lop (at the bottom).

One workflow I have seen is to name the meshes with a certain naming convention to indicate which ones should get subdivided and which ones should not (you can use an attribute instead). Then in Solaris, you can use the EditProperties lop to set the subdivisionScheme attribute value to catmullClark only on those meshes. Don't hesitate to ask me if you need help with the EditProperties lop... it's very unintuitive. I wish SideFX made it simpler to set subdivs.
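
For example, a Python LOP along these lines could apply such a convention (rough sketch only; the "*_subd" suffix is just a made-up example of a naming convention):

# Sketch: choose the subdivisionScheme from a naming convention ("*_subd" is only an example).
import fnmatch
import hou
from pxr import UsdGeom

node = hou.pwd()                 # Python LOP
stage = node.editableStage()

for prim in stage.Traverse():
    if not prim.IsA(UsdGeom.Mesh):
        continue
    scheme = "catmullClark" if fnmatch.fnmatch(prim.GetName(), "*_subd") else "none"
    UsdGeom.Mesh(prim).CreateSubdivisionSchemeAttr().Set(scheme)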

May I ask a question? Why do you want to see the subdivision result in the viewport? I get that you are used to seeing it in Maya, but I believe previewing the subdivision in the viewport is only useful for the modeler and sometimes (very rarely) for the animator who wants to get some contacts right. The shading artist should only work in the renderer, and everybody else in the pipeline working in Solaris should view assets in their unsubdivided state, or even at their proxy resolution, IMO.

Cheers,

F
Edited by flord - March 23, 2022 20:01:39
Member
5 posts
Joined: Dec. 2018
Thank you all for the great information, I really appreciate it.

One workflow I have seen is to name the meshes with a certain naming convention to indicate which ones should get subdivided and which ones should not (you can use an attribute instead). Then in Solaris, you can use the EditProperties lop to set the subdivisionScheme attribute value to catmullClark only on those meshes. Don't hesitate to ask me if you need help with the EditProperties lop... it's very unintuitive. I wish SideFX made it simpler to set subdivs.
Understood, that makes sense. I wish the settings were easier to access as well. Having them in the mesh_edit node and also in the rendergeometrysettings node, or alternatively via the unintuitive EditProperties node, is a bit confusing for newcomers!

Why do you want to see the subdivision result in the viewport?
Purely out of habit and just to see if it was possible. It is totally understandable that this is not something that fits in this workflow, so I'm fine with this not being supported. The way it is accessible now is just very strange.

As for the holes in my geo:
I also attached a render. Is it normal to get holes at lower qualities? I tried my best to keep the scenes as simple as possible, and even tried Fuse nodes, but unless I crank the dicing quality I always get some little sliver holes in the geometry. If this is just how dicing and subd in Karma work, that is fine; I just don't know if dicing at lower quality is supposed to produce watertight meshes. I'd love to hear an answer from SideFX about this.
I attached a file where I applied subd to a cube, and the resulting render produces tiny holes in the mesh that mess up the alpha. Again, higher quality removes them but also increases render time quite a bit.

And finally, I was able to find out why the render would recalculate the subdivision phase. When I use the component builder, the material gets merged into the layer. I don't know exactly which step is responsible, but something in the component builder makes it so that every material change also forces a recalculation of the subd. If I just apply the material with a materiallibrary node, this doesn't happen! Not sure yet if this is a bug or a feature.
Edited by beatreichenbach - March 24, 2022 00:09:51

Attachments:
subd_graph.png (82.2 KB)
subd.hipnc (1.0 MB)
subd_holes_karma.PNG (7.4 KB)

Member
53 posts
Joined: March 2014
The holes problem is very weird. I had never seen this before but it happens on my side when I lower the dicing quality. We should ask Mark Elendt about this.
Member
273 posts
Joined: Nov. 2013
The tiny holes are probably where the dicing rate changes across face boundaries. When a mesh is subdivided N times, the edges between the subdivided faces line up nicely. However, dicing each face separately and at a different rate can lead to cracks. RenderMan has a watertight option that creates triangles to fill the cracks, but it can be expensive and so, I believe, isn't enabled by default. I'm not sure about Karma or other renderers though.
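
A toy illustration of that mismatch (nothing renderer-specific): sample the shared edge of two neighbouring faces at different dicing rates and the boundary points no longer coincide, which is exactly where T-junctions and cracks can open up.

# Toy example: dice a shared edge at two different rates and see which points
# on the finer side have no counterpart on the coarser side.
def edge_points(rate):
    return [i / rate for i in range(rate + 1)]

coarse = edge_points(3)   # one face dices the shared edge into 3 segments
fine = edge_points(4)     # its neighbour dices the same edge into 4
unmatched = [p for p in fine if p not in coarse]
print(unmatched)          # [0.25, 0.5, 0.75] -> potential T-junctions / cracks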

And yes, it's true that afaik USD doesn't have any kind of tessellation API to provide dicing control in a universal way. Since renderers support different algorithms and associated knobs for this stuff, I guess such an API would be hard to pin down. Arnold might have its own attributes that support fixed subdivision iterations though.
Staff
4159 posts
Joined: Sept. 2007
FWIW, "Complexity" in USD (usdview, husk; it's Level of Detail in HoudiniGL's display settings) is a global subdivision level (at least in the basic sense; there may be subtleties to OpenSubdiv). Roughly, it appears that the complexities map to subdiv levels (at least in Storm - I think Karma/Prman are probably very high / adaptive all the time):

  • Low - None
  • Medium - 1 level
  • High - 2 levels
  • Very High - 3 levels

Hope this helps a bit; but @antc is right that it'd be hard to generalize to every renderer. Some delegates ignore the built-in subdivision attributes of meshes, and just look for their own properties on the prims.

Attachments:
Screenshot from 2022-03-23 21-21-04.png (145.1 KB)

I'm o.d.d.
Member
5 posts
Joined: Dec. 2018
Okay some more great updates:
The holes are never supposed to be there, even with low dicing quality. This was confirmed by SideFX and they are working on a fix for this!

Regarding the level of detail: that sounds roughly right, except that lod=1 gives no subd? I only start seeing differences when moving the lod up to level 2. However, it only shows up after switching to Karma and back; if it wasn't for the tokeru wiki, that would have been very hard to figure out. It would be great if that information was also in the docs somewhere.