The general approach to composing shaders has changed to support more physically correct results, which is why these older methods have been deprecated. That, along with a bunch of other features added in H16.
To support modern material construction methods (structs, methods, pre-compiled shader materials with overrides), the way you construct shaders has been refactored to use the Layer struct type.
Konstantin's shader is the way to do it.
Wire in order:
Diffuse -> Reflect -> other BSDFs with base struct input -> Layer Unpack VOP -> Collect VOP.
Notice how the shader is layered in a logical manner: you first wire the PBR Diffuse layer output into the PBR Reflect layer's base input. Internally this uses composite_bsdf() to properly composite the two Layer structs based on the BSDF energy of the reflection lobe (from the BSDF evaluation of the current ray) and return a properly energy-conserving result.
Contrast this with the naive approach, where you simply add the two BSDFs together (top network in the image). This is roughly equivalent to the older method from the old material tutorial.
The naive approach gives a completely different result, where you get extra indirect bounce light contribution:
Also notice the number of VOPs you need to compose the older shader from the old scene vs. the newer shader in H16. Lots of convenience options have been added to do the right thing and streamline the creation of shaders.
An interesting thought (because I haven't tested this): I am assuming the new options on the Mantra ROP that control the Diffuse, Reflect, Refract and SSS Quality settings would tie in to correctly constructed shaders.
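For intuition, here is a tiny plain-Python sketch (not VEX; the scalar lobe weights are made-up illustrative values) of why composite_bsdf()-style layering stays energy conserving while the naive add does not:

```python
# Toy scalar model of two shading lobes. Real BSDFs are far more
# complex; these floats just stand in for lobe energies.

def naive_add(diffuse, reflect):
    # Naively summing lobes can exceed 1.0: non-physical brightening.
    return diffuse + reflect

def layer_composite(diffuse, reflect):
    # Layering attenuates the base (diffuse) by whatever energy the
    # reflect coat already claimed, so the total stays within [0, 1].
    return reflect + (1.0 - reflect) * diffuse

d, r = 0.8, 0.5
print(naive_add(d, r))        # 1.3  (more energy out than in)
print(layer_composite(d, r))  # 0.9  (conserves energy)
```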
Houdini Indie and Apprentice » where is PBR specular? following Magnus tutorial
- old_school
- 2540 posts
- Offline
Technical Discussion » Is there a way to change the Current Transform through scripting?
I don't think we have the ability to use scripts to manipulate the various options on a handle.
There are the hscript om* commands that can bind handles to nodes but no commands to change the handle itself.
Please submit an RFE to have scripting control over the various options on a handle.
Technical Discussion » Houdini not starting on Mac OS X 10.9.5
I had a similar DisplayAlert error with H16.5 crashing on launch on OSX 10.9.5, and I only use shells on the Mac so I can't verify whether double clicking on the app works…
Upgrading to High Sierra OSX 10.13.3 fixed the problem for me. I don't want to force an OS update on anyone, but it is good to stay somewhat current on the Mac, provided your other apps are supported with the upgrade as well.
Houdini Indie and Apprentice » Font node to NURBS Curve
A Font SOP set to output Beziers Only, then an appended Convert SOP converting to NURBS Curves, should do it.
Although I noticed that some of the holes in the fonts weren't coming through with Beziers… the “e”s seemed to not have a hole, if that matters.
Houdini Indie and Apprentice » Export Displaced Geometry
The Side Effects Games GitHub repository has a nice SOP Baker ROP that should do what you want. It is meant to bake out maps and geometry for use in a Principled shader workflow (think Substance) and to be brought in to Unreal and Unity.
https://github.com/sideeffects/GameDevelopmentToolset
If you are using a recent H16.5 build, you can go to the Games shelf, where there is a single tool that should be called something like Install Games Tools. Press that button and Houdini will download the latest version of the tools from the GitHub repository above and install them. The baker should be in that package.
Technical Discussion » randomizing fields using completly random values?(like world position?)
Incorporate world position into the point position as a random seed generator…
One way is to promote a vector parameter and use the vorigin(), or more specifically the vtorigin(), hscript expression to crack the world transform of the parent object.
tx: vtorigin("", "../")[0]
ty: vtorigin("", "../")[1]
tz: vtorigin("", "../")[2]
where "" is world space and "../" is my parent's position, or my object container in this case.
The trick is knowing that vtorigin() returns a vector, and you can use square brackets to index into the result, counting from 0.
You can also use vorigin(), which returns an array of 6 elements, tx-tz and rx-rz, which you can use to fetch the translates and rotates for even more randomness.
See the example file along with the same notes above as a comment in SOPs beside the Attribute VOP.
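The same idea can be sketched in plain Python (illustrative only; in Houdini the vtorigin() expressions above do the transform cracking): hash the parent's world-space translate into a seed, so each object gets a different but repeatable random stream.

```python
import random

def seed_from_position(tx, ty, tz):
    # Objects at different world positions get different, but
    # repeatable, random streams. Rounding guards against tiny
    # floating-point wobble in the cracked transform.
    return hash((round(tx, 6), round(ty, 6), round(tz, 6)))

rng_a = random.Random(seed_from_position(1.0, 2.0, 3.0))
rng_b = random.Random(seed_from_position(1.0, 2.0, 3.0))
assert rng_a.random() == rng_b.random()  # same position, same stream
```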
Technical Discussion » Lenovo P51? (Specifically the Quadro M1200)
I just got that very same laptop with dual Xeon and the Quadro M2200 a couple of months ago; this spec has been duplicated by a few other peeps at SideFX and they also like this laptop.
I agree there is a lot to like with this laptop.
Technical Discussion » Simulate a cruise moving on the ocean
The velocity is found on the Trail SOP when building the ship as a collider. You can add a Trail SOP and compute the point velocities. But I don't think that is the issue.
Scale? Is your ship in Metres? Average cruise ship is approx. 200M long and 28M wide with a draft of around 6.8M. The draft looks good on your ship wrt the fluid surface.
I would set this up using the Guided Ocean Layer shelf tool to create a transient FLIP container that follows the ship. There's a selection prompt asking you to select the object to follow as you work through the Guided Ocean Layer tool in the viewport.
Now I wonder if the initial push of water is just that. You take a boat that is instantaneously moving at a given velocity and then you push it through a static fluid, you may just get that initial wave pushing forward. I wonder what it looks like if you run the simulation up with the boat going through the flip fluid and then get to your point at say frame 100 and then start from there. But in my example below I didn't have that issue…
I attached a hastily created scene file with a very poorly modeled cruise ship hull and used the Guided Ocean Layer shelf tool to set up the tank around the ship. But made sure it was to scale in Metres.
I then adjusted the tank using padding around the tank centre, inside the object that supplies the fluid container into DOPs. I added notes and text to guide you to the correct part of the network. I also added a switch to toggle between two containers: a test container around the bow of the ship and a full container around the entire ship.
Hope this helps you out.
Technical Discussion » How to make a digital asset function/button out of a chop-export flag?
To elaborate on jsmack: locked HDAs don't support any network changes, and unfortunately node flags fall under that umbrella. A locked HDA does not support changing node flags unless you unlock it, change the flag and lock it again, but then you have a new definition of your HDA.
So yes use the Export CHOP and use a Switch CHOP to switch between different branches if you have to and tie the switch parameter to the top of the asset.
Technical Discussion » Trouble displaying pyro smoke object in Geometry level
Did you change the names of any of the nodes? That will invalidate the automatic paths used by some of the nodes setup from the shelf.
Or you have a display bug. For the display bug, close the viewport and create a new viewport tab display and see if that forces things to refresh…
In a standard DOP Pyro shelf setup, an object is created to fetch the pyro object from the DOP simulation network for display and rendering.
This object is usually called pyro_import.
Inside the pyro_import object, there are two SOPs: DOP I/O and DOP Import.
The DOP Import is used to fetch the visualization data using this expression:
`dopobjscreatedby("/obj/pyro_sim/pyro")`
This means that if you changed the name of the source object then you are not going to get a proper result. You can just use the absolute path to the pyro object.
Something like:
/obj/pyro_sim/pyro
where pyro_sim is the name of the dop network in /obj used to do the pyro simulation and pyro is the name of the Smoke Object DOP used to initialize the smoke object.
About the name of the Smoke Object DOP… Ultimately you should use the Geometry Spreadsheet (Details View) in DOPs to see the actual name of the pyro object and use that. That name is created by the Smoke Object DOP from its Object Name parameter.
The shelf tool pyro setup puts $OS in the Object Name parameter, where $OS returns the name of the current node. Change the name of this node and there is no more auto-sourcing by the DOP Import SOP.
Blowing away the expressions and typing in the names does work…
Houdini Lounge » UI/UX: Share your screenshots & tips & tricks & ideas
I don't think we even have an online repository for .desk files to share…
xilophoton, care to share your .desk file? Would love to try it out.
I am always rebuilding my desktops to suit what I am doing, then saving them off into my $HOME/houdini/desktop folder when I have one I like. Many are crap. The odd desk sees more use.
Then there are single monitor desktops vs. dual monitor desktops… If we are to share desktops, maybe indicate this in the desk name?
technical_dm.desk (dm = dual monitor)… Dunno. I suck at names.
technical_sm.desk (sm = single monitor)…
Any users have a naming convention for desktops?
There is the Houdini creation desk that is used by some educators to introduce new students to Houdini available here:
http://houdinicreationdesk.ipage.com/download.html
There are some nice instructions for Linux, Windows and Mac on the site showing how to install desktops into your Houdini environment without using the desk manager UI in Houdini.
I also agree about hiding the shelf once you are comfortable using Houdini for TD work.
I would also unhide the toolbars and run through them for every new major release. Especially around DOPs workflows… The shelf tools are kept up to date and used to encode the latest setups.
Use your favourite raw text editor on one of the .desk files to see what is actually captured when you save a desktop. Pretty interesting…
Houdini Indie and Apprentice » pcopen returns a handle of type int, what is that int referencing?
A handle in programming (I ain't no programmer btw) is an abstract reference to a resource. In the case of point clouds, that resource would be the blocks of memory that contain the point cloud.
It is opaque by definition, so you will not be able to see that remote resource until you query it through the functions provided along with the given handle. The pc functions require that handle to “find” the remote point cloud data.
Point clouds were a godsend in the old days of micropolygon rendering, where you had no way of peering beyond the current shade point for various global illumination strategies, or for baking illumination for performance.
Point clouds were pre-computed to contain lighting information for the surface and saved prior to rendering. While the render is running, you can open up a point cloud (a remote resource in memory) and then, with the provided pc functions, query that remote point cloud using the handle returned from pcopen(), usually from the current position, to “see” beyond the current shade point. Very useful! Subsequent pc functions require that handle to find the point cloud, to both read and write into that separate memory.
Point clouds quickly spread throughout Houdini via VEX and gave us a very fast way to query the neighbouring points of a given position for all kinds of useful tasks: wet maps, convolutions (blurs/sharpens), and more. Well beyond their original implementation within VEX and Mantra.
Point clouds are still quite useful as they are sparse by nature. VDB grids and volumes can also perform the same tasks, and they come with extra features such as fast integration methods to do things like deriving a gradient for normals from an SDF, and more.
Lots of choices these days.
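The opaque-handle pattern itself is easy to sketch outside of VEX. This toy Python version (not Houdini code, just the concept) shows how an "open" call stashes the resource privately and returns an int that later queries must present:

```python
_point_clouds = {}   # private storage the caller never sees directly
_next_handle = 0

def pcopen_like(points):
    # "Open" the resource: stash it and hand back an opaque int handle.
    global _next_handle
    _next_handle += 1
    _point_clouds[_next_handle] = points
    return _next_handle

def pcsize_like(handle):
    # Every query goes through the handle, never the raw data.
    return len(_point_clouds[handle])

h = pcopen_like([(0, 0, 0), (1, 0, 0), (0, 1, 0)])
print(pcsize_like(h))  # 3
```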
Houdini Lounge » Concept Art and Houdini
Hi Kristoffer,
Just saw your ArtStation home page. Impressive work!
Full disclosure I'm a SideFX employee of 24+ years and have seen a lot of users very successfully use Houdini for many types of modeling. In the past it's mainly procedural modeling.
We are constantly improving modeling tools, especially of the procedural type. For example your Alien Environment concept could have been bashed together with Houdini very quickly.
If you give Houdini the same amount of time to learn as, say, Maya or Modo, and you get working with the nodes so that you are in the zone of Houdini thinking, then this crystal prototype would take only a couple of hours to bash together and continue to refine. Much of the other work seems better suited to ZBrush, 3D Coat or Blender for rapid prototyping, and not so much Maya or Modo. With 3D Coat and ZBrush you don't even have to worry about topology as you rapidly sculpt out your ideas. Maya and Modo force you to futz with topology right from the beginning, and that isn't rapid prototype sculpting imho.
Houdini would definitely be a tool to put under your belt, as it is the ONLY one that offers a proper, first class procedural approach to modeling, using free-form SOP geometry nodes in very playful ways that are time proven. Apprentice is free and fully functional from the modeling point of view, so the only commitment is time. I highly recommend the Entagma videos if you are inspired by procedural systems.
I'll let others chime in here as well.
—-
aRtye your comment is so general as to be dismissed as a casual passing remark. Please remember that this is a professional question from a very talented artist so a respectful answer would be much appreciated. Perhaps revisit this and clarify specifically just what you mean. Actually so as not to troll this thread, fire up a new thread outlining your ideas for improvements. We are always looking to improve the modeling workflow. H16.5 was yet another good move forward.
I have grown to love the radial menus, as we finally have tool-under-cursor and tumble-under-cursor; it is always the little things that improve workflow.
The Boolean SOP is unfreakingbelievable in H16, and I'm loving the UV Layout into 3D Coat for texturing myself. More to come as well.
Technical Discussion » CreateProcess failed for 'mantra'
Windows Execution Permissions for the Mantra process? Did you get a dialog to allow the Mantra process on Windows?
Houdini Indie and Apprentice » How to draw splines with curves AND corners?
Bezier curves are what you want. But as Tamte said above, the bezier spline editing controls are not as sophisticated as what you would find in Illustrator, where you can tie and untie tangents on edit points and more.
Please submit an RFE.
One way to approach drawing spline-type curves quickly is to draw multiple curves with point snapping on:
- Set the viewport Tool Option button menu to “Create in Context”. That's the tiny button on the top viewport toolbar, right hand side, just left of the ? help button.
- Once that is set, you can dive into an object and tab-type Curve once to draw your first NURBS curve.
- Hit “Q” to repeat the Curve tool when you want a sharp corner, and continue drawing the next NURBS curve with point snapping on, starting from the previous point.
Then merge the curves together and use a Join SOP with Multiplicity on.
You can then convert to polygons without duplicate points on the corners.
Houdini Indie and Apprentice » Copy to Points does not transfer groups?
There is. Use attributes; they are faster too. But if you want to use groups and work with Copy to Points, you can convert your group into an 8-bit integer attribute of the same name. This will come over nicely in the Copy to Points SOP.
After that, re-create your group using an Attribute Expression or an Attribute Wrangle:
i@group_group1 = i@group1;
A bit more work, but certainly way more efficient than using a foreach block to do this.
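As a plain-Python sketch of the round trip (hypothetical data; the attribute name matches the group in this thread): membership rides along as an int attribute, then the group is rebuilt from it.

```python
# Each dict stands in for a point after Copy to Points; the old
# "group1" group survived as an 8-bit int attribute of the same name.
points = [
    {"group1": 1},   # was in the group
    {"group1": 0},
    {"group1": 1},
]

# Rebuild the group: collect the point numbers whose attribute is set.
group1 = [ptnum for ptnum, pt in enumerate(points) if pt["group1"]]
print(group1)  # [0, 2]
```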
Technical Discussion » Efficiency of the Principled Shader vs Classic Core vs rolling your own simple shaders?
With what you want, I'd say it's not so much which shader you use; the render engine you choose in Mantra has the greatest effect with simplistic shading, where only primary samples are required (no raytrace, no reflections, no indirect lighting).
There are three Mantra engines: PBR, Raytrace and Micropolygon.
If you want abstract, stylized NPR (non-photoreal) type renders that only use primary samples and no secondary raytrace rays, or just straight diffuse and specular with no indirect lighting, and you use point lights, keep on using the Principled material or any of the others, and you can try out either the Raytrace or Micropolygon rendering engine as a first step.
This also means using point lights with raytrace and micropolygon rendering! Did I say only use point lights? Only use point lights. Do NOT use area lights or geometry lights with Raytrace or Micropoly renders! And if you use environment lights, set them to Ambient Occlusion and use that environment light's samples to control quality. This is the exact opposite of PBR, which is why it can be confusing for many lighting artists when choosing render engines and optimizing scenes.
For plain old diffuse and spec, micropoly or raytrace is good. Adding in fresnel reflections would be raytrace only.
If you need area lights and more complicated lighting, then go PBR, as it is very fast compared to the raytrace and micropolygon engines in these more complex lighting scenarios. But as you said, you don't use that. Right?
You can go quite far with the Principled shader and the raytrace/micropolygon engines for quick, snappy renders. Micropoly doesn't scale very well as it needs to pre-shade all the vertices, but wouldn't you know, there's a render property to control that per object. You also miss out on indirect lighting, and it is not as easy to set up as PBR these days.
All the materials that you use in the Material network should work with the Micropolygon and Raytrace render engines. If the materials in there suit your needs that is.
Warning!!!
- IPR in Preview mode will use raytracing even if you use micropolygon mode. You need to turn off the preview option (one of the little icons on the top IPR bar) in order to render with micro-polygons in IPR and see the correct result.
As for all the shenanigans going on inside the Principled shader, the compiler will strip out all the needless stuff and you are left with a fast executing shader, regardless of the chosen Mantra render engine. In H16.5 even loose VOPs are pre-compiled for even more efficiency.
Regarding Principled shader vs. Classic, there really is no difference in terms of compilation or execution. They both use the same setups internally, accessing the same surface models, and both will work with all three Mantra render engines. Principled has a bunch of internal biasing of the top-level parameters to make shading of complicated surfaces easier for artists. For example, Metallicity on Principled takes into account the specular colouring and refraction changes for you. No need to manage this as with the Classic shader. You don't need to know the differences between dielectric, non-dielectric and metallic materials with Principled…
As for the shape shifting material and shader context changes and outdated tutorials, we should be all good moving forward beyond H16 for the foreseeable future. Moving from /shop to /mat was necessary. We do apologize for that but getting this right was crucial.
—-
With NPR-style simplistic rendering, I always find myself creating my own shaders out of VOPs, or even VEX code using the VCC compiler to create material HDAs, choosing just what I need to get the job done. But you need to know how to construct the older type of shaders for the micropolygon and raytrace render engines, which is why PBR-style rendering has taken over for almost all rendering tasks these days. It is where SideFX is concentrating shader development to meet mainstream needs, and that is all most excellent.
Myself? I like simplifying the shaders for NPR so that I can go in there and add my own shading types with little complication. But these are specialized shaders not used very often, so they're not included in Houdini. We still have all the older and updated building block VOPs in there, such as the Lighting Model VOP, and for even more hard core users, illuminance loops to build up these old school style shaders if you want to, but PBR is out of the picture.
If there is a desire to have tutorials on how to set up these older style shaders and rendering setups for NPR style looks and take advantage of Micropolygon and raytrace rendering, just like this post and we'll see…
There are three Mantra engines: PBR, Raytrace and Micropolygon.
If you want abstract, stylized NPR (non-photoreal) type renders that use only primary samples and no secondary raytrace rays, or just straight diffuse and specular with no indirect lighting and point lights, keep on using the Principled material or any of the others, and you can try out either the Raytrace or Micropolygon rendering engine as a first step.
This also means that you are using point lights with Raytrace and Micropolygon rendering! Did I say only use point lights? Only use point lights. Do NOT use area lights or geometry lights with Raytrace or Micropoly renders! And if you use environment lights, set them to Ambient Occlusion and use that environment light's samples to control quality. This is the exact opposite of PBR, which is why it can be confusing for many lighting artists when choosing render engines and optimizing scenes.
For plain old diffuse and spec, Micropoly or Raytrace is good. Adding in fresnel reflections would be Raytrace only.
If you need area lights and more complicated lighting, then go PBR, as it is very fast compared to the Raytrace and Micropolygon engines in these more complex lighting scenarios. But as you said, you don't use that. Right?
You can go quite far with the Principled shader and the Raytrace/Micropolygon engines for quick, snappy renders. Micropoly doesn't scale very well as it needs to pre-shade all the vertices, but wouldn't you know, there's a render property to control that per object. Plus you miss out on indirect lighting. Not as easy to set up as PBR these days.
All the materials that you use in the Material network should work with the Micropolygon and Raytrace render engines. If the materials in there suit your needs that is.
Warning!!!
- IPR in Preview mode will use raytracing even if you use Micropolygon mode. You need to turn off the Preview option (one of the little icons on the top IPR bar) in order to render with micropolygons in IPR and see the correct result.
As for all the shenanigans going on inside the Principled shader, the compiler will strip all the needless stuff out and you are left with a fast-executing shader, regardless of the chosen Mantra render engine. In H16.5 even loose VOPs are pre-compiled for even more efficiency.
Houdini Indie and Apprentice » Terrain UV and texture export
- old_school
- 2540 posts
- Offline
You are converting the height field to a polygon surface to export to C4D… You can apply a UV Texture SOP and project UVs down the Y axis to create the UVs, then export the polygon surface.
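For those curious what a Y-axis planar projection amounts to, here's a rough plain-Python sketch (my own illustration, not the UV Texture SOP's actual implementation): normalize each point's X and Z against the bounding box of the terrain.

```python
def planar_uv_y(points):
    """Project UVs down the Y axis: u comes from the X extent, v from the
    Z extent, both normalized to the bounding box. A sketch of what a
    Y-axis planar projection does, not the UV Texture SOP's actual code."""
    xs = [p[0] for p in points]
    zs = [p[2] for p in points]
    minx, maxx = min(xs), max(xs)
    minz, maxz = min(zs), max(zs)

    def norm(v, lo, hi):
        # Degenerate (flat) extents collapse to 0 rather than divide by zero.
        return 0.0 if hi == lo else (v - lo) / (hi - lo)

    return [(norm(p[0], minx, maxx), norm(p[2], minz, maxz)) for p in points]

terrain = [(0.0, 5.0, 0.0), (10.0, 7.0, 0.0), (10.0, 2.0, 20.0)]
uvs = planar_uv_y(terrain)  # bounding-box corners map into the 0..1 UV square
```

Note that height (Y) never enters the UVs, which is why a top-down projection is the natural fit for terrain.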
The HeightField Quick Shade SOP is an HDA (Houdini Digital Asset). The HDA is basically a wrapper around an actual Mantra material with texture map slots. You can dive inside the asset and look at how the material is built to set up a similar material in C4D.
From a brief look at the material inside the HeightField Quick Shade HDA, you supply textures that use height fields as masks; these are simply summed up and layered over a base texture map. You can generate texture maps from the height fields in COPs and then rebuild the setup in C4D.
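That sum-the-masked-layers idea can be sketched in a few lines of plain Python. This is my own illustration of the compositing described, not the HDA's actual VOP network:

```python
def shade_terrain(base, layers):
    """Sum mask-weighted texture layers over a base texture, per pixel.
    A sketch of the compositing described above, not the HDA's internals.
    `base` is a flat list of pixel values; each layer is (texture, mask),
    where the mask is typically derived from a height field."""
    out = list(base)
    for texture, mask in layers:
        out = [o + t * m for o, t, m in zip(out, texture, mask)]
    return out

rock = [0.5, 0.5, 0.5, 0.5]
snow = [1.0, 1.0, 1.0, 1.0]
# Height-field-derived mask: snow only on the last two samples
mask = [0.0, 0.0, 1.0, 0.5]
result = shade_terrain(rock, [(snow, mask)])  # [0.5, 0.5, 1.5, 1.0]
```

Rebuilding this in C4D is then just a matter of wiring the same baked masks into that package's layer-blend nodes.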
If you want, you can just bake out a set of textures using the Material Bake Texture ROP as applied to the UV'd polygon terrain rendered through the HeightField Quick Shade SOP.
Houdini Indie and Apprentice » Houdini crashed, where are the auto-saved files located?
- old_school
- 2540 posts
- Offline
$HOUDINI_TEMP_DIR is where Houdini writes cache files and undo files.
When Houdini crashes it writes to $TEMP. So it's $TEMP you want for crash files and crash logs. Btw, please please try to reproduce the crash and send in a Bug report to support with the steps to recreate the crash, the scene file and the .txt crash log file saved to $TEMP.
For new users the easiest way to see what these variables are set to is to fire up Houdini, open a Textport and type:
/ -> echo $HOUDINI_TEMP_DIR
/tmp/houdini_temp
/ -> echo $TEMP
/tmp
——–
You can echo any system environment variable inside Houdini if you wish. When Houdini fires up, it sees all the system environment variables, and you can reach any shell command with the hscript unix command. Btw, hscript unix works on Windows too. Yes, Windows has a shell, several in fact: shell, powershell, bash (among the defaults shipped with Houdini), and cygwin if you want to install it. You can access any of these system environment variables in any parameter or code snippet in Houdini, preceded with a $.
Again open up Houdini and open up a textport then type unix printenv to print all the system variables seen by Houdini (and note in that list is TEMP and HOUDINI_TEMP_DIR):
/ -> unix printenv
job=/work/user/job
TERM_PROGRAM=Apple_Terminal
HOST=mymachine.local
TERM=xterm-256color
SHELL=/bin/bash
HOUDINI_TEMP_DIR=/tmp/houdini_temp
CLICOLOR=1
HOUDINI_OS=MacOS
TMPDIR=/var/folders/8y/n9vxx_kj1t17b1j5yhf87zl00000gn/T/
H=/Applications/Houdini/Houdini16.0.717/Frameworks/Houdini.framework/Versions/16.0.717/Resources
hou10dir=cd /Users/jeff/Library/Preferences/houdini/10.0
sqa=/n/iowa2/tmp/jeff/user/sqa
HOUDINI_BUILD_KERNEL=15.6.0
Apple_PubSub_Socket_Render=/private/tmp/com.apple.launchd.aiBLuTVS7c/Render
HOUDINI_BUILD_VERSION=717
HDSO=/Applications/Houdini/Houdini16.0.717/Frameworks/Houdini.framework/Versions/16.0.717/Resources/../Libraries
TERM_PROGRAM_VERSION=361.1
HT=/Applications/Houdini/Houdini16.0.717/Frameworks/Houdini.framework/Versions/16.0.717/Resources/toolkit
TERM_SESSION_ID=7A1EDE4D-83F5-4ADB-B803-303F84819980
comp=/work//user/comp
previs=/work//user/previs
render=/work//user/render
HH=/Applications/Houdini/Houdini16.0.717/Frameworks/Houdini.framework/Versions/16.0.717/Resources/houdini
user=/work//user
USER=jeff
TEMP=/tmp
hj=/Users/jeff/houdini.jeff
HSB=/Applications/Houdini/Houdini16.0.717/Frameworks/Houdini.framework/Versions/16.0.717/Resources/houdini/sbin
HOUDINI_BUILD_PLATFORM=Mac OS X 10.11.6
HIP=/private/tmp
vfx=/work//user/vfx
char=/work//user/char
demos=/work//Demos/demo_repository
HOUDINI_MINOR_RELEASE=0
SSH_AUTH_SOCK=/private/tmp/com.apple.launchd.n13VDH4EY3/Listeners
__CF_USER_TEXT_ENCODING=0x1F5:0x0:0x0
HB=/Applications/Houdini/Houdini16.0.717/Frameworks/Houdini.framework/Versions/16.0.717/Resources/bin
ACTIVETAKE=Main
HOUDINI_DESKTOP_DIR=/Users/jeff/Desktop
HD=/Applications/Houdini/Houdini16.0.717/Frameworks/Houdini.framework/Versions/16.0.717/Resources/demo
HOUDINI_USER_PREF_DIR=/Users/jeff/Library/Preferences/houdini/16.0
PATH=/Applications/Houdini/Houdini16.0.717/Frameworks/Houdini.framework/Versions/16.0.717/Resources/bin:/Applications/Houdini/Houdini16.0.717/Frameworks/Houdini.framework/Versions/16.0.717/Resources/houdini/sbin:/Users/jeff/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin
HOUDINI_VERSION=16.0.717
work=/work/
glue=/work//user/glue
_=/usr/bin/printenv
POSE=/Users/jeff/Library/Preferences/houdini/16.0/poselib
PWD=/tmp
proj=/work/proj
JOB=/private/tmp
JAVA_HOME=/Library/Java/Home
jwi=/n/iowa2/tmp/jeff
EDITOR=/usr/bin/vim
jeff=~
LANG=en_CA.UTF-8
XPC_FLAGS=0x0
HOUDINI_BUILD_COMPILER=7.3.0
HFS=/Applications/Houdini/Houdini16.0.717/Frameworks/Houdini.framework/Versions/16.0.717/Resources
HOUDINI_MAJOR_RELEASE=16
XPC_SERVICE_NAME=0
prog=/work//user/prog
HOME=/Users/jeff
SHLVL=2
HHC=/Applications/Houdini/Houdini16.0.717/Frameworks/Houdini.framework/Versions/16.0.717/Resources/houdini/config
LOGNAME=jeff
CLASSPATH=.:/Applications/Houdini/Houdini16.0.717/Frameworks/Houdini.framework/Versions/16.0.717/Resources/houdini/scripts/java/sesi.jar
HIPFILE=/private/tmp/untitled.hip
HIPNAME=untitled
sesi=/work/sesi
HOUDINI_PATH=~/houdini.jeff:&
SECURITYSESSIONID=186aa
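For what it's worth, the same lookups can be done from any plain Python shell outside Houdini with os.environ. This is a convenience sketch; the fallback to the platform temp directory is my own assumption, not documented SideFX behaviour:

```python
import os
import tempfile

def crash_log_dir():
    """Where to look for Houdini crash files and logs: $TEMP if it is set,
    otherwise fall back to the platform temp directory (my assumption,
    not documented SideFX behaviour)."""
    return os.environ.get("TEMP") or tempfile.gettempdir()

def houdini_temp_dir():
    """$HOUDINI_TEMP_DIR, if set: where Houdini writes cache and undo files."""
    return os.environ.get("HOUDINI_TEMP_DIR")

print(crash_log_dir())
print(houdini_temp_dir())
```

Handy when you're hunting for a crash log and don't want to launch Houdini just to open a Textport.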
Technical Discussion » Purpose/Application of Properties VOP
- old_school
- 2540 posts
- Offline
vm_displacebound is a Mantra render property that needs to be defined, following the property inheritance rules, for Mantra to evaluate the surrounding surface and properly displace without holes and tears in the geometry. It makes sense to add this property on the material (where it makes the most sense), the object, or the Mantra ROP.
The units are in camera scene space, and it should be set to the maximum anticipated displacement value; you can then dial it back until you don't see holes, if you wish.
Not setting the render property vm_displacebound will in effect set it to 0, and therefore none of the surrounding geometry is considered. You get tears and holes in your geometry around the displaced buckets, as none of the surrounding surface is evaluated.
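Conceptually, the displacement bound just pads the geometry's bounds so that points pushed outward by the shader still land inside what the renderer considers. A toy sketch (my own illustration of the idea, not Mantra's actual code):

```python
def pad_bbox(bbox, displace_bound):
    """Expand a bounding box by the displacement bound on every side,
    which is conceptually what a renderer must do so geometry displaced
    up to that distance still lands inside the box (a sketch, not
    Mantra's implementation). bbox is ((minx, miny, minz), (maxx, maxy, maxz))."""
    (mnx, mny, mnz), (mxx, mxy, mxz) = bbox
    b = displace_bound
    return ((mnx - b, mny - b, mnz - b), (mxx + b, mxy + b, mxz + b))

# With vm_displacebound = 0 the box is unchanged, so any point pushed
# outside it by the displacement shader gets clipped -- hence the holes:
tight = pad_bbox(((0, 0, 0), (1, 1, 1)), 0.0)
safe = pad_bbox(((0, 0, 0), (1, 1, 1)), 0.25)
```

This is also why an oversized bound costs render time: every bucket has to consider more surrounding geometry than it needs to.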