How to reference external objects within HEngine (Fur) procedural

Member, 7835 posts, Joined: September 2011
External references don't seem to be possible for the HEngine procedural; anything passed to the HDA needs to be an object-level parameter, with values derived from the instance point attributes. This suffices for passing most data, but how does one pass non-geometry/non-numeric data? I would like to reference the object transform and the camera view transform in VEX in my procedural. The object transform can be passed, rather cumbersomely, as a matrix using 16 float parameters, but the camera view transform requires rather arcane math to get working properly, and the helper functions 'toNDC' and 'fromNDC' cannot be used.
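For illustration, here is a minimal Python sketch of the 16-float round trip using plain lists (so it runs anywhere, without hou types). The row-major layout is an assumption; it should match what hou.Matrix4.asTuple() produces, but verify in your build, and the parameter-naming scheme is up to you:

```python
def matrix_to_floats(m):
    """Flatten a 4x4 matrix (list of 4 rows) into 16 floats, row-major."""
    return [v for row in m for v in row]

def floats_to_matrix(vals):
    """Rebuild the 4x4 matrix from 16 floats, assuming row-major order."""
    return [list(vals[i * 4:i * 4 + 4]) for i in range(4)]

# Round trip: flatten to 16 floats (one per parameter), then reassemble.
identity = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
assert floats_to_matrix(matrix_to_floats(identity)) == identity
```

Whichever layout you pick, the flatten and reassemble sides must agree, or rotations will come through transposed.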

Thanks,

Jonathan Mack
Using an Object Network inside the Fur Object, I placed a camera whose parameters channel-reference the camera properties on the fur object. To pass the camera properties to the procedural, I grabbed the values and matrix in Python, storing the properties and the cracked transform as attributes on the instance point. This successfully passes a clone of the external camera to the procedural: channel references to the stashed camera work on SOPs in the procedural, and optransform() in VEX returns the object transform matrix. However, ptransform() and the to/fromNDC functions still fail to work in the procedural. I will submit a bug.

Jonathan Mack
Staff, 641 posts, Joined: August 2013
I believe we currently don't have access to OP transforms. optransform() fails as well.

You can access external geometry, which may be more convenient for passing various data. The object containing the geometry has to be rendered (and should probably be set to invisible via vm_renderable).

This should work from within the procedural as long as /obj/geo1 is rendered as well:

matrix transform = point("op:/obj/geo1/OUT", "transform", 0);

Using this you could at least store the camera properties on geometry without the need for a ton of parameters.

You could also think about adding all the data as detail attributes on the guides, then you have instant easy access.
Edited by KaiStavginski - April 13, 2017 05:43:49
Kai Stavginski
Senior Technical Director
SideFX
Thanks for the response, Kai. Putting the camera properties and transform on a guide detail attribute is a good idea. Passing them as parameters on the object also worked, but attributes would be simpler.

I still need some way to access the camera's view transform, so that I can transform points into NDC space to compute screen-space size. I tried recreating a camera and object within an object network inside the fur object. The parameters from the fur object are successfully passed to the camera, and I am able to copy the curves into the geo object. What is curious is that even with a fully self-contained object network, the ptransform/vtransform/toNDC etc. functions still do not work. I verified that the objects are being created with transforms: optransform() is able to pull the transform into VEX, and the camera properties can be copied onto a point attribute on the fur (as Cd, for example). I am able to rotate the color value using the matrix fetched with optransform(). The only missing piece is the ability to apply the view transform. Is there a Python method in the HOM for creating a view matrix from a transform and camera properties? I tried looking up the math, but I can't quite make the transform reversible.
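Since toNDC/fromNDC aren't available in the procedural, the projection can be reproduced by hand once the points are in camera space. The Python sketch below shows a reversible version of that math under assumed Houdini-style conventions: the camera looks down -Z, NDC x/y run 0..1 across the image, and zoom = focal / aperture with a horizontal aperture and square pixels. The z convention used here (z_ndc = distance from the camera) is a simplification and may not match Houdini's toNDC exactly:

```python
def to_ndc(p, focal, aperture, resx, resy):
    """Project a camera-space point to NDC (x/y in 0..1, z = depth)."""
    x, y, z = p
    depth = -z                      # camera looks down -Z (assumed)
    zoom = focal / aperture         # horizontal zoom factor (assumed)
    nx = 0.5 + zoom * x / depth
    # Vertical extent covers aperture * resy / resx world units, so the
    # y term picks up an extra resx / resy factor.
    ny = 0.5 + zoom * (resx / resy) * y / depth
    return (nx, ny, depth)

def from_ndc(n, focal, aperture, resx, resy):
    """Invert to_ndc: NDC plus depth back to a camera-space point."""
    nx, ny, depth = n
    zoom = focal / aperture
    x = (nx - 0.5) * depth / zoom
    y = (ny - 0.5) * depth / (zoom * resx / resy)
    return (x, y, -depth)
```

Because the depth is carried through in z, the transform round-trips exactly, which is the reversibility the post asks about; the same arithmetic ports directly to VEX.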

The purpose of this exercise is to create curves that render with a constant width in pixels, for wireframe renders. If there is another method of setting the width of curves in screen space, I would like to know about it. I know the conversion must be computed by mantra, since vm_geofilterwidth would have to know it to clamp the minimum size.
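Under the same assumed conventions (zoom = focal / aperture, horizontal aperture, square pixels), the constant-pixel-width goal reduces to scaling width linearly with camera depth; one horizontal pixel at depth d covers (aperture / focal) * d / resx world units. A sketch of that conversion (this is plain projection math, not mantra's actual vm_geofilterwidth logic):

```python
def width_for_pixels(pixels, depth, focal, aperture, resx):
    """World-space width that projects to roughly `pixels` pixels
    at camera-space depth `depth` (assumed conventions, see above)."""
    return pixels * aperture * depth / (focal * resx)
```

In practice you would evaluate this per point, using the point's camera-space depth, and write the result into the curve's width attribute.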

Thanks again,

Jonathan Mack