Hi Tomas.
I was missing the "m = invert(m);" line.
Your edited code works. Thank you! And thank you too, animatrix.
Attached is a simple HIP file of part of the larger setup I'm working on.
If you open the file and scrub to frame 100, you'll see it gathers up 100 bounding boxes, all looking at the defined target (a point) and still properly bounding the underlying geo (in this sample, randomly-rotated pig heads).
From here I'm proceeding on to the rest of the system, but this was what I was getting hung up on.
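For reference, here is the role that missing invert(m) plays, sketched in plain Python rather than VEX (the function and variable names below are illustrative, not Houdini API): the look-at matrix m maps the box's local frame into world space, so its inverse is what carries the geometry's world-space points into that frame, where a plain axis-aligned min/max is a valid bound.

```python
# Sketch of the oriented-bounding-box idea, assuming an orthonormal look-at frame.

def look_at_frame(source, target, up=(0.0, 1.0, 0.0)):
    """Rows are the x/y/z axes of a frame whose z points from target to source."""
    def sub(a, b): return tuple(x - y for x, y in zip(a, b))
    def cross(a, b):
        return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])
    def norm(v):
        l = sum(x*x for x in v) ** 0.5
        return tuple(x / l for x in v)
    z = norm(sub(source, target))
    x = norm(cross(up, z))
    y = cross(z, x)          # already unit length: z and x are orthonormal
    return (x, y, z)

def to_local(frame, p):
    """Apply the inverse transform (= transpose, since the frame is orthonormal)."""
    return tuple(sum(axis[i] * p[i] for i in range(3)) for axis in frame)

# Bound two sample points in a frame looking from the origin at (0, 0, 5):
frame = look_at_frame((0.0, 0.0, 0.0), (0.0, 0.0, 5.0))
pts = [(1.0, 2.0, 3.0), (-1.0, 0.0, -2.0)]
local = [to_local(frame, p) for p in pts]
bbmin = tuple(min(p[i] for p in local) for i in range(3))
bbmax = tuple(max(p[i] for p in local) for i in range(3))
```

Transforming the eight corners of (bbmin, bbmax) back out by the forward matrix then gives the oriented box in world space, which is what the wrangle ends up doing once invert(m) is in place.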
Technical Discussion » Orient a bounding box to look-at a target?
- BrandonRiza
This definitely fixes the up-direction issue, but the iterations of this code described below seem to break the look-at:
Commenting out the "vector cdir" and "dir = normalize" lines yields the same results as leaving them in.
Swapping the commenting status on the matrix lines also yields the same results.
GIF for visual reference.
What am i missing?
vector up = set(0, 1, 0);
vector source = point(1, "P", 0);
vector target = point(1, "P", 1);
vector dir = normalize(source - target);
vector cdir = normalize(cross(up, dir));
dir = normalize(cross(cdir, up));
matrix3 m = maketransform(dir, up);
//matrix3 m = maketransform(normalize(dir), normalize(up));
setdetailattrib(0, "M", m);
@P *= m;
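A note on why toggling those lines can change nothing visible: the cdir/dir pair is a double cross product that projects dir onto the plane perpendicular to up, and if maketransform re-derives an orthonormal frame from its two inputs anyway (which the identical results suggest, though that is an assumption about its internals), the pre-projection is redundant. A sketch of what the two lines compute, in plain Python rather than VEX (names illustrative):

```python
# cdir = normalize(cross(up, dir)); dir = normalize(cross(cdir, up))
# yields the normalized projection of dir onto the plane perpendicular to up.

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def normalize(v):
    l = sum(x*x for x in v) ** 0.5
    return tuple(x / l for x in v)

up = (0.0, 1.0, 0.0)
d = normalize((3.0, 4.0, 0.0))   # a direction tilted out of the xz-plane
cdir = normalize(cross(up, d))
d2 = normalize(cross(cdir, up))

# d2 equals d with its up-component stripped out and renormalized:
proj = normalize((d[0], 0.0, d[2]))
```

So the extra lines only matter when dir has a component along up; they flatten it into the horizontal plane before the matrix is built.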
Technical Discussion » Orient a bounding box to look-at a target?
- BrandonRiza
Here's a comparison of the end result. No up vector defined (first option) vs defining the up vector (second option).
I'll keep digging; thanks for the tips.
Edited by BrandonRiza - April 13, 2023 15:58:34
Technical Discussion » Orient a bounding box to look-at a target?
- BrandonRiza
Hi, animatrix...
This is an interesting solution and I've rolled it into my system successfully (thanks!), but I'm unclear on how to implement the up vector. I do indeed want a fixed orientation on each bounds (this runs on a per-frame basis, creating bounds for many objects). I've tried adding v@up = set(0,1,0); in various places, to no avail.
How would you accomplish this, if you were to remove your float angle = radians(ch("angle")); functionality?
The end result of the total bounding boxes needs to (essentially) look like the attached image.
jsmack ... thanks for your input; I'm attempting this as well.
Technical Discussion » Orient a bounding box to look-at a target?
- BrandonRiza
All...
I'm trying to get the bounds of a given object, then orient that bounding box so that the +Z normal looks at a target, but still properly bounds the object.
I can get the bounds to look at (or away from) a target with a Copy To Points node, feeding a point into port0 and a target into port1 of an attribute wrangle with the following VEX code:
vector @N;
vector source = point (0, "P", @ptnum);
vector target = point (1, "P", @ptnum);
@N = normalize(source-target);
v@up = set(0,1,0);
...but of course that invalidates the bounds.
Any tips on how to get the bounds of an object, leverage the code above to look at a target, and maintain proper bounds, all in a single attribute wrangle? I can't figure it out. Neither can ChatGPT.
I'll attach a GIF and a HIP.
Thank you in advance.
Edited by BrandonRiza - April 13, 2023 11:19:37
Houdini Indie and Apprentice » Mantra save-to-disk Vs. Render View speeds...
- BrandonRiza
Hello,
Houdini Indie 18.0.499
I have a heavy static mesh with 130 8k UDIMs.
When I open Render View and render (Mantra), the scene takes about 5 minutes to calculate all the data.
The render itself takes about 8 seconds (in this case).
When I frame-forward, Render View renders the next frame in 8 seconds, and likewise for any other frame.
When I render the Mantra node to disk, it re-calculates the geometry on every frame.
So: 20 hours to render to disk vs. 32 minutes to manually click through every frame and save the Render View buffer to disk.
I've tried using IFDs (with Save Inline Geometry off), a Stash node, a Cache node (caching only 1 frame), locking the final SOP node in my chain, packing the geometry, setting Cache Frames to 1 on the File SOP loading the data, and increasing the Cache Memory Ratio to a percentage large enough to encompass the scene on the Mantra Rendering/Render tab…
None of those work; Mantra still reloads all the data on every frame. (And I've tried Redshift, but it crashes Houdini to the desktop in an empty scene when I open the Redshift render view… maybe a .499 issue?)
So what am I missing? How can I instruct the Mantra save-to-disk render to just use the data it already has in memory?
Any help is much appreciated. I'm over here setting the Render View Toggle Auto Save timer to nSeconds and hitting the forward arrow hundreds of times…
Houdini Indie and Apprentice » Promoting Poly Count to Object Level?
- BrandonRiza
Hello.
I'd like to let the users of a tool I'm writing see the current poly count the tool is producing. Obviously at the Geo level you can middle-click and get that info, but it's not available at the Obj level. The same holds true for Viewport Messages and Display Options/Guides/Geometry Info = Always On.
My first thought was to store the poly count in a Detail attribute and display it in a Tag Scene Visualizer. That would probably be the cleanest solution, so that's my first question: how can I store the current poly count of a piece of geometry in a Detail attribute?
My second thought was to display that same detail attrib as a Descriptive Parameter on the HDA node via the Type Properties/Node tab, but I don't really want to wrap this up in an HDA.
The best solution of all would be to simply have the ability to middle-click the Obj-level master node and get the poly count. Is that possible? What's the best way to accomplish this?
Thanks…
Houdini Indie and Apprentice » Naming objects based on their UDIM tile?
- BrandonRiza
Thanks for the replies, guys. I appreciate the input.
I've made a sample file and verified that the name attrib matches the number of UDIM tiles (it reads 4 unique strings; duplicating the tiles and feeding them into the merge yields 8), but how would you go about writing these four tiles out as a sequence of bgeo files?
1001.bgeo
1002.bgeo
1003.bgeo
1004.bgeo
Cheers, and thanks.
Houdini Indie and Apprentice » Naming objects based on their UDIM tile?
- BrandonRiza
Hello.
I'm splitting very dense meshes up with an Attribute VOP, re-UVing the resulting mesh sections, then arraying those UVs into UDIM space. I'm outputting both a re-welded single mesh and all of the mesh sections themselves as a sequence of OBJ files. I need to name each OBJ file in that sequence based on which UDIM tile the mesh occupies, such as "Mesh_1001.obj", "Mesh_2005.obj", etc.
The attached images show the meshes arrayed in UDIM space and the Subnet I'm (currently) using to both control the output amount and name the meshes. I'm not sure how to go about pulling in the UDIM tile data and assigning it to the mesh name. Is this something I'd do in my current Attribute Wrangle? Any pointers would be much appreciated.
Thanks…
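In case it helps anyone searching later: under the standard UDIM convention the tile number follows directly from a point's UV coordinates (for u in [0, 10)), so the name can be computed per mesh section. A sketch in plain Python (the udim_tile helper is illustrative, not Houdini API); the same expression in a VEX wrangle would be 1001 + floor(@uv.x) + 10 * floor(@uv.y):

```python
import math

def udim_tile(u, v):
    """Standard UDIM numbering: 10 tiles per row, starting at 1001."""
    return 1001 + math.floor(u) + 10 * math.floor(v)

udim_tile(0.25, 0.5)   # -> 1001, the first tile
udim_tile(1.5, 0.25)   # -> 1002, one tile to the right
udim_tile(0.5, 1.5)    # -> 1011, one row up
```

Stashing that number in a string attribute like name (e.g. "Mesh_" + str(tile)) is then enough to drive per-tile file naming.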
Technical Discussion » Amazon Cloud Rendering...
- BrandonRiza
I know there have been posts concerning this topic in the past and I've read through them all, but they all just sort of trail off with no concrete solution.
I've read the docs and set up my account as instructed to the best of my knowledge, and when I try to submit a render via HQ to the cloud, absolutely nothing happens.
PreFlight and the identical dialog that pops up when you hit "Render" in HQ yield different results. (Attached.)
I'm on a Mac, OS X 10.9.5, running Houdini Indie.
Anything glaringly obvious that I'm missing?
Anybody have this working?
Houdini Lounge » Exporting the FLIP surface visible in the Visualization tab.
- BrandonRiza
Hello…
Is there a way to create an alembic cache (with velocity) of the surface preview visible in FLIPtank>Guides>Visualization (check Surface)?
I realize I can import the points via DOP I/O and do a vdbfromparticlefluid or a particlefluidsurface, but both of these methods are basically reconstructing the surface that is already created. FLIPtank>Guides>Visualization (check Surface) lets me see the surface, and it looks better than what I can achieve with a VDB or particle conversion.
So can I just spit out the mesh I'm seeing in FLIPtank>Guides>Visualization?
Optionally, and perhaps better, how can I achieve the same look using either vdbfromparticlefluid or particlefluidsurface?
Thanks…