Found 64 posts.
Solaris and Karma » Solaris camera import question
- BradThompson
It looks like it was me that was confused. I have nearly zero experience with USD/Solaris/Karma. Something I read said that for animation to come through a scene import, "Import all time samples" should be selected. That's what I did, and it appeared to work, but with a big lag on scene load. I don't remember where I read that. I disabled that setting and wrote the note as a convenience to you, so you aren't waiting forever for the file to load. I never tested it any other way. I guess I don't understand the purpose of the "Import all time samples" setting yet.
Solaris and Karma » Solaris camera import question
- BradThompson
Thanks Rob. Here's a stripped down version. Hopefully this packaged up correctly. I left some notes on the STAGE. The file is too big to upload to the forum, so I posted it here: https://we.tl/t-AulkC1x6wS [we.tl]
Solaris and Karma » Solaris camera import question
- BradThompson
Nice! That appears to work. Thanks. I thought the Camera LOP just created a camera; I didn't know you could use it to set properties on an imported one. I have a lot to learn.
Solaris and Karma » Solaris camera import question
- BradThompson
I have a camera that is controlled by riveting it to an animated surface. The animated surface is a time-offset and smoothed copy of an Ocean Evaluate, so basically the camera follows the motion of the waves, but lagged, smoothed, and offset in Y.
I need to get this camera animation into LOPs, but I also need to use a LOPs-specific camera, because cameras that come in through Scene Import nodes don't respect SHOP CVEX lens shaders, which is a requirement for this shot. So basically, I need to import my /obj camera hierarchy's animation and constrain a LOPs camera to it. What's the best way to do this?
Apologies if this is basic. I'm just starting to investigate Solaris/USD/Karma now that lens shaders can be made to work with it.
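For anyone finding this later, here's a minimal, untested sketch of one way to stamp the /obj camera's world transform onto a camera prim from a Python LOP. The node and prim paths are made up for illustration:

from pxr import Gf, UsdGeom
import hou

node = hou.pwd()                     # the Python LOP
stage = node.editableStage()

# Hypothetical paths: the riveted /obj camera and the LOPs camera prim
src_cam = hou.node("/obj/rivet_cam")
prim = stage.GetPrimAtPath("/cameras/render_cam")

# World transform of the /obj camera at the current cook time
xf = src_cam.worldTransform()

# Write it as a single transform op, keyed at the current frame
xform = UsdGeom.Xformable(prim)
xform.ClearXformOpOrder()
op = xform.AddTransformOp()
op.Set(Gf.Matrix4d(*xf.asTuple()), hou.frame())

The Camera LOP approach that came up elsewhere in the thread is simpler if it covers your case; this only shows the idea of pulling the /obj animation across by hand.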
Houdini Indie and Apprentice » Wire Glue Constraints - Tree Hierarchy
- BradThompson
I know this post is old, but since I just ran into the same issue and haven't gotten around it yet, I thought I'd post.
The problem seems to be that the glue constraints expect ordered groups, but the order isn't respected, even if you create the group outside of DOPs. It seems to reorder things by @ptnum. For me, this works fine until it doesn't, usually when a small branch (high @ptnum values) is supposed to glue to a part of the tree with low ones, like the trunk.
I would love to know if there is a way to feed the constrained and goal point arrays into the wire glue constraint nodes without losing the order. Maybe a dictionary detail attribute or something? From what I can tell, that field only seems to accept expressions that result in a group.
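As an untested sketch of the kind of bookkeeping I mean, the ordered pairs could at least be stored losslessly as detail array attributes from a Python SOP (attribute names and point numbers are invented):

node = hou.pwd()
geo = node.geometry()

# Hypothetical ordered pairing: constrained[i] should glue to goals[i]
constrained = [512, 513, 514]
goals = [7, 7, 12]

geo.addArrayAttrib(hou.attribType.Global, "constrained_pts", hou.attribData.Int)
geo.addArrayAttrib(hou.attribType.Global, "goal_pts", hou.attribData.Int)
geo.setGlobalAttribValue("constrained_pts", constrained)
geo.setGlobalAttribValue("goal_pts", goals)

Whether the wire glue constraint node can be made to read something like this instead of a group expression is exactly the open question.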
Technical Discussion » Houdini 18 Karma Custom Fisheye Lens Shader Revisited
- BradThompson
I just want to add my voice to this thread. Custom shaders are critical for the fulldome/planetarium work I do as well. I see Houdini 18.5 is nearing release. I wonder if there are any updates on this front.
Thanks!
Brad T.
Houdini Indie and Apprentice » How to add atmosphere in Houdini?
- BradThompson
If someone who knows more about shader writing than me wants to try this, I stumbled across this recent (2020) paper from someone at Epic Games. It could be a good start.
https://sebh.github.io/publications/egsr2020.pdf [sebh.github.io]
Technical Discussion » Remote Desktop Connection
- BradThompson
Windows Remote Desktop won't use OpenGL unless you have a Quadro card. We're using Parsec instead. There are some issues related to logging in and Task Manager, but nothing insurmountable.
Houdini for Realtime » Houdini Viewport - HMD Connection - Oculus Rift / HTC Vive
- BradThompson
Very cool, Vladislav! It looks like you've got tracking data working inside Houdini. Is the viewport also being rendered into the HMD?
Technical Discussion » Match background orientation between cameras?
- BradThompson
rich_lord, your suggestion is working for me after all. It just took me a bit to understand it. Thanks again!
Here's the procedure:
1 - Go to the frame number where the match will take place
2 - Create a null and check “Keep Position When Parenting”
3 - Link BG-1 (the BG to be matched) to the null
4 - Link the null to camera-1 (the view to be matched)
5 - Uncheck “Keep Position When Parenting” on the null
6 - Relink the null to camera-2
It would still be nice to have a more programmatic way of doing this.
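For the record, a programmatic version might look something like this in the Python shell. This is an untested sketch with made-up node paths, and the multiplication order may need swapping depending on which side the offset should be applied:

import hou

# World-space rotation of each object
cam_a = hou.node("/obj/camera_A").worldTransform().extractRotationMatrix3()
bg_a = hou.node("/obj/background_A").worldTransform().extractRotationMatrix3()
cam_b = hou.node("/obj/camera_B").worldTransform().extractRotationMatrix3()

# Offset between camera A and background A...
offset = cam_a.inverted() * bg_a

# ...re-applied relative to camera B
bg_b = cam_b * offset

# Euler angles (degrees) to paste onto background B's rotate parameters
print(hou.Matrix4(bg_b).extractRotates())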
Technical Discussion » How to stop nodes rendering?
- BradThompson
There may be better ways, depending on why you want to do this.
One way might be to use material nodes on each primitive, prior to the merge. Assign a shadow matte material to the box with transparency set to 100 and shadow set to 0. Assign a different material to the sphere. Both primitives will be active, but the box will render nothing.
Another approach would be to connect the box to a switch node, with a null wired into the switch's other input. That would let you choose between the box and the null going into your merge, as sketched below.
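Here's a rough sketch of that second approach wired up in Python, assuming a hypothetical /obj/geo1 containing box1 and merge1:

import hou

geo = hou.node("/obj/geo1")
switch = geo.createNode("switch")
bypass = geo.createNode("null")

# Input 0 is the box, input 1 is the empty null
switch.setInput(0, geo.node("box1"))
switch.setInput(1, bypass)

# Route the switch into the merge in place of the box
geo.node("merge1").setInput(0, switch)

# Select Input: 0 renders the box, 1 renders nothing
switch.parm("input").set(1)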
Technical Discussion » Match background orientation between cameras?
- BradThompson
Sorry for the delay in responding and thank you for the suggestion. It's possible that I'm not understanding the suggestion, but I don't think this is what I'm looking for.
I need to copy just the offset between camera-A and background-A onto the camera-B/background-B pair. Here's an example with fake Euler XYZ rotations (for simplicity):
if
Background-A = {0,0,0}
Camera-A = {45,0,0}
and
Camera-B = {100,0,0}
then
Background-B = {145,0,0}
The offset between Camera-A and Background-A is 45 degrees in X. To make camera-B's background match, I'd have to rotate background-B 145 degrees in X.
The thing that makes this complicated for me is that the cameras are linked to a series of nulls, so I have to get the world-space orientations rather than just grab channel values directly.
Technical Discussion » Match background orientation between cameras?
- BradThompson
I have two HIP files with animated cameras. I need the background (environment light with HDR map) orientation to match on a particular frame. It seems simple, but it's giving me trouble.
Put another way, I need to get the world-space orientation of Camera-A and Background-A, find the angular difference between the two, then rotate Background-B by that angle/axis relative to Camera-B.
It seems like there should be a VEX or VOPs way to do this, but quats and matrices aren't my strong point. Also, note that the cameras are children of animated hierarchies, so I can't directly grab the Euler rotation values. Any help?
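In other words, with plain world-space rotation matrices (up to multiplication-order convention), I think the operation is something like:

R_bgB = R_camB * inverse(R_camA) * R_bgA

that is, take background A's orientation, remove camera A's, and re-apply camera B's.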
Thanks!
Technical Discussion » POPs - How to generate 1 particle per collision
- BradThompson
Well, that's simpler than I thought. Thanks, AndyW!
It turns out that my non-stripped-down scene had another problem that was leading me to think it wasn't working like that. Also, the manual defines Impulse Count as “Number of particles to emit each time the node cooks”. I was interpreting that to mean that if there are 5 collisions in a substep, you'd only get one particle, since the node only cooks once per substep (I think?).
Anyway, it's working as you describe, AndyW, so thanks again.
Technical Discussion » Lightning effect scale
- BradThompson
Have you tried adding a pscale attribute? Drop an Attribute Create node before your Add node and set the following values:
Name: pscale
Class: Point
Type: Float
Precision: float32
Size: 1
Write Values: checked
Value: this is the size at render time. Try 0.01 first, then go larger or smaller to fine-tune.
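Equivalently, a Python SOP can create the same attribute in one call; this is just a sketch, and 0.01 is only a starting value:

node = hou.pwd()
geo = node.geometry()

# Uniform per-point render scale; the default value applies to every point
geo.addAttrib(hou.attribType.Point, "pscale", 0.01)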
Technical Discussion » POPs - How to generate 1 particle per collision
- BradThompson
Hi!
I have some particles colliding with a sphere set up as a static object. I'm trying to create a single new particle at each point of impact. I've got it mostly working using a POP Replicate node with the group set to collidegroup, but I can't figure out what kind of expression I need in Impulse Count to generate a single particle per collision.
Attached is a hip file where each collision generates a stack of particles at the impact point.
This is just a stripped-down example. My eventual goal is to have a planetoid moving through a field of asteroids. Each time an asteroid collides with the planet, I want to stamp an impact mark. I've got a pretty good start on this, but creating a single impact scar is driving me mad.
Thanks!
Edited by BradThompson - April 3, 2018 09:36:03
Technical Discussion » What would cause Fluid Compress node to delete ALL points?
- BradThompson
Solved. I wasn't creating all of the fields that the Fluid Compress node needs to work correctly. I switched to creating my FLIP fluid using a “Fluid Source” node and all is well. It seems to create all the fields I needed.
Technical Discussion » What would cause Fluid Compress node to delete ALL points?
- BradThompson
I've got a FLIP sim built mostly from shelf tools, but modified somewhat. It seems to work, except that the fluid compress node doesn't seem to be retaining any of the particles.
The geometry spreadsheet details view of the fluid compress node says:
-fluidcompress_particlescompressed: 0
-fluidcompress_particlesuncompressed: 0
I've tried turning cull bandwidth on and off and all sorts of values, but nothing works.
If I bypass the Fluid Compress node, surfacing doesn't seem to work at all, so it's possible the problem is in my DOP I/O or earlier. I'm lost, though.
Any insight would be appreciated.
Houdini Lounge » Thoughts on mantra
- BradThompson
I think I made my point regarding IFDs, but here's a real-world example I just encountered where the two-step rendering pipeline cost me a day's time. If you know a better approach, I'd love to hear it. Here is what I did.
I'm working on a sim of a pool of bubbling lava. It's a 900-frame, reasonably high-detail sim. My workstation has been meshing it since the night before, resulting in a several-TB geometry cache. Most of the day has been spent getting my material to look good, and I'm hoping to render it overnight. It's late in the day, but I finally get the shader looking good enough for a test render, so I generate my IFDs and send them to Afanasy. A few minutes later, I load up the few frames I've gotten back from the farm and notice that the UVs are unstable and twitching about. A little research indicates that adding a UVSmooth node to the mesh might solve the problem.
The thing is, I can't just add the UVSmooth node and resubmit the render. I've got to unpack that geometry, apply the UVSmooth, and re-cache it. All that, and I'm still not sure it will fix the problem. With several TB of mesh data, the re-cache didn't finish in time for me to resubmit to the render farm before I had to leave, putting me a day behind.
I don't mean to belabor the point. Yes, spending money on a few Engine licenses would ease the problem, so we might do that. But it's still an extra complication that could be hidden away and automated so that I can spend more time being creative and less time administering processes.
Houdini Lounge » Thoughts on mantra
- BradThompson
blackpixel
Yes, but all of the “other integrated renderers” require a separate license to render or eat a host app license. Mantra tokens are free.
I'm coming from a 3ds Max background where, up until last year, both of the integrated renderers (Scanline and mental ray) and finalRender included 999 network render licenses at no additional charge. The removal of free network rendering is one of the primary reasons we are trying to move away from Autodesk and toward Houdini.
I'm not suggesting making Houdini Engine licenses free, only the IFD generation and probably some of the geo prep stuff so that I don't have to mess about with remembering to pack/cache every little thing, or troubleshoot why IFD generation is taking so long. Let the farm do that so that I can get back to iterating. I acknowledge that I'm new to Houdini, but so far, I don't see the benefit of having to manually manage that process most of the time.