Technical Discussion » Houdini 20.5.410 won't launch
-
- localstarlight
- 58 posts
- Offline
I'm at a total loss here. I have now tried installing Houdini 20.5.410 on three different machines on our render farm, and it's the same thing on all of them (but it works fine on my own local machine).
I have tried completely uninstalling all versions of Houdini and the launcher and starting all over again, and it's still not working. I just have no idea where to go from here, since it's not throwing any errors.
Any help much appreciated, as I'm on a deadline and need to get this working ASAP.
Technical Discussion » Houdini 20.5.410 won't launch
-
- localstarlight
- 58 posts
- Offline
I have been running Houdini 19.5.716 but have just upgraded to 20.5.410, and it just won't launch. The splash screen appears for a while and then disappears. There are no errors in the console or elsewhere, and no crash logs; it just doesn't load. With the lack of errors or logs, how can I even diagnose the issue?
Has anyone experienced anything similar? Is there any way to figure out what's happening here?
Edited by localstarlight - Jan. 13, 2025 18:11:06
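One diagnostic sketch worth trying (assuming the farm machines have the command-line tools installed): start hython, Houdini's Python shell, which loads the same core libraries, environment, and licensing as the full application but no graphics. If it comes up cleanly, the silent exit most likely lives in the GPU/driver/UI layer rather than the install itself.

import hou
# If this prints, the core libraries load and a license is checked out;
# a silent UI exit would then point at the graphics/driver side.
print(hou.applicationVersionString())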
Solaris and Karma » Lens simulation - is it possible, and is it worth it?
-
- localstarlight
- 58 posts
- Offline
I’ve just come across this lens simulation plugin for Blender: https://blendermarket.com/products/lens-sim [blendermarket.com]
Conceptually, I love the idea of simulating light passage through a vintage lens, for example, using real-world data for the glass elements. I have two questions:
1. Would something like this be possible using the Karma lens shader? I don’t see why not, but I may be missing something important in the implementation that doesn’t translate.
2. More importantly, is something like this even worth it? Without question it would add overhead to the render time, and I'd be curious to hear from experienced people in the film/VFX/compositing world whether all of this is 100% achievable in post, with no real benefit to baking the lens characteristics into the render itself.
In my opinion/experience it’s usually better to have flexibility and more freedom in comp, but I know there are also instances (I have heard this discussed with regards to in-camera vs. post motion blur, for example) where greater realism can be attained by baking something into the render itself. So I’m wondering whether this is one of those occasions.
Curious to hear if anyone has any experience with this kind of thing, or any insight. Thanks!
Solaris and Karma » Correct way to create variants based on activation?
-
- localstarlight
- 58 posts
- Offline
I have a USD of a boat, with two different options for masts in it. I want to create a variant for each. I thought maybe I could run an initial Prune node (using deactivate) to get rid of one of them (so this could be the default). Then I create a branch using Configure Primitive nodes to turn off the default one and activate the other, and feed that into an Add Variant node.
However, when I try this and then use a Set Variant node, it doesn't do anything. This is my first time actually using Solaris without following a tutorial, so clearly there's some basic USD stuff I don't quite understand yet.
What would be the correct way to go about building variants that are essentially different iterations of different components being activated or not?
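For reference, here is what I understand the same setup to look like at the raw USD API level; a minimal sketch, with /boat/mastA and /boat/mastB as hypothetical prim paths. The detail I suspect matters is composition strength (LIVRPS): an active=false authored directly on the prim (for example by an upstream Prune) is a local opinion and wins over anything authored inside a variant, so the activation opinions need to live inside the variants themselves.

from pxr import Usd

# Hypothetical stage and prim paths, for illustration only.
stage = Usd.Stage.Open("boat.usda")
boat = stage.GetPrimAtPath("/boat")

vset = boat.GetVariantSets().AddVariantSet("mast")
for variant, keep, drop in [("mastA", "/boat/mastA", "/boat/mastB"),
                            ("mastB", "/boat/mastB", "/boat/mastA")]:
    vset.AddVariant(variant)
    vset.SetVariantSelection(variant)
    # Author the activation opinions *inside* the variant, so they
    # only apply when that variant is selected.
    with vset.GetVariantEditContext():
        stage.GetPrimAtPath(keep).SetActive(True)
        stage.GetPrimAtPath(drop).SetActive(False)

stage.Save()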
Solaris and Karma » Best approaches for the Sun in the new Karma Atmosphere
-
- localstarlight
- 58 posts
- Offline
Ah, OK, thanks for letting me know about that, I didn't realise that was an option.
That said, it's still tricky to get the sun looking correct when working with the default settings.
Here is an example of the kind of 'orbital sunrise' I am looking for:
[attached image]
Here is the default sun geometry from the Karma Physical Sky as seen from orbit:
[attached image]
I can push and pull the atmosphere parameters around to try and get something working for this particular instance, but it breaks the look of everything else, and I assume the default values for the atmosphere are based on real values for an Earth-like atmosphere.
Previously I have been doing this in Redshift, where I use an actual VDB with values ramped according to real-world values, and it works. It's slow and cumbersome because of the size of the volume, but it works. I'm hoping to replace Redshift with Karma, but we do a lot of space stuff, so it would be great if this worked as expected.
Solaris and Karma » Best approaches for the Sun in the new Karma Atmosphere
-
- localstarlight
- 58 posts
- Offline
No, I’m talking about the new atmosphere model: https://www.sidefx.com/docs/houdini/nodes/lop/karmaskyatmosphere.html [www.sidefx.com]
It's a much better atmospheric/sky model than the older Karma Physical Sky. And while the images in the documentation show it with a sun in the sky, there's actually no sun by default.
I’m fairly sure you’re not supposed to just combine the two approaches. The nice thing about the new atmosphere is that it works from space as well. Much like the atmosphere in Unreal Engine it’s actually simulating a planetary atmosphere.
I’m just asking about the best approach to adding a sun to this very nicely physical atmosphere model.
Solaris and Karma » Best approaches for the Sun in the new Karma Atmosphere
-
- localstarlight
- 58 posts
- Offline
I'm loving the possibilities with the new Karma atmosphere, but I'm wondering about the best approach to having a sun in the scene. The old Karma sky included settings for the appearance of the sun, but it seems that in the new atmosphere we are supposed to provide the sun ourselves, and I'm not sure what the intended approach would be.
If I add a distant light, it appears to light the atmosphere correctly, but there is no actual sun in the sky. I have tried creating a spherical light and making it very large, very distant, and very intense, and it sort of works, but it takes a lot of guesswork to get something that feels correct. It also doesn't seem to interact with the atmosphere to create the look of a sunset (making the sun redder as it sets).
I am interested in getting something which is physically correct both on the ground level / within the atmosphere, as well as from the perspective of being in space / low Earth orbit.
Does anyone have any suggestions for how to set this up? There is not much mention in the Karma Sky Atmosphere documentation about the sun, which seems like a crucial oversight.
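A sketch of the direction I'd investigate, not a confirmed workflow: UsdLux distant lights have an angle attribute (in degrees; the real Sun subtends about 0.53°), which at least gives physically plausible shadow softness, and I believe Karma lights have a render-light-geometry toggle that makes the light's disc visible to camera rays. Treat that attribute name as an assumption on my part:

from pxr import Sdf, Usd, UsdLux

stage = Usd.Stage.CreateInMemory()
sun = UsdLux.DistantLight.Define(stage, "/lights/sun")
sun.CreateAngleAttr(0.53)      # real solar angular diameter in degrees
sun.CreateIntensityAttr(1.0)
sun.CreateExposureAttr(13.0)   # hypothetical value, eyeball to taste

# Assumption: Karma's "render light geometry" toggle, which should make
# the sun disc itself visible to camera rays.
sun.GetPrim().CreateAttribute("karma:light:renderlightgeo",
                              Sdf.ValueTypeNames.Bool).Set(True)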
Solaris and Karma » No UI feedback from Karma 'Render to Disk'
-
- localstarlight
- 58 posts
- Offline
I'm probably missing something really obvious here, but when I click 'Render to Disk' in Karma/Solaris, there doesn't appear to be any feedback whatsoever: no window pops up showing progress, there's no command-line-style output I can follow the render on, nothing. In fact, the only way I can tell anything is happening at all is by opening the task manager and seeing whether husk.exe is running and doing anything.
This is the kind of behaviour I would expect from a button saying 'Render to Disk in Background', but not from 'Render to Disk'. Is there an output log/progress bar that I need to enable somehow?
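One workaround sketch in the meantime: write the USD to disk and run husk by hand, which streams progress to the terminal. The flag spellings below are from memory, so verify against husk --help; the file paths are hypothetical.

import subprocess

# Render a USD file with husk and watch progress in the terminal.
# Flag spellings are assumptions -- check `husk --help`.
subprocess.run([
    "husk",
    "--verbose", "2",             # progress/statistics output
    "-o", "render/frame.exr",     # hypothetical output path
    "scene.usd",                  # hypothetical USD written from LOPs
], check=True)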
Solaris and Karma » Resolution issues with Karma / Solaris
-
- localstarlight
- 58 posts
- Offline
We've just finished a very high resolution project (24K x 12K) using Redshift in standard Houdini. We are now investigating starting to move to Solaris/USD and whether we can use Karma instead of Redshift.
I'm running some very basic tests and already hitting an issue with even attempting to render at that resolution, as well as significantly slower render times. I'm testing a very basic scene:
- HDRI dome light
- spherical camera
- various resolutions (4K, 8K, 12K, 24K)
My benchmarks (on a 4080 super) for this scene using Redshift in a standard (non-Solaris) way are as follows:
4K: 1.7 seconds
8K: 5.5 seconds
12K: 9.3 seconds
24K: 38.3 seconds
Using KarmaXPU in Solaris:
4K: 6 seconds
8K: 25 seconds
12K: 51 seconds
24K: WON'T RENDER
First up, I'm surprised there is such a difference in render time for such a simple scene. This is with just one primary sample in Karma, so perhaps I'm missing some obvious render settings that would bring it down. But no matter what I try, rendering at 24K crashes every time. Is there a hard resolution limit in Karma?
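For what it's worth, some back-of-envelope arithmetic on the 24K failure. This assumes Karma XPU keeps the full framebuffer resident in GPU memory, which is an assumption on my part:

# Size of a single float32 RGBA image plane at 24K x 12K.
width, height = 24576, 12288
channels, bytes_per_channel = 4, 4       # RGBA, 32-bit float
gib = width * height * channels * bytes_per_channel / 2**30
print(f"{gib:.1f} GiB per image plane")  # ~4.5 GiB

A 4080 Super has 16 GB of VRAM, so a handful of image planes plus scene data could plausibly exhaust it at that resolution, even for a simple scene.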
Technical Discussion » Best way to tell Karma not to render certain pixels?
-
- localstarlight
- 58 posts
- Offline
Ah yes, I should have mentioned that the whole intention here is to use Karma XPU as a replacement for Redshift. We’re not in a position to switch to a CPU renderer. Thanks anyway though.
Technical Discussion » Best way to tell Karma not to render certain pixels?
-
- localstarlight
- 58 posts
- Offline
Interesting. From reading the documentation, that makes sense conceptually. But I'm having trouble getting it to work.
I've created a totally black image, with a depth AOV as specified in the documentation:
"Draw a foreground image over the geometry in the viewer when looking through this camera. If the image contains a depth AOV, this is used to z-composite the image into the scene. This can be used to replace large portions of the scene with a pre-rendered image, saving memory and processing time."
For the area I want to hide, I've tried setting the depth.Z value to something low like 0.1, and for the area I want to see, I've set it beyond the far clip of the camera (e.g. 100000). So from what I understand, the renderer should depth-test this and, where the image is closer than anything else, insert the image pixels, but otherwise render the scene.
Except it doesn't seem to work like that, and I can't find any examples of its usage like this.
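In case it helps anyone reproduce this, here is how I'd author the test image; a minimal sketch using the OpenEXR Python bindings, with the mask faked as a simple left/right split. The depth.Z channel name and the near/far values mirror what I described above; whether Karma honours this exact channel layout is precisely what I'm unsure about.

import numpy as np
import OpenEXR, Imath

w, h = 1024, 512
FLOAT = Imath.Channel(Imath.PixelType(Imath.PixelType.FLOAT))

black = np.zeros((h, w), np.float32)
# Fake mask: hide the left half (near depth), keep the right half (far depth).
mask = np.zeros((h, w), bool)
mask[:, : w // 2] = True
depth = np.where(mask, 0.1, 1.0e6).astype(np.float32)

header = OpenEXR.Header(w, h)
header["channels"] = {c: FLOAT for c in ("R", "G", "B", "depth.Z")}
exr = OpenEXR.OutputFile("foreground_mask.exr", header)
exr.writePixels({"R": black.tobytes(), "G": black.tobytes(),
                 "B": black.tobytes(), "depth.Z": depth.tobytes()})
exr.close()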
Technical Discussion » Best way to tell Karma not to render certain pixels?
-
- localstarlight
- 58 posts
- Offline
We are rendering for a very particular dome format. The frame itself is a latlong/equirectangular (2:1), but because of the shape of the screen we are only required to render a certain portion of the frame. See attached image - the white areas are active parts of the screen we need to render, black areas we do not need to render.
Thus far we have been rendering in Redshift, where I created a sphere geo with the active area cut out and parented it to the camera, with a pure black emissive material (GI disabled) applied. It works, but there is still some render time spent on the black areas. It's pretty quick to be honest, but we are rendering huge frames at a high framerate (24K x 12K @ 60fps), so it does add up over the length of a project.
We are now exploring a switch to Karma, and I'm curious what the best way to do this might be. I feel like a renderer integrated into Houdini might have a better/smarter answer than what we ended up with in Redshift. I have tried using the blocking geometry and setting it to matte via a Render Geometry Settings node. Oddly, the render with the matte geometry was four times slower than not having the geometry at all and just rendering the whole frame. I assume there's a good reason for this, but I had naively imagined that having a matte object right in front of the camera would very quickly and easily tell Karma 'don't render this pixel at all'. But apparently not.
Does anyone know if there is a super cheap way to do this? To just tell Karma to basically completely ignore a pixel?
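One thing I'm aware of but haven't fully tested: USD render settings have a dataWindowNDC attribute that restricts rendering to a sub-rectangle of the frame. It can't follow an irregular dome mask, but it could at least chop off fully black bands. A minimal sketch (prim path hypothetical):

from pxr import Gf, Usd, UsdRender

stage = Usd.Stage.CreateInMemory()
rs = UsdRender.Settings.Define(stage, "/Render/rendersettings")
# (xmin, ymin, xmax, ymax) in NDC; e.g. skip the top and bottom quarters.
rs.CreateDataWindowNDCAttr(Gf.Vec4f(0.0, 0.25, 1.0, 0.75))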
Technical Discussion » How to attach points to an alembic animation?
-
- localstarlight
- 58 posts
- Offline
How can I scatter points onto an imported alembic animation in such a way that they keep the same position on the geometry throughout the animation?
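A sketch of the approach I've been experimenting with, written out in Python for concreteness (node and path names are hypothetical, and I'm not sure it's the intended workflow): freeze a rest frame of the Alembic with a Time Shift, scatter on the rest geometry, then use a Point Deform to carry the points along with the animated mesh.

import hou

geo = hou.node("/obj/geo1")               # hypothetical geo network
abc = geo.node("alembic1")                # hypothetical Alembic SOP
                                          # (assumes it's unpacked to polygons)
rest = geo.createNode("timeshift")        # freeze a rest frame
rest.parm("frame").deleteAllKeyframes()   # remove the default $F expression
rest.parm("frame").set(1)
rest.setInput(0, abc)

scatter = geo.createNode("scatter::2.0")  # scatter on the rest pose
scatter.setInput(0, rest)

deform = geo.createNode("pointdeform")    # Point Deform SOP
deform.setInput(0, scatter)               # points to carry along
deform.setInput(1, rest)                  # rest geometry
deform.setInput(2, abc)                   # animated geometry
deform.setDisplayFlag(True)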
Technical Discussion » Any way to render a 360/latlong flipbook?
-
- localstarlight
- 58 posts
- Offline
I am making an animatic for a dome show in a 360/latlong format. It's extremely high resolution, so I'd love to take advantage of the flipbook function in Houdini for the animatic, but since it uses the viewer, even with a camera set up to be 360, the output is still a standard perspective view. So I guess there are two questions:
1. Is there any way to get the scene viewer to show the camera view in the correct projection (spherical/latlong) rather than a standard perspective when looking through that kind of camera?
2. Is there any way to use a ROP node to render the spherical/latlong frame in a low-res OpenGL-style way that is as quick as using a flipbook?
Houdini Lounge » How to work with freelancers with Indie licenses?
-
- localstarlight
- 58 posts
- Offline
We have a Houdini FX license. We need to use various freelance Houdini artists for this project, but most of those we encounter are using the Indie license. From what I've read, there is a 'one time only offer' where you can get a batch of .hiplc and .hdalc files converted when you first purchase your license, but after that there's no easy way to interchange files between the versions.
How do other studios deal with this?
We can't buy full FX licenses for every freelancer we need to use, that would be both impractical and really expensive.
Houdini Lounge » HSITE variable not being picked up
-
- localstarlight
- 58 posts
- Offline
Are people not really using the HSITE variable?
How are other people setting up a shared library of HDAs between multiple computers?
Houdini Lounge » HSITE variable not being picked up
-
- localstarlight
- 58 posts
- Offline
I have the following line in my houdini.env:
HSITE = "D:/Dropbox/Houdini_Site"
In that folder I have the following structure:
Houdini_Site
-houdini19.5
--otls
--- (various HDAs in here)
However, when I load up Houdini, none of the HDAs are accessible. The HSITE variable also doesn't appear in the variables panel (Edit -> Aliases and Variables... -> Variables).
What am I doing wrong?
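A quick sanity check from the Python Shell inside Houdini (Windows > Python Shell) to see whether the variable reached the session at all; if both lines print empty, houdini.env isn't being read from the location you expect:

import os
import hou

# Did the variable make it into the process environment?
print(os.environ.get("HSITE"))
# Does Houdini itself expand it?
print(hou.text.expandString("$HSITE"))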
Technical Discussion » Matrix transformations in the VEX function 'fromNDC'
-
- localstarlight
- 58 posts
- Offline
I was trying to use the VEX function 'fromNDC' inside a snippet wrangle in COPS, but it doesn't seem to provide the same results as when used in a point wrangle in SOPS. The documentation for 'fromNDC' warns that NDC space may not be 'well defined' in every context, so I guess I'm up against that.
I would like to perform the calculation manually in that case, but I'm not exactly sure what transformations are involved. I am trying to convert points from NDC space to camera space. I can see that one part of it is to use the camera matrix like this:
matrix cam_matrix = maketransform(0, 0, cam_position, cam_rotation);
@P *= cam_matrix;
That puts the points in the correct position/rotation in the scene, but there is something missing - I'm not sure if it is a screen-space, clip-space, view-space, or some other transformation that also needs to occur to get the same result as the 'fromNDC' function.
I have attached a project showing the two methods: (1) fromNDC, and (2) the incomplete method doing it by hand.
Can anyone help me?
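For what it's worth, here is my current understanding of the missing step, sketched in Python: before the camera transform, the NDC point has to go through the inverse perspective projection, which scales x and y by the film-plane size at the point's depth (the pinhole model). The conventions below (x, y in [0, 1] across the image, z as distance in front of the camera) are my assumptions about what fromNDC uses, so treat the signs as things to verify.

import hou

cam = hou.node("/obj/cam1")                 # hypothetical camera path

def from_ndc(u, v, dist):
    """NDC (u, v in [0,1], dist in front of camera) -> world space."""
    focal = cam.parm("focal").eval()
    aperture = cam.parm("aperture").eval()  # horizontal film aperture
    resx = cam.parm("resx").eval()
    resy = cam.parm("resy").eval()
    pixel_aspect = cam.parm("aspect").eval()

    # Film-plane half-extents at distance 1, from the pinhole model.
    half_w = aperture / (2.0 * focal)
    half_h = half_w * resy / (resx * pixel_aspect)

    # Inverse projection: centre the coordinates and scale by depth.
    p_cam = hou.Vector3((u * 2.0 - 1.0) * half_w * dist,
                        (v * 2.0 - 1.0) * half_h * dist,
                        -dist)              # camera looks down -Z
    # Camera space -> world space (the maketransform step above).
    return p_cam * cam.worldTransform()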