Interesting. Have you filed a bug report yet?
I was having some issues with normals on MaterialX nodes; maybe I was seeing the same thing as you.
Found 119 posts.
Search results
Solaris and Karma » MaterialX Heightnormal Artifacts
- No_ha
- 122 posts
- Offline
Solaris and Karma » Keep animated texture in Cache
Hello everyone,
I'm currently animating a project which involves placing real-life characters inside my 3D scene. I do all layouts in Solaris and write out each branch into a USD file. This works quite well, except for the animated texture file from the cards that I placed the characters on.
I can't seem to get Houdini to actually keep the image sequence in memory. If I go into the Texture tab of the Display Settings, I can see Houdini discard the texture again when I move on to another frame; I assume it only discards it from VRAM(?)
And this is the only thing holding me back from real-time playback.
There seems to be some caching going on, because if I overwrite the image sequence it will flicker on some frames and still display the old image sequence. But unfortunately, this isn't enough to keep up playback.
The cache LOP is also of no help here.
In some cases, it seems to work for a few frames and then I reach real-time but not for long until the data is discarded again.
I am animating the image sequence through an Edit Properties LOP, changing the file path based on the frame outside the Material Builder, as recommended. It also doesn't help to disable the cards the characters are on; I actually have to disable the material to gain performance.
I have tried increasing the max texture cache in the Display options but it never goes above 4% before it discards the previous frame.
Does anybody know of a way to force Houdini to keep image sequences in (V)RAM? Or any other way to deal with this?
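As an aside, the per-frame path swap described above (changing the texture file parameter from the current frame) boils down to simple string formatting. A minimal pure-Python sketch; the filename pattern and function name are invented for illustration, not taken from the actual scene:

```python
def texture_path_for_frame(frame, pattern="chars/card_tex.%04d.exr"):
    """Build the texture file path for a given frame, the way a
    frame-driven expression on an Edit Properties LOP would.
    The pattern here is a hypothetical example."""
    return pattern % frame

# e.g. frame 25 -> "chars/card_tex.0025.exr"
```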
Edited by No_ha - April 25, 2022 01:01:24
Houdini Lounge » What holds yourself/studio from adopting Houdini more?
I definitely would say that I use Houdini as the main tool for my work, but what I'm missing are some modeling and basic sculpting features. I don't need Houdini to replace ZBrush, but most other 3D applications feature at least some sculpting toolset (that you can usually also use on regular geometry).
Especially when you’re creating blendshapes for animation.
And in these cases it's hard to justify doing the whole rigging and animation part in Houdini if you have to switch software to fix a blendshape.
This would be helpful in other aspects as well. For example, if you're dealing with photoscans and doing some automatic cleanup in Houdini, but then have to put in more manual work, you end up switching software to get different modeling and sculpting tools.
I honestly don’t really expect to see something like that as I would have no idea how to make it procedural. It doesn’t have to be or maybe only parts of it are but I’m not sure if SideFX wants such a non procedural area in Houdini.
Houdini Lounge » [KARMA] How to get decent Volumic Light
As far as I know, the Fog Box uses a VEX shader (and maybe even Houdini volumes, not VDBs).
But you can create your own fog box, or modify the existing one, and then use the XPU Pyro Preview shader as a material.
Solaris and Karma » Solaris - EXR Matrix Metadata
You can take a look at this HDA that I made [noahhaehnel.com]
It should be able to add all the data to the EXR files. I also struggled with getting readable camera matrices, so I ended up converting them to P, R, S vectors, which are easily usable in other applications.
The VEX code isn't pretty but it works.
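The HDA itself is linked above; purely as an illustration of the matrix-to-vectors idea, here is a hedged pure-Python sketch that splits a row-major, row-vector 4x4 transform (Houdini's convention) into P, R, S. The function name, the XYZ rotation order, and the no-shear/positive-scale assumptions are mine:

```python
import math

def matrix_to_prs(m):
    """Decompose a row-major 4x4 transform (row-vector convention,
    assuming no shear and positive scales) into translate, XYZ-Euler
    rotate (degrees), and scale vectors."""
    t = list(m[3][:3])                      # translation lives in the last row
    rows = [m[i][:3] for i in range(3)]
    s = [math.sqrt(sum(c * c for c in r)) for r in rows]  # scale = row lengths
    r = [[c / s[i] for c in rows[i]] for i in range(3)]   # pure rotation part
    # Euler angles for an Rx*Ry*Rz row-vector rotation matrix
    ry = math.asin(max(-1.0, min(1.0, -r[0][2])))
    rx = math.atan2(r[1][2], r[2][2])
    rz = math.atan2(r[0][1], r[0][0])
    return t, [math.degrees(a) for a in (rx, ry, rz)], s
```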
Houdini Lounge » is houdini now using weta digital's proprietary solvers?
WetaH will come at some point:
https://weta-h.com/ [weta-h.com]
But that will be its own service and as far as I know it will only run on AWS servers.
Houdini Lounge » creating nodes with a wacom
Maybe this is similar to some issues I had before I changed the pen to only register while actually pressing down on the tablet.
This was especially noticeable when adjusting values using the middle-mouse menu over a parameter. I would press the middle mouse button on my pen while the pen was floating above the tablet. When I let go of the middle mouse button, I would slightly change the position of the pen and screw up the changes I had made.
I changed my tablet to only register clicks when the pen is actually touching the tablet. (The default setting is called Hover click I believe)
When it’s touching it, I’m less likely to change the position when letting go of the button. (Or from my hands shaking)
It takes some time getting used to this behavior, especially when you're used to navigating while the pen is in the air. But now it's more convenient for me. Unfortunately, this setting is global and not app-specific.
Edited by No_ha - March 11, 2022 04:53:21
Solaris and Karma » XPU (Alpha): Should we submit Bug reports?
I am currently unsure whether SideFX wants us to submit bug reports for the XPU Alpha.
I would consider most features to be missing and not a bug. And I would assume that they are already aware of that. But there are occasions (like UVs getting screwed up on XPU) where I'm pretty sure that those are bugs.
Was there some communication on how this should be treated? Or maybe only Houdini FX customers are supposed to report on this Alpha, as it is with the Beta program?
Obviously, I am not counting on XPU as a render engine I can use in actual production, but there are many situations where I could benefit from it (or render parts of my scene) once issues like the buggy UVs are fixed. Because overall, most of what is properly supported works very stably and incredibly fast on my machine.
Has somebody already submitted bug reports for XPU, or does anyone know what SideFX's policy is on it?
Solaris and Karma » MaterialX limited nodes & roadmap
AndreasWeidman
The dream of having Blender Eevee or Unreal Lumen performance in the viewport in Houdini seems further away than ever tbh.
One thing to note is that SideFX recently looked to hire somebody with Vulkan knowledge. The job posting was quite clear that this person was meant to bring the current OpenGL viewport to Vulkan. We have already seen them add DoF and bloom to the viewport. A more modern backend would hopefully increase quality and speed.
Solaris and Karma » Karma EXR data window
+1 for a native implementation but I quickly made an HDA for that:
https://noahhaehnel.com/blog/dynamic-data-window-and-overscan-in-solaris/ [noahhaehnel.com]
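The actual logic lives in the linked HDA; conceptually, a dynamic data window clamps the rendered region to the pixel bounding box of the visible content plus some overscan. A hedged sketch of that arithmetic, with the function name, the normalized (0-1) screen-space input, and the padding scheme all being my assumptions:

```python
import math

def data_window_px(res, ndc_bounds, overscan=0):
    """Convert a normalized (0-1) screen-space bounding box into an
    integer pixel data window, padded by `overscan` pixels and clamped
    to the image resolution. Illustrative only."""
    w, h = res
    xmin, ymin, xmax, ymax = ndc_bounds
    x0 = max(0, int(xmin * w) - overscan)
    y0 = max(0, int(ymin * h) - overscan)
    x1 = min(w, int(math.ceil(xmax * w)) + overscan)
    y1 = min(h, int(math.ceil(ymax * h)) + overscan)
    return x0, y0, x1, y1
```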
Solaris and Karma » Certain AOVs are empty in Karma
Yes. Make sure to set both Data Type and Format to color3f:
You can also target only the contributions from a specific light like this:
C<TD>.*'MyLPETag'
BTW, this is using the MtlX Standard Surface. The SSS looks very similar to the Random Walk SSS from the Principled Shader running on Karma. I wonder if it is the same. This also makes me wonder if we can expect different SSS implementations in MtlX shaders. Like, does it support having several different kinds, or are we locked into a specific one because all hosts need to be able to implement it? I'm really curious how much flexibility this will allow in the future.
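The expression above follows the usual light path expression shape: `C` for the camera, an event token, `.*` for any further bounces, and a quoted light tag at the end. A trivial hedged helper to assemble such strings (the function name is mine, and this only covers this one pattern, not general LPE syntax):

```python
def lpe_for_tagged_light(event, tag):
    """Build a light path expression that isolates one shading event
    coming from a light carrying the given LPE tag."""
    return "C%s.*'%s'" % (event, tag)

# reproduces the expression from the post:
# lpe_for_tagged_light("<TD>", "MyLPETag") -> "C<TD>.*'MyLPETag'"
```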
Edited by No_ha - Dec. 9, 2021 17:17:49
Solaris and Karma » Certain AOVs are empty in Karma
Thank you that worked great! I don't think I ever would've thought to create a "custom" LPE for it as I never had to do that for Mantra. I guess I finally need to read through the LPE/OSL docs.
Weird that they added so many render vars to the Karma HDA that do nothing. I find myself unlocking and diving into the Karma HDA for pretty much every project anyway so I guess at this point I should simply make one myself with the render vars and settings I need.
Edit: Just saw the answer from a dev. Good to hear that this is being looked at. So it's probably just a bug and fixed in a future version.
Edited by No_ha - Dec. 7, 2021 16:12:58
Solaris and Karma » Certain AOVs are empty in Karma
Hey everyone,
I have been trying to figure out why I can't seem to get any data in certain AOVs. Those include SSS but also Coat and Diffuse.
I assume I'm simply missing something like a wrong export name or I need to change something in the Shader since I haven't found any other posts mentioning this.
In the images, I am using the principled shader and Karma CPU. But I have also tested this with CPU and the MaterialX shader.
Also, rendering to disk or MPlay doesn't change it either.
Same result: no data in these AOVs.
Is there some trick to it that somebody could share?
Here I simply use a light behind tommy. All the red color is SSS.
Edited by No_ha - Dec. 7, 2021 05:07:39
Houdini Lounge » XPU and memory usage, out of core
I recently tested XPU with this scene: https://twitter.com/NoahHaehnel/status/1458044379331973122?s=20 [twitter.com]
It took some time for the GPU to start, but when it did, it was fast, without any issues on my 8GB RTX 3070. The memory metadata reached above 80GB, although I'm not sure if this means 80GB was truly being processed.
The grains are actual geo, but instances of course.
Can you show a sample of the scene that caused these issues? It would be interesting to see what amount is too much.
Solaris and Karma » USD Stitch several animated usd files to save storage space
Thank you for the detailed answer!

mtucker
Just don't put "$F" in any of your Save File Paths, and then enable the "flush data after each frame" toggle on the USD ROP.

Thanks! This is what I did in the Configure Layer. Don't know why I didn't try it in the USD ROP, too. :S

mtucker
As for the massive file size, I expect what that means is that you are not giving Houdini enough hints about what data coming from SOPs is animated, and what data is not animated.

I don't actually think the data is massive because of USD. I just saw how much smaller the file could be because of the de-duplication in single USD files.
The Vellum Grains sim I'm using is around 27GB of data as a .bgeo.sc sequence. I upressed the point count by 5 and deleted all attributes except Cd, pscale, orient, and v. Then I instanced 3 rocks randomly on those points. That's still a lot of data to write, so I don't think all of that would be much smaller as a bgeo sequence. But I'll try to figure out how to set the non-animating attributes to default.
Right now I'm using the 40GB USD file and it's working fine. OpenGL performance is somewhat sluggish and I haven't found a way to display the instances as points, but Karma CPU is still very responsive. Karma XPU struggles a bit until the GPU starts to render, but even with the 4 minutes it takes each frame to initialize the GPU, it's still 80% faster than CPU only. (Honestly, I would never have thought XPU could deal with such a scene at all, at least not in an Alpha.)
I might test splitting up the files again and animating the sublayer mute checkbox. It's not the prettiest way, but saving 50% of the data makes this an interesting choice. (Or maybe I'll get to a similarly small file size if I set more attributes to be non-animated, although I suppose I won't be able to save as much, since I can't flag non-moving points in my sim as static while others are still moving.)
If there are other people working with sims (or big scenes in general) in LOPs reading this, it would be interesting to hear how you are preparing the USD files.
Solaris and Karma » USD Stitch several animated usd files to save storage space
I'm currently testing Karma (XPU) and working in LOPs. I stumbled upon something and wanted to make sure there aren't better ways.
If I understand it correctly, to actually see motion blur I need time samples, which USD can only access from native USD files (or by adding a Cache LOP). So if I import points from a simulation and instance geo on them using a Point Instancer, I don't see motion blur. But writing this Point Instancer to disk as a USD file gives me motion blur. (I instance in LOPs because it's a lot faster to instance random geo on points compared to For-Each loops.)
Because I have millions of points the usd file is quite large and I need to resort to writing one frame at a time and using USD Stitch to combine them. Those are several steps more each taking quite some time and space.
I figured out a hacky(?) way of skipping this double step by putting a Configure Layer LOP with a Save Path before the USD ROP which is set to flush data each frame. It still only keeps the data of one frame in RAM but writes a single usd file for the Configure Layer LOP. Nice.
But while testing, I noticed that writing a single USD file without flushing the data each frame will save massive amounts of data (as mentioned in the docs). My current file containing 300 frames is 40GB. Two separate files, each containing 150 frames of the instances, combine to roughly 20GB. That is a massive reduction in file size and is probably good for I/O times as well.
I can use a sublayer LOP to import both of these files but of course, the second file's opinion is stronger and I don't see any animation before frame 151. I can manually animate the "mute" checkbox but I am wondering if there are better solutions?
Is this an intended workflow? Are there better ways to bring in massive amounts of data?
As far as I have seen, the USD Stitch ROP only accepts single-frame files, not files containing several frames.
In a perfect world, the USD ROP would automatically detect when RAM is running low and create a new file (and possibly combine them again). This would save the trial and error of finding out how much of the scene fits in your RAM and it would save the maximum amount of storage space.
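The split-into-multiple-files workaround above amounts to chopping the frame range into chunks that fit in RAM, writing one USD layer per chunk, and sublayering them back in. The bookkeeping side of that is simple; a hedged pure-Python sketch, where the chunk size, the naming scheme, and both function names are invented for illustration:

```python
def chunk_frame_ranges(start, end, frames_per_file):
    """Split an inclusive frame range into (first, last) chunks,
    one chunk per output USD file."""
    out = []
    f = start
    while f <= end:
        out.append((f, min(f + frames_per_file - 1, end)))
        f += frames_per_file
    return out

def chunk_filenames(start, end, frames_per_file, base="instances"):
    # hypothetical naming scheme for the per-chunk layers
    return ["%s.%04d_%04d.usd" % (base, a, b)
            for a, b in chunk_frame_ranges(start, end, frames_per_file)]
```

For the 300-frame example in the post, two 150-frame chunks would give two layers covering frames 1-150 and 151-300.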
Houdini Lounge » XPU Pyro Preview not working?
For me, XPU sees the volume most of the time, but only density.
I used an explosion I did before.
I knew it would work in Karma CPU because that's what I used back then to render, but XPU only sees the density. In some cases it stays completely empty.
Technical Discussion » Launcher does not work
Bumping this thread. I tried to use the H19 Launcher and unfortunately, it still doesn't work. Installing H19 through the launcher gives an error message that the Qt libraries aren't installed. Even repairing the installation through the launcher doesn't fix it. So I had to install Houdini the normal way again, and everything worked.
I would love to see the launcher working. It's not difficult to manage Houdini installations without it but the general concept of the launcher is great.
PDG/TOPs » How do I use TOPs .CSV Input to drive Wedges?
I recently used a very small TOPs network to run through dozens of "handcrafted" sims that also had varying frame ranges, which I imported through a CSV.
I essentially just read them in and used an Attribute Copy to copy the values of the CSV onto the existing wedges. Combining, merging, or doing something else will create a work item per CSV input.
I'm not a PDG expert and only use it very rarely, but maybe this helps, or maybe you've already figured this out by now.
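The CSV-onto-wedges copy described above can be pictured as overlaying each CSV row's columns onto the wedge work item at the same index, which is one way an Attribute Copy can match items. A hedged pure-Python sketch outside of PDG, with the column names, sample data, and function name all invented for illustration:

```python
import csv
import io

def copy_csv_onto_wedges(csv_text, wedges):
    """Overlay each CSV row's columns onto the wedge dict with the same
    index, converting integer-looking values, to mimic an index-based
    attribute copy. Illustrative only."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    for wedge, row in zip(wedges, rows):
        wedge.update({k: int(v) if v.lstrip("-").isdigit() else v
                      for k, v in row.items()})
    return wedges

csv_text = "sim,start,end\nsimA,1,120\nsimB,40,200\n"
wedges = [{"wedgeindex": 0}, {"wedgeindex": 1}]
# copy_csv_onto_wedges(csv_text, wedges)[0]
#   -> {"wedgeindex": 0, "sim": "simA", "start": 1, "end": 120}
```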
Edited by No_ha - July 12, 2021 09:41:14
Houdini Lounge » Which license to prepare a tool to be sold?
You don't even need the Indie license. As long as you distribute it solely through Orbolt, you can create and get paid using the free Apprentice license.
Source:
https://twitter.com/ambrosiussen_p/status/1365750105865814019?s=19 [twitter.com]
At some point you'll most likely want to upgrade to the Indie license just to get rid of the watermark when rendering, but there's no requirement for it.