Hello everyone,
I wanted to create a script that sets up metadata for my renders, for example to write all the camera data into the metadata, which I can then use to create a camera in my compositing software.
I know how I could reference the channels on my camera and write those out, but this wouldn't take into account any changes that happen after the camera.
For example, I have a camera moved to 0.5 in X. After that I use a Transform LOP and move the camera another 0.1 in X.
Looking at the scene graph I can see that Houdini does store the data I need somewhere:
Pretty much all the other data I would need is there as well; I just don't know how to access it so I can write it into the metadata of a Render Product LOP.
If I look at the generated USD code I can't seem to find the local-to-world transform either.
Does anybody know how to access this inside of Houdini? Thanks, everyone.
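As an aside for anyone wondering what that missing value actually is: a local-to-world transform is just the ordered product of the local 4x4 matrices up the prim's hierarchy (in USD it can be queried with `UsdGeom.Xformable.ComputeLocalToWorldTransform`, though check your build's docs). A plain-Python sketch of the composition, using the 0.5 + 0.1 example from the post, with no Houdini or USD APIs involved:

```python
# Minimal sketch: a local-to-world transform is the product of each
# level's local 4x4 matrix. Plain Python, row-major convention with
# the translation stored in the last row (as USD/Houdini do).

def translate(tx=0.0, ty=0.0, tz=0.0):
    """Build a row-major 4x4 translation matrix."""
    return [
        [1, 0, 0, 0],
        [0, 1, 0, 0],
        [0, 0, 1, 0],
        [tx, ty, tz, 1],
    ]

def matmul(a, b):
    """4x4 row-major matrix product a*b."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def local_to_world(local_xforms):
    """Compose local transforms, prim-local first, outermost last."""
    world = [[float(i == j) for j in range(4)] for i in range(4)]
    for m in local_xforms:
        world = matmul(world, m)
    return world

# The example from the post: camera at 0.5 in X, then a Transform LOP
# adds another 0.1 in X further down the LOP chain.
camera_local = translate(tx=0.5)
transform_lop = translate(tx=0.1)
world = local_to_world([camera_local, transform_lop])
print(world[3][0])  # world-space X translate: 0.5 + 0.1 = 0.6
```

This is only meant to illustrate what "local to world" means in the scene graph panel; the actual values would come from the composed stage, not from camera channels.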
Technical Discussion » Solaris Access Scene Data
- No_ha
- 122 posts
- Offline
Houdini Lounge » Is there any reason to switch to Karma? Is unbelievably slow
Midphase
However for many stand-alone artists, we would prefer to remain in the OBJ context while still having access to the more up-to-date render engine that Karma offers.
That is similar to how I feel as well. USD and Solaris add a level of complexity to scene building and management that wasn't there before.
You can't really start your scene from scratch and build and tweak everything as you go. USD/Solaris expects you to have pre-built assets, take great care with naming and collecting them, and wrap your head around terminology that changes between SOPs and LOPs.
I don't really want to import an asset from SOPs and tweak it in LOPs with a SOP Modify. But I also don't want to jump between the Obj and Stage contexts all the time.
That being said, I do like working in Solaris and I love the general idea of USD. Dreaming of something like USD, a standard that every major software will implement and that carries literally the whole scene, was unthinkable a few years back. Adding the camera of your 3D scene to your compositing software with ease, because it understands USD, will be amazing. Sure, FBX and Alembic can do that right now, but you'll be working in USD anyway and won't need to export to something else. And it will enable more render engine variety across all software that supports it.
Then there is the node-based layer approach of Solaris, which is just incredible compared to the loose assembly of nodes in the Obj context. It's much clearer and much more like working in SOPs.
Need a different light setup for the second shot of the sequence? Just branch it like you would when creating different versions of the same model in SOPs. This is also SO MUCH easier and clearer than using Takes. This way of working feels much more “Houdini” to me than the Obj context.
Then there is the benefit of easier reuse and import of scenes. Load the old USD file and prune everything you don't need. Then, when you realize you need something else, you won't have to reopen the scene file and copy-paste again: the whole scene is referenced in the file you imported at the beginning.
These are some of the reasons why I am going to force myself to use Solaris more (I can't really use it for actual jobs right now, as Mantra is my main render engine and Karma simply isn't ready yet, but I dip my toes into Solaris every chance I get).
Maybe I will be able to adjust my workflow to better fit USD, but I do think it will be worth it for the benefits we can see right now and in the future.
We also shouldn't forget that one or two years ago almost nobody used USD besides its creators, Pixar. So it's to be expected that it's geared towards those kinds of companies and workflows. It hasn't been that long since the wide variety of audiences that Houdini serves came into contact with it. I do think it's critical to make ourselves heard, to ensure that Pixar and SideFX are thinking about solo artists as well while adding to and changing USD and Solaris, but I'm personally very optimistic that the future of Solaris will be good for my work. Maybe it will never be as easy as mashing stuff together in the Obj context, but maybe the benefits will outweigh that.
Not sure if I've given anyone a valuable new viewpoint on this, or maybe what I'm saying simply won't fit into your workflow, but in my (very subjective) view I see this as a chance.
Edited by No_ha - Oct. 28, 2020 20:21:23
Houdini Lounge » Is there any reason to switch to Karma? Is unbelievably slow
Could SideFX not have invested all this time into mantra instead? What was wrong with mantra besides it being a bit slower? Definitely karma so far seems to have made the speed issue much worse.
I believe the main answer is USD and using the opportunity to create a new renderer (almost) from scratch.
Mantra simply can't understand USD and the devs have said there are certain things in Mantra that hold it back.
But I can definitely understand your frustration. I love working in Solaris (when it works) because of the great light handles, physic-based placement, and Karma is really really responsive which is nice. But I also haven't found a project yet where I could use it.
I'm sure a big reason for that is simply my own lack of knowledge about how to properly utilize USD. For example, one of my constant issues is that a scene with packed primitives takes 12GB in Mantra while in Karma it reaches over 50GB of RAM. I guess I could use USD instances, but I haven't figured out how to get the same flexibility in scattering as in SOPs.
The other thing is that I never really get rid of fireflies in Karma. But this could again be because I know which settings to tweak in Mantra and have no such knowledge in Karma.
There is stuff that I like about Karma that Mantra can't do, like the Random Walk SSS, which looks incredible.
I honestly didn't expect there to be a learning curve like that as I remember using Redshift for a day and pretty much getting everything to work as I wanted.
I guess we do need to give Karma some more time to grow, although I am unsure how “beta” it actually is and which parts are “working as intended”. SideFX said it will stay a “beta” until it has feature parity with Mantra (at least for all features that are possible).
But USD is here to stay and so is Karma. It's unfortunate that we don't have a native production-ready renderer for USD in Houdini right now, especially because we all know how great Mantra can be. But hey, we will get Karma GPU at some point, and I'm sure Karma will eventually become Mantra's younger, hipper sister.
Solaris and Karma » Constant Traceback Errors
Thank you for your answer.
Weird that this isn't more common. It pretty much always starts if I delete or exchange a node upstream of the currently viewed one.
After that it will give me these errors for every click on a light or camera.
I guess I need to do some more investigating tomorrow.
Edited by No_ha - Oct. 24, 2020 15:28:43
Solaris and Karma » Constant Traceback Errors
Hello everyone,
This was an issue for me in Houdini 18.0 as well, but I thought it was due to Solaris being so new. Unfortunately, it still happens in 18.5.
Pretty much every time I select a light or camera I get this popup, usually several at a time:
As this has to do with Python 2.7, I am wondering if I can force Houdini to use Python 3.
Or does anybody else have this issue?
Solaris and Karma » Indie Karma switch to non commercial
Same for me. The second I rendered Karma to MPlay, the console said it had switched to a non-commercial session.
Solaris and Karma » Adding Custom Metadata to Rendered EXR
Thanks for confirming and saving me from spending more time trying to get it working!
Edited by No_ha - Oct. 10, 2020 12:07:15
Solaris and Karma » Adding Custom Metadata to Rendered EXR
Sorry to revive this thread, but Tim, did you get it to work in Houdini 18.0?
I've been trying the same but can't even see metadata added in the “Metadata” tab of the Karma LOP (for example, artist name).
Houdini Indie and Apprentice » Emitting fluid
Volume loss usually happens because the grid size can't resolve everything. Try lowering it to 1, then find the biggest value that still gives the desired result, as this setting can have a huge performance impact.
You should also play around with particle separation and particle size, and double-check your collision geo, but usually this is connected to the grid size.
Houdini Indie and Apprentice » Animation Layers
Is this something you can reproduce easily in another hip file? If so, you should send a bug report to SideFX; they are usually quite fast at fixing these kinds of issues.
Houdini Indie and Apprentice » Volume difference between Render view and MPlay / to Disk
Thank you all very much.
It eases my mind that you don't use shadow maps anymore, because I had already given up and am using raytraced shadows now.
I probably read too much outdated material.
Houdini Indie and Apprentice » Volume difference between Render view and MPlay / to Disk
Thank you for your answer.
I was under the impression that when you disable the preview button, Mantra would use the same settings as when rendering to MPlay, especially because it does generate shadow maps before starting to render, which seems unnecessary if it doesn't use them.
But I still feel like I need to change something, since the handbook advises using shadow maps instead of raytraced shadows and I can't seem to get them to work properly.
Houdini Indie and Apprentice » Redeon Pro Vega not supported for particle simulation?
What kind of particle simulation are you talking about?
More information would be helpful, but in general Houdini uses OpenCL, so your graphics card should work.
You can check in the preferences whether Houdini sees your GPU as an OpenCL-capable device. If it shows up and GPU is selected, then it should work.
If your scene has to cook a lot before it gets to the simulation, that might be the reason: the GPU doesn't have anything to do until everything else is done.
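If the device does show up but the wrong one gets picked, Houdini also exposes environment variables for pinning the OpenCL device; a minimal sketch (variable names as documented for recent Houdini versions — verify them against your build's documentation before relying on this):

```shell
# Force Houdini's OpenCL to the first GPU device before launching.
# These variables are read at startup, so set them in the shell
# (or in houdini.env) before running Houdini.
export HOUDINI_OCL_DEVICETYPE=GPU    # CPU or GPU
export HOUDINI_OCL_DEVICENUMBER=0    # index if you have several GPUs
echo "OpenCL device type: $HOUDINI_OCL_DEVICETYPE"
```

Setting these in `houdini.env` has the same effect as exporting them in the launching shell.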
Houdini Indie and Apprentice » Volume difference between Render view and MPlay / to Disk
A few minutes later and I actually found the root of the problem: the lights are all set to depth map shadows, not ray traced shadows.
When using ray traced shadows this difference doesn't happen.
Which makes me wonder if this is a bug and Mantra messes something up with shadow map generation for the final render.
Maybe somebody could test and see if this only happens on my machine.
Houdini Indie and Apprentice » Volume difference between Render view and MPlay / to Disk
Hello everyone,
I've come across a problem I've never encountered before. Unfortunately, all the forum posts I could find here and on odforce didn't fix this issue.
For some reason my volumes look and shade completely different in the Render View Pane compared to the actual render.
I am using the micropolygonal render engine for both. The preview render is off, so I'm not comparing raytracing to micropoly.
It seems like it interprets the lights differently. In my actual scene the render time in the render view is around 6 minutes and everything looks as it should, but when rendering to disk or MPlay the render time decreases and I get a horrible result.
I've tried changing the gamma settings but never got the same result. I tried a new Mantra ROP: same problem. Then I wondered if my file was corrupted and imported my sim into a new scene. Same bad result.
Next I created a simple volume from Tommy, and the same problem occurs.
I've attached the simple file and screenshots that show my problem. You can also see the differences in render speed (although I used 1/3 resolution for the screenshots).
Does this happen to anybody else? Or does somebody know how to fix this?
Houdini Indie and Apprentice » Pose Library Captures Wrong Data
I meant to post this before but forgot until now.
It was indeed a bug. I submitted it to SideFX and already got confirmation that it will be fixed in the next release. It seems to have been a Windows-specific bug.
I'm still surprised that more people haven't stumbled onto this problem.
Houdini Indie and Apprentice » Pose Library Captures Wrong Data
I haven't been able to fix the problem, but I know what one of the problems is:
I have a parameter for “foot curl” which goes from -10 to 10. My toe controls have their rotation connected to the slider by a simple fit expression.
Now the “foot curl” slider is set to 5, which means that the toe controller has an X rotation of 30 (for example).
When I use the Pose Library, it doesn't record the “foot curl” value of 5 but uses the 30 of the toe controller.
I'm not quite sure if I'm missing something or if this is a bug. Maybe someone else knows more.
Edit: I have tested it with other expressions. As soon as there is more than a simple channel reference, it will not capture correctly.
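For context, `fit` remaps a value from one range to another, so the toe rotation here is a derived value; a tool that records evaluated channels would therefore see the 30, not the 5. A plain-Python sketch of that mapping (the -60..60 output range is a made-up example, and the clamping Houdini's `fit()` performs is omitted for brevity):

```python
def fit(value, old_min, old_max, new_min, new_max):
    """Remap value from [old_min, old_max] to [new_min, new_max],
    like Houdini's fit() expression (without clamping, for brevity)."""
    t = (value - old_min) / (old_max - old_min)
    return new_min + t * (new_max - new_min)

# Foot-curl slider at 5 on a -10..10 range, driving a hypothetical
# -60..60 degree toe rotation: the evaluated channel reads 30.
slider = 5.0
toe_rx = fit(slider, -10.0, 10.0, -60.0, 60.0)
print(toe_rx)  # 30.0
```

Capturing `toe_rx` instead of `slider` matches the behaviour described in the post: the stored pose is the evaluated result, not the driving parameter.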
Edited by No_ha - Oct. 24, 2019 08:34:58
Houdini Indie and Apprentice » Pose Library Captures Wrong Data
Hello everyone,
I have created a rig (and character), and I'm trying to use the Pose Library to save my walk cycle and other animations.
Sadly, the captured data is all wrong. Parameters like the FK/IK sliders in my HDA get captured incorrectly, and several sliders that shouldn't have any values all get values assigned.
I feel like it tries to capture more than just the parameters of my HDA.
I have already tried the different capture settings of the Pose Library. It only works when I bake the keyframes, but that would be problematic to work with.
I'm on Windows 10 with Version 17.5.391
I sometimes get these Warnings in the Console:
Qt Warn: QSortFilterProxyModel: invalid inserted rows reported by source model
Qt Warn: QWindowsContext::windowsProc: No Qt Window found for event 0x1c (WM_ACTIVATEAPP), hwnd=0x0x70070.
Qt Warn: QWindowsContext::windowsProc: No Qt Window found for event 0x1c (WM_ACTIVATEAPP), hwnd=0x0x3100be.
I attached a file with the animated bones. Thank you all very much!
(Since this is my first rig in Houdini, I also appreciate every other improvement you might want to add)
Edited by No_ha - Oct. 22, 2019 06:28:12
Houdini Indie and Apprentice » Extra Image Planes (especially Shadow Pass Questions)
Hello everyone,
I'm a new Houdini Indie user. I'm coming from C4D + After Effects and want to replace those two applications with Houdini and Fusion (and, at a later time, add Substance and a motion tracker to this workflow).
My problem right now is how to set up several render passes / extra image planes correctly.
I have a scene with an object that disintegrates into particles, and those particles form a new object. After searching for several days I have a setup that works for this particular project, but I know it won't be sufficient for future work, and I also want to learn to do it correctly.
Right now I have three Mantra nodes: one for my objects, one for my particles, and one for the ground shadow matte. All render out the channels needed to create a beauty pass and direct shadows. The ground has the Shadow Matte material applied. I tried doing it with no material assigned, simply using the direct shadow pass, but it never worked properly.
Mantra seems to render the shadows into the alpha channel, which I can use in Fusion, but I can't change the colour of the shadows. If I use the direct shadow pass, Fusion won't use it correctly.
What I would like is to have the objects and particles separate (it doesn't need to be separate files; it could also be done with mattes) and a separate pass with all the shadows. That way I can change the colour of the particles in Fusion and add different backgrounds. Basically what I have with the Shadow Matte material, except that I can manipulate the shadows beyond transparency.
Every hint in the right direction would be appreciated. I tried reading the manual and searched through 10 years of forums but couldn't find anything that worked for me yet. (Maybe it's because I don't understand all of Houdini's terminology yet.)
If I didn't make clear what I'm talking about, or if this is simply a Fusion problem, or you don't have an answer to this particular issue, I would still love to hear how you set up your renders and what you use to composite.
I attached some screenshots so you get a better understanding of my scene.
Thank you all!
Edit 1: I believe I have found a way, by using the alpha or the shadow map as an alpha for a background in Fusion, which I can then change the colour of. Nevertheless, I'm still not sure if this is a “good” workflow or if I'm complicating things or increasing the render time unnecessarily.
Edited by No_ha - March 2, 2019 04:25:07