Hey all, wondering if this is possible in Karma:
I have a shader composed of two dielectric BSDFs, and the results of both appear in the combinedglossyreflections AOV.
I would LOVE to be able to pipe each BSDF's contribution into its own custom AOV.
Reading the docs and looking at the premade render var for SSS, I see there is some support for "BSDF labels". Admittedly this is a Mantra page, but I was just trying to get my head around LPE syntax: https://www.sidefx.com/docs/houdini/render/lpe.html It says "This can be used to separate contributions from BSDFs that fall under the same broad event category, but have different labels," which sounds like exactly what I want, but I haven't been able to stumble into any syntax that does it.
Do we have any control over what a BSDF's label is, or any ability to inspect which BSDFs have which labels? Am I down the wrong track entirely?
Appreciate any help!
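For anyone poking at the same thing, here is a minimal sketch of the idea at the USD level, with plenty of assumptions: it presumes one of the two dielectric BSDFs has been given the label "coat" (in VEX, the bsdf construction functions accept a "label" keyword argument), that a labelled glossy-reflection event is written <RG'coat'> per the LPE page linked above, and the prim path and label name are hypothetical:

from pxr import Usd, UsdRender

# Author a render var whose source is a light path expression matching
# only glossy reflection events carrying the (hypothetical) label 'coat'.
stage = Usd.Stage.CreateInMemory()
var = UsdRender.Var.Define(stage, "/Render/Vars/coat_gloss")
var.CreateSourceTypeAttr("lpe")
var.CreateSourceNameAttr("C<RG'coat'>.*")
var.CreateDataTypeAttr("color3f")

A Render Var LOP should author the same three attributes (source type "lpe", source name set to the expression), so no Python is strictly needed; the sketch just makes the attribute names explicit.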
Solaris and Karma » Separating BSDFs, of the same type, to different AOV
- Slogbogicus
- 2 posts
- Offline
Solaris and Karma » polygon sides on guidedoceanlayer_fluidinterior_extended
- Crossply
- 4 posts
- Offline
Is it possible to get rid of the polygonal walls the ocean procedural adds? At shallow camera angles they make the whole thing flicker. I guess I could make the whole thing bigger; it just seems more logical not to render the walls.
Thanks
Simon
PDG/TOPs » ImageMagick Working, But Error!
- DJ_Dhrub
- 4 posts
- Offline
I am generating GIFs from MP4 files.
My ImageMagick node generates the GIFs, but still throws an error:
convert: unable to open image 'convert': No such file or directory @ error/blob.c/OpenBlob/3569.
convert: no decode delegate for this image format `' @ error/constitute.c/ReadImage/746.
ERROR: Processed failed with exit code '1'
Any idea why and maybe how to fix it?
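Reading the error text closely suggests one lead: ImageMagick is being asked to open an input image literally named 'convert', which usually means the word "convert" ended up on the command line twice, e.g. once prepended by the TOP node and once typed into the command parameter. A PDG-free sketch of the difference, with placeholder file names:

import subprocess

# Broken: 'convert' appears again as if it were an input image, which
# reproduces "unable to open image 'convert'".
# subprocess.run(["convert", "convert", "in.mp4", "out.gif"], check=True)

# Working: the executable name appears only once.
subprocess.run(["convert", "in.mp4", "out.gif"], check=True)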
Technical Discussion » heightfield display resolution
- Serg
- 511 posts
- Offline
How do I display heightfields at their actual resolution instead of what appears to be a 20x20 grid with a normal map on top?
PDG/TOPs » Normalized wedge attribute
- Soothsayer
- 856 posts
- Offline
Houdini for Realtime » Vat, keyframe animations and Bullet solver, interpolation
- bugajskimaciek
- 9 posts
- Offline
Hey, so I have to blend breaking glass with glass hovering in space, switching between those two effects. The breaking glass I made with the Bullet solver, and the hovering glass pieces I made with MOPs. They worked fine separately, but they didn't blend well in Unity: either I animated the first frame and it was different from the last frame of the simulation, or I didn't animate the first frame and some parts had really strong glossy reflections (it's supposed to be glass, so it has a high-smoothness material).
So I merged them into one VAT texture with an Attribute Copy SOP (copying position) and then the MOPs Pivot SOP in Center Pivot mode (it's temporally stable), and everything works fine in Houdini. In Unity, the first time I play it in VFX (I use animation curves to drive the frame display) with interpolation on, the interpolation shoots to the top-right corner of the simulation. The second simulation is fine.
Does anyone know a solution?
Work in Progress » Monster Art - Monster University Character
- Gerardo Castellanos
- 39 posts
- Offline
A showcase of the use of my GroomToolKit and the GRM groomNoise, a hair noiser tool for Houdini.
Original character from "Monster University" by Pixar/Disney©
Modeling in ZBrush; groom, shading, lighting, and rendering in Houdini 20 with Karma CPU.
I tried to submit it to the Gallery, but it says "page not found" when I click on "submit".
Technical Discussion » Looking for tips for reducing memory usage with my workflow
- mrpdean
- 66 posts
- Offline
Hi,
I have a workflow which, in basic terms, generates corrective blendshapes for FBX characters by comparing the FBX against an Alembic version of the character.
Recently I've been going through an optimisation pass to try to reduce memory usage when generating hundreds of these corrective blendshapes.
So far I've managed to reduce memory a bit just by ensuring that only the necessary data/attributes are passed into the workflow, but I am hoping to reduce it even further.
After running the Performance Monitor, one thing really jumped out at me: blasting or deleting points takes a lot of memory, and I can't understand why.
In one scenario I have a Blast node which deletes primitives if they have a value of one for a certain attribute. However, even if no primitives have a value of one and nothing gets blasted, it still uses a lot of memory.
For-each blocks also use a lot of memory, but I can kind of understand that; the deleting/blasting seems odd to me. I would have thought that reducing the number of points would reduce memory usage, but it doesn't seem to work that way.
Any other optimisation tips would be most appreciated.
Thanks
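One way to narrow down where the memory goes, sketched against the hou.perfMon API (the node path is hypothetical, and I'm assuming the default record options are enough; memory recording can also be toggled in the Performance Monitor pane):

# Profile a single forced cook of the suspect Blast node.
profile = hou.perfMon.startProfile("blast memory check")
hou.node("/obj/geo1/blast1").cook(force=True)
profile.stop()  # the stopped profile can then be inspected in the Performance Monitor pane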
Technical Discussion » hqueue wishlist
- jeffMc
- 125 posts
- Offline
I have been using HQueue heavily for the past two months. As the sole Houdini user in our facility, I do my best to maximize our resources. I have two dedicated render clients and my laptop, which I develop on. For what it is worth, these are the issues I have struggled with. I would be grateful for any insights.
#1. Not an HQ issue, but I can't use shared folders on our corporate network. I have to resort to manually copying folders to each machine. Is it possible that I could share Teams/SharePoint or other folders to the client machines?
#2. With some scenes, while generating the USD, the machine accumulates memory, tops out, and crashes.
- Can this job be restarted at the USD generation level? If I reschedule, it seems to restart from the beginning, clobbering all of the other frames and likely crashing at the same point.
#3. It would be REALLY helpful to have more relevant info compressed into the main UI. Most if not all of the relevant job and client info should be available from the main screen, i.e. being able to see and select which machines are working on a job, or which machine is holding the FX or Engine licenses. Switching between the client and job UIs is cumbersome and slow, as well as not conveying relevant job info.
#4. Licensing. I have an FX license and only recently went to the web license mode, which is a HUGE HUGE help. With my current problem from issue #1, I need each machine to generate its own USD file for Karma rendering, but often a machine seems to keep its FX license even though it is only doing the Karma job, and I have to manually release it. Ugh. Again, the license details would be great to have in the main UI of HQueue.
As with most of my Houdini questions I am often humbled by how simple the workflow or solution can be.
Apologies for the long thread. Just looking for workflow suggestions and insights.
JeffMc
Technical Discussion » Attribute from map - nearest neighbor interpolation?
- papsphilip
- 385 posts
- Offline
I'm trying to read a sequence from disk with Attribute from Map, but I get artifacts blended in with the actual source colors, as if some sort of interpolation is happening, just not the one I need.
The original sequence is 1920x1080, and I am reading it onto a 64x64 point grid with the Attribute from Map Filter set to Point.
I need the colors to stay the same as the source.
Is there a way to fix this? As you can see in the attached images, it creates a big problem when colors are simple and distinct.
I also tried the rawcolormap function in VEX and get the same issue.
Solaris and Karma » Karma distant meshes flicker
- Htogrom
- 26 posts
- Offline
Hi
I'm rendering a large scene (~5-10 km from camera) and facing issues with distant objects flickering. I've tried changing the ray bias and switching the convergence mode from variance to distributed. 128 samples should be enough, so I'm running out of ideas on how to fix it.
I also adjusted the camera clipping planes to run from 4 to 12 km. Nothing helps.
Unfortunately, the project is finishing soon, so if we switch to Houdini 20 we have to be sure it works. Speaking of that, all of my shaders are Principled Shaders made in Houdini 19.0. When I render the same scene in Houdini 20, all the trees become white. As I understand it, there were some changes to the way shaders are packaged in Houdini 20. Has anyone encountered that, and is there a way to fix it on the fly, without going over all the USD assets and re-exporting from Houdini 20?
Attached is a video showing the kind of flickering we have:
https://drive.google.com/file/d/1ipkc_pQZm9DGokJG87Rcmyau5K5UAN7b/view?usp=sharing
Houdini Lounge » Changing Color Scheme Settings .hcs with python
- edlgm
- 11 posts
- Offline
Just like the title: is there a way of setting this with Python? I've been looking online for a while but can't seem to find any info on this. I was able to find the user preference variables, but running this code doesn't seem to affect the UI.
color_scheme = hou.getPreference("colors.scheme")
print(color_scheme)
>>> Houdini Dark
hou.setPreference("colors.scheme", "Houdini Layout")
print(color_scheme)
>>> Houdini Layout
Thanks!
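One small thing in the snippet above: the last print reuses the string fetched before the set, so it cannot confirm the change. Re-reading the preference is a more trustworthy check; note this sketch only verifies the stored value, and whether the running UI actually repaints is a separate question:

hou.setPreference("colors.scheme", "Houdini Layout")
# Re-fetch instead of printing the variable captured before the set.
print(hou.getPreference("colors.scheme"))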
Houdini Lounge » Collect node
- habernir
- 83 posts
- Offline
Hello,
If I have a Principled Shader and a MaterialX shader connected to a Collect node, how do I force Karma CPU to use the Principled Shader when both are connected?
Because it's always using the MaterialX one.
Technical Discussion » Crowd crawl over eachother
- angelicastroud
- 13 posts
- Offline
Does anyone know how I would achieve crowds crawling over each other (not when they are ragdolls, but while still moving)?
For reference: https://youtu.be/1dGfhe2fnDc?si=fdDQohmlu2GglUq1&t=43
Houdini Indie and Apprentice » Reconnecting Original Geometry to VDB and Voronoi Fractured
- Burk
- 1 posts
- Offline
I ran into this problem a few months ago and threw up my hands but I'm back at it.
I am trying to simulate an aircraft sliding in an emergency landing and getting deformed. After getting the geometry breaking and deforming, I have been unable to get my original UV-unwrapped geometry to function properly as render geometry over the deformed proxy geometry.
I followed this tutorial (Metal Bending by The VFX School): https://www.youtube.com/watch?v=xNRquhi2tNM&t=762
But he copies the name attributes and everything works for him.
So my question is: what attributes are needed to point VDB-converted and fractured geometry back to the original geometry?
PDG/TOPs » FFmpeg extract images TOP - overwriting
- papsphilip
- 385 posts
- Offline
It seems FFmpeg Extract Images does not skip work items that are already saved to disk. Some videos have already been converted to sequences, but this node extracts them again each time I run it, overwriting what is already there.
Usually, for example, a Fetch ROP will not redo exports that are already on disk.
Is there a way to fix this?
The same happens if I use a custom command to extract the audio as well: each time I dirty these nodes and re-run, it does not skip work items that were already cooked previously.
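Until the node grows that behaviour, one workaround on the custom-command route is to wrap the command in a skip-if-done guard, similar to what a Fetch ROP effectively does. A rough sketch with placeholder paths, using a deliberately naive "any files present" check:

import os
import subprocess

out_dir = "frames"   # placeholder output directory
src = "in.mp4"       # placeholder source video
os.makedirs(out_dir, exist_ok=True)

# Only run the extraction if nothing has been written yet.
if not os.listdir(out_dir):
    subprocess.run(
        ["ffmpeg", "-i", src, os.path.join(out_dir, "frame_%04d.png")],
        check=True,
    )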
Technical Discussion » Best way to make a custom version of existing hda?
- animquer
- 10 posts
- Offline
Hello,
I am new to making HDAs. I customized an existing HDA (Labs Edge Damage in this case) and want to save it as a new HDA, but I get an error: "Can't write to the file. You may not have permission to edit this file."
I did change the name from the original HDA.
What is the best way to save my edit as a separate HDA?
Thank you. I need to learn more about making HDAs.
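In case it helps, here is a hedged sketch of one way to do this from the Python shell, using hou.HDADefinition.copyToHDAFile to write the definition into a new file you own, under a new operator name; the node path, file path, and names are all placeholders:

# Grab the definition of the customized node.
node = hou.node("/obj/geo1/edge_damage1")
defn = node.type().definition()

# Copy it into a writable file under a new operator name, then install it.
target = hou.expandString("$HOUDINI_USER_PREF_DIR/otls/my_edge_damage.hda")
defn.copyToHDAFile(target, "my_edge_damage", "My Edge Damage")
hou.hda.installFile(target)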
Technical Discussion » "Troubleshooting hbatch Not Found with Tractor"
- Stephen Davidson
- 14 posts
- Offline
Hi all,
I'm facing an issue where the hbatch executable cannot be found while trying to execute jobs via a Tractor blade. The environment is set with the PATH variable, but attempts to run hbatch with specific commands for rendering operations fail, reporting that the executable is not located. Is there a way to make sure hbatch is correctly recognised, so Tractor won't bug out?
My render blade is on Windows 10. My project path is on a NAS device, with $JOB set to Y:\templates\template_tracor. Also, is the fact that Windows uses backslashes instead of forward slashes in paths a problem for hbatch?
Thanks
Steve
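A quick sanity check that can run on the blade itself, independent of Tractor, is to ask whether hbatch is actually reachable through the PATH the blade sees (plain Python, nothing Houdini-specific):

import os
import shutil

# None here means the blade's PATH does not reach the Houdini bin directory
# (on Windows, typically under C:\Program Files\Side Effects Software\...\bin).
print(shutil.which("hbatch"))
print(os.environ.get("PATH"))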
Technical Discussion » OpenCL Error on Houdini Preview Procedurals
- WoutBelt16
- 1 posts
- Offline
Hello everyone,
I'm using the new Houdini Feather System, but I'm hitting a hurdle with the houdinipreviewprocedurals and feathersurface nodes. Both come up with:
Error: OpenCL Exception: clBuildProgram (-9999)
I've attempted to troubleshoot this by switching my OpenCL Device setting between GPU and CPU. Additionally, I've experimented with different graphics drivers, but neither resolved the issue.
If anyone has encountered this problem before or knows of a solution, I would greatly appreciate any advice.
Thank you in advance.
Technical Discussion » Viewport problem on Centos 7.9 Virtual Machine
- cmonteagudo
- 1 posts
- Offline
Hello all,
I have a virtual machine with CentOS 7.9 installed on VirtualBox 7.0 (running on Windows 11).
I managed to install different versions of Houdini from 18.0 to 20.0 with no problem. I can launch all of them, the license is validated properly, and Houdini opens.
The problem is with the viewport: it is black (the background settings do not seem to have any effect), and all objects created also appear black, making interaction very difficult (see images centos_issue_1.png and centos_issue_2.png).
It feels like an issue with the virtualization or with OpenGL.
I checked the OpenGL version:
OpenGL version string: 3.3 (Compatibility Profile) Mesa 18.3.4
I thought this could be the reason, as the OpenGL version should be 4.0. However, I have another VM with Rocky 9.2 using the same OpenGL version, and I don't have viewport issues there (tested on 20.0.653); everything seems to work fine.
I did other checks related to the video card:
$ lspci | grep VGA
00:02.0 VGA compatible controller: VMware SVGA II Adapter
$ find /dev -group video
/dev/fb0
/dev/dri/card0
/dev/dri/renderD128
$ glxinfo | grep -i vendor
server glx vendor string: SGI
client glx vendor string: Mesa Project and SGI
Vendor: VMware, Inc. (0xffffffff)
OpenGL vendor string: VMware, Inc.
$ lspci -k | grep -EA3 'VGA|3D|Display'
00:02.0 VGA compatible controller: VMware SVGA II Adapter
Subsystem: VMware SVGA II Adapter
Kernel driver in use: vmwgfx
Is there an incompatibility with anything in my VM configuration? Is it an OpenGL problem in CentOS 7 specifically?
Thanks so much in advance!
Regards
Carlos