It looks like when a mesh with old capture data attributes meets a new skeleton with appended joints in a Capture Weight Paint node, the node can get confused and assign weights to incorrect joints on various points.
Suppose I have a rigged character with a simple body skeleton.
When I add a detailed facial skeleton to its rest skeleton with a Skeleton node, and then try to use a Capture Weight Paint node to paint the facial skin with the modified skeleton,
one of the facial joints automatically gains weights on strange parts like the arms, elbows, hips, and belly.
Do I have to repaint the whole thing?
And even if I reduce the facial joints' weights on those strange skin regions, when I reselect the offending facial joints, the weights on those off-facial areas suddenly reappear.
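(A pure-Python illustration of the failure mode I suspect, not Houdini API: capture weights reference joints by index, so if the joint list changes, old indices can silently point at different joints. All names here are made up.)

old_joints = ["hips", "spine", "arm_L"]
new_joints = ["hips", "spine", "jaw", "arm_L"]   # facial joint inserted mid-list

point_capture = [(2, 1.0)]                       # index 2 meant "arm_L" under the old list
for idx, weight in point_capture:
    # The stored index now resolves to a different joint:
    print(old_joints[idx], "->", new_joints[idx], weight)   # arm_L -> jaw 1.0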
Found 75 posts.
Search results
Technical Discussion » Kinefx: Adding skeletons to rigged character problems
- goose7
- 75 posts
- Offline
Houdini Indie and Apprentice » How does "Shear" work in transform/edit/polyextrude?
- goose7
- 75 posts
- Offline
You might need to take a deep dive into linear algebra, but this is a simple case.
This is shear: another component's value contributes to this component's position along this component's axis.
Like sitting on an empty box: its top moves horizontally first, and then it collapses.
I have this problem when I use the Edit tool and use scaling to tweak the positions of two joints.
Then, although the skeleton looks normal, and even in a Rig Visualize node it is normal, when I plug the skeleton into an IK Chains node
the whole system explodes and joints start rotating to very weird angles.
What I did was use the Skeleton tool to tweak the joint positions instead.
Perhaps you could share a hip file containing a minimal setup?
|1 1|   |x|   |x·1 + y·1|   |x + y|
|0 1| × |y| = |x·0 + y·1| = |  y  |
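For a quick numerical check, here is a minimal numpy sketch of that same matrix (my own addition, not from the thread):

import numpy as np

# Shear along X by Y: the off-diagonal 1 mixes y into x.
shear = np.array([[1.0, 1.0],
                  [0.0, 1.0]])
p = np.array([2.0, 3.0])   # a point at (x=2, y=3)
print(shear @ p)           # -> [5. 3.], i.e. x' = x + y, y' = y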
Edited by goose7 - Aug. 22, 2024 08:24:19
Technical Discussion » AMD Ryzen 9 7950X or Core i9 13900K?
- goose7
- 75 posts
- Offline
Houdini Indie and Apprentice » Hide hair not working
- goose7
- 75 posts
- Offline
The Visibility node does not work for me either, even if I put the Visibility node before the Guide Groom node.
You could try a Split node before the Guide Groom node.
After grooming each group of guides, merge the guides back together (see the sketch below).
And just in case: it seems wrong to wire a guides output into the SKIN output and SKINVDB output.
It will confuse downstream operators.
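Roughly like this, as a Python sketch (the node path and group name are assumptions, not from the thread):

import hou

geo = hou.node("/obj/groom_test")               # hypothetical geometry network
split = geo.createNode("split", "split_guides")
split.parm("group").set("fringe_guides")        # hypothetical guide point group

merge = geo.createNode("merge", "merge_guides")
merge.setInput(0, split, 0)   # guides in the group: groom this branch first
merge.setInput(1, split, 1)   # remaining guides: groom separately, then merge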
Edited by goose7 - Aug. 15, 2024 02:31:35
Houdini Indie and Apprentice » Hide hair not working
- goose7
- 75 posts
- Offline
Technical Discussion » Configure VSCode for Python (IntelliSense, code completion,
- goose7
- 75 posts
- Offline
Thank you for the work; we are absolutely looking forward to having working IntelliSense and even autocomplete for Houdini's scripting system.
Will it support VEX as well?
Technical Discussion » Cannot understand the elephant master class sample.
- goose7
- 75 posts
- Offline
Hi folks, I'm watching this https://www.youtube.com/watch?v=shGXQxx8du8&t=1158s master class and investigating how to make this kind of rig.
I've also downloaded the file here: https://www.sidefx.com/community-main-menu/character-workflow-rig-animate-secondary-motion/
but I've found that this complexity is out of my reach.
I noticed that in the rig graph, before the IK_ANIMATION node and IK_POLES node, there isn't an IK Chains node, so the network does not explicitly assign goal/root/mid joints and pole vectors. It looks like the limb IK with pole constraint is implemented manually?
1.
This is where I start having difficulty.
From what I've learned, this part adds the control shapes and constructs an inverse foot skeleton (for reasons I don't know). And I don't understand why the forearm and calf joints have been offset. Have they been moved to mark the pole locations?
2. Next is this part.
I understand that it blends the input animation into the control skeleton (the skeleton with control geometry). But the joint that appears at the apparent pole vector location is called L_forearm, and the joint at the apparent real forearm location is called L_forearm (transferred to the reparented control rig).
I tested the upstream nodes and found that the forearm and calf joints get "duplicated" in this Rig Attribute Wrangle node.
It seems that this wrangle only sets the affected points' transform matrices to world orientation, yet the viewport shows two joints for each of them. How does the duplication happen? And if we check with a Skeleton node, we find that the actual skeleton does not recognize the *_forearm_reconstructed_hireachy joint. What does this mean?
3. Then inside here.
I can see that the c_main_offset controller is pulled back while the foot controllers animate normally in a walking pattern.
It feels like a channeled motion where the main_offset position decides the phase of each foot's/hand's location; is that the case here?
However, the forearm and calf joints are also pulled back, and I cannot understand this either. Are they functioning as pole vectors, and do pole vectors need to move along with the main_offset controller? I thought they should move with the pelvis or COG.
4. Finally, this formidable blob of spaghetti noodles.
I have no idea what it is computing.
From the behavior of its parent's parent's layers, it seems to act like an IK and pole constraint system, since I did not see an ik_chain node anywhere in the outer network.
But I have no idea what it does inside.
This fancy tutorial gives a great result, but it is very hard to understand.
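For reference, here is a minimal numpy sketch of what a hand-built two-bone IK with a pole vector typically computes (my own illustration, not the actual code from the file):

import numpy as np

def two_bone_ik(root, goal, pole, l1, l2):
    # Return the mid-joint (elbow/knee) position that reaches `goal`
    # from `root` with bone lengths l1 and l2, bending toward `pole`.
    d = goal - root
    dist = min(np.linalg.norm(d), l1 + l2 - 1e-6)   # clamp to reachable range
    axis = d / np.linalg.norm(d)

    # Law of cosines: how far along the root-goal axis the mid joint sits...
    a = (l1 * l1 - l2 * l2 + dist * dist) / (2.0 * dist)
    h = np.sqrt(max(l1 * l1 - a * a, 0.0))          # ...and how far off it

    # Bend direction: the part of (pole - root) perpendicular to the axis.
    bend = (pole - root) - np.dot(pole - root, axis) * axis
    bend /= np.linalg.norm(bend)
    return root + a * axis + h * bend

# Arm with bone lengths 1 and 1, elbow pulled toward -Z by the pole:
mid = two_bone_ik(np.array([0.0, 0.0, 0.0]), np.array([1.5, 0.0, 0.0]),
                  np.array([0.75, 0.0, -1.0]), 1.0, 1.0)
print(mid)   # mid-joint position, offset toward the pole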
Edited by goose7 - Aug. 14, 2024 03:41:11
Technical Discussion » Learning APEX but unable to select control geometry
- goose7
- 75 posts
- Offline
Hi guys, I'm watching the APEX master class but I ran into a problem very quickly.
https://www.youtube.com/watch?v=-0KbPtoP5MU
It says that the second autorigcomponent, with a configurecontrols parameter, will do the job and make the control geometry selectable.
But in fact I cannot replicate this step. When I attach the torus to the rigged object, I am still unable to select the control geometry.
Is there anything I am missing?
hip file here...
Houdini Lounge » How to fake a facial rig by morphing standard face?
- goose7
- 75 posts
- Offline
So I've seen someone do this in Maya, but I don't know Maya.
The idea is this.
Setup:
I have a standard rigged head mesh from MetaHuman; let's call it head S.
And I have a custom scanned/photogrammetry/sculpted head mesh (or even just an open front-face mesh). Let's call it head C.
Their trick:
Morph or project mesh S onto C so that S matches the shape of mesh C (or at least so its front part, the face, matches the shape of mesh C).
Result:
You get a rigged head that looks exactly like head C.
How can I do this in Houdini? Can we use the Ray node? I tried, but I got uneven, wildly floating polygons around the edge of the mesh's open border.
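One way to sketch the projection with a border falloff is a Python SOP like the following (everything here is an assumption on my part: input 0 is head S with point normals and a painted float attribute "mask" that fades to 0 at the open border, input 1 is head C):

import hou

node = hou.pwd()
geo = node.geometry()                    # head S (editable)
target = node.inputs()[1].geometry()     # head C

for pt in geo.points():
    n = hou.Vector3(pt.attribValue("N"))
    hit_pos, hit_nml, hit_uvw = hou.Vector3(), hou.Vector3(), hou.Vector3()
    # Cast a ray from the point along its normal onto head C.
    if target.intersect(pt.position(), n, hit_pos, hit_nml, hit_uvw) >= 0:
        blend = pt.attribValue("mask")   # 1 = fully projected, 0 = untouched
        pt.setPosition(pt.position() + (hit_pos - pt.position()) * blend)

The "mask" falloff is what should prevent the floating polygons: points near the open border barely move.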
Technical Discussion » Is there a way to scale the position of kinefx joints?
- goose7
- 75 posts
- Offline
I don't know why the length limit on titles is so strict...
I am learning facial rigging and trying to "suck joints toward their centroid" in KineFX.
So, a simple setup:
I have rigged a character's eyes with eye joints.
Now I want all the joints to move toward their centroid, or toward their mean Z position.
In 3ds Max you can toggle the "use common centroid" button and then use scale to pull the vertices together.
But I cannot find a way to do that here. I have to move those joints manually, which is inefficient and gives poor results.
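A minimal Python SOP sketch of the idea (the group name and blend amount are assumptions):

import hou

node = hou.pwd()
geo = node.geometry()
pts = geo.findPointGroup("eye_joints").points()   # hypothetical joint group

# Average the joint positions to get the common centroid.
centroid = hou.Vector3()
for pt in pts:
    centroid += pt.position()
centroid /= len(pts)

amount = 0.75   # 0 = leave joints alone, 1 = collapse onto the centroid
for pt in pts:
    pt.setPosition(pt.position() + (centroid - pt.position()) * amount)

For the mean-Z variant, blend only the Z component instead of the full position.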
Houdini Lounge » Procedural, Parametric. Differences?
- goose7
- 75 posts
- Offline
Functions supporting two-way (undo/redo) parametric editing = procedural.
Edited by goose7 - April 7, 2023 04:48:39
Houdini Lounge » Will generative AI make 3D DCCs obsolete for artists?
- goose7
- 75 posts
- Offline
Hi guys, I've just tried products like Adobe Firefly, ChatGPT, and Midjourney.
I'm on a 3070 Ti, and they can give me 512px images in less than 4 seconds; with the right prompt text and negative prompts, they bring me good results.
With this small online demo I can create 4 of these in 4 seconds with just a short prompt. If I wanted to do this in Houdini, Maya, or any DCC, how many hours would I (or you) need?
PS: You can try the efficiency of generative AI here:
https://stablediffusionweb.com/#demo
What's your opinion on generative AI?
I've heard that NetEase's boss is in a hurry to push his concept artists to develop drawing AI so that he can then fire them.
I've asked some of my friends working in AI.
One NLP researcher said AI is surely going to make lots of artists lose their jobs, because their bosses could just buy a stack of RTX x090 cards to do all the concept art. 3D modelers will last longer, but only a little; generative AI will handle 3D modeling later. And 3D rendering? Why use it when we can generate 2D images without calculating where each ray of light is emitted and how it bounces off objects? AI can draw a photorealistic 2D masterpiece without any knowledge of 3D.
Another data science PhD said that generative AI is just a lossy compression of the entire internet: what you generate is an approximate guess at the theoretical truth, with 80-90% correctness. Therefore, in the future, those who can produce high-quality, non-AI-generated material will keep their value. Moreover, since generative AI has less control over strict elements like precise coordinates, work that requires entering precise numbers will not be replaced (like modeling a specific type of car).
What are your thoughts on that? As an artist, do you think traditional 3D DCCs and their workflows will become obsolete?
Edited by goose7 - March 23, 2023 03:23:54
Work in Progress » Test of Karma rendering human head skin
- goose7
- 75 posts
- Offline
So I am wondering if I can find a fast, good-looking SSS rendering technique in Karma.
Previous experience with offline CPU rendering taught us that if you enable SSS, your render time skyrockets...
I downloaded a free head texture and model set and made a very simple shader/renderer comparison.
This one uses the Principled Shader with random-walk SSS on the CPU renderer.
I think it took about 2 minutes.
-----------------------
This one uses the Skin Shader Core and the CPU renderer.
This took 3.7 minutes.
-------------------
This one uses the MaterialX SSS and the XPU renderer.
This took only 20 seconds.
I feel that in a CPU rendering pipeline, random-walk SSS may be more cost-efficient than the Skin Shader Core: it takes less time but produces a good-looking result.
What I cannot judge is the XPU render using MaterialX SSS. The parameter tweaking is troublesome; I sense a strong non-linearity when tweaking the SSS parameters: a little extra value makes the character look like a big piece of jade, or newly born. Luckily XPU rendering is fast, so we can see the result quickly and iterate on it.
My current suggestion is that the Principled Shader is best for characters; or, if you are very confident tweaking materials, try MaterialX SSS with the XPU renderer.
But I am still wondering if there are better options. I know that Unreal Engine uses something called a subsurface profile and can achieve a good result without textures. Unreal also uses the Burley subsurface model, which, combined with textures, produces good results in real time.
What are your thoughts on this field?
PS: I've only used basecolor, normal, and spec textures, and haven't tried other combinations yet. I've also heavily downgraded the textures (from 8K to 2K).
The project file:
Edited by goose7 - Jan. 19, 2023 09:02:46
Technical Discussion » Solaris Hair houdinihairprocedural node fails
- goose7
- 75 posts
- Offline
robp_sidefx wrote:
Yes, I agree that error message is not very helpful. The warnings ("node not found or wrong type") are a distraction, and the error ("Couldn't find integer attribute...") could be more verbose.
Ultimately I believe the issue with your setup is that the guide curves don't have skinprim and skinprimuvw attribute data on them. Have a look at the attached image for the change you can make on the /obj/hairgen1 node.
Hi Rob, I also got bad luck this time... I just recovered from Omicron.
I found that there might be a bug.
I've added "skinprim" and "skinprimuvw" to the primitive attributes, but it does not help.
However, when I create a new houdinihairprocedural node and leave everything blank, it works (though of course the generation parameters then do nothing).
Furthermore, it looks like the rendering speed of directly importing the hair as SOP geometry and the rendering speed of using the houdinihairprocedural node are very similar.
Work in Progress » Destruction Simulation tests
- goose7
- 75 posts
- Offline
That looks great.
Just a little advice: building demolition explosions are not really gasoline explosions with big fireballs.
Most of the effect is blast and smoke.
And you are not using compositing, right? You rendered all of this out in one pass?
Technical Discussion » Groom from imported alembic scene file gets huge
- goose7
- 75 posts
- Offline
Just curious, how many guides are you using?
From my tests, .bgeo.sc sim caches can consume gigabytes of storage for a few hundred frames of a Vellum sim with thousands of guides.
But if there are no sim caches, there's little chance that a moderately heavy groom would use that much storage.
Edited by goose7 - Dec. 14, 2022 23:35:06
Technical Discussion » Solaris Hair houdinihairprocedural node fails
- goose7
- 75 posts
- Offline
robp_sidefx wrote:
goose7 wrote:
robp_sidefx wrote:
For the camera issue, I've replied to your other post (https://www.sidefx.com/forum/topic/81674/?page=2#post-378694).
I'm looking at the hair issue now.
Greetings sir, thanks for the help.
I didn't know Solaris differed from Mantra in how it saves USD files.
I managed to get a render in MPlay.
As for the hair:
I've got this error message after I did something.
Failed to run preframe procedurals script: Traceback (most recent call last):
  File "C:/PROGRA~1/SIDEEF~1/HOUDIN~1.368/houdini/python3.9libs/husd/runprocedurals.py", line 207, in <module>
    runProcedurals(stage, stage.GetSessionLayer(),
  File "C:/PROGRA~1/SIDEEF~1/HOUDIN~1.368/houdini/python3.9libs/husd/runprocedurals.py", line 143, in runProcedurals
    new_layer = runProcedural(proc_path, prim, args_dict, proc_expand_dir)
  File "C:/PROGRA~1/SIDEEF~1/HOUDIN~1.368/houdini/python3.9libs/husd/runprocedurals.py", line 50, in runProcedural
    geo = fn(prim, args)
  File "C:/PROGRA~1/SIDEEF~1/HOUDIN~1.368/houdini/husdplugins\houdiniprocedurals\invokegraph.py", line 48, in procedural
    invoke.execute(result, geos)
  File "C:\PROGRA~1/SIDEEF~1/HOUDIN~1.368/houdini/python3.9libs\hou.py", line 79811, in execute
    return _hou.SopVerb_execute(self, dest, inputs)
hou.OperationFailed: The attempted operation failed. Verb invokegraph generated errors:
Warning: The node "<internal>: /stage/sop_graphs/hairgen/setup_hairlen_parms" was not found or was the wrong type for this operation.
The node "/stage/sop_graphs/hairgen/setup_hairlen_parms" was not found or was the wrong type for this operation.
The node "<internal>: /stage/sop_graphs/hairgen/setup_hairgen_parms" was not found or was the wrong type for this operation.
The node "/stage/sop_graphs/hairgen/setup_hairgen_parms" was not found or was the wrong type for this operation.
Invalid attribute specification: "<internal>: pscale not found".
Error: <internal>: Couldn't find integer attribute for primitive number.
It is strange; it says it cannot find pscale.
I know that Solaris changed the traditional point attribute names, but the Hair Generate node at OBJ level now uses @width as its parameter, and Solaris uses width as well. Why would it look for pscale?
Another thing I noticed: rendering through the USD Render ROP takes significantly longer than rendering in the viewport, even though I've assigned the same Karma render settings to the viewport and the USD Render ROP.
Their sample limit settings should be exactly the same.
Is there anything that would explain this difference?
Sorry for the delay, I was unfortunately ill the last few days. Can you please post your latest hip file for me to investigate?
Oh, I'm sorry to hear that. I hope you're feeling better.
I've marked the node branch that causes this error in red.
The error message has changed a little, and I have no idea how it became that.
Oh, and I've enabled a PrimID render var, and it shows a new error:
[01:49:44] Failed to run preframe procedurals script: Traceback (most recent call last):
  File "C:/PROGRA~1/SIDEEF~1/HOUDIN~1.368/houdini/python3.9libs/husd/runprocedurals.py", line 207, in <module>
    runProcedurals(stage, stage.GetSessionLayer(),
  File "C:/PROGRA~1/SIDEEF~1/HOUDIN~1.368/houdini/python3.9libs/husd/runprocedurals.py", line 143, in runProcedurals
    new_layer = runProcedural(proc_path, prim, args_dict, proc_expand_dir)
  File "C:/PROGRA~1/SIDEEF~1/HOUDIN~1.368/houdini/python3.9libs/husd/runprocedurals.py", line 50, in runProcedural
    geo = fn(prim, args)
  File "C:/PROGRA~1/SIDEEF~1/HOUDIN~1.368/houdini/husdplugins\houdiniprocedurals\invokegraph.py", line 48, in procedural
    invoke.execute(result, geos)
  File "C:\PROGRA~1/SIDEEF~1/HOUDIN~1.368/houdini/python3.9libs\hou.py", line 79811, in execute
    return _hou.SopVerb_execute(self, dest, inputs)
hou.OperationFailed: The attempted operation failed. Verb invokegraph generated errors:
Warning: The node "<internal>: /stage/sop_graphs/hairgen/setup_hairlen_parms" was not found or was the wrong type for this operation.
The node "/stage/sop_graphs/hairgen/setup_hairlen_parms" was not found or was the wrong type for this operation.
The node "<internal>: /stage/sop_graphs/hairgen/setup_hairgen_parms" was not found or was the wrong type for this operation.
The node "/stage/sop_graphs/hairgen/setup_hairgen_parms" was not found or was the wrong type for this operation.
Error: <internal>: Couldn't find integer attribute for primitive number.
[01:49:48] Pixel filter 'minmax' - Mode idcover requires PrimId channel
This error message is really hard to interpret, because it does not point to a path on the stage; instead it seems to use a hidden path that is not visible in the Solaris network view.
Edited by goose7 - Dec. 14, 2022 12:51:20
Technical Discussion » Get final render resolution from ROP?
- goose7
- 75 posts
- Offline
Can you try switching between different render settings in the ROP? (I mean in Solaris.)
I mean, that's some sort of indirect solution.
If your purpose is to get the resolution, I think you can name the render settings prim after the resolution you are using, and then get that string from the Render ROP and slice it.
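A minimal sketch of the naming trick (the prim name here is hypothetical):

# Name the Render Settings prim after its resolution, then parse the name.
settings_name = "rendersettings_1920x1080"
res_token = settings_name.rsplit("_", 1)[-1]        # "1920x1080"
width, height = (int(v) for v in res_token.split("x"))
print(width, height)                                # 1920 1080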
Edited by goose7 - Dec. 7, 2022 04:52:35
Technical Discussion » Solaris Hair houdinihairprocedural node fails
- goose7
- 75 posts
- Offline
robp_sidefx wrote:
For the camera issue, I've replied to your other post (https://www.sidefx.com/forum/topic/81674/?page=2#post-378694).
I'm looking at the hair issue now.
Greetings sir, thanks for the help.
I didn't know Solaris differed from Mantra in how it saves USD files.
I managed to get a render in MPlay.
As for the hair:
I've got this error message after I did something.
Failed to run preframe procedurals script: Traceback (most recent call last):
  File "C:/PROGRA~1/SIDEEF~1/HOUDIN~1.368/houdini/python3.9libs/husd/runprocedurals.py", line 207, in <module>
    runProcedurals(stage, stage.GetSessionLayer(),
  File "C:/PROGRA~1/SIDEEF~1/HOUDIN~1.368/houdini/python3.9libs/husd/runprocedurals.py", line 143, in runProcedurals
    new_layer = runProcedural(proc_path, prim, args_dict, proc_expand_dir)
  File "C:/PROGRA~1/SIDEEF~1/HOUDIN~1.368/houdini/python3.9libs/husd/runprocedurals.py", line 50, in runProcedural
    geo = fn(prim, args)
  File "C:/PROGRA~1/SIDEEF~1/HOUDIN~1.368/houdini/husdplugins\houdiniprocedurals\invokegraph.py", line 48, in procedural
    invoke.execute(result, geos)
  File "C:\PROGRA~1/SIDEEF~1/HOUDIN~1.368/houdini/python3.9libs\hou.py", line 79811, in execute
    return _hou.SopVerb_execute(self, dest, inputs)
hou.OperationFailed: The attempted operation failed. Verb invokegraph generated errors:
Warning: The node "<internal>: /stage/sop_graphs/hairgen/setup_hairlen_parms" was not found or was the wrong type for this operation.
The node "/stage/sop_graphs/hairgen/setup_hairlen_parms" was not found or was the wrong type for this operation.
The node "<internal>: /stage/sop_graphs/hairgen/setup_hairgen_parms" was not found or was the wrong type for this operation.
The node "/stage/sop_graphs/hairgen/setup_hairgen_parms" was not found or was the wrong type for this operation.
Invalid attribute specification: "<internal>: pscale not found".
Error: <internal>: Couldn't find integer attribute for primitive number.
It is strange; it says it cannot find pscale.
I know that Solaris changed the traditional point attribute names, but the Hair Generate node at OBJ level now uses @width as its parameter, and Solaris uses width as well. Why would it look for pscale?
Another thing I noticed: rendering through the USD Render ROP takes significantly longer than rendering in the viewport, even though I've assigned the same Karma render settings to the viewport and the USD Render ROP.
Their sample limit settings should be exactly the same.
Is there anything that would explain this difference?
Edited by goose7 - Dec. 6, 2022 13:15:41
Solaris and Karma » No cameras found in the USD file - Karma rendering
- goose7
- 75 posts
- Offline
robp_sidefx wrote:
This issue should be resolved in 19.0.449 ... please do let me know if you continue to experience it!
- Rob
I guess I have a similar issue in 19.5.368.
It is a very small scene with a ground, a shader ball, and a sphere with hair grown on it.
However:
1. No matter what camera I use (e.g. cameras created in Solaris or imported from the object level):
in the Render Settings node, the viewport render always succeeds,
but when you attach a usdrender_rop under it, it always pops up
"No cameras found in the USD file".
I tried to set an override camera, but then it pops up "unable to find cam3".
2. The houdinihairprocedural hair does not seem to work in generate mode.
Edited by goose7 - Dec. 6, 2022 07:44:00