Found 17 posts.
Houdini Indie and Apprentice » KineFX adds scale to rig after retarget
- mradfo21
- 17 posts
- Offline
Hey guys!
I'm loving KineFX so far. TONS of potential over HumanIK in Maya/MotionBuilder.
I've noticed a pretty big problem, however, that I wonder if anyone else has run into. When retargeting from one rig to another, KineFX adds a scale value to all the joints in the rig. I see it when I import the FBX back into Maya.
This is the MetaHuman rig from Epic's MetaHumans.
So okay, I use KineFX's awesome retargeting workflow (love the UI).
So far I'm in love. I'd kill for a way to force a rig into a T-pose (through your wise magic), but okay, I'm hooked.
HOWEVER, I noticed the retargeting looked weird back in Unreal. Far different than it looked in Houdini.
I found out when I re-imported into Maya: it seems to have applied a small scale to every joint. I really just followed the video tutorial from SideFX on retargeting and am doing nothing custom or crazy. At least I figured that much out. Do you guys know why this might be happening?
Until then it's back to Maya, but again, AMAZING potential here guys! Great work!
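For what it's worth, a DCC-agnostic way to confirm what the retarget did to each joint is to check the column lengths of its 3x3 rotation matrix after export; for a clean rig they should all be exactly 1. A minimal sketch (the matrix values here are made up for illustration):

```python
import math

def joint_scale(m3):
    """Return the per-axis scale baked into a 3x3 joint rotation matrix,
    i.e. the lengths of its column vectors (1.0 means no scale).
    Depending on your row/column-major convention you may want row
    lengths instead; for a uniform scale either works."""
    return tuple(
        math.sqrt(sum(m3[row][col] ** 2 for row in range(3)))
        for col in range(3)
    )

# A pure rotation (90 degrees about Z) has unit scale:
rot_z = [[0.0, -1.0, 0.0],
         [1.0,  0.0, 0.0],
         [0.0,  0.0, 1.0]]
print(joint_scale(rot_z))  # (1.0, 1.0, 1.0)

# The same rotation with a small uniform scale baked in:
scaled = [[v * 1.01 for v in row] for row in rot_z]
print(joint_scale(scaled))  # roughly (1.01, 1.01, 1.01)
```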
Technical Discussion » Animated groups driving vellum simulations
- mradfo21
- 17 posts
- Offline
Got it, so your recommendation would be to set every point as a pin initially and then modify the stiffness per point via an attribute on the Vellum Constraint Property DOP. I'm already animating some breaking values in there, so that sounds reasonable.
The original attribute I'm generating the group from is called “PinSelection” and it's 0-1. Does that mean I can do a vexpression like
1e10 * @PinSelection to essentially have pins add influence or not? Am I understanding how stiffness would work here correctly?
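That reading seems right, as far as I understand it: the vexpression just scales the pin's stiffness by the 0-1 mask, so 0 disables the pin and 1 gives it the full (effectively rigid) stiffness. In plain Python terms:

```python
def pin_stiffness(pin_selection, base_stiffness=1e10):
    """What a vexpression like `1e10 * @PinSelection` evaluates to per
    point: full stiffness where the mask is 1, zero (so the constraint
    has no effect) where the mask is 0."""
    return base_stiffness * pin_selection

print(pin_stiffness(1.0))  # pinned hard
print(pin_stiffness(0.0))  # effectively unpinned
```

A side effect worth noting: mask values between 0 and 1 would give soft pins, which might be handy on the transition frames.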
Technical Discussion » Animated groups driving vellum simulations
- mradfo21
- 17 posts
- Offline
Hey guys, I'm doing some tearing cloth simulations and I want to control how they tear over time. I'm using some attribute transfers + noise in SOPs to drive a group which changes over time. This group, “pins”, should drive the group inside a Pin To Target constraint inside Vellum. It seems that Vellum only takes the group on the creation frame (frame 1) and never receives the animated group. Am I approaching this the wrong way? I'm no Houdini pro by any means, so it may well be that I am. I understand how I can animate Vellum constraints inside the DOP net's “forces” node using Vellum Constraints, but animating the breaking threshold isn't going to give the same look as being able to actually animate the locations of these constraints.
Here I make some nice animated attributes that convert to a group:
And then I use this custom “pins” group via Pin To Target:
And yay, I get some animating constraint locations (I want to dive into the noise look here to get neat tearing patterns).
But when I run the simulation it only takes the group on the first frame. It'd be quite intuitive if your SOP animations could be used directly, but I'd love to know how I should recreate this in the Vellum mindset.
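For reference, here is the gist of the SOP-side mask in plain Python (a toy sinusoid standing in for the attribute transfer + noise; the names and threshold are made up). The point is just that the 0/1 attribute varies with time, which is exactly what the constraint creation frame throws away:

```python
import math

def pin_mask(p, time, threshold=0.3):
    """Toy stand-in for the SOP setup: an animated scalar field sampled
    at point position p = (x, y, z), thresholded into a 0/1 pin
    attribute. A real setup would use proper noise in a Point Wrangle."""
    n = 0.5 + 0.5 * math.sin(2.0 * p[0] + 3.0 * p[1] + time)
    return 1 if n > threshold else 0

# The same point flips in and out of the pin group over time,
# which a constraint created only on frame 1 never sees:
print(pin_mask((0.0, 0.0, 0.0), time=0.0))  # 1
print(pin_mask((0.0, 0.0, 0.0), time=4.0))  # 0
```

As far as I understand, the usual workaround is to create pin constraints on every point up front and animate the stiffness to zero per point, rather than animating group membership itself.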
Edited by mradfo21 - Nov. 11, 2019 11:41:03
Houdini Engine for Unreal » No Alpha when outputting sprite vertex animation textures
- mradfo21
- 17 posts
- Offline
I seem to be unable to set an alpha value and export it. VAT is always outputting 0 for me. I'm setting @Alpha, that's the correct attribute, right?
and yet after a successful VAT export my texture's alpha is black:
Houdini for Realtime » Inconsistent export of vertex animation textures
- mradfo21
- 17 posts
- Offline
I don't truly understand how the tool works under the hood, but the inconsistency of this (hey, sometimes I make a new one and it'll export... for a bit) makes me think there's some kind of caching going on under the hood that doesn't get cleared. But that's just my intuition... who knows.
Houdini for Realtime » Inconsistent export of vertex animation textures
- mradfo21
- 17 posts
- Offline
Hey guys, I'm really having trouble reproducing the inconsistencies with vertex animation export of the “Sprite” method. Sometimes it works, sometimes it doesn't. What seems to happen frequently is it'll cease to export, saying the bounding box is 0x0, and it'll export 1x1 textures. The only thing I can piece together is that it'll work sometimes after dropping a certain new node into the geometry container that I'm trying to export. But other times adding a new node (like an Attribute Wrangle) seems to force this no-export behavior. My hip file won't be particularly useful as it relies on caches and alembics that are large and that I cannot upload. But has anyone else experienced this cycle where sometimes it'll export and then sometimes it'll start to just output 1x1 textures forever? It's crippling, truthfully. It's such an amazing tool, and it's devastating to spend days troubleshooting THIS part of it.
Houdini for Realtime » Houdini Data Interface for UE4's Niagara
- mradfo21
- 17 posts
- Offline
Mike, hey there!
So I'm doing some FX animation using Houdini and Unreal currently, bringing some Vellum grains simulations into Unreal via Vertex Animation Textures. It's a functional workflow, minus the long import times in Unreal, the strange disappearance of points, and the very restrictive point count. I'm pushing a lot of points through it (for realtime), so I'm tediously finding the right amount of points to make it through the export/import process. To upres I was thinking of just baking 10x sims with different seed values.
BUUUUT what'd be even better is to actually be able to use a CSV file as a point cache and get Niagara particles mapped to each point for each frame. Very much just like a traditional particle cache! I could then use meshes or sprites via Niagara, which gives me a ton of nice control over the rendering compared to the sprites of the Vertex Animation Textures.
So I seem to have hit a roadblock that's left me scratching my head. I've watched all these videos and am really left with little idea how to use the CSV ROP as a particle cache. There's all this cool functionality for doing RBD setups that seems neat, but it all requires a DOP net for some reason, and I'm not sure if my use case matches the “interpolate” workflow. I can't really tell, but I suck at Houdini still.
So I can basically get a single CSV file representing a nice cache of my particles. But does the CSV ROP not provide animation functionality? Can I not, say, write out these particle positions and IDs for frame 1, these for frame 2, etc.?
I really want a caching solution for going from Houdini to Unreal for simple grains sims like this. The vertex textures work but are limited to about 5k points for a long animation length, so it's a horrendous workflow to get anything of quality (30k points or so?). The Niagara approach makes a lot more sense, but I can't tell if the Houdini side just isn't finished, or I'm missing a document, or my knowledge just isn't there.
The examples in these videos are very cool and novel, and I love seeing all sorts of neat data come out of Houdini and go to Unreal. But I gotta say, just being able to bring cached point/velocity positions over and get them playing back so they sync to animation would be the killer feature. Is there no way to do this via .csv currently?
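I don't know whether the Niagara importer side supports it, but on the Houdini side a ROP with `$F` in its output path writes one file per frame, which gives exactly the “frame 1, frame 2, ...” layout described above. A minimal stand-in in plain Python (the column names and the `particles.$F.csv` naming are just illustrative):

```python
import csv
import os
import tempfile

def write_frame_csv(points, frame, out_dir):
    """Write one CSV per frame, mimicking a ROP with `$F` in its output
    path (e.g. particles.$F.csv). `points` is a list of
    (id, px, py, pz, vx, vy, vz) tuples."""
    path = os.path.join(out_dir, f"particles.{frame}.csv")
    with open(path, "w", newline="") as f:
        w = csv.writer(f)
        w.writerow(["id", "P.x", "P.y", "P.z", "v.x", "v.y", "v.z"])
        w.writerows(points)
    return path

# Two frames of a toy three-point cache:
out = tempfile.mkdtemp()
for frame in (1, 2):
    pts = [(i, i * 0.1, 0.0, 0.0, 0.0, 1.0, 0.0) for i in range(3)]
    write_frame_csv(pts, frame, out)

print(sorted(os.listdir(out)))  # ['particles.1.csv', 'particles.2.csv']
```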
examples:
*sniff… if only it could do more than one frame!
So here is me using Vertex Animation Textures. It works, but it decides to drop like 3/4 of them, and it's so painful to do this long import process.
save me Houdini gods!
Matt Radford
Edited by mradfo21 - Aug. 15, 2019 18:51:02
Technical Discussion » efficient facet
- mradfo21
- 17 posts
- Offline
Hey guys!
I'm building a procedural building tool which allows artists to create highly detailed parts of a building; these models are distributed to create any size structure imaginable.
I'm running into a bit of a performance issue, and not in the way I expected. The normals of the objects are wrong, which is usually fixed by using the Facet SOP with “make unique points” and recomputing the normals. However, the performance hit I'm taking is immense: the finished system went from real-time interactivity to a 3-second cook between doing anything. Now, perhaps this is unavoidable, but I was wondering if anyone has found a more efficient way of applying new normal information?
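One hedged thought (I haven't profiled the Facet SOP myself): the expensive part is likely the point uniquing and re-welding rather than the normal math itself, so running it once on each source module before it gets copied, instead of on the assembled building, should cut the cost to roughly the module size. The per-face normal computation itself is just a cross product, as in this illustrative sketch:

```python
def face_normal(a, b, c):
    """Unnormalized face normal of triangle (a, b, c): the cross product
    of two edge vectors, which is the per-face part of what the Facet
    and Normal SOPs recompute."""
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]

# A triangle lying in the XZ plane, wound so it faces +Y:
print(face_normal((0, 0, 0), (0, 0, 1), (1, 0, 0)))  # [0, 1, 0]
```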
Houdini Indie and Apprentice » Interior Renders
- mradfo21
- 17 posts
- Offline
Hey guys, this is a really basic question.
I'm trying to do some interior renders. Basically the camera is in a box (granted, it's a nicely modeled gallery), and I'm using the light template with ambient occlusion, but the occlusion only renders on the exterior of the box instead of illuminating the interior. I reversed the surface normals to see if this would fix the problem, but no dice.
I'm sure this is a very simple fix.
Also, can anyone point me to some lighting and rendering resources that go a bit beyond the tutorials SideFX has in their learning section?
Thanks a lot!
Technical Discussion » After Effects
- mradfo21
- 17 posts
- Offline
I just got a job to create transitions between a series of short videos using a 3D timeline, eventually making the whole thing loop.
I am trying to decide between Houdini and Maya for this project; what it really comes down to is the compositing. After Effects is my baby and I want to actually add the video in AE, but I'd love to do the 3D animation in Houdini.
The only thing is, I'm not currently aware of a way to export Houdini's camera to After Effects. Programs like Maya, Max, and C4D do this.
Can someone tell me I'm wrong and that it's easy to export Houdini cameras to After Effects?
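In case it helps anyone scripting their own bridge: besides position/rotation keyframes, the one non-obvious conversion is the lens. After Effects expresses its camera as a Zoom distance in pixels, which (to the best of my knowledge) relates to a Houdini-style focal length and horizontal film aperture like this:

```python
def ae_zoom_pixels(focal_mm, aperture_mm, comp_width_px):
    """After Effects' camera Zoom is the camera-to-image-plane distance
    in pixels; for a focal length and horizontal film aperture in mm,
    it's focal / aperture scaled by the comp width."""
    return focal_mm / aperture_mm * comp_width_px

# A 50mm lens on Houdini's default 41.4214mm aperture, 1920px-wide comp:
print(round(ae_zoom_pixels(50.0, 41.4214, 1920)))  # 2318
```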
Houdini Indie and Apprentice » HDR rendering problem. mysterious red light
- mradfo21
- 17 posts
- Offline
I'll double-check that once the simulation cooks.
But if the path to the image were incorrect, why would the diffuse and specular be correctly affected?
Thanks a lot for writing back though. This is part of my final for VSFX 130 and it's due tomorrow at 11:00am.
Houdini Indie and Apprentice » HDR rendering problem. mysterious red light
- mradfo21
- 17 posts
- Offline
I am.
It is the latest build from the website, from about a week ago.
Think it could be the HDR image?
I'm using a .hdr, not a .rat.
Houdini Indie and Apprentice » HDR rendering problem. mysterious red light
- mradfo21
- 17 posts
- Offline
Hey guys, I've got a problem with HDR rendering.
I'm using an .hdr probe as my area map. It looks pretty good, except for this red light that seems to be coming from the distance.
I've attached a picture. The HDR setup is correct… is it something with the env light settings?
Houdini Indie and Apprentice » text timing
- mradfo21
- 17 posts
- Offline
Hey guys, my first post here.
I'm working on a project whose goal is similar to that of time-based typography exercises.
I have some matchmoved footage, and in it I want text (about 30 words) to fall from the sky and hit the ground, matching up to the audio.
I tried doing this a simple way by making each individual word and then making them RBD objects. Problem is that this way they all fall at the same time.
All the font objects get put into the auto DOP network. Inside the network I can use the RBD Keyframe Activate node, but I can't seem to use it for the individual font objects stored in there.
I am new to Houdini, so I apologize if these questions seem simple. But is there any way to time the falling of my individual words without making DOP networks for every word?
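One common approach (hedged; exact node names vary by Houdini version) is to keep a single DOP network and give each word its own activation time, for example by keying each RBD object's Active Value at the word's timestamp in the audio. Mapping timestamps to frames is simple arithmetic; the word timings below are made up for illustration:

```python
def activation_frame(start_seconds, fps=24):
    """Frame at which a word's RBD piece should switch from inactive
    (held in place) to active (dropped), given the word's timestamp
    in the audio."""
    return int(round(start_seconds * fps))

# Hypothetical audio timings (in seconds) for the first few words:
timings = {"lorem": 0.5, "ipsum": 1.25, "dolor": 2.0}
frames = {w: activation_frame(t) for w, t in timings.items()}
print(frames)  # {'lorem': 12, 'ipsum': 30, 'dolor': 48}
```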