D'oh, figured out that I needed to check 'Double Textures' and uncheck 'Pack Normal' to get the slots to show up.
I then also increased my meshing resolution prior to exporting, and now I have a decent looking mesh animation without artifacts!
Houdini for Realtime » VAT_Fluid Instance missing texture slots in UE
- drido
- 13 posts
- Online
Hi,
I'm trying to get a flip fluid sim from Houdini into Unreal using the provided SideFXLabs VAT_Fluid material in Unreal.
I'm running
- Houdini 18.5.462
- UE 4.26.1
- Vertex ROP 2.1, as discussed here: https://www.sidefx.com/forum/topic/74083/
Unfortunately, the above thread has only a few entries about flip fluids and how to use them with the VAT_Fluid material.
I do get the texture animation partially applied to the FBX, but with broken shading and jittering triangles.
My main issue right now seems to be that the position2 and normal map slots don't show up when I create a material instance, so I don't know whether they are actually being used, even if I hardcode the textures in the original material.
I tried recreating the material in a bunch of different ways, but I guess I don't know how the VAT Material Function reads in parameters and passes them on to material instances.
So, maybe it is fixable by just updating the shader network, or it might be buggy?
Do people out there get 4 texture slots (color, position, position2, normal) when instancing the vanilla M_VAT_Fluid from the Side Effects_Labs_Content for UE4?
Many Thx+Cheers!
Edited by drido - March 17, 2021 09:14:08
Technical Discussion » Viewport Volume Rendering with RTX 3090
Hi!
Would anyone know what's up with my viewport volume rendering?
It seems that the shadow maps don't get properly updated, as the volume slowly turns black upon playing.
The only way to fix it is to Disable Lighting and then turn it back on. Then it looks fine on the frame I'm on, but starts breaking again upon playing.
I use 18.5.462 and run an RTX 3090. I've looked through all the display options and can't seem to fix it there, so I assume it's a bug with my graphics card, maybe?
Thx+Cheers
Drido
Work in Progress » Procedural Flash Flood
Hi all,
Unfortunately I wasn't able to update this WIP thread as I would have liked to, but at least I wanted to share the result with you:
http://vimeo.com/29774292
pwd: flood
I'm generally happy with it, although some things, as always, bother me.
I think with a better knowledge of Houdini I could have pushed it a bit further.
I spent most of the time figuring out how to cycle textures properly, tweaking the sprite planes for complete coverage of the wave geometry, and preprocessing and animating the different textures in conjunction with the point movement, while keeping render time low for previews.
No shading is used whatsoever, only diffuse and semi-transparency, so render time actually stayed quite low. However, converting the textures to .rats introduced pink artifacts on the sprite planes once a certain number was exceeded (no idea why), so I stuck with .pngs, which obviously cranked up the render time quite a bit, since a few thousand planes had to be converted each time.
The destruction for both the top and the long shot was done completely in post. I ended up doing the comp myself, so it could have been much better, since I'm not a compositor.
Drop me a line if you are interested in more details.
cheers
hendrik
ps: the cloud was animated in post, too.
Work in Progress » Procedural Flash Flood
Dear Houdini Community,
After procrastinating on learning Houdini as an aspiring FX TD for so long, I finally, thank heavens, just have to learn it for my next job.
So, the task at hand, a voluntary contribution to a project for the school I just graduated from, is the following:
A flash flood hitting the town of Remagen in Germany after the eruption of a supervolcano in the Eifel region (quite a likely scenario, by the way, if there is ever an eruption like this).
I have the complete shot supervision and already handed compositing and integration over to completely focus on the actual wave.
Since this is my style of work, I prototyped in Maya, which I'm much more familiar with, what I want to build in Houdini.
This is the result with the most satisfying wave so far:
see prototypeWave01.mov
And this is a previs of the actual shot breakdown:
see previsWave01.mov
This was heavily influenced by Tom Kluyskens' technique for flooding Isengard in LOTR, which he talks about on the Maya tutorial DVD “Flow Workflows” and which I adapted as follows:
- Shove a wave geometry through the countryside
- attach to / emit particles from it
- instance patches to particles with a water sequence mapped onto them
- cleverly give birth to/accelerate/scale/kill the instances and, above all, cycle and offset the sequences accordingly
additionally:
- Instance debris into the mess
Apart from the debris, which I haven't worked on much yet since its possibilities were quite limited in Maya, this proved to work, so now there are two things to achieve:
- Transfer the whole setup to Houdini
- Improve it
For Improvement, the following things came to my mind:
- More clearly separate the front from the rest of the wave -> I'm planning to record new footage showing both white water and calmer water in the same sequence, so I hope to get that effect with just the frame offset
- add some interaction as soon as the wave hits the frontmost buildings towards the end -> do some automated collision detection with the proxy geo I have, and emit either lots of spray particles, more sprites, or some flip fluids
- use the Houdini Ocean Toolkit somehow for nicer animation of the water patches and/or to give the debris a more floating feeling
So far I have done some tests in Houdini, and I'm delighted by the particle controllability and the animated-texture preview performance, both of which I identified as crucial.
Also, HOT seemed to be quite useful in that context.
I don't know yet about the cycling possibilities or handing particle attributes over to the shader, but I'm really positive that this is much nicer in Houdini than in Maya, where it was a complete turnoff, since you basically needed a new shader for each offset sequence (hence the relative uniformity so far).
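The per-particle cycling itself boils down to modular arithmetic over the sequence length; here is a minimal plain-Python sketch of that lookup (the function and parameter names are my own, not from any Maya or Houdini API):

```python
def cycled_frame(global_frame, offset, cycle_length, speed=1.0):
    """Map the global frame plus a per-particle offset onto a frame
    index within a looping texture sequence of cycle_length frames."""
    return int(global_frame * speed + offset) % cycle_length

# Each particle carries its own offset, so the instanced patches don't
# all show the same frame of the water footage at once.
print(cycled_frame(100, 7, 24))  # prints 11 for a 24-frame cycle
```

In Houdini the offset could live as a point attribute that the shader reads, which is exactly the attribute hand-over I mention above.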
So, I would really like to know from you, dear Houdini community:
1. Does that seem like a sensible approach?
and
2. Do you think this is possible for a complete Houdini noob like me to achieve in 3 weeks, or am I insane?
I decided at the very last minute to tap into the knowledge vault around here, although it scares me, since I can only achieve so much in the given time and I don't consider myself a fast learner.
But on the other hand, I thought, why not try it?
Hopefully you won't be mad at me if I can't implement or understand everyone's suggestions.
But maybe documenting my learning process will be helpful to someone else sometime, and eventually to myself, too.
cheers
hendrik
Work in Progress » Werewolf render test
Houdini Indie and Apprentice » subframe fluid rendering
Hello Everyone,
Meanwhile, I found a solution, which is different from the approach Allegro suggested.
Apparently that approach was only blending the opacity, not the position, of the fluid, and therefore led to nonlinear interpolation (at least as far as I have understood it).
Anyway, the new approach is not perfect, because the simulation has to be cached again, but, at least in my tests, the result looks the same as before for integral frames, this time with proper subframes.
The whole trick is basically to set the substepping on the Simulation tab of the DOP network to a value matching what will be rendered afterwards.
So, if you are going to render 4 subframes per frame, you have to set substepping to 4.
In this case you should set the increment on the DOP I/O node used for caching to 0.25, and do the same later on the render node.
You can verify it worked correctly by taking a look at the cache files (which should be written out with $FF):
The subframes should have reasonable values somewhere between the integral frame numbers.
Without proper substepping, the subframe cache files are messed up (and so are the renderings).
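To make the numbering concrete, here is a plain-Python sketch (not Houdini code) of the $FF values a 1/substeps increment produces, which is what the cache files should come out named with:

```python
def subframe_cache_frames(start, end, substeps):
    """List the fractional $FF values produced by stepping from start
    to end (integral frames) with an increment of 1/substeps."""
    step = 1.0 / substeps
    count = int(round((end - start) / step))
    return [round(start + i * step, 3) for i in range(count + 1)]

# 4 substeps -> increment 0.25 on both the DOP I/O and the render node
print(subframe_cache_frames(1, 3, 4))
# prints [1.0, 1.25, 1.5, 1.75, 2.0, 2.25, 2.5, 2.75, 3.0]
```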
At least everything makes perfect sense now.
Hope it's going to help someone apart from me.
It is for a VFX shot that was shot on a hundred frames and will be time-ramped.
cheers
hendrik
Houdini Indie and Apprentice » subframe fluid rendering
Hello Again,
While the if-Syntax of Allegro's cache-loading expressions
$HIP/pyrofields_`if(trunc($FF<2),2,trunc($FF))`.bgeo.gz
and
$HIP/pyrofields_`if($FF>=1,trunc($FF)+1,trunc($FF))`.bgeo.gz
still seems very unclear to me, I think it is mainly there to deal with issues in the first two frames.
At least, when I replace it with the simpler
$HIP/pyrofields_`trunc($FF)`.bgeo.gz
and
$HIP/pyrofields_`trunc($FF)+1`.bgeo.gz
, the outcome is pretty much the same.
Which means that, together with the volume mix in blend mode, there is still non-linear blending behaviour.
Which shouldn't be, from my point of view… :?
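For what it's worth, translated out of hscript the two simplified expressions just pick the two caches bracketing the current fractional frame; a linear blend between them would then weight by the fractional part. A plain-Python sketch of that logic (assuming the volume mix uses exactly this weight, which is what I'd expect but haven't verified):

```python
import math

def bracketing_caches(ff):
    """Mirror `trunc($FF)` and `trunc($FF)+1`: return the two cache
    frames around a fractional frame and the linear blend weight."""
    lo = math.trunc(ff)   # cache loaded by the first expression
    hi = lo + 1           # cache loaded by the second expression
    weight = ff - lo      # 0.0 at the lower cache, 1.0 at the upper
    return lo, hi, weight

print(bracketing_caches(12.25))  # prints (12, 13, 0.25)
```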
Please, can anyone shed a bit of light on this?
It's probably not too surprising for an experienced Houdini user…
Thanks a lot!
hendrik
Houdini Indie and Apprentice » subframe fluid rendering
Hi Allegro and Everyone,
Sorry, I was away from Houdini access for some days…
You'll find 3 QuickTimes attached, where I basically took your scene, applied a simpler shader, put in a light with shadows, and rendered each frame with 5x subsampling, 2x subsampling, and in “realtime” for comparison.
While it looked right in the viewport preview (yes, it is slow, but that's OK if the loader network is only applied just before rendering), there is obviously an issue with the interpolation.
Even on a second look I still haven't understood how the expressions work.
But thanks a lot for the useful link.
Haven't found time to dive into that yet, but will definitely do so soon.
cheers
hendrik
Houdini Indie and Apprentice » subframe fluid rendering
Hi Allegro,
That was not only lightning fast but (even better) seems to work very sweetly.
thanks heaps!
Now I only have to understand how the volume mix and the expressions for file loading work.
The Houdini Community is just cool 8)
Update: now that the first test renders are popping out, it seems like the cache interpolation isn't linear or something.
The fluid seems to ‘pump’.
Any idea on that?
thanks a lot!
hendrik
ps: cool ff avatar, who was that again?
Houdini Indie and Apprentice » subframe fluid rendering
Hello out there,
(Probably) Very simple one here:
- How can I render subframes of a cached pyro fluid?
-> Rendering subframes of a fluid cached on integral frame numbers doesn't work (the fluid remains the same for all subframes)
-> Trying to cache subframes writes out messed-up cache files with the standard solver settings
Now my guess is that I have to change the substepping on the solver, but that changes my simulation.
What is the relation between solver settings, replay settings and cache step-width anyway?
In Maya it was rather easy; rendering subframes interpolated the cache automatically.
But there are probably good reasons it's different here.
Eager to learn!
cheers
hendrik