Hi, I am trying to use a crowd/ragdoll setup to make agents flow along with water. In this case I can completely ignore the agent state and start directly from the ragdoll state.
While simulating, I saw a body bending in a very strange way, and I thought the Bone Capture Biharmonic (skin binding) was wrong. However, I compared the constraint icons between the SOP-level and DOP-level viewports, and noticed that some constraints are configured with the wrong angles/directions. I also confirmed that my skeleton seems to be fine.
I suspect the Agent Configure Joints SOP is producing a bad constraint network, because it is the node located just before the geometry is fed into the DOP network.
Please let me know if you have had the same problem and how to fix it, thanks!
Found 14 posts.
Technical Discussion » agent configure joints SOP is not working for crowd ragdoll
- masa90210
Technical Discussion » pop force and advect for ragdoll in crowd solver
- masa90210
Hi, I have a crowd setup with a DOP network containing a crowd solver and a Bullet solver. I want agents to switch to ragdolls on a trigger (such as a flood collision), and the ragdoll agents should then flow with the wave.
I tried plugging POP Force / POP Advect nodes into the Bullet and crowd solvers. There are no error messages, but the simulation seems to ignore those POP nodes.
There are some other force nodes (green) that can be plugged in after the solvers, such as Gravity Force, but their functionality is limited.
I even tried adding a POP Solver and a SOP Solver to a Multi Solver, but it did not work as expected.
My questions:
Is there a substitute for POP Advect and POP Force (for noise)? In particular, can I import a SOP-level velocity volume and/or point velocities and apply them to ragdoll agents? And can a mesh object be used to limit the force area?
Thanks for your assistance!
PS) I found that a Uniform Force node plus a Noise Field behaves like POP Force. However, I am still looking for how to use a polygon mesh as a force-area mask, and how to import SDF/density/vel fields to drive the ragdolls.
Edited by masa90210 - Sept. 20, 2023 02:16:19
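For the PS above: one common approach to masking a force by a mesh (a sketch, not a confirmed Houdini workflow) is to convert the mask mesh to an SDF and scale the force by whether each ragdoll piece is inside it. Everything here is an illustrative placeholder, not a Houdini API call:

```python
# Sketch: scale a force by an SDF-based mask, so the force only acts
# inside the mask volume (negative SDF = inside). All names are illustrative.

def masked_force(base_force, sdf_value, falloff=0.5):
    """Return the force scaled to 0 outside the mask, full strength inside,
    with a linear falloff band of width `falloff` just inside the surface."""
    if sdf_value <= -falloff:          # well inside the mask volume
        weight = 1.0
    elif sdf_value >= 0.0:             # outside the surface
        weight = 0.0
    else:                              # inside the falloff band
        weight = -sdf_value / falloff
    return tuple(f * weight for f in base_force)

# A point well inside the volume gets the full force...
print(masked_force((0.0, 0.0, 2.0), sdf_value=-1.0))   # (0.0, 0.0, 2.0)
# ...and a point outside gets none.
print(masked_force((0.0, 0.0, 2.0), sdf_value=0.3))    # (0.0, 0.0, 0.0)
```

The same weighting idea applies whether the per-piece force is applied in a Geometry Wrangle or baked into a sourced field.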
Houdini Engine for Unreal » Importing houdini FX asset does not work perfectly.
- masa90210
Houdini 19.5 , UE 5.1
Hi, I am a senior Houdini artist who has been learning UE for one month.
I am trying to import Houdini (FX) vertex animation assets into UE following the SideFX tutorial, but it does not work perfectly. What I tried:
1 - I downloaded SideFX's example files and confirmed that both the Houdini and UE files work properly in my environment. This means the SideFX plugin and Labs tools are working correctly in UE.
2 - I then made a new hip file for a Vellum (softbody) animation and new UE files. The cape shows seams and stretching. (sc_11.PNG)
3 - So I went back to the beginning and decided to regenerate the examples. I used SideFX's example hip file to generate the FBX and VAT files, and tried to import them into SideFX's example UE file, but it does not look the same. (sc_06, 07, 08.PNG) The normals are not smooth and the seams do not line up perfectly.
I believe the vertex counts of the textures and the FBX mesh match, so I suspect a precision-setting issue. Also, the example video demonstrates with 18.5, not 19.5.
4 - I have spent 18 hours on this issue. I have now made a simple scene and put a zip file on my Google Drive. I hope someone can show me how to fix this problem.
Thank you so much.
The tutorial videos I am following:
https://www.sidefx.com/tutorials/vertex-animation-textures-for-unreal/ [www.sidefx.com]
My UE and Houdini zip file, which includes a simple test; the hip file is from SideFX, and I made the UE file from scratch:
https://drive.google.com/drive/folders/1ZkV71somWukAjaIZygd1sp2zeOPqKKBi?usp=sharing [drive.google.com]
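The "precision setting" suspicion in step 3 is plausible: VAT-style textures store per-vertex offsets quantized over a bounds range, so a low bit depth or mismatched min/max bounds shows up exactly as seams and faceted normals. A rough sketch of that quantization and its round-trip error (illustrative only, not the actual Labs VAT encoding):

```python
# Sketch: quantizing a position offset into N bits over a [lo, hi] range,
# as a VAT-style texture must, and the round-trip error that produces seams.

def encode(x, lo, hi, bits=16):
    levels = (1 << bits) - 1
    return round((x - lo) / (hi - lo) * levels)

def decode(q, lo, hi, bits=16):
    levels = (1 << bits) - 1
    return lo + q / levels * (hi - lo)

lo, hi, x = -10.0, 10.0, 1.2345678
for bits in (8, 16):
    err = abs(decode(encode(x, lo, hi, bits), lo, hi, bits) - x)
    print(f"{bits}-bit round-trip error: {err:.6f}")
# 8-bit error is on the order of the quantization step (20/255 ~ 0.078),
# easily visible as seams; 16-bit shrinks it well below 0.001.
```

The practical consequence: two vertices on either side of a UV seam decode to slightly different positions unless the texture precision and bounds match what the mesh importer expects.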
Houdini Lounge » wind tunnel direction parameter
- masa90210
>>The wind tunnel parameter changes the background (default)
Yes, basically the parameter exists and controls the SOP Vector Field node (velocity_data) inside the Smoke Object.
>>and border values for the velocity field
I want to know exactly how it defines the border and how the wind velocity is changed or blended. If the wind velocity were applied only on the first frame it would be easy to understand, but it is applied every frame, so there must be some calculation and mixing operation going on.
Houdini Lounge » wind tunnel direction parameter
- masa90210
hi,
I have a question about the “wind tunnel direction” parameter on the Smoke Object node in DOPs. It creates a constant wind effect very naturally, instead of using a Gas Wind node. I'd really like to know how this calculation works mathematically, because eventually I want to build a custom wind-tunnel node with a Gas Field VOP or a Gas Field Wrangle.
I would like answers with a short explanation and some mathematical examples, like:
@vel = @vel + gasTunnelVelocity; (I am pretty sure this is wrong, because it would accelerate every frame)
@vel = max(@vel, gasTunnelVelocity);
Please find an attachment file showing the parameter, thank you.
Edited by masa90210 - Jan. 17, 2020 11:40:55
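Regarding the two guesses above: a plain add does indeed accelerate without bound, while a blend toward the tunnel velocity converges to a constant wind. A sketch of that difference (the blend is my assumption about the kind of operation involved, not confirmed internals of the Smoke Object):

```python
# Sketch: per-frame add vs. per-frame blend toward a target wind velocity.
wind = 2.0    # tunnel velocity along one axis
k = 0.2       # blend weight per step (assumed, in 0..1)

vel_add, vel_blend = 0.0, 0.0
for _ in range(100):
    vel_add += wind                          # @vel += wind: grows forever
    vel_blend += k * (wind - vel_blend)      # lerp toward wind: converges

print(vel_add)                # 200.0 -- unbounded acceleration
print(round(vel_blend, 6))    # 2.0   -- settles at the tunnel velocity
```

In VEX terms the blend would read `@vel = lerp(@vel, windvel, k);`, with k possibly set to 1 only at the volume border to act as a boundary condition.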
Technical Discussion » Arnold's deep rendering crashes with Qube
- masa90210
Additionally, when I don't export an ASS file, the renders complete 100%. However, when exporting ASS files, many render frames crash.
Technical Discussion » Arnold's deep rendering crashes with Qube
- masa90210
Hi, I am trying to render with Arnold through Qube, and the render crashes under a specific condition.
- Qube version: 7.0-2b
- Linux CentOS 7
- Rendering is always fine without deep output.
- Rendering deep with motion blur crashes.
- Rendering deep without motion blur is fine.
- I set the deep image as a separate image file; the beauty image is created, but the deep image file fails to be written.
- I attached my AOV settings (standard) and tried many different combinations of settings.
- The main error message is "00:00:30 2028MB ERROR | /out/deadLeaf/arnold_ass:deepexr:aov8INO_camera: failed deep OpenEXR2 write in scanline mode: called OpenEXROutput::write_deep_scanlines with non-matching DeepData size"
If anyone knows how to get deep rendering working, please let me know, thanks!
parsing 10 output statements …
00:00:14 1193MB WARNING | output redefines AOV “RGBA” as type “RGB”: ignoring and using previous type “RGBA”
00:00:14 1193MB | registered driver: “/out/deadLeaf/arnold_ass:exrINO_camera” (driver_exr)
00:00:14 1193MB | * “RGBA” of type RGBA filtered by “/out/deadLeaf/arnold_ass:gaussian_filter” (gaussian_filter)
00:00:14 1193MB | * “leaf” of type RGB filtered by “/out/deadLeaf/arnold_ass:gaussian_filter” (gaussian_filter)
00:00:14 1193MB | * “direct” of type RGB filtered by “/out/deadLeaf/arnold_ass:gaussian_filter” (gaussian_filter)
00:00:14 1193MB | * “indirect” of type RGB filtered by “/out/deadLeaf/arnold_ass:gaussian_filter” (gaussian_filter)
00:00:14 1193MB | * “diffuse” of type RGB filtered by “/out/deadLeaf/arnold_ass:gaussian_filter” (gaussian_filter)
00:00:14 1193MB | * “specular” of type RGB filtered by “/out/deadLeaf/arnold_ass:gaussian_filter” (gaussian_filter)
00:00:14 1193MB | * “Z” of type RGB filtered by “/out/deadLeaf/arnold_ass:gaussian_filter” (gaussian_filter)
00:00:14 1193MB | registered driver: “/out/deadLeaf/arnold_ass:jpeg:aov1INO_camera” (driver_jpeg)
00:00:14 1193MB | * “RGBA” of type RGBA filtered by “/out/deadLeaf/arnold_ass:gaussian_filter” (gaussian_filter)
00:00:14 1193MB | registered driver: “/out/deadLeaf/arnold_ass:deepexr:aov8INO_camera” (driver_deepexr)
00:00:14 1193MB | * deep samples for “deep” of type RGB filtered by “/out/deadLeaf/arnold_ass:gaussian_filter” (gaussian_filter)
00:00:14 1193MB | registered driver: “kick_display” (driver_kick)
00:00:14 1193MB | * “RGBA” of type RGBA filtered by “/out/deadLeaf/arnold_ass:gaussian_filter” (gaussian_filter)
00:00:14 1193MB | done preparing 12 AOVs for 10 outputs to 4 drivers (4 deep AOVs)
00:00:14 1193MB WARNING | /out/deadLeaf/arnold_ass:exrINO_camera: unable to convert to color space “linear_ACEScg”, ignoring color transform
00:00:15 1573MB | starting 32 bucket workers of size 64x64 …
00:00:30 2028MB ERROR | /out/deadLeaf/arnold_ass:deepexr:aov8INO_camera: failed deep OpenEXR2 write in scanline mode: called OpenEXROutput::write_deep_scanlines with non-matching DeepData size
00:00:30 2028MB WARNING | render aborted due to earlier errors
00:00:30 2028MB ERROR | /out/deadLeaf/arnold_ass:deepexr:aov8INO_camera: failed deep OpenEXR2 write in scanline mode: called OpenEXROutput::write_deep_scanlines with non-matching DeepData size
00:00:30 2028MB ERROR | /out/deadLeaf/arnold_ass:deepexr:aov8INO_camera: failed deep OpenEXR2 write in scanline mode: called OpenEXROutput::write_deep_scanlines with non-matching DeepData size
00:00:30 2028MB ERROR | /out/deadLeaf/arnold_ass:deepexr:aov8INO_camera: failed deep OpenEXR2 write in scanline mode: called OpenEXROutput::write_deep_scanlines with non-matching DeepData size
00:00:30 2028MB ERROR | /out/deadLeaf/arnold_ass:deepexr:aov8INO_camera: failed deep OpenEXR2 write in scanline mode: called OpenEXROutput::write_deep_scanlines with non-matching DeepData size
00:00:30 1739MB WARNING | render terminating early: received abort signal
00:00:30 1739MB | render done in 0:25.086
00:00:30 1739MB | writing file `/SERVERS/ISILON01/DINOSAURS/PROJECTS/EP01_S02_039/CG/houdini/images/deadLeaf_v001t04/EP01_S02_039_deadLeaf_v001t04.0202.exr'
00:00:35 1315MB | writing file `/SERVERS/ISILON01/DINOSAURS/PROJECTS/EP01_S02_039/CG/houdini/images/deadLeaf_v001t04/EP01_S02_039_deadLeaf_v001t04.0202.jpg'
00:00:35 1315MB | /out/deadLeaf/arnold_ass:deepexr:aov8INO_camera: closing file `/SERVERS/ISILON01/DINOSAURS/PROJECTS/EP01_S02_039/CG/houdini/images/deadLeaf_deep_v001t04/EP01_S02_039_deadLeaf_deep_v001t04.0202.exr'
00:00:35 1251MB | render done
00:00:35 1210MB |
00:00:35 1210MB | releasing resources
00:00:35 306MB | Arnold shutdown
Technical Discussion » packed disk primitive of VDB and bounding box for render time
- masa90210
hi,
When I choose File > Load > Packed Disk Primitive, the bounding box becomes huge because the density primitive (field) has a dummy point at (0,0,0).
I just want to make sure this huge bounding box does not impact render time at all. Or should I force the point position to move?
Technical Discussion » vel vs collisionvel , and staticObject node's collision and source volume node's collision
- masa90210
to Tamte,
Thanks for your explanation, and sorry for the late reply; I have been watching collision webinar videos on Vimeo related to this topic. It's very interesting that a Static Object can take the velocity of a deforming object, but it only works with some solvers, not all. So I am trying to figure out the best way to bring dynamic objects into each solver.
From your thread,
>> solver can enforce proper boundary velocity for vel field rather than 0…
OK, so you mean the handling of the collisionvel field differs from that of the vel field near the collision field, right? I am not 100% sure how much it changes the sim result, but I will keep it in mind. And also:
VDB from Polygons > Exterior Band Voxels: the default is 3, with the surface attribute v transferred to vel.
If I increase the band voxels, the vel field also gets thicker accordingly, and as a result the extent of the vel field in DOPs also changes/increases. That is a bit unfortunate, because we don't know the true setting for a proper dynamic velocity. (And the velocity fed from a Static Object could differ from the Source Volume's, even though both use the same animated object.)
>>while using DOP objects a…
I don't understand which nodes you mean by “DOP objects” in general. Do you mean the RBD Object node, or the RBD Packed Object node?
>>Collision Mask node has insane overhead with high resolution volume
Are you talking about Source Volume > Masks > DOP field to use as mask?
>> just have precached collision and colisionvel attributes sourced from SOPs directly iinto those fields…
I guess you are talking about manually building a Gas Match Field and a SOP Solver / Object Merge setup, instead of using a Source Volume node, right?
Technical Discussion » vel vs collisionvel , and staticObject node's collision and source volume node's collision
- masa90210
>>Static Object works for all solver. Source Volume works only for the solver the volume plug into.
yes
>>Static Object, if deforming/translating, takes the trajectory into account, while Source Volume is not.
I am not sure what trajectory means here, but I guess with Collision Detection > Use Surface Collisions it can track the collider's surface points; Use Volume Collisions won't do that.
>>Static Object takes physical properties ie bounce, friction, etc, while Source Volume doesn’t.
yes
>>vel is used for advection, pressure solve etc. collisionvel is only for collision detection.
I think collision detection should come just from the collision attribute (the surface); the collisionvel attribute shouldn't be necessary for detection, right? I feel collisionvel creates the bounce velocity, rather than being merged directly into the advection/pressure solve.
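On the last point: the usual role of a collider's velocity in collision response can be sketched as working in the collider's frame: reflect the relative velocity, then transform back. This is a generic rigid-collision sketch, not Houdini's actual solver code:

```python
# Sketch: collision response against a moving collider. The collider velocity
# (collisionvel-style data) shifts the frame in which the bounce happens.

def dot(a, b): return sum(x * y for x, y in zip(a, b))

def respond(vel, collider_vel, normal, bounce=1.0):
    """Reflect the velocity relative to the collider if moving into it."""
    rel = tuple(v - c for v, c in zip(vel, collider_vel))
    vn = dot(rel, normal)
    if vn >= 0.0:                      # separating: no response needed
        return vel
    # remove the into-surface component (scaled by bounce), restore the frame
    rel = tuple(r - (1.0 + bounce) * vn * n for r, n in zip(rel, normal))
    return tuple(r + c for r, c in zip(rel, collider_vel))

# A particle at rest hit by a collider moving +x at speed 1 (normal = +x):
print(respond((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (1.0, 0.0, 0.0)))
# -> (2.0, 0.0, 0.0): the moving collider imparts velocity, which a static
#    collision field alone (collision without collisionvel) could not do.
```

This illustrates why a collider's velocity matters beyond detection: without it, a stationary particle touching a moving wall would simply stay put.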
Technical Discussion » vel vs collisionvel , and staticObject node's collision and source volume node's collision
- masa90210
Hi, I have a question about velocity and collision in DOP sims, especially FLIP and pyro.
When bringing a dynamic collision object into a DOP network, there are two ways: the Static Object node and the Source Volume node.
1 - I wonder exactly how Houdini handles vel vs collisionvel, and what the difference is. I know the vel field can come from noise, an emitter, etc., and collisionvel comes from a moving collider. But what happens if we feed collisionvel in as plain vel, or the other way around? Does collisionvel get special treatment during the solve?
2 - There are also two ways to feed a collision surface: the Static Object node and the Source Volume node. People say the Static Object's collision gives a more accurate result; I wonder how Houdini handles them differently.
I would like technical answers rather than instructions on how to create/use the nodes, thanks!
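For question 1, a common mental model (a sketch of typical grid-solver boundary handling, not a statement of Houdini's exact implementation) is: vel is the field the solver advects and pressure-projects, while collisionvel is only consulted inside or near the collision field to set the boundary condition:

```python
# Sketch: enforcing a moving-collider boundary condition on a 1D vel array.
# collision[i] < 0 marks voxels inside the collider (SDF convention).

def enforce_boundary(vel, collision, collisionvel):
    """Inside the collider, overwrite vel with the collider's own velocity;
    everywhere else, vel is left to advection / the pressure solve."""
    return [cv if sdf < 0.0 else v
            for v, sdf, cv in zip(vel, collision, collisionvel)]

vel          = [0.5, 0.5, 0.5, 0.5]
collision    = [1.0, 0.2, -0.3, -1.0]   # last two voxels are inside
collisionvel = [0.0, 0.0, 2.0, 2.0]     # collider moving at 2.0

print(enforce_boundary(vel, collision, collisionvel))
# -> [0.5, 0.5, 2.0, 2.0]
```

Under this model, feeding collider motion in as plain vel would push fluid everywhere the field is nonzero, whereas collisionvel only constrains voxels flagged by the collision field.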
Houdini Learning Materials » ocean spectrum tiling option in ocean tools
- masa90210
Oops, actually I found a solution.
If I set the same value on Grid > Size and Ocean Spectrum > Grid Size, the ocean becomes tileable!
Edited by masa90210 - April 24, 2017 19:03:20
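The fix above makes sense because an FFT ocean is periodic with period equal to the spectrum's grid size: the displacement is a sum of sinusoids whose wavelengths all divide that size. A simplified sketch of that periodicity (not the actual Houdini spectrum):

```python
# Sketch: an FFT-style ocean displacement is a sum of waves with wavelengths
# L, L/2, L/3, ... so it repeats exactly every L units. If the mesh is also
# L wide, copies placed side by side line up seamlessly.
import math

L = 20.0  # spectrum grid size (and, for tiling, the mesh size too)

def displacement(x):
    return sum(math.sin(2 * math.pi * k * x / L) / k for k in range(1, 6))

x = 3.7
print(abs(displacement(x) - displacement(x + L)) < 1e-9)     # True: tiles at L
print(abs(displacement(x) - displacement(x + 13.0)) < 1e-9)  # False otherwise
```

So a 20 x 20 grid with a 20-unit spectrum grid size tiles, while any mismatch between the two sizes samples the wave field at a non-periodic offset and the edges diverge.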
Houdini Learning Materials » ocean spectrum tiling option in ocean tools
- masa90210
Hi, I recently started using Houdini 16.
The ocean spectrum in H15 was tileable. For instance, make a 20 x 20 grid, apply the ocean displacement, then copy it and move it by x = 20: the two displaced grids lined up seamlessly side by side.
In H16 they no longer match; the displacement along the edges doesn't line up. Is there an option to get this back?
Houdini Lounge » phenomena tutorials
- masa90210
Hi, I am a Houdini beginner with some Maya experience, and I want to become a Houdini animator.
Does anyone know of tutorials for natural-phenomena animation?
I know we can learn some basics from the Gnomon DVDs, 3D Buzz videos, Sidney VisLab videos, and the examples in the Houdini help.
However, I am particularly looking for tornado, ocean wave, fire, and explosion tutorials.
Thanks!