Technical Discussion » Rendering with Pyro Bake Volume (using pyro burst set up)
Unpacked just means that it is no longer delayed load, so it's burying all the data into the IFD per frame, instead of simply
referring to its cache location on disk. What it could be is that the new pyro bake is assigning the material on the SOP, which
means it is not an object-level render material assignment. The default Mantra behaviour is to only include referenced shaders,
which can sometimes miss shaders assigned prior to things being packed/delayed loaded.
Try flicking "Declare Materials" over to "Save All Materials" and see if that solves it for you. The only other thing it could
be is a weird bug, as the scattering field is used in the new pyro shader to drive the look, but I've never seen a field not be
available simply because it was set to packed disk.
Houdini Lounge » Any rumours of Houdini 19?
Fluids are already very, very controllable. Just look at the ability to use underlying ocean meshes to drive
FLIP sims, custom spline-based hero waves travelling through, custom vel fields, the works.
I think maybe you need to watch the Masterclass video on the most recent FLIP and ocean/fluid changes since
H16+.
Vellum and RBD are two totally different systems. One is XPBD, the other is a rigid body framework. They will never
talk to each other the way you think they should. SideFX added a stiff constraint in 18.5 to have objects behave essentially
like RBDs in Vellum, so check it out.
Karma will likely never be on par with what Mantra was. Mantra had 25+ years of ongoing integration, and where did it end up?
A highly flexible mess that can do anything, but is too slow to be useful at all. Karma was a proof of concept to highlight
Solaris and USD; it has a long, long way to go for true feature parity and speed.
Brian, regarding render stats, you already have access to those at the required verbosity level.
L
Houdini Lounge » Any rumours of Houdini 19?
Even as a solo operator, most moving parts should tend to stay reasonably consistent.
Paying monthly charges on 1000 nodes isn't exactly cheap, is it? So there is room for both.
When I say you, it's the collective you. Even small commercials are usually not a place to go willy-nilly pulling
up stumps to install a new renderer because of feature X. But I'm digressing.
My only point is that both options are valid, and if push came to shove, perpetual vs subscription is something that
needs to be weighed up properly, by company X and solo operator Y alike.
Cheers
L
Houdini Lounge » Any rumours of Houdini 19?
I don't agree. There are plenty of situations where a rock-solid version of renderer X at a given point in time is locked in for
a year or more, serving the purposes of the company and its projects. A perpetual license that continues
to work with builds up to its subscription date means the company can continue to render for many months or years to come,
without needing to spend any more money on subscriptions to keep rendering frames.
Ideally you offer both. And if a lot of you take a good look at the type of things you are rendering, the tech that existed
2-3 years ago more than covers what you need. CPU cores keep increasing, so even a two-year-old license of something that works
perfectly well for company X and its projects still brings speed improvements.
There's a misplaced version of reality where you think that companies pivot on things as central as renderers like they are
ordering lunch. The renderer is a core part of the pipeline, but so are all the other pieces. I think subscription-only is
a stupid method of licensing. Offering both options covers both scenarios.
L
Houdini Lounge » Any rumours of Houdini 19?
Defaulting to VEX is a bit silly. I routinely come across complicated wrangles written by artists
who have simply not bothered to read up on any of the new SOP-based nodes added from H15 onwards.
The vast majority of the time, these SOP nodes do the exact functions those artists have held close to their hearts forever,
but often extend them, are more flexible, and have an actual usable UI.
I cannot stress enough how important UX is to the usage of tools. Monolithic wrangles with awful UI exposure end
up routinely ignored due to how esoteric they can be, depending on the artist who coded them.
Wrangles and VEX have their place, no question, but anyone suggesting things should lean towards VEX
at the expense of all the newer nodes and their much better UIs is missing the bigger picture entirely.
L
Houdini Lounge » Never working pyro collisions still an issue ???
The Volume Source as collision, made from a VDB from Polygons, is my preferred method.
Object > Point Velocity SOP > VDB from Polygons: name the SDF "collision", and create a new field called
"collisionvel" that uses the point v you calculated earlier.
The main advantage of piping your collision in through this is that you can keyframe the vel multiplier
at any point during the sim, to exaggerate or play down the influence of the collider.
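If you'd rather compute the point v yourself instead of using the Point Velocity SOP, here's a minimal
Point Wrangle sketch. It assumes the previous frame's geometry is wired into the wrangle's second input
(e.g. via a Time Shift at $F-1); the attribute names are just the usual defaults, not anything special:

    // Point Wrangle, run over points. A rough stand-in for the Point
    // Velocity SOP: assumes matching point count/order on both inputs,
    // with the previous frame's geometry in input 2 (index 1).
    vector prevP = point(1, "P", @ptnum);  // position on the previous frame
    v@v = (@P - prevP) / @TimeInc;         // backward-difference velocity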
L
Technical Discussion » Iron cores and hatred, I want an opinion.
You've got some fundamental issues in how you think a computer works, and how simulation and rendering work.
Let's take a pyro smoke sim. The more cores you have, the faster it will simulate, no question; this is easy to
prove. Now if you have a powerful GPU, that is only going to benefit your pyro sim IF all the nodes in your setup
for the sim are OpenCL capable. If they are, great. You can flick over to OpenCL and your sim will potentially be
even quicker than your CPU one. But you will hit GPU memory limits very quickly, so GPU sims are not a great idea for huge
setups. The speed of your hard disk will come into play purely for writing the cache data to disk, which happens in the
background while simming. Unless your sim is generating huge caches you won't see much improvement at all, as 50 MB being written
to disk on an old spinning drive vs an SSD isn't going to be that different in terms of speed.
Where your hard disk will come into play more is if you are pulling in heavy collision geometry, or some heavy velocity fields
that you have stored on disk, and want them to interact with your pyro sim. In that situation the sim will be waiting for the data
to be read in, so if it's huge, an SSD will totally make a difference.
RAM-wise, it's been many years since the speed of RAM was even a remote bottleneck. All that really matters with RAM is that you
have as much as you can get. The amount will do nothing for speed; it will only enable you to sim larger, more memory-hungry
simulations.
There are plenty of examples online of the very real differences in sim times between upgraded systems, so your assertion is wrong.
If I had to pick, I would say core count, then RAM, and depending on how big the data you generate or pull in is, I'd add hard
disk speed as well.
GPU will only benefit you in situations that take advantage of it, and only if your sim/render fits into VRAM.
Cheers
L
Houdini Lounge » Any rumours of Houdini 19?
Ahh,
I know we are further derailing from the topic, but here are my thoughts.
As tempting as it is to have all of your scene construction in one monolithic hip file, it's a really bad idea
for a couple of reasons.
One of them is the external dependency, of course, but what I mainly find is that troubleshooting rendering issues
is made a lot easier when your lighting scene is purely that: just imported caches, shaders, lights. No wondering
if some component of your hip is being caught during scene evaluation and breaking things. It also makes packaging up
an issue for the RS devs to evaluate much easier when it's just a cache + lighting setup. Eh, conversation for another time!
Cheers
L
Houdini Lounge » Any rumours of Houdini 19?
keyframe
I can't speak for anyone else, but I can promise you that for ME, not being able to access daily builds due to this external dependency problem (which I created, admittedly) ABSOLUTELY lowers the value of the support that I'm able to take advantage of.
I used to live by the daily builds... now, having switched to Redshift, it's production builds only, for the most part.
The cost of support remained the same, but the frequency at which I'm able to use it dropped off significantly.
G
Not sure what kind of crazy external dependency you have created, but it doesn't sound ideal at all.
I don't really see much of a difference in requirements for commercials vs features. We are all generating the same bits
and pieces, so I'm curious why you need to live on daily builds, etc.
As mentioned above, why not lock off the lighting/rendering version of Houdini and keep it tied to the third-party engine, or
even good ole Mantra?
Not trying to derail things, but it seems like your workflow has bound you up in this diminishing-returns scenario.
Houdini Lounge » Any rumours of Houdini 19?
keyframe
The trouble with third-party renderers is that it actually devalues Houdini support.
There is tremendous value in being able to roll up to the latest build when a critical fix becomes available. That's not always possible when the renderer is not built against that build.
G
PS: Worth pointing out that Juanjo has been stellar in that dept, but we are ultimately at the mercy of another party's schedule.
I don't think it devalues it at all. It highlights Houdini's total lack of a stable API. Plugins needing to be recompiled for every production build (or daily build, if that's your jam) is just a fact of life.
Other third-party engines could take a leaf out of 3Delight's book and make their plugins open source and freely compilable.
Whatever Houdini build I want to use, I just grab the 3delight plugin and compile it against that build in 5 minutes. Done.
3delight houdini plugin [gitlab.com]
Houdini Indie and Apprentice » Advice about what to practice in Houdini
I would suggest only diving into VEX if you want to solve specific problems.
Otherwise you will never retain the syntax. Matt's tokeru VEX material is more the all-round type
of thing you will use daily, with only a bit of it being one-off.
Yunichiro's VEX material is higher order, more esoteric. I'd recommend covering all of Matt's material
before venturing into that area, and then only if that type of thing really interests you.
Pyro, RBD and Vellum are all good ones; they can get very deep and very fiddly, so take it slow!
I think making sure you have an overall solid base of "how" Houdini works, and of attributes and their roles
in controlling things, is a good spot to work towards. To that end, again, Matt's tokeru material mostly
works on spheres, pig heads, grids, and other simple geo; it will give you a solid base with zero distractions
in which to learn.
Technical Discussion » alembic export of bgeo cache of rbd sim, about 37k pieces.
Hey Mate,
This sort of thing has been a pain for a long time, but here are some thoughts and options.
You can use the Alembic ROP, set to "Transform Geometry", and have a path attribute on the fractured
pieces, which is used to "Build Hierarchy From Attribute" on the Hierarchy tab.
When you cache this out as a single Alembic, it will only store the static fractured geometry and transform
the pieces, which keeps the file size tiny.
You will also get linearly interpolated deformation blur with this method.
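For the path attribute itself, a minimal Primitive Wrangle sketch could look like the following. It assumes
each piece already carries the usual s@name attribute from the fracture/Assemble step, and the "/rbd/" root
is an arbitrary choice, not a required value:

    // Primitive Wrangle: build a hierarchy path per fractured piece for
    // the Alembic ROP's "Build Hierarchy From Attribute" option.
    // Assumes s@name exists ("piece0", "piece1", ...).
    s@path = "/rbd/" + s@name;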
The other method is to use USD. It has built-in support for transforming the geometry by an incoming point cache.
This has a lot of benefits: fast I/O, the ability to change the fractured geo details without a re-cache, and a bunch
more. But it requires the target renderer to either natively support USD (Arnold, Karma) or have some procedural translation
in the middle.
Attached is the Alembic method above, plus a video showing a 10,000-piece high-poly RBD being scrubbed in real time
as USD, loaded into Maya in 1.3 seconds via the Multiverse USD tool.
USD instancer RBD [vimeo.com]
Cheers
Lewis
Houdini Indie and Apprentice » caching substeps
That method is not as flexible in terms of building tools around caching, and it
requires additional nodes to transform time back; the method I showed above only
requires a relative reference to the file cache path.
One additional note: $FF should never be used to read anything back in at subframe accuracy, as it develops
rounding/precision errors as frame numbers increase. The more accurate expression is $T*$FPS+1.
Houdini Indie and Apprentice » caching substeps
Houdini absolutely sucks when it comes to built-in subframe caching. Why even have the option to cache subframes
on the File Cache when you cannot read them back in?! It's mind-boggling. Even Maya MDD caches have handled substeps
with no problems for 10+ years; still a bit shocked SideFX hasn't seen the need to fix this.
Rant over. Here is the solution for you: you set your subframe interval and you're done.
Houdini Lounge » Is there any reason to switch to Karma? Is unbelievably slow
Daryl Dunlap
A few moments later....
Deadline: How ILM’s Stagecraft Team Is Pushing The Boundaries Of VFX And “Moving The Tech Forward Right Now”.
https://deadline.com/2021/07/stagecraft-ilm-disney-plus-mandalorian-vfx-cannes-magazine-disruptor-1234787530/ [deadline.com]
TL;DR:
Basically, once The Mandalorian showed the way, everyone is building game-engine-powered studios all over the world.
For specific elements. Zero of it is being used for anything like proper destruction, etc.
Technical Discussion » How to assign materials to Alembic imports
Use the Alembic Group SOP; there you will get a nice hierarchical view of your Alembic and can easily make groups
from whatever you select in the list. Way better than messing about with other methods. Nice and clean.