A layout gallery is just a file on disk containing any number of assets you add to it.
The one you see happens to be the "example" file Houdini ships with. It's read-only and only meant to give you something to play with. (See the text there: Read Only Source: C:/PROGRA~1/...)
If you want to put your own assets in, you need to create a new gallery. Click the cogwheel right next to that text and, when the menu pops up, click Create New Asset Database File..., then save that .db file somewhere on disk.
Found 22 posts.
Solaris and Karma » layout asset gallery is locked? help please
- alexandru_p
- 22 posts
- Offline
Solaris and Karma » Component Geometry LOP, file inputs
File mode is just a shortcut for a File SOP. It expects something you'd load with that, like a bgeo.
Component Builder is a way of taking non-USD geometry and "packing" it into a USD asset (together with materials, variants, etc.).
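As a rough idea of what that packing produces, a component asset layer looks something like this in USD ASCII (the names are made up and the structure is simplified):

```usda
#usda 1.0
(
    defaultPrim = "myasset"
)

def Xform "myasset" (
    kind = "component"
    prepend variantSets = "mtl"
    variants = {
        string mtl = "default"
    }
)
{
    variantSet "mtl" = {
        "default" {
            # geometry and material bindings for this variant live here
        }
    }
}
```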
Solaris and Karma » Use a for each to send each variant into a SOP Modify LOP?
Using the first input of the foreach instead of the second means the original geometry is also used as an input stage. As a result, the layer produced by the foreach doesn't contain just the "over" with the modified geo, but the original layer as well.
While this is a totally valid approach, we usually like to keep things separate, because it gives you the flexibility in the final stage to load/unload layers at will.
Usually these things happen in different Houdini projects handled by different departments. The initial static asset would be published by the lookdev department; animation would read those assets but write only the changes they bring to the asset into their own USD file. Finally, when it comes to lighting for instance, the artist can start working with the static assets, even if the animation is not there yet, and load it in later at their own convenience.
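To make that separation concrete, a shot layer in this kind of setup can be little more than a sublayer list (the file names here are made up); in USD, earlier sublayers are stronger than later ones:

```usda
#usda 1.0
(
    subLayers = [
        @lighting.usda@,
        @animation.usda@,
        @asset_lookdev.usda@
    ]
)
```

Unloading a department's work then just means removing (or muting) its entry in the list.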
Solaris and Karma » Use a for each to send each variant into a SOP Modify LOP?
You're 90% there already! Take a look at the attachment. Is this what you were looking for?
Edited by alexandru_p - May 25, 2023 19:36:19
Solaris and Karma » Variant workflow including lights
traileverse
Yes, using the path attribute with 'proxy' and 'render' won't do anything in terms of purposes as far as I know, it's just saying where the prim will live in the scene graph, you'd still need to use the configure prim LOPS to setup the purposes, but it saves you needing 2 SOP creates to manage.
Yes, that's right. I misunderstood what you were saying. I guess it's more of a preference thing here: if you wish to have both proxy and render geometry saved in a single layer (a single USD file), then you could just merge them with the proper paths and use a single sopcreate/sopimport... If you want separate geo files, then use two sopcreates/sopimports...
Solaris and Karma » Variant workflow including lights
Definitely works with another SOPcreate for proxy and 2 configureprimitives. I don't think it's meant to automatically pick up words like "proxy" and "render" from the path attribute, but I might be wrong. Either way, I'd still set it manually with configureprimitives, just for my sanity.
Now, about light instancing: I had no idea that if you asked for it to be instanceable, it wouldn't copy the light as well. I tried (in the attached file) to bring everything in as a reference and then mark the geometry primitives as instanceable. Ideally, you would want your geometry to stay instanceable because it should be more efficient.
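For reference, this is roughly what instanceable references look like in USD ASCII (paths made up): both prims reference the same asset file and share the composed geometry, while each instance root can still carry its own transform:

```usda
#usda 1.0

def Xform "lamp_01" (
    instanceable = true
    prepend references = @lightbulb_asset.usda@
)
{
}

def Xform "lamp_02" (
    instanceable = true
    prepend references = @lightbulb_asset.usda@
)
{
    double3 xformOp:translate = (2, 0, 0)
    uniform token[] xformOpOrder = ["xformOp:translate"]
}
```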
Solaris and Karma » Variant workflow including lights
So one "asset" would be a lightbulb geometry + its material + 1 point light?
In that case, I'm not sure you could use the component builder... I might be wrong, but I'd personally do it "by hand".
I attached a sample hip file with how I'd do that. Hope it helps.
PDG/TOPs » Find and delete any duplicate work items
Depends on how many attributes there are. Instead of holding a single value, you could hold a tuple containing each attribute.
added_items = []
for item in upstream_items:
    value = (item.stringAttribValue("stringA"),
             item.floatAttribValue("floatB"),
             item.intAttribValue("intC"),
             item.intAttribValue("intD"))
    if value not in added_items:
        new_item = item_holder.addWorkItem(parent=item)
        added_items.append(value)
PDG/TOPs » Find and delete any duplicate work items
added_items = set()
for item in upstream_items:
    value = item.intAttribValue("attrib_to_check")
    if value not in added_items:
        new_item = item_holder.addWorkItem(parent=item)
        added_items.add(value)
Depending on what "duplicate work item" means to you, you'd have to change "attrib_to_check", and also swap intAttribValue for the right accessor if the attribute is not an int but something else, like a float or string.
PDG/TOPs » ROP Composite Output TOP unexpected result in-process?
Hi.
I was trying to automate some texture shrinking using a TOPnet. I used a python script to find textures in a material library, a wedge to create tasks and attributes (original path, filename, etc.) for each file, and then I split everything into 2 ROP Composite outputs. One handles the half res, the other the quarter res. Inside, just a file node, nothing fancy.
Using the Out-of-Process cook type does exactly what it's supposed to do, but switching to In-Process makes every file the same (it looks like the pdg attribute inside the copnet doesn't update between wedges). Is this normal? Am I missing something?
Thanks.
Solaris and Karma » Optimal alembic to USD?
Just keeping the Alembics packed and selecting P, bounds and transforms in the sopimport does the trick. Thanks again for the suggestions, everyone!
Solaris and Karma » Make usd file loop ?
Check out Matt Estela's explanation:
https://www.tokeru.com/cgwiki/HoudiniLops#Looping_clips_and_vdb_sequences
Solaris and Karma » Loading 100 usd files
In my experience, once cached on disk, Solaris loads around the current frame (plus or minus one frame) for motion blur purposes, not the whole animation. I might be wrong, but I see no reason for it to load "the whole thing".
Other notes, even though nobody asked
Make sure you're only writing the P (position) attribute to disk as an animation layer over the static cloth (no need for other attributes like topology, uvs, etc.). It saves a lot of space.
Not sure how your shots look, but for the distant ships I'd just reuse 3-4-5 caches with time offsets (if it's just the "default" sail in the wind).
Also, what render engine?
Solaris and Karma » Optimal alembic to USD?
antc
When you load an abc file into lops (via a sublayer or reference), it does convert to USD. In other words, there's no native alembic support in lops; rather, the abc conversion is handled behind the scenes by USD via a plugin. So once you've loaded your abc file into lops, just save it out as a usd file. If you want to do that from the command line you can run "usdcat -o myfile.usd myfile.abc". If the data in myfile.abc is sparse, the data in myfile.usd will also be sparse.
Yeah, but that is still limiting. As I said, depending on the origin of the incoming alembic, you might need to edit the path attribute to fit your needs (and by that I don't mean just grafting it somewhere, I mean custom logic to alter the inner hierarchy).
I'll keep trying to fiddle around with the sopimport, like jsmack suggested. Last time I tried to do that, it would just do nothing (not move the geometry at all), but maybe I was missing something.
Solaris and Karma » Optimal alembic to USD?
Well, that wasn't what I was going for. That just brings in an alembic from disk. I don't want to keep using the abc, I want to "translate" its data into USD so that I can completely discard the abc file.
Again, a reference or a sublayer LOP is meant to bring in USD files. You cannot select which attributes you want to keep or discard, which ones should have time samples and which shouldn't, etc.
Let's say you have a human already written to disk as USD. There is already a static mesh (the rest pose) inside that file, with attributes like topology, P, N, uv -> st, material bindings, etc.
Now, if you get an alembic with the very same human animated, all you need to write on disk from the abc file are P (the new point positions) and bounds. This will give you an "animation layer" that you can load over your "original asset" that only contains the necessary data to move your human from the rest pose.
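To illustrate, such an animation layer can be tiny; in USD ASCII it's basically just overs with time-sampled P and extent (the prim paths and values below are made up):

```usda
#usda 1.0
(
    startTimeCode = 1001
    endTimeCode = 1002
)

over "human"
{
    over "geo"
    {
        over "body"
        {
            point3f[] points.timeSamples = {
                1001: [(0, 0, 0), (1, 0, 0), (0, 1, 0)],
                1002: [(0, 0.1, 0), (1, 0.1, 0), (0, 1.1, 0)],
            }
            float3[] extent.timeSamples = {
                1001: [(0, 0, 0), (1, 1, 0)],
                1002: [(0, 0.1, 0), (1, 1.1, 0)],
            }
        }
    }
}
```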
While position caches are the solution for deforming geometry, for transforming geometry you could get away with just writing out transforms for each primitive, which is much faster and way more efficient.
So, yeah, that was my question. How can I take the data read from an abc file in SOPs and feed it to a SOPimport to just get xforms on non-deformable prims?
Solaris and Karma » Optimal alembic to USD?
What LOP (node) should I use for that? SOPimport? Asking because SOPcreate is a wrapper around SOPimport; they have basically the same options.
The only thing I change in SOPs after loading the alembic is the path attribute with a wrangle, to match the path of the static asset the animation is meant to overwrite.
Solaris and Karma » Optimal alembic to USD?
Hi. I noticed that when you load an alembic in Houdini, you can choose to load only Deforming primitives or only Transforming primitives.
At the moment I write animation to USD as point caches (sopcreate > Import from SOPs > Primitive Definition > Other Primitives = Overlay), but this is really not optimal for non-deforming meshes, where you could get away with writing just an xform.
Unfortunately, leaving the geo as Packed Alembics and switching the dropdown to Overlay Transforms doesn't seem to work, and I couldn't find a way to transfer the intrinsic prim transforms to USD.
Anyone able to help with this one?
Thanks.
Solaris and Karma » Localize assets Output Processor doesn't work with UDIMs?
Question to my more python-savvy colleagues.
I noticed the "Copy All Assets to Referencing Layer Directory" output processor from the USD ROP works just fine bringing in textures where the file name is actually there on disk, but as soon as I have a template in the file name like "<UDIM>", it just skips the file.
Looking into the python code of the output processor, it all makes sense, but I don't really know how to change it in order to make it copy textures with UDIMs.
Anyone bumped into this already or has a simple fix for it?
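For what it's worth, here is a minimal stand-alone sketch (not the actual output processor code) of the direction I'd try: expand the <UDIM> token to the real files on disk before copying. The function name and the 4-digit-tile assumption are mine:

```python
import glob
import os
import shutil

def copy_expanding_udims(asset_path, dest_dir):
    """Copy asset_path into dest_dir, expanding a <UDIM> token first.

    Assumes UDIM tiles are 4-digit numbers (e.g. 1001, 1012).
    """
    if "<UDIM>" in asset_path:
        # Turn the token into a glob pattern matching any 4-digit tile.
        pattern = asset_path.replace("<UDIM>", "[1-9][0-9][0-9][0-9]")
        sources = sorted(glob.glob(pattern))
    else:
        sources = [asset_path]
    copied = []
    for src in sources:
        dst = os.path.join(dest_dir, os.path.basename(src))
        shutil.copy(src, dst)
        copied.append(dst)
    return copied
```

The real fix would be doing something equivalent inside the output processor's copy step, keeping the templated <UDIM> path in the written USD while copying the expanded files.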
Thanks.
Edited by alexandru_p - Feb. 14, 2023 04:39:59
Solaris and Karma » World Space Displacement with MaterialX?
Hi everyone.
There is a neat feature of the Principled Shader which lets you input a vector displacement and switch it to world space.
I was wondering if the same can be done with MaterialX.
The MtlX displacement input can be a float (so it displaces along the normal) or a vector, in which case it looks like the displacement happens in tangent space.
Now, my simple understanding of the relationship is that if we multiply the tangent-space vector by a matrix formed from the tangent, bitangent and normal, we get the world-space vector. Since we have the world-space direction and all of the above-mentioned vectors as handy MtlX nodes (tangent, bitangent, normal), I thought it was as simple as making a matrix out of them, inverting it (also an existing node), and then multiplying the world-space vector by the resulting matrix.
And here is where I got stuck. I can't seem to find a way to build a matrix out of these 3 vectors, nor do I know if there is a node to multiply a vector by a 3x3 matrix (there is a mtlxtransformmatrix node, but it doesn't seem to work).
So, is there a way to do this at the moment? Is my logic wrong or should the math work fine?
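Setting the MtlX nodes aside for a second, this is the math I have in mind, sketched in plain Python (function names are mine). Since the TBN frame is orthonormal, the inverse of the matrix is just its transpose, so going world to tangent is three dot products:

```python
def tangent_to_world(v, T, B, N):
    # v holds (tangent, bitangent, normal) components; T, B, N are the
    # world-space frame vectors. World vector = v[0]*T + v[1]*B + v[2]*N.
    return tuple(v[0] * T[i] + v[1] * B[i] + v[2] * N[i] for i in range(3))

def world_to_tangent(v, T, B, N):
    # For an orthonormal frame, the inverse of the TBN matrix is its
    # transpose, so each component is a dot product with a basis vector.
    dot = lambda a, b: sum(a[i] * b[i] for i in range(3))
    return (dot(v, T), dot(v, B), dot(v, N))
```

So the question boils down to expressing world_to_tangent with the available MtlX nodes.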
Edited by alexandru_p - Jan. 29, 2023 19:06:57
Houdini Learning Materials » How to handle difficult collision geo in DOPs?
Hi everyone,
I'm quite a beginner in Houdini, so maybe someone can help me with this issue. I'm trying to create an erupting volcano with FLIP fluids (the lava, not the smoke), so I modeled some geometry which I'm trying to use as a collision object in the DOP network.
Now, when I'm trying to use the fluidsource SOP I get some weird artifacts and can't really manage to get a decent collision SDF volume.
One thing I've noticed is that the new VDB volume is really fast and gives me a very good result, but even after I've used vdbconvert to convert my VDB to a default houdini volume I couldn't use it as a collision in DOPs using the sourcevolume DOP. (Please check my attached pics.)
Are there any tips on how I should handle this problem? Is there a way to make this VDB volume usable by DOPs, or are there some parameters I can tweak to make the fluidsource SOP output a good collision volume?
Thank you.