Hi,
I don't usually use shader displacement on pyro stuff, as I've found it often doesn't work out as well in most cases compared to tweaking the solver parameters, up-resing, vorticles and field ramps, but… I have an old project to resurrect for a sequence that used the old pyro shaders from H13, where I did use displacement in the mantra render, and it has to match what I did before. As shown below, where there is a separate displacement setup inside the shader, I make VDB fields at SOP level for the gradient direction and some advected particle normals as a noise pattern (and so avoid rest1/rest2 positions, which again I find only really work in some cases).
However, the pyro2 shader does not have a separate displacement setup in its parameters, and inside the asset it still uses the pyroDisplace and pyroFieldVop VOPs, but they don't seem to behave the same way. I also tried just adding a separate flownoise layer inside the unlocked asset, but pyro2 displacement doesn't seem to react to any custom fields. It's so odd because they look like the same VOP nodes, yet when noise is activated on the VOP they only seem to recognise the built-in solver fields like heat, temperature, etc. Even if I add another level of displacement onto the final shader output, it just seems to get ignored at render time.
Am I missing something? Is it a bug? How are you adding custom displacement fields in pyro2?
thanks,
frank
Found 61 posts.
Technical Discussion » Where did displacement go with pyro2 shader?
- frankvw
- 61 posts
- Offline
Technical Discussion » Getting sub material groups into a loop in python
Hi,
I need to manipulate some group names brought in via Alembic: appending a ‘_’ to each group name to match some previously defined material groups. I can't just change the material node names, as this would break some pipeline dependencies elsewhere.
So I can do this for a single tab/folder material with:
for node in hou.selectedNodes():
    # only touch nodes that actually have a group1 parm with a value
    if node.parm('group1') and node.evalParm('group1'):
        old = node.evalParm('group1')
        node.parm('group1').set('_' + old)
But when there are sub-material tabs, the group1 material group just evaluates in the first tab as
'____________________________________________name'
i.e., it accumulates one ‘_’ per sub-material group1 tab and applies them all to only the first group1 tab. Obviously that's because I'm not looping over each sub-material tab/folder in turn. I've turned to evaluating the parmTemplateGroup, but then I just get <parmTemplateGroup> returned for each 'group1' field, not the rename. What am I missing?
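In the meantime, the renaming logic itself is easy to prototype outside hou. A hedged sketch, using a plain dict to stand in for the node's group parms; the parm names group1, group2, etc. are my assumption about what the sub-material tabs create, so check against your actual node:

```python
# Stand-in for a material node's group parms; I'm assuming each sub-material
# tab adds another groupN parm (not verified here).
parms = {"group1": "body", "group2": "head", "group3": "arms"}

# Prepend '_' once to each tab's own group value, instead of letting
# the underscores all accumulate onto group1
renamed = {name: "_" + value
           for name, value in parms.items()
           if name.startswith("group")}

print(renamed["group1"])  # _body
```

The same startswith filter inside the node.parms() loop should hit every tab's group parm in turn.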
Cheers
Technical Discussion » Larger Python Functions in Parm Expression? (Object Merge)
Hi,
So I get that syntax differs a bit between python parm expressions and scripts or a python SOP, but I am a bit stumped on this one. I have some scripts I use to automate Alembic and Collada imports. One particular function filters the contents of Collada geo-type nodes, and whether those nodes are visible or not. A simple example would be along the lines of:
for node in hou.selectedNodes():
    if node.type().name() == 'geo':
        # prefix the name so the meshes can be matched later with a wildcard
        node.setName('Mesh_' + node.name(), unique_name=True)
I am appending names, in this example to geo-type nodes, so that I can merge various sets in another node using object_merge nodes. So, in this example, I would set the Object Merge parm to the wildcard ‘Mesh_*’ and import the various geo meshes I want.
It works fine, but it would be super convenient if I could instead set a longer function expression as a python parm to filter the merge elements. I haven't been able to work that out as an expression, though. Is it possible to set a python parm expression that, in pseudocode, will do this?
i.e.,
for node in nodes:
    if node.type().name() == 'geo':
        if node.isVisible():
            merge node
So far, I have been working along the lines of using my existing scripts to put the filtered objects into a list and then setting the parm to a space-separated string of object names with their paths. Obviously, though, that is kinda hard-coded at the time it runs. I wonder if there is a way for the parm to evaluate the merge filter itself using functions directly (instead of just setting the merge field to a string like ‘/obj/object/Mesh1 /obj/object/Mesh2 /obj/object/Mesh3 /obj/object/Mesh4’ etc., or ‘Mesh*’).
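For what it's worth, the string-building half of it is trivial to sketch outside Houdini. Here's a hou-free mock of the filter I'd want the expression to run; the tuples stand in for node objects, which in a real session would come from something like hou.node('/obj').children():

```python
# (path, type name, visible) tuples standing in for OBJ nodes
nodes = [("/obj/char/Mesh_head", "geo", True),
         ("/obj/cam1", "cam", True),
         ("/obj/char/Mesh_body", "geo", True),
         ("/obj/char/Mesh_hidden", "geo", False)]

# Keep visible geo nodes and join into the space-separated path list
# that an Object Merge parm expects
merge_paths = " ".join(path for path, ntype, visible in nodes
                       if ntype == "geo" and visible)

print(merge_paths)  # /obj/char/Mesh_head /obj/char/Mesh_body
```

A python parm expression is just a function body returning a string, so in principle this whole filter could live in the parm itself and re-evaluate on cook.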
Thanks!
Edited by frankvw - March 26, 2018 11:20:11
Technical Discussion » Nested pack geo - per instance
Hi,
I don't know if this is an obvious question, but it's the first time I have actually encountered it. Most of the time I've used style sheets to edit basic existing material params on crowds or debris groups, but that's normally my override on an existing group.
I was planning to use nested unpacking to vary textures randomly on objects, but then realised that my style sheets work on groups, not the instances. Of course it's straightforward with copy stamping or point instances and a VEX wrangle, but how do I go about that in JSON? The way I hacked around it was to make separate “entire” groups before packing, covering the entire objects with different names, then reference those, but it seems very inefficient and would quickly become cumbersome for a large number of random texture variations. Before I go down the research route, I just wanted to check I'm not missing something really obvious with style sheets (like I say, I've mostly just tweaked them before, but I need to get a bit deeper into them this time).
The help card file “sop_example_unpackwithstyle” is a good starting point. Again, though, it is based on groups; how do I make it work per copy instead, so that the spheres receive a separate random texture override per object, not per group? In the help card example, where the shader color override is, for example,
“baseColor” :
how do I switch that out for a “baseColorMap” parameter of the shader, with a randomly referenced map off disk, or even a different material, applied to an entire instance (say via id or ptnum; not sure how/if that's possible in JSON), not just the group? Or do I need to fall back on point instancing for that, or maybe set up a renderstate VOP in the shader?
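To make the question concrete, here's roughly the shape of entry I'm imagining, built as a Python dict and dumped to JSON. The key names are from my memory of the style sheet docs and may well be wrong, so treat the schema as a guess; the per-id selection is exactly the part I don't know how to express:

```python
import json

# Guessed style sheet entry: a per-instance texture override, keyed off
# something like an id attribute rather than a group. All keys here are
# assumptions, not a verified schema.
style = {
    "styles": [{
        "target": {"label": "per-instance map override"},
        "overrides": {
            "materialParameters": {
                # want this map chosen per instance id, not hard-coded
                "baseColorMap": "variant_03.rat"
            }
        }
    }]
}

sheet = json.dumps(style, indent=2)
print(sheet)
```

If the target block can't key off an instance attribute, that would confirm I need the point-instance or renderstate fallback.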
Thanks.
Edited by frankvw - Feb. 5, 2018 11:29:56
Technical Discussion » Space transform to Cd in Sop context
Hmm. The position does look OK as points in the Nuke 3D viewport this way, even though it looks different. Sending the Alembic file off to see if this works on their system. Fingers crossed, as this will save me lots of time.
Cheers!
Edited by frankvw - Sept. 4, 2017 15:53:03
Technical Discussion » Space transform to Cd in Sop context
Hi,
I must be misunderstanding, as setting the camera object looks really different from the AOV even with a constant shader. Just raw P rendered with a constant shader will match the camera-space render of the mantra shader export, since world space will be “camera space” in the shader/mantra.
But the tricky bit is matching the “world” space render you export from the shader. How do you get that into a point attribute or Cd in the SOP context? (You know, the other position pass you would usually export for Nuke relighting, the one that looks like a projected RGB grid across the surfaces.) At the moment I bake the shader exports to point colors and it works; it just doesn't seem efficient on bigger scenes when I could presumably do it as an attribute wrangle/VOP at SOP level.
Looks like I'll have to grab a flask of coffee and dust off the rendering math textbook. It's doubly complicated because mantra sees the SOP context as camera space and vice versa (which isn't the way Arnold sees it).
Edited by frankvw - Sept. 4, 2017 15:37:08
Technical Discussion » Space transform to Cd in Sop context
Hi,
Thanks for the post, but none of that answers the question, and it would in fact all be much longer and more convoluted than the process I currently use. It's no more complicated than getting the space transform correct at SOP level and setting it to an attribute. Setting that to Cd in a constant shader should then produce a render that matches the worldP or camP AOV exported in a shader. That's what I'm not getting, and it's why I'm baking a mantra shader with a render stage when I just want an OpenGL match directly in the viewport.
For example, just setting Cd to P, with a fit from -1,1 to 0,1 inside a VOP, will give you something close to, but not exactly matching, the space transform AOV exports in a shader. So why? What is the VOP/wrangle setup required to match the SOP and SHOP contexts?
So, as shown here in the SHOP context; what is the corresponding setup for point color/OpenGL in SOPs?
Edited by frankvw - Sept. 2, 2017 11:20:22
Technical Discussion » Space transform to Cd in Sop context
Hi,
So I have been working on a customer project that requires delivery in Alembic format, output with additional point variables for use in some kind of VR system that can do deep-compositing-type stuff on the fly through the GPU.
Two of the variables I output are camera- and world-space position as Cd-type values, the sort of thing you normally do for relight passes in Nuke. I've approached it pretty much as I would for normal render work: spitting out export variables for render passes in a SHOP, using the transform space node to give me worldP and camP AOVs, then baking them onto the geo. It works fine for them, but it doesn't seem a ‘Houdini’ way to do it.
I thought maybe just putting space transform VOPs in the SOP context, set to Cd to visualise, might skip a render stage. Nope; SOPs aren't camera/view aware the way a SHOP context is. So I thought maybe first doing an NDC space conversion, followed by a camera or world transform to Cd, might do something. Kinda, but it doesn't match the worldP or camP renders.
So… how would you do that in the SOP context: set point colors to the same values you get from exporting space transforms (world and camera space) inside a SHOP?
Cheers!
Technical Discussion » toNDC and screenspace projection. How?
Hi,
Actually, having had time to render it off and overlay it, it's pretty good. Even on a large scene with animated characters and cameras, it's quick as well.
Million thanks,
Frank
Technical Discussion » toNDC and screenspace projection. How?
Hi,
Thanks for trying, but that doesn't work.
Perspective translation after NDC needs to account for the camera perspective per point after that stage, an extra level of transformation (ignoring back-face culling etc.).
If you try your method, it's a bit hit and miss and definitely will not line up closely with a final rendered image if you apply it as a camera background.
Maybe any of the SideFX mantra render developers are around? It's such a standard matrix operation; I'm just trying to be a bit lazy.
cheers
Frank
Technical Discussion » toNDC and screenspace projection. How?
Hi,
I want to get object points in a VOP or wrangle into their flattened (depth) position in the viewport, as they would appear in the perspective rendered image. In a VOP or wrangle, toNDC() gets the first stage done, but how do you apply the final screen perspective translation afterwards, in order to see, through the scene camera, the final flattened 2D geo that matches the final points-to-pixels perspective? The same way you used to make UV renders with a scene camera before the H15 “UV render” ROP came along.
The topic has come up a few times before, and looking through posts I've not actually seen one that works accurately. Has anyone mastered this, or do I need to work out the matrix transforms from scratch? (There doesn't seem to be a “toPerspective” function or VOP like the “toNDC” function/node.)
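For reference while I dig, here's the textbook version of the projection I'm after, as plain math. This is the generic pinhole formula only; I'm not claiming it matches mantra's exact NDC convention, screen window, or pixel aspect handling:

```python
# Camera-space point (camera looking down -Z) projected to 0-1 screen
# coordinates with a pinhole model:
#   x_screen = x * focal / (-z * aperture) + 0.5
focal = 50.0
aperture = 41.4214   # horizontal film gate; ~45 degree FOV at focal 50
x, y, z = 1.0, 0.5, -5.0

sx = (x * focal) / (-z * aperture) + 0.5
sy = (y * focal) / (-z * aperture) + 0.5

print(sx, sy)
```

Using the same aperture on both axes assumes a square image; a real resolution needs the aspect folded into sy.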
million thanks!
Frank
Edited by frankvw - July 3, 2017 11:58:36
Technical Discussion » alembic export
Hi,
Exporting flip meshes to Maya via alembic. I've attached a basic test setup.
Maya does not import a sequence, only static frames. More specifically, it will import sequences and I can see my extra Houdini attributes; the mesh is just not visible or renderable. I've tried polys and poly soups.
Exocortex will do the job perfectly, except it only carries UV and N attributes. I've seen lots of forum questions about this over the years, so I'm guessing that modifying the Exocortex source to enable it is not going to be straightforward (or someone would have done it years ago).
The only other solution is to use Houdini Engine as a bgeo-importing HDA. Fine, except it's inconvenient sometimes, as it ties up Engine licenses.
Have folk had similar experiences? (I realise this is an Autodesk issue per se, since I don't have trouble with other Alembic-enabled apps… but I thought I would check, in case there is something quirky about Maya's Alembic implementation that clashes with this setup.)
thx.
Technical Discussion » simple2Dcfd exchange for H12+ hdk?
Hi,
A bit of a long-shot question here (but maybe Mr Lait comes here from time to time).
So back in the day, 2007-08 or so, when I was first hitting the (then) new pyro and solver tools hard, I found Mr Lait's “building solvers from scratch” tutorial and the code for the “simple2DCFD” example DSO from the exchange (original attached) invaluable learning resources, together with Jos Stam's original paper.
The 2DCFD DSO is a really nice, simple sandbox for experimenting with gas solver coding, and I recently went back over it to find that the overhaul of DOPs around v11-12 breaks compilation of this (old) version of the DSO.
Has anyone ever got round to recoding it for later (v12+) versions? It looks like just a bit of reorganisation and header updating, so I thought it only needed the new PRM_Shared and SIM_Utils referenced from the toolkit/include versions, and the SIM_VoxelArray header replaced with a CFD_VoxelArray subclass of UT_VoxelArray, like the SNOW_Solver HDK sample, but… it seems a fair bit more involved, unfortunately.
So… has anyone done it? It would be nice even if SideFX could put it out on the exchange again; it's a really nice little framework for experimenting with other solvers, given how much spin-off work has developed from the original paper out in the communities.
(The solveForObject function below is the one that throws my compile sideways when I try referencing a CFD_VoxelArray instead of the old SIM_VoxelArray from H8.0.)
Cheers!
void
SIM_SolverCFD::solveForObject(SIM_Object &object,
                              SIM_VoxelArray &densities,
                              SIM_VoxelArray &velocityu,
                              SIM_VoxelArray &velocityv,
                              const SIM_Time &timestep) const
{
    SIM_VoxelArray *newdensities;
    SIM_VoxelArray *newvelocityu;
    SIM_VoxelArray *newvelocityv;
    int gridsize = getGridSize();

    newdensities = SIM_DATA_CREATE(object, "newDensities", SIM_VoxelArray, 0);
    newdensities->makeEqual(&densities);
    newvelocityu = SIM_DATA_CREATE(object, "newVelocityu", SIM_VoxelArray, 0);
    newvelocityu->makeEqual(&velocityu);
    newvelocityv = SIM_DATA_CREATE(object, "newVelocityv", SIM_VoxelArray, 0);
    newvelocityv->makeEqual(&velocityv);

    vel_step(gridsize - 2,
             newvelocityu->getVoxelArray(), newvelocityv->getVoxelArray(),
             velocityu.getVoxelArray(), velocityv.getVoxelArray(),
             getViscosity(), timestep);
    dens_step(gridsize - 2,
              newdensities->getVoxelArray(), densities.getVoxelArray(),
              newvelocityu->getVoxelArray(), newvelocityv->getVoxelArray(),
              getDiffusion(), timestep);

    object.moveNamedSubData("newDensities", "Densities");
    object.moveNamedSubData("newVelocityu", "Velocityu");
    object.moveNamedSubData("newVelocityv", "Velocityv");
}
Technical Discussion » hou equivalent of opwrite and opread?
Hi,
I'm porting one of my old scripts to Python, where I use the hscript opread/opwrite commands a lot to generate a series of .chan files that match scene elements by name. I can of course call the hscript commands from within HOM, but I'm wondering if a more pythonic method for the same thing already exists (like an opread/opwrite, instead of setting filenames on a Channel ROP). I had a look at hou.ChopNode() but it didn't seem to offer it, and asCode() doesn't suit my workflow. I was hoping for a neat function I could put inside my loops to just reference some channels (like the Channel ROP does) and say “put these in a .chan file”.
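In case anyone lands here with the same itch: the file format itself is trivial, so a hand-rolled writer can sit inside the loop. A hedged sketch, assuming the plain layout of one whitespace-separated row of channel samples per frame (which is what the .chan files I've exported look like); no hou calls here, the sample values would come from parm evaluation:

```python
import io

# Channel samples keyed by name, one value per frame
samples = {"tx": [0.0, 1.0, 2.0], "ty": [5.0, 5.0, 5.0]}
order = ["tx", "ty"]

# One row per frame, one column per channel
buf = io.StringIO()
nframes = len(samples[order[0]])
for f in range(nframes):
    buf.write(" ".join("%g" % samples[name][f] for name in order) + "\n")

chan_text = buf.getvalue()
print(chan_text)
```

Swapping the StringIO for open(path, "w") gives the per-element files directly.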
Cheers
Technical Discussion » Stuck on wrangle expressions looping group prims
Hi,
So this was a good learning experience. I've now had a chance to dig through the docs again and seen that my earlier issues were trying to declare and set attributes all at the same time, and mixing casts too liberally. I've found that declaring any calculations separately, then assigning each item with explicit casts, makes VEX happier.
Adding my earlier failed attempts into your working example will in fact also now work for me, if I want to set the vector elements separately. Here it is, in case that's handy for any other VEX apprentices on the transition road. It really cleared things up in my mind a lot more.
// groups array
string groups[] = detailintrinsic(0, 'primitivegroups');
for (int g = 0; g < len(groups); g++)
{
    // prims array per group
    int prims[] = expandprimgroup(0, groups[g]);
    // random color, different per group, per color channel CrCgCb
    float randCol1 = float(rand(g * 0.123));
    float randCol2 = float(rand(g * 4.567));
    float randCol3 = float(rand(g * 89.123));
    // assign the random color to all the prims of the group
    vector randCol = set(randCol1, randCol2, randCol3);
    foreach (int p; prims)
        setprimattrib(geoself(), "Cd", p, randCol, "set");
}
Thanks again!
Technical Discussion » Stuck on wrangle expressions looping group prims
hi,
Cool! Thanks for that, it works perfectly. I didn't know you could assign random values across all elements of a vector in one go, as in this line:
vector randCol = vector(rand(g*4.2343 + @Frame));
This was the main area where I was tripping up, trying to set each element individually. But that brings my bigger issue moving over to VEX into focus: declaring and setting variables and arrays in VEX doesn't seem to behave like HScript, Python, or even C (which it resembles).
Just rambling here in case it's useful to other folk trying to transition to VEX, or maybe to the SideFX docs/tutorial folks. I'll admit I'm not a very good coder, but I usually get there in the end through sheer determination in the face of the odd daunting task that motivates me to try and script it.
So, for example:
f@red = rand(@P*0.123);
f@green = rand(@P*4.567);
f@blue = rand(@P*89.123);
will not work in the VEX expression if I try to set each color component in the loop. But setting from the global variables will work:
@Cd.x = rand(@P*0.123);
@Cd.y = rand(@P*4.567);
@Cd.z = rand(@P*89.123);
That is outside the main loop. Inside the main loop it still runs, but it just returns greyscale colors, i.e. it sets Cd.r, Cd.g and Cd.b to one random value rather than individual values like it does outside the loop.
I totally get these ideas in HScript and HOM using referencing, but VEX syntax doesn't seem as clear to me yet. Still, I plan to swallow all the VEX docs over the coming week to get attributes, and declaring and setting variable and array elements, clearer in my head.
cheers!
frank
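To make the float-versus-vector distinction above concrete, here is a small sketch (illustrative seeds only, assuming a point wrangle):

```
// float rand: one value, implicitly cast to all three channels
float grey = rand(@ptnum);
v@Cd = grey;                      // greyscale: r == g == b

// vector rand: three independent values in one call
vector col = rand(@ptnum + 0.5);
v@Cd = col;                       // per-channel random color

// setting components individually also works,
// as long as each component gets a different seed
@Cd.r = rand(@ptnum * 0.123);
@Cd.g = rand(@ptnum * 4.567);
@Cd.b = rand(@ptnum * 89.123);
```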
Technical Discussion » Stuck on wrangle expressions looping group prims
- frankvw
- 61 posts
- Offline
Hi,
I'm stuck on looping with foreach over groups in a VEX wrangle.
I want to grab some randomly named groups and apply a random attribute value or color to the prims of each group. It's partly a learning exercise (I'm trying to move away from HScript as much as possible), since I can do this easily anyway with a Name SOP followed by an AttribCreate or Color SOP set to random by variable (name). But since looping over groups in a wrangle node seems incredibly useful for many things, I want to understand it properly.
From a wrangle example here, I get the bits that set up the groups into an array, but I just don't get sensible results when I try to change the foreach loop to set a random color attribute per group (not per prim like the wrangle preset examples; I want one random color applied across the entire prim range of each group, not just random per face everywhere).
I wonder if any VEX guru could show how that's done from this:
s@groups = detailintrinsic(0, "primitivegroups");
int rand = int(rand(@Frame) * len(groups));
i@prims = expandprimgroup(0, s@groups);
// delete the prims
int p; foreach (p; prims) {
    removeprim(0, p, 1);
}
Thank you!
Technical Discussion » Why does hou.createNode() not work for material types?
- frankvw
- 61 posts
- Offline
Hi,
So I know it's probably me, and I did have a good dig around the docs/forum, but…
If I middle-click, say, a surface shader, it tells me it's a vopmaterial type. Running createNode() with the type set to vopmaterial errors out saying "type none doesn't exist".
If I repeat it with a box, cone, transform, channel, or anything else really, using the type operator shown in the network view, it WILL work. So what's up with shaders or shader-context nodes?
I checked whether any of the hou shader or SHOP methods would do the same, but no. I also tried hou's asCode() and the hscript `opscript` wrapper, and that will give me a function definition to create the shader nodes, but it also blows away the top-level parms, so then I need to run opparm wrappers as well, and suddenly it becomes a major task with error checking instead of the 4-5 lines of code it takes for anything else.
So am I missing something? I'm just switching out some of the FBX materials on FBX imports with mantra shaders, but I'm falling over at the final lines of the script, which generate the shader nodes, because of this createNode() error.
Thanks
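In case it helps anyone landing here: in HOM, createNode() is a method on the parent network node, not a top-level hou function, and material types like vopmaterial only exist inside a matching context (e.g. /shop). A minimal sketch, runnable only inside a Houdini session; the /shop path, node name, and /obj/geo1 path are placeholders:

```python
import hou

# createNode() lives on the parent network node, not on the hou module
shopnet = hou.node("/shop")  # a network that accepts SHOP/material types
mat = shopnet.createNode("vopmaterial", "my_surface")

# assign it to some geometry object (placeholder path)
geo = hou.node("/obj/geo1")
geo.parm("shop_materialpath").set(mat.path())
```

The "type doesn't exist" error usually means the parent you called createNode() on is a context (like /obj) that has no vopmaterial type registered.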
Technical Discussion » Packing, materials and copysop.
- frankvw
- 61 posts
- Offline
hi,
I was initially put off packed prims when they first came along because of the convoluted materials issues, and since point instancing is easy and robust to use.
I'm working on a city-generation landscape where packing the Copy SOP with material references in the same hip would be really convenient for lighting setups of day, night, winter, summer, etc.
So I write out bgeos, ensuring the shop materials on the vertices are maintained, then try copy-stamping packed bgeos, regular bgeos packed in a File SOP, and bgeos packed by the Copy SOP's packing/caching option. I only get grey in renders. If I turn off packing in the Copy SOP, or use an Instance node, I get materials. Why do materials drop off when packing? Does it need an attribute promote or something else on the verts?
Attached is a simple example. Render with packing turned on and then off in the Copy SOP's stamping option to see the result. What am I missing?
thx
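One thing worth checking (a guess, not a definitive diagnosis): the renderer looks for shop_materialpath on the packed primitive itself, so assignments living on the interior geometry's prims or vertices can be invisible after packing. A one-line primitive-wrangle sketch, run over Primitives after the pack, with a placeholder material path:

```
// stamp the material path onto each packed prim so the render can see it
s@shop_materialpath = "/shop/my_material";  // placeholder path
```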