I assume writing alembic userproperties would suit your needs? There have already been multiple threads about it on this forum.
e.g.
https://www.sidefx.com/forum/topic/70783/ [www.sidefx.com]
Found 28 posts.
Search results
Technical Discussion » Alembic and custom attributes
- gorrod
- 28 posts
- Offline
Technical Discussion » How to divide in COPs
You could use a Python COP to generate the average pixel values and write them into an additional plane, then access it with a copinput in another VOP filter, or modify your color values directly in the Python COP.
Here is the documentation for it. https://www.sidefx.com/docs/houdini/hom/pythoncop.html [www.sidefx.com]
A caveat on it though, since I was just trying to make this work: a COP node seems to cook twice when modifying the C plane that is displayed as the small preview on COP nodes. It cooks once for the actual image (what is displayed in the composite view at the expected image size) and once more for the small preview image.
The second cook expects a different number of pixels/resolution to be set with .setPixelsOfCookingPlane() and will error, which is really annoying.
You can collapse the image preview on COP nodes to avoid this, fix it with a little workaround (e.g. check the resolution[0]*resolution[1] value against the length of the pixel array, then construct a pixel array of the correct size and sample it with getPixelByUv() from the original image), or not write to the C plane directly at all.
This will, however, only work until you try to write to the C plane in any node further down the stream, which then results in another error when cooking the image preview for the Python COP.
Maybe someone on the forum knows how to fix this or how to do it correctly?
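The getPixelByUv() workaround described above amounts to resampling the original pixels to whatever resolution the preview cook asks for. Here is a minimal pure-numpy sketch of that idea for a single-channel plane (resample_plane is a hypothetical helper, not part of the HOM API):

```python
import numpy as np

def resample_plane(pixels, src_res, dst_res):
    """Nearest-neighbour resample of a flat single-channel pixel array
    from src_res (width, height) to dst_res, mimicking per-pixel
    getPixelByUv() lookups into the original image."""
    src = np.asarray(pixels, dtype=np.float64).reshape(src_res[1], src_res[0])
    # Source row/column index of each destination pixel centre
    ys = ((np.arange(dst_res[1]) + 0.5) / dst_res[1] * src_res[1]).astype(int)
    xs = ((np.arange(dst_res[0]) + 0.5) / dst_res[0] * src_res[0]).astype(int)
    ys = np.clip(ys, 0, src_res[1] - 1)
    xs = np.clip(xs, 0, src_res[0] - 1)
    return src[np.ix_(ys, xs)].ravel()
```

In the actual COP you would run something like this once per component whenever the pixel count from allPixels() disagrees with the resolution the cook reports.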
I would maybe suggest doing image operations on points or on 2D volumes/heightfields instead. That would make it a bit easier to work with, since the COP2 context seems to be a bit outdated. In the end, just sample the volume into COPs when writing it out as an image, or do COP-specific operations on it there.
Anyway, here's the example code I put into the Python COP to get the average value for each channel of the C plane and write it to the avgC plane, which could then be used for further processing in the network. There are plenty of other, better examples in the linked documentation as well:
import numpy as np

def output_planes_to_cook(cop_node):
    # This sample only modifies the color plane.
    #return ("avgC", "C", )  # Uncomment this to see the C plane issue
    return ("avgC",)

def required_input_planes(cop_node, output_plane):
    # This sample requires the avgC and color planes from the first input.
    if output_plane in ["avgC", "C"]:
        return ("0", "avgC", "0", "C")
    return ()

def cook(cop_node, plane, resolution):
    input_cop = cop_node.inputs()[0]
    color = input_cop.allPixels("C")
    # If writing to the "C" plane this highlights the issue
    if plane == "C" and resolution[0] * resolution[1] != len(color) / 3:
        print("pixel difference: ", len(color) / 3, resolution[0] * resolution[1])
        num_pixels = resolution[0] * resolution[1]
        r = color[0:len(color):3][:num_pixels]
        g = color[1:len(color):3][:num_pixels]
        b = color[2:len(color):3][:num_pixels]
    else:
        r = color[0:len(color):3]
        g = color[1:len(color):3]
        b = color[2:len(color):3]
    avg_vals = np.array([np.mean(r), np.mean(g), np.mean(b)])
    new_r = np.full_like(r, avg_vals[0])
    new_g = np.full_like(r, avg_vals[1])
    new_b = np.full_like(r, avg_vals[2])
    cop_node.setPixelsOfCookingPlane(new_r, component="r")
    cop_node.setPixelsOfCookingPlane(new_g, component="g")
    cop_node.setPixelsOfCookingPlane(new_b, component="b")
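Outside of Houdini, the deinterleave-and-average step at the core of this cook can be sketched with plain numpy; reshaping to (num_pixels, 3) is equivalent to the three strided slices used above:

```python
import numpy as np

# Interleaved RGB pixel data for a 2x2 image: one (r, g, b) triple per pixel
color = np.array([1.0, 0.0, 0.0,
                  0.0, 1.0, 0.0,
                  0.0, 0.0, 1.0,
                  1.0, 1.0, 1.0])

# Reshape to (num_pixels, 3) and average each channel column
avg = color.reshape(-1, 3).mean(axis=0)
print(avg)  # [0.5 0.5 0.5]
```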
Edited by gorrod - April 2, 2023 16:07:46
Technical Discussion » Digital Asset: display value in parameters ?
It works if you set the node path correctly, e.g. detail("/obj/geo1/subnet1/make_rand", "rand", 0), or, probably easier, just make it relative: detail("./make_rand", "rand", 0)
Edited by gorrod - Oct. 2, 2022 11:33:08
Technical Discussion » Smarter Heightfield Mask Expand node
Take this with a grain of salt, as this isn't expanding the mask by a given distance at all yet (and it is currently also dependent on the voxel size), but it scales the expansion of the mask by the difference between adjacent height voxels and fits that between arbitrary values.
So the mask is basically just growing more slowly uphill/downhill.
I guess this could be a starting point for your use case? Most of the code is taken from the OpenCL masterclass by Jeff Lait [vimeo.com].
Expansion iterations would be handled by the OpenCL SOP's iterations.
Technical Discussion » how to use Neighborsearch OpenCL node?
After playing around with it for a bit, I think the Leave on GPU option only works in the DOP context and not in SOPs.
You can take a look at the example file, where the neighbor attribute is correctly accessible in the DOP context even though it does not show any values in the geometry spreadsheet.
Edited by gorrod - Aug. 21, 2022 13:01:08
Technical Discussion » Bind export not showing up in geo spreadsheet in pointvop
There's an error inside your VOP network. Delete the switch node that has no inputs, or connect something to it, and everything works as expected.
Technical Discussion » How to convert HeightField to mesh by mask?
You could just convert your entire heightfield to a mesh and delete the parts you don't want.
Technical Discussion » ask for a invokegraph example
Even though I definitely don't belong to the esoteric group that understands this node, I've put together a very simple example of how you might use it in its most basic form.
Generally, you just convert a node, subnetwork, or compiled node graph to geometry with the attribfromparm SOP (or create your own geo and attributes, change them, or whatever) and then execute the network with the invokegraph SOP.
The node documentation already explains it better than I can.
Maybe you already know all this, but I have attached a file with everything I could figure out about this node so far.
I would definitely welcome a better answer about this as well if someone has more in-depth knowledge.
Technical Discussion » Basic detail function not working... how?! SOLVED
Using "op:" before your node paths works.
f@testValue = detail("op:../source/", "value");
Technical Discussion » Pin Vellum Hair to Pop Sim
You need to set your vellumhair SOP to match the animation of the points, and then, in the solver, set the target_pt of the vellum points to match some attribute on your POP sim source points. I would probably recommend not using id for this, since that is sometimes used internally by solvers.
I've also updated the orient attributes on the pin points to match the normal orientation of your source points as that might be the expected behaviour you're after.
I updated your file.
Edited by gorrod - June 15, 2022 11:10:42
Houdini Indie and Apprentice » Flip Collisions doesnt work with any geometry
Check your merge node: by default, the left input affects the right inputs. Either set it to Mutual or, more commonly, wire your collisions to the left of the solver stream.
Houdini Indie and Apprentice » RBD Deform pieces with Instances?
There are a few ways you could go about fixing this. The easiest is probably to just unlock the deform asset, transfer your point name to the unpacked geo, and then promote the name to your primitives. This is the best way I can think of.
Generally, your packed geo does not match the name in the unpacked geometry after copying, so the deform SOP does not know how to match your pieces/constraints together, as the name attribute does not match.
You could also pack your pieces after copying them, which is probably not ideal.
Or you could go over each piece after the simulation and restore the name attribute to what you had before copying, which I assume will also be slower.
There might also be other ways to get this done, but option #1 seems alright.
Technical Discussion » Multiple length dictionary array attributes
I can't tell you exactly why this happens, but if you don't use the dict_array_attrib variable for setting the attrib value and use the attribute's name instead, it works.
I updated your file.
Edited by gorrod - June 10, 2022 05:34:43
Technical Discussion » How to automatically partially change an expression link
I attached a file with a few ways of doing it; the easiest is probably to just write this as an expression for your parameters:
chs("../../param_2/Shot_"+padzero(2, ftoa(ch("select_shot")))+"1")
I hope that helps.
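For reference, the expression above builds a parameter path from the select_shot value, with padzero() zero-padding the shot number to two digits. A hypothetical pure-Python equivalent of the string it produces (the names come from the expression, not from an actual scene):

```python
def shot_parm_path(select_shot):
    """Build the parameter path the channel expression evaluates to;
    str(...).zfill(2) stands in for padzero(2, ftoa(...))."""
    return "../../param_2/Shot_" + str(int(select_shot)).zfill(2) + "1"

print(shot_parm_path(3))  # ../../param_2/Shot_031
```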
Technical Discussion » connect start-end point spline for smooth NURBS curve
You could use an Ends SOP to close the curve with Close Rounded U and Preserve Shape U, then delete the last point and copy back the positions of the points from before the Ends SOP to get a smooth result.
There's probably a better way, but maybe this already works for you?
Technical Discussion » String attribute to write the file path in an alembic node
The details() [www.sidefx.com] function, unlike detail(), only takes two arguments; you do not need the trailing attrib_index of 0 for string attributes.
Houdini Indie and Apprentice » A task. Attribute blur by time.
You could just detect in a solver when a certain parm value, or the highest parm value in a range, is reached, save that value and the frame it was set on, and then ramp the value in/out.
Here's a quick file of how I would do that.
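Outside of Houdini, the detect-and-ramp idea can be sketched in plain Python; ramp_after_peak is a hypothetical helper that holds the running peak of a per-frame value and ramps it back down linearly once the value stops rising (in a solver the same bookkeeping would live on an attribute carried frame to frame):

```python
def ramp_after_peak(values, ramp_frames):
    """Hold the running peak of a per-frame value and, once the value
    stops rising, ramp the held peak back down to zero over ramp_frames."""
    out = []
    peak, peak_frame = float("-inf"), 0
    for frame, v in enumerate(values):
        if v > peak:
            # New peak: remember the value and the frame it was set on
            peak, peak_frame = v, frame
            out.append(v)
        else:
            # Linear falloff from the held peak toward zero
            t = min((frame - peak_frame) / ramp_frames, 1.0)
            out.append(peak * (1.0 - t))
    return out

print(ramp_after_peak([0.0, 1.0, 0.2, 0.1, 0.0], 2))  # [0.0, 1.0, 0.5, 0.0, 0.0]
```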
Technical Discussion » OpenCL Voxel Space to World Space position?
It's just like jlait wrote; what you need to do is go the other way around, from the world pos to the index.
This is what works for me:
kernel void get_world_P(
    int world_Px_stride_x,
    int world_Px_stride_y,
    int world_Px_stride_z,
    int world_Px_stride_offset,
    float16 world_Px_xformtoworld,
    global float *world_Px,
    global float *world_Py,
    global float *world_Pz)
{
    int gidx = get_global_id(0);
    int gidy = get_global_id(1);
    int gidz = get_global_id(2);

    int idx = world_Px_stride_offset
            + world_Px_stride_x * gidx
            + world_Px_stride_y * gidy
            + world_Px_stride_z * gidz;

    float4 world_pos = gidx * world_Px_xformtoworld.lo.lo
                     + gidy * world_Px_xformtoworld.lo.hi
                     + gidz * world_Px_xformtoworld.hi.lo
                     + world_Px_xformtoworld.hi.hi;

    world_Px[idx] = world_pos.x;
    world_Py[idx] = world_pos.y;
    world_Pz[idx] = world_pos.z;
}
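The float16 here is the volume's index-to-world matrix, consumed row by row (lo.lo, lo.hi, hi.lo, hi.hi). The same linear combination can be checked with numpy; the matrix values below are made up for illustration (voxel size 0.5, origin at (1, 2, 3)):

```python
import numpy as np

# Hypothetical index-to-world matrix; each row corresponds to one
# float4 slice of the kernel's float16 argument.
xform = np.array([[0.5, 0.0, 0.0, 0.0],
                  [0.0, 0.5, 0.0, 0.0],
                  [0.0, 0.0, 0.5, 0.0],
                  [1.0, 2.0, 3.0, 1.0]])

def index_to_world(gidx, gidy, gidz):
    # Same linear combination as the kernel: the three basis rows scaled
    # by the voxel index, plus the translation row.
    return gidx * xform[0] + gidy * xform[1] + gidz * xform[2] + xform[3]

print(index_to_world(2, 0, 4)[:3])  # [2. 2. 5.]
```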
Technical Discussion » Can't get hou.SimpleDrawable to display in SOP context.
Thanks a lot, that does the trick already.
I will certainly take a look into the demo scene!
Technical Discussion » Can't get hou.SimpleDrawable to display in SOP context.
Hi,
I'm trying to simply display a hou.SimpleDrawable in an otherwise empty asset at the SOP level, basically following the documentation example here [www.sidefx.com].
So all I have is this code in the Viewer State of an asset with a Null inside labeled "GET_INPUT" connected to the first input and nothing connected to the output.
I would expect to see the geometry connected to the first input of the asset to be displayed as a wireframe when I select the node and press enter in the viewport, but nothing shows up.
Could someone shed some light on what else I need to be doing with the SimpleDrawable to display correctly?
class State(object):
    def __init__(self, state_name, scene_viewer):
        self.state_name = state_name
        self.scene_viewer = scene_viewer

    def onDraw(self, kwargs):
        """ Called for rendering a state
            e.g. required for hou.AdvancedDrawable objects """
        draw_handle = kwargs["draw_handle"]
        geo = kwargs["node"].node("GET_INPUT").geometry()
        drawable = hou.SimpleDrawable(self.scene_viewer, geo, "drawable")
        drawable.setDisplayMode(hou.drawableDisplayMode.WireframeMode)
        drawable.setWireframeColor(hou.Color(1.0, 0.0, 0.0))
        drawable.enable(True)
        drawable.show(True)


def createViewerStateTemplate():
    """ Mandatory entry point to create and return the
        viewer state template to register. """
    state_typename = kwargs["type"].definition().sections()["DefaultState"].contents()
    state_label = "SimpleDrawable test"
    state_cat = hou.sopNodeTypeCategory()

    template = hou.ViewerStateTemplate(state_typename, state_label, state_cat)
    template.bindFactory(State)
    template.bindIcon(kwargs["type"].icon())

    return template
Edited by gorrod - May 24, 2021 15:05:22