Hello Kenxu, I really appreciate the effort you put into explaining!
Since my "procedural bucketing code [www.sidefx.com]" works only with an ordered set of distinct attribute values with no gaps (0, 1, 2, 3, 4), and I do have gaps in my real project (0, 3, 5, etc.), it doesn't fit my needs.
Option 1) I could try to implement a more generic 'partition by attribute value' code, but that seems like overkill when there's a node that already does it! (-:
Option 2) Instead, I'm trying your suggestion now: doing the post-partitioner work of retrieving the real values of the partitionItems inside a python processor node and rebuilding the workitems with those values.
Option 2 works, but I'm wondering whether rebuilding all the workitems in the python processor is an inefficient operation.
(I'll be working with thousands of workitems, and I'm aiming for good latency: the user is supposed to see those values update as they change the parameters in the HDA.)
So maybe I should really develop option 1 and measure the performance difference. For now I'll stick with option 2.
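For reference, this is roughly what option 2 looks like in the Python Processor's generate code. It's only a minimal sketch: I'm assuming the node's item_holder / upstream_items variables, an int attribute named "P_id" (the name is just an example), and that each upstream item is a partition exposing partitionItems as in the rest of this thread:

# for each incoming partition, build one new workitem that carries the real
# per-item values read back from the partition's members
for partition in upstream_items:
    new_item = item_holder.addWorkItem(parent=partition)
    values = [w.data.intData("P_id", 0) for w in partition.partitionItems]
    for i, v in enumerate(values):
        new_item.data.setInt("P_id", v, i)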
Anyway, just for your information, I noticed that the printed values of the partitionItems are weirdly formatted in the console if they are read from the python partitioner. This doesn't seem to happen with partitions created by a Partition by Attribute node.
Not a big deal, it's just a bit difficult to debug. I attached an example file reproducing the error.
Thanks again!
Cheers
PDG/TOPs » How-to combine a variable amount of geometryimport?
Technical Discussion » Array attribute to Vex variable
hey, I'm not sure what you're trying to read with
vertex(0,"Test",0);
I mean, you are trying to read a non-existent vertex attribute "Test".
But you are in a Detail Wrangle: the "test" attribute that you create is a detail attribute, so you should ask for a detail attrib instead.
BUT be aware: if you change it to detail(0,"Test",0) it won't work either, because you are now trying to access a non-existent detail attribute, hehe. Why so?
When you use "0" as the first argument of the point(), prim(), detail() functions, you are just using a placeholder for the path of the immediate upstream node. It's like you are asking for the attribute "Test" on the upstream node, which of course doesn't exist, since you create the "test" attribute only later in the wrangle.
Anyway, you do the same thing in the "Collapse" node. You are trying to read a non-existent "PossibleTile" vertex attrib.
You should use the point() function instead of vertex(), because PossibleTile is a point attrib.
In this case point() works because the PossibleTile attrib already exists on the previous node that you are trying to access.
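For example, the read in the "Collapse" node would become something like this (just a sketch, and I'm assuming PossibleTile is an int):

// point() looks the attribute up on the first input, i.e. the previous node
int tile = point(0, "PossibleTile", @ptnum);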
cheers
Edited by Andr - July 5, 2019 16:59:36
Technical Discussion » PC build for beginner looking towards going pro
Hello, what kind of rendering engine are you going to use (Mantra, Redshift, etc.), and what type of workload do you usually have? (Lots of simulations? What kind of simulations?)
I'm not a hardware expert, but as far as I understand it, Threadripper is usually chosen over regular Ryzen CPUs for the following reasons:
1) The possibility to add more than 2 GPUs, given the huge number of PCIe lanes
2) The possibility to add up to 128GB of RAM (vs 64GB on Ryzen)
As of now your setup is 1 GPU only and 32GB of RAM.
I guess you would find better price and performance in Ryzen gen2 (and soon gen3) if you want to stick with this config and skip this gen1 Threadripper.
The only thing I'm pretty sure about is getting a minimum of 64GB of RAM, because RAM is really easy to saturate even if you don't do simulation work.
Anyway, I leave it to the hardware experts of this forum
cheers
Houdini Indie and Apprentice » wrangle vs compiled block. Order of operations?
I'm not having issues at all, I'm just curious about the multithreading nature of the two.
This is the scenario:
1) A point wrangle node that prints the i@id of the points to the console.
2) A compiled for-loop iterating over the points with the very same wrangle (attached picture).
In the first case the IDs will be printed in ascending order.
In the latter case the order of the IDs will be slightly*** random (but still ascending), and every new cook will be different.
Given the results, it seems that the compiled block does truer parallel cooking, while the operations in the wrangle are still parallel but returned so that they follow the point-number order. Does this produce some kind of overhead in the wrangle?
I guess I should do some profiling.
Anyway, I don't really have questions, but would like somebody to tell me more about it and/or correct the above guessing work.
cheers
*** it's only "slightly" random on my poor 4-thread notebook; I guess the randomness would increase on machines with more cores.
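For reference, the wrangle in both setups is essentially just this one-liner (assuming the points already carry an i@id attribute):

// print the id of the point currently being processed
printf("%d\n", i@id);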
Edited by Andr - July 5, 2019 16:05:59
PDG/TOPs » How-to combine a variable amount of geometryimport?
Regarding question 2: why does a blank python script prevent the python partitioner from generating partitions?
I have put together an example case where the python script also breaks a partition by attribute node.
Also I noticed the following:
1) The python script re-orders the workitems randomly: the workitem order will differ from the upstream node.
2) If you dirty and cook the python script, the order of the workitems stays the same as in the last cook. But if you re-cook the immediate upstream node (in this case the merge2 node), the python script gets dirtied and, when re-cooked, the order of the workitems will be different from the previous cook.
Please have a look.
Edited by Andr - July 5, 2019 15:01:14
PDG/TOPs » How-to combine a variable amount of geometryimport?
So, since the partitioner doesn't seem to like working with partitionItems, if I need the cartesian product of N partitions the trick is to group the workitems into N arrays already inside the partitioner, like you did.
In my case the initial partitions were built with a "partition by attribute" node (distinct values).
I recreated it inside the python partitioner with the following code, under a few conditions:
1) "P_id" is the int attribute whose distinct values are used to group the workitems.
2) Workitems are sorted by "P_id".
3) Values for "P_id" have no gaps: (1, 2, 3, 4) is ok, (1, 4, 5) is not.
idcheck = 0
d = {}
for w in work_items:
    id = w.data.intData("P_id", 0)
    if id != idcheck:
        # the sorted, gap-free P_id value just changed, so move on to the next bucket
        idcheck = idcheck + 1
        if "bucket{0}".format(idcheck) not in d:
            d["bucket{0}".format(idcheck)] = [w]
        else:
            d["bucket{0}".format(idcheck)].append(w)
    else:
        # same P_id as before: append to the current bucket
        if "bucket{0}".format(idcheck) not in d:
            d["bucket{0}".format(idcheck)] = [w]
        else:
            d["bucket{0}".format(idcheck)].append(w)
Now the cartesian product of these buckets using itertools produces partitions with no lost attribute values.
I still would like to know what's going wrong with work_item.partitionItems inside the python partitioner.
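For completeness, the cartesian-product step I mention above is roughly this (a sketch reusing the d dict built by the code and the same addItemToPartition call as in my earlier snippet):

import itertools

# one partition per combination of one workitem from each bucket
buckets = [d[k] for k in sorted(d)]
for index, combo in enumerate(itertools.product(*buckets)):
    for w in combo:
        partition_holder.addItemToPartition(w, index)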
Edited by Andr - July 5, 2019 07:17:37
PDG/TOPs » How-to combine a variable amount of geometryimport?
Hello Kenxu,
Regarding question 1, I've found that the same issue occurs with the nested for-loops solution when you feed the partitioner an already partitioned node.
Is that why you did the partition by attribute already inside the partitioner?
I changed your partitioner code into the following and fed it a node with 3 partitions:
# Partition by Attribute
A_bucket = work_items[0].partitionItems
B_bucket = work_items[1].partitionItems
C_bucket = work_items[2].partitionItems

# cartesian product of the three buckets: each (A, B, C) triple becomes one partition
partition_count = 0
for wi_A in A_bucket:
    for wi_B in B_bucket:
        for wi_C in C_bucket:
            partition_holder.addItemToPartition(wi_A, partition_count)
            partition_holder.addItemToPartition(wi_B, partition_count)
            partition_holder.addItemToPartition(wi_C, partition_count)
            partition_count = partition_count + 1
Interestingly, I'm able to read the single work_item attribute values if I iterate over the .partitionItems and ask for their data, but when I assign them to new partitions in the nested for-loops something breaks and the attribute values are not copied.
What logical understanding am I missing?
Technical Discussion » HDAs within an HDA
Technical Discussion » Array attribute to Vex variable
s[]@test = {"one", "two", "five"};
string myVar[] = @test;
s[]@test2 = myVar;
This is working for me.
Maybe post a hip file reproducing the error.
Edited by Andr - July 4, 2019 14:20:40
Technical Discussion » COP2 Produce crisp bevel effect
PDG/TOPs » How-to combine a variable amount of geometryimport?
I tried to use itertools.product(), and while it produces the right number of iterations, the merged attributes "A", "B", "C" all end up set to the values A0, B0, C0 (skipping the other values A1, B1, C1, etc.).
1) Why is this happening?
2) I also noticed that the python script (as shown in the image), even when it contains no code, stops the partitioner from working. I need to do some operations on the workitems with the python script just before sending them to the partitioner.
Any help much appreciated!
Thanks
Edited by Andr - July 4, 2019 14:00:11
PDG/TOPs » how to rename attributes?
Let's say I have 2 workitems coming from 2 geos.
Each workitem has an attribute with the same name but a different data type.
Example: "myAttribute" on w1 is a float, while "myAttribute" on w2 is an integer array.
I have to combine the two workitems into a single partition, so I need to rename "myAttribute" on each workitem to a unique name, so that the two attributes don't get lost when merged into the partition.
I can't find any function in the API to do that. I was looking for something like renameAttrib(old_name, new_name).
So far, the workaround I'm trying to implement is to create a new attrib from scratch, copy the values from "myAttribute", and remove the latter.
To do so I would first need to find the data type of "myAttribute", so that I can use it to pick the appropriate set*() function.
This workaround seems like a lot of work for such a simple task, and also not very efficient (create a new attr, copy the values, delete the original attr).
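Roughly, this is what I have in mind, just for the simple float case (a sketch: the attribute names are made up, I'm assuming a context where the upstream items are available as work_items like in the partitioner snippets, and the array flavour would need its own branch with the corresponding set*() function):

old_name, new_name = "myAttribute", "myAttribute_f"
for w in work_items:
    value = w.data.floatData(old_name, 0)  # read the existing value
    w.data.setFloat(new_name, value, 0)    # write it back under the new name
    # the original field would then still have to be removed somehow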
So, is there really no simpler solution for attribute renaming? Have I maybe overlooked something?
cheers and thanks for any help
Edited by Andr - July 4, 2019 11:34:02
PDG/TOPs » How-to combine a variable amount of geometryimport?
Hello Kenxu, thanks a lot for providing this example and introducing me to more custom TOPs setups with the different python nodes.
In the partitioner I'm now using the itertools.product() function, instead of the nested for-loops, to do a more procedural cartesian product of the workitems, since the number of imported geos can vary every time.
cheers
Houdini Indie and Apprentice » VEX : Create groups based on prim attribute value
hello,
maybe the Partition SOP can be of some help?
https://www.sidefx.com/docs/houdini/nodes/sop/partition.html [www.sidefx.com]
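Since the thread asks for a VEX way, a pure-VEX alternative could also work, something like this in a primitive wrangle (a sketch, assuming an int prim attribute called "id"):

// one group per distinct value of the prim attribute
string grp = sprintf("piece_%d", i@id);
setprimgroup(0, grp, @primnum, 1);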
Technical Discussion » [Solved] Unable to properly link float ramps via script
Hello,
just for the sake of completeness, you could also set a callback script on the “Ramp_from” parameter that would update the “Ramp_to” every time the callback is triggered (every time you modify the ramp).
This is the inline code I used for the callback:
dummy = hou.pwd().node("/obj/to").parm("ramp_to"); thisRamp = hou.pwd().parm("ramp_from").eval(); dummy.set(thisRamp)
Anyway, glad that you already found another solution.
cheers
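And in case it's useful, this is roughly how such a callback could be attached to the parameter via Python (a sketch; the /obj/from node name is just what I'm assuming for the source node in this example):

node = hou.node("/obj/from")
ptg = node.parmTemplateGroup()
ramp_pt = ptg.find("ramp_from")
# run the inline snippet above whenever the ramp is modified
ramp_pt.setScriptCallback('hou.node("/obj/to").parm("ramp_to").set(hou.pwd().parm("ramp_from").eval())')
ramp_pt.setScriptCallbackLanguage(hou.scriptLanguage.Python)
ptg.replace("ramp_from", ramp_pt)
node.setParmTemplateGroup(ptg)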
Edited by Andr - June 21, 2019 09:53:36
PDG/TOPs » @attribute=val not working for defining groups ?
Hi,
I noticed that in a geometry import node it's not possible to dynamically define a group with the @attribute=value notation.
Is there any reason for that?
It's not a big deal of course, but sometimes you just don't want to create more groups :p
cheers
PDG/TOPs » How-to combine a variable amount of geometryimport?
Sorry for not being clear enough, probably the HDA part was a bit misleading.
Let's say I have 2 geos:
GEO1 (2 points) has a point attribute “A” with values (A0, A1)
GEO2 (3 points) has a point attribute “B” with values (B0, B1, B2)
When the GEOs are imported, the system should be able to produce 6 workitems with the attributes merged:
(A0, B0), (A0, B1), (A0, B2), (A1, B0), (A1, B1), (A1, B2)
I could do that if I manually append GEO2 after GEO1 in the topnet, but I'd like it to be automatic and suitable for N geometries. Every time there can be a different number of geos to combine.
The GEOs are produced in a SOP context; they are not actual files.
Edited by Andr - June 21, 2019 02:08:40
Houdini Indie and Apprentice » Subdivide SOP On a SIngle Axis
The Divide SOP has a Bricker Polygons parameter that can be helpful.
Check this example file. It's NOT an efficient method, and you might need to work a bit to implement it for your needs.
But it could be a starting point.
I would like to know a better solution,
cheers
Edited by Andr - June 20, 2019 15:23:19
Technical Discussion » [Solved] Unable to properly link float ramps via script
PDG/TOPs » How-to combine a variable amount of geometryimport?
I have a digital asset outputting a variable number of geometries.
I need to import the GEOs (points) into a topnet and procedurally combine them together, as if you appended them one after the other.
It has to be dynamic: there can indeed be a variable number of 'geo import' nodes.
I tried with a for loop inside the topnet with no luck.
My last resort would be to have the digital asset generate and append the ‘geo import nodes’ with python, but I'd like to avoid it.
Any help very appreciated!
Thanks
Edited by Andr - June 20, 2019 12:47:03