json dynamic nested query

NDkg (Member, 7 posts, joined Feb. 2019)
I'm trying to access a nested value in a JSON file using the Json Input TOP node.

This is what I've been trying in the query field:

"root/"+@pdg_index+"/transform/0"

But each time I change the processor type to dynamic, I get this error:

Error running callback 'onGenerate': No input results tagged as 'file/json' were provided.
Traceback (most recent call last):
  File "C:/PROGRA~1/SIDEEF~1/HOUDIN~1.173/houdini/pdg/types\utils\jsondata.py", line 93, in onGenerate
    raise pdg.CookError("No input results tagged as 'file/json' were provided.")
CookError: No input results tagged as 'file/json' were provided.

What would be the proper way to set up a query like this?
BrandonA (Staff, 255 posts, joined Aug. 2017)
Hi there,

It looks like you're getting that error because the JSON node expects to be provided an input that is tagged as ‘file/json’. What method are you using to pull the JSON file into TOPs?

As for the query, PDG will evaluate the entire field as a string, so I believe it should just be as follows:

root/`@pdg_index`/transform/0
Edited by BrandonA - March 21, 2019 00:04:48
gyltefors (Member, 31 posts, joined Feb. 2017)
Hi,

I have the exact same issue. It appears that the Json Input node does not evaluate the expression inside the query field for some reason. If I put a valid query into an attribute @query and set the query to `@query`, I get an error message saying that queries cannot be empty, even though the attribute is properly set on the incoming work items.
Edited by gyltefors - March 22, 2019 03:26:10
NDkg (Member, 7 posts, joined Feb. 2019)
I have two Json Input nodes chained below a File Pattern TOP. The reason I'm doing this is so that I have one work item per root item in the JSON, so I can use @pdg_index and access a transform per object.



Below I've included the JSON structure. The transform array is what I'm attempting to access with TOPs:

{
  "testobjects" : [
    {
      "name" : "test0.obj",
      "transform" : [
        19.906424,
        -8.984698,
        367.900803,
        -21.071611,
        64.845776,
        -20.422042,
        1.0,
        1.0,
        1.0
      ]
    },
    {
      "name" : "test1.obj",
      "transform" : [
        18.732764,
        -8.058209,
        366.650893,
        -21.071611,
        64.845776,
        -20.422042,
        1.0,
        1.0,
        1.0
      ]
    },
    {
      "name" : "test2.obj",
      "transform" : [
        18.739486,
        -8.111116,
        366.590925,
        -8.432899,
        64.086945,
        -3.668125,
        0.486806,
        0.486806,
        0.486806
      ]
    }
  ]
}
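For reference, the value that a query like testobjects/`@pdg_index`/transform/0 should resolve to can be read in plain Python as follows. This is only a sketch of the query semantics (data inlined for self-containment), not PDG's actual code:

```python
import json

# The same structure as the file above, inlined for a self-contained example.
data = json.loads("""
{
  "testobjects": [
    {"name": "test0.obj", "transform": [19.906424, -8.984698, 367.900803,
                                        -21.071611, 64.845776, -20.422042,
                                        1.0, 1.0, 1.0]},
    {"name": "test1.obj", "transform": [18.732764, -8.058209, 366.650893,
                                        -21.071611, 64.845776, -20.422042,
                                        1.0, 1.0, 1.0]}
  ]
}
""")

# The query "testobjects/<index>/transform/0" walks the parsed structure:
# a dict key, then a list index, then a dict key, then a list index.
pdg_index = 1  # stand-in for @pdg_index on the work item
value = data["testobjects"][pdg_index]["transform"][0]
print(value)  # 18.732764
```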
Edited by NDkg - March 21, 2019 15:16:18

Attachments:
graph.png (34.0 KB)

gyltefors (Member, 31 posts, joined Feb. 2017)
For nested queries, I am using the Attribute Copy node, putting the JSON input on the right input and my work items on the left, and checking the Copy Result Data flag. That generates all the correct inputs for the Json Input node. I also copy top-level @pdg_index attributes into new attributes, like @testobject_index for the sample file above. However, the Json Input node appears to ignore custom attributes. Maybe it is special in that only the JSON file itself can be passed to it, and nothing else?

This is just not intuitive. I would appreciate it if SideFX could provide an example of how to parse nested JSON files in TOPs.
Edited by gyltefors - March 22, 2019 03:29:20
NDkg (Member, 7 posts, joined Feb. 2019)
It looks like it's just not working as intended. Using the Attribute Copy was progress: I now have no problem getting a work item per testobject, but it's dumping the same transform values into each work item. With this method I'm getting 19.9, -8.9, 367.9 for each work item. The Json Input parameters

testobjects/`@count`/transform/0

look like they process/iterate correctly, but the result values do not reflect that.
Edited by NDkg - March 22, 2019 12:22:59

Attachments:
graphjson2.png (68.3 KB)

BrandonA (Staff, 255 posts, joined Aug. 2017)
Hi – it looks like expressions aren't being evaluated for the “Query” field, which is why the @-expressions were evaluating to nothing. The reason for this is that fields have to be evaluated with respect to a specific work item, and the common practice is to use the work item that gets generated as the context for evaluation. Due to the array retrieve operation, we don't know how many work items are going to be produced, so the field isn't evaluated with respect to any work item. I believe the most intuitive approach (which would fix this issue) would be to make this field evaluate with respect to the upstream work item from which it is generating.
gyltefors (Member, 31 posts, joined Feb. 2017)
I would vote for that. I always assumed that any node would always evaluate based on the input work items, rather than the generated ones.
Edited by gyltefors - March 22, 2019 21:59:22
BrandonA (Staff, 255 posts, joined Aug. 2017)
gyltefors
I would vote for that. I always assumed that any node would always evaluate based on the input work items, rather than the generated ones.

The wide majority of parameters are evaluated with respect to the generated work item (and not the upstream work item). There are a few exceptions, such as the “Item Count” parameter on the Generic Generator, which determines how many work items will be generated.
gyltefors (Member, 31 posts, joined Feb. 2017)
Hmm, it does feel a bit counterintuitive coming from SOPs, where each SOP works on its inputs. Also, would it not be more in the spirit of TOPs if the JSON node could output sub-branches that could in turn be cascaded into other JSON nodes? Storing away index parameters for each parent branch makes the graph much more complicated than it has to be.
Edited by gyltefors - March 23, 2019 00:01:38
BrandonA (Staff, 255 posts, joined Aug. 2017)
NDkg
It looks like it's just not working as intended. Using the Attribute Copy was progress: I now have no problem getting a work item per testobject, but it's dumping the same transform values into each work item. With this method I'm getting 19.9, -8.9, 367.9 for each work item. The Json Input parameters

testobjects/`@count`/transform/0

look like they process/iterate correctly, but the result values do not reflect that.

Hi, I was able to reproduce your TOP graph and test it out using the JSON file you supplied. It looks like there was a bug in the evaluation of the multiparameter fields, which is why they were returning the same values in each work item. This is now fixed in the next daily build!

I've also updated the Array Retrieve “Query” field to evaluate with respect to the upstream work item. This makes it consistent with how the Generic Generator's “Item Count” field works (which also evaluates with respect to the upstream work item). These parms are an exception to the rule – they must be evaluated before any work items are created (because they dictate how many work items will be created), so the most logical behaviour for them is to evaluate with respect to the parent work item. All other parameters evaluate with respect to the created work items on the current node.

With that being said, I'd like to make nested querying easier. The planned solution is to add a wildcard operator to the query language. This would reduce your graph to two nodes: a File Pattern node to import the JSON file, and a single Json Input node that uses the Array Retrieve operation with these parameters:

Query: testobjects/*/transform
Field: *
Attribute Name: transform

Would this help make your use cases easier?
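One way to picture the proposed wildcard semantics is a small recursive matcher in plain Python. This is only a sketch of the idea (the query helper here is hypothetical, not the node's implementation):

```python
import json

def query(node, parts):
    """Yield every value matched by a slash-split query path.
    '*' fans out over all elements of a list (or all values of a dict)."""
    if not parts:
        yield node
        return
    head, rest = parts[0], parts[1:]
    if head == "*":
        children = node if isinstance(node, list) else node.values()
        for child in children:
            yield from query(child, rest)
    elif isinstance(node, list):
        yield from query(node[int(head)], rest)
    else:
        yield from query(node[head], rest)

data = json.loads('{"testobjects": ['
                  '{"name": "a", "transform": [1.0, 2.0]},'
                  '{"name": "b", "transform": [3.0, 4.0]}]}')

# "testobjects/*/transform" matches one array per object; the node
# would then split each match into its own work item.
print(list(query(data, "testobjects/*/transform".split("/"))))
# [[1.0, 2.0], [3.0, 4.0]]
```

The same matcher handles multi-level wildcards (e.g. a/*/b/*/c) for free, since each '*' simply fans out recursively.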
Edited by BrandonA - March 26, 2019 15:44:45
gyltefors (Member, 31 posts, joined Feb. 2017)
Thanks, that is great news! Wildcards would be perfect for a single-level query, generating one work item per match. In my case, I have at least two levels, like this:

Query: packages/*/models/*/properties/myproperty

I would probably need at least one query per level, so cascading would likely simplify things a lot. Could we have a query like

Query: packages/*

output the JSON subtree as a string, and feed it into another node, parsing it with

Query: models/*

output that as well as work items, and finally

Query: properties/myproperty
BrandonA (Staff, 255 posts, joined Aug. 2017)
With the updates that I'm working on

Query: packages/*/models/*/properties/myproperty

would be a valid query. This would prevent the need for multiple JSON nodes in a row. I think that this would be preferable to having multiple JSON nodes required to make a single nested query. Is there a particular reason that you would prefer the subtree approach?

One last thing! Is it at all possible for you to post a sample of your JSON so I can use it for testing?
gyltefors (Member, 31 posts, joined Feb. 2017)
I'll prepare a sample JSON file and post it later. One thing I noticed while testing the new daily build: it allows nested queries, with some workarounds, using the Attribute Copy node. My query is a little more complex than the example above, as each model has a type property that defines what other properties are available. So, I would query

Query: packages/*/models/*/type

Then filter out, say, all models of type ‘Scene’, to generate one work item per scene. For this I used a Partition node with @type=“Scene”, expanded the partition, and fed it into a new JSON node to read the scene properties like:

Query: packages/`@package_index`/models/`@model_index`/template/some_scene_property

Though, before feeding it into a JSON node, I needed the Attribute Copy as discussed earlier. This node does not work after a partition and expand; a workaround is to put the Attribute Copy before the partitioning. If the JSON node did not consume that attribute and instead passed the JSON filename downstream, there would be no need to copy it back each time.

Just a simple case of generating work items for each scene requires quite a complex setup with the existing nodes. And this is before even fetching all the data needed for each scene, like curves and points, each being a model of a specific type in the JSON file. Depending on the type, it will be fed into a different HDA Processor, such as a mountain generator, to add a mountain to the scene.
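The filtering step described here boils down to walking packages/*/models/* and keeping the entries whose type matches. A plain-Python sketch, using a hypothetical data layout based on the description (in the actual graph this is the Partition node with @type=“Scene”):

```python
import json

# Hypothetical data matching the description: packages, each holding
# models that carry a "type" property.
data = json.loads('''
{
  "packages": [
    {"models": [
      {"type": "Scene",  "name": "scene_a"},
      {"type": "Curve",  "name": "curve_a"},
      {"type": "Scene",  "name": "scene_b"}
    ]}
  ]
}
''')

# Equivalent of querying packages/*/models/*/type and keeping only the
# matches where @type == "Scene": record the package/model indices so a
# later query can address each scene directly.
scenes = [
    (pi, mi)
    for pi, package in enumerate(data["packages"])
    for mi, model in enumerate(package["models"])
    if model["type"] == "Scene"
]
print(scenes)  # [(0, 0), (0, 2)]
```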
Edited by gyltefors - March 28, 2019 06:07:29
BrandonA (Staff, 255 posts, joined Aug. 2017)
Hi there! The node has been updated with the changes and it will be available in the next daily build of Houdini 17.5. Here's a summary of the changes:

- Introduced the concept of a wildcard operator to the query syntax
- Improved the node UI to make it more clear how it gathers inputs
- Can now generate without any upstream items
- The index of created work items is now set to the parent item's index
- For the Array Retrieve operation, added an option to not split results into separate work items
- For the Array Retrieve operation, will now attach an array index of the object if it is in an array
- For the Array Retrieve operation, can now use the wildcard operator in the field parameter to gather all anonymous objects in an array (such as an array of integers)

The node documentation was updated as well and goes into more detail.

Much more complicated queries should be possible with the JSON node now, and this should eliminate a lot of the overhead work that you've been having to do.
Edited by BrandonA - March 29, 2019 17:35:21
gyltefors (Member, 31 posts, joined Feb. 2017)
Thanks for the update. I tested the latest build, but I am struggling to extract my data using the JSON node. I prepared and attached a stripped-down version of my JSON data file. It is first divided into packages, and each package contains a number of models. The models are in a hierarchy as defined by the Parent property. In the sample, the Scene contains one UserFolder, which contains one Cavern. The cavern has a list of 2D vertices.

I am trying to make a TOP network that creates a base geometry for each scene work item and one curve geometry for each cavern work item. Each user folder work item would merge all contained geometry, in this case the curve for the cavern. The merged curves will be partitioned together with the base geometry for the corresponding scenes, and modify the base geometry to generate caverns (and a ton of other stuff not included in the sample).

Even with the new version of the JSON node, this is quite a nightmare to untangle. For example, I have 2D vectors. I can make one query to get an array of x values and another query to get an array of y values. I am now trying to figure out how to combine these attribute arrays into vectors. I am adding a ROP Geo with a wrangle node, but have no idea how to access the x and y TOP attribute arrays…
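For what it's worth, combining the two parallel arrays amounts to zipping them pairwise. A plain-Python sketch (the values here are made up for illustration):

```python
# Two parallel attribute arrays, e.g. x and y values pulled by two
# separate JSON queries on the same vertex list.
xs = [19.9, 18.73, 18.74]
ys = [-8.98, -8.06, -8.11]

# Pair element i of xs with element i of ys to form one 2D vector
# per vertex.
vertices = [(x, y) for x, y in zip(xs, ys)]
print(vertices)  # [(19.9, -8.98), (18.73, -8.06), (18.74, -8.11)]
```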
Edited by gyltefors - March 30, 2019 07:50:11

Attachments:
Sample.json (746 bytes)

BrandonA (Staff, 255 posts, joined Aug. 2017)
Hi – I don't think the updates would have been available in the build that you tested. Were you using 17.5.210? The Json Input node will have an updated user interface.

Thank you for posting the sample! I'll see if I can use the sample to put together an example .hip file for one of your use cases.
Edited by BrandonA - March 30, 2019 14:33:16
gyltefors (Member, 31 posts, joined Feb. 2017)
I checked the release notes to make sure, and downloaded 17.5.211, so I have the right version. Though, I am ending up doing a lot of workarounds to get the data in. Setting the source custom file path to `@directory`/`@filename` eliminated copying in the JSON file each time, but otherwise the complexity is more or less the same.

To extract the vertices, I found a way now, generating a CSV file for each vertex list (by the way, the CSV Output node doesn't seem to put the temp files in the pdg temp folder, instead cluttering up $HIP) and using a Table Import in the ROP Geo. Though, an example of how the JSON node could be used to parse the sample file in a more straightforward way would be very helpful. (I also noticed that re-cooking a geo node doesn't really re-cook it unless I manually delete the output files, which may be a bug, as the whole idea of TOPs is to keep track of dependencies for me…)
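The CSV hand-off described here could be sketched in plain Python like this (the column names and the in-memory buffer are illustrative; in the real graph the CSV Output TOP writes one file per work item):

```python
import csv
import io

# Vertex list as two parallel arrays, as produced by the two JSON queries.
xs = [0.0, 1.0, 2.0]
ys = [0.5, 1.5, 2.5]

# Write one row per vertex so a Table Import can read the rows back as
# point attributes. An in-memory buffer stands in for the output file.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["x", "y"])
for x, y in zip(xs, ys):
    writer.writerow([x, y])

print(buf.getvalue().splitlines()[0])  # x,y
```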
Edited by gyltefors - April 1, 2019 03:27:11
BrandonA (Staff, 255 posts, joined Aug. 2017)
You should be able to perform the JSON queries and then use a partition to combine the x and y components of the vectors, if I'm understanding correctly. I've attached a .hip file with this setup.
Edited by BrandonA - April 1, 2019 12:31:58

Attachments:
json_example.hip (90.9 KB)

NDkg (Member, 7 posts, joined Feb. 2019)
Thanks for the help, Brandon! The Json Input now processes as expected, and the wildcards in the query are a fantastic addition. I am still struggling a bit with their outputs, though. It might be my lack of understanding of partitioning, but how would I go about partitioning multiple Json Inputs? So far I've only been able to get a single resulting work item.