I'm trying to access a nested value in a JSON file using the TOP JSON Input node.
This is what I've been trying for the Query field,
but each time I change the processor type to dynamic I get this result:
Error running callback 'onGenerate': No input results tagged as 'file/json' were provided.
Traceback (most recent call last):
File "C:/PROGRA~1/SIDEEF~1/HOUDIN~1.173/houdini/pdg/types\utils\jsondata.py", line 93, in onGenerate
raise pdg.CookError("No input results tagged as 'file/json' were provided.")
CookError: No input results tagged as 'file/json' were provided.
What would be the proper way to set up a query like this?
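For comparison, here's the lookup I'm after expressed in plain Python with the json module (the structure below is a stand-in for my actual file – the names and values are placeholders):

```python
import json

# Hypothetical structure standing in for my actual file: a top-level
# array of objects, each with a nested "transform" dict.
data = json.loads("""
{
  "testobjects": [
    {"name": "box1", "transform": {"tx": 19.9, "ty": -8.9, "tz": 367.9}},
    {"name": "box2", "transform": {"tx": 1.0,  "ty": 2.0,  "tz": 3.0}}
  ]
}
""")

# One nested transform per object -- ideally the node would give me
# one work item per entry, each carrying its own transform values.
for obj in data["testobjects"]:
    t = obj["transform"]
    print(obj["name"], t["tx"], t["ty"], t["tz"])
```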
I have the exact same issue. It appears that the JSON Input node does not evaluate the expression inside the Query field for some reason. If I put a valid query into an attribute @query and set the Query field to `@query`, I get an error message telling me that queries cannot be empty, even though the attribute is properly set on the incoming work items.
For nested queries, I am using the Attribute Copy node, putting the JSON Input on the right input and my work items on the left, and checking the Copy Result Data flag. That generates all the correct inputs for the JSON Input node. I also copy top-level @pdg_index attributes into new attributes, like @testobject_index if I used the sample file above. However, it appears that the JSON Input node ignores custom attributes. Maybe it is special somehow, in that only the JSON file itself can be passed to it and nothing else?
This is just not intuitive. I would appreciate it if SideFX could provide an example of how to parse nested JSON files in TOPs.
It looks like it's just not working as intended. Using the Attribute Copy was progress: I now have no problem getting a work item per testobject, but it's dumping the same transform values into each work item. With this method I'm getting 19.9, -8.9, 367.9 for each work item. The JSON Input parameters
look to process/iterate correctly, but the result values do not reflect that.
Hi – it looks like expressions aren't being evaluated for the “Query” field, which is why the @-expressions were evaluating to nothing. The reason for this is that fields have to be evaluated with respect to a specific work item, and the common practice is to use the work item that gets generated as the context for evaluation. Due to the Array Retrieve operation, we don't know how many work items are going to be produced, so the field isn't being evaluated with respect to any work item. I believe the most intuitive approach (which would fix this issue) would be to make this field evaluate with respect to the upstream work item from which it is generating.
gyltefors I would vote for that. I always assumed that any node would always evaluate based on the input work items, rather than the generated ones.
The vast majority of parameters are evaluated with respect to the generated work item (and not the upstream work item). There are a few exceptions, such as the “Item Count” parameter on Generic Generator, which determines how many work items will be generated.
Hmm, it does feel a bit counterintuitive when you're used to SOPs, where each SOP works on its inputs. Also, would it not be more in the spirit of TOPs if the JSON node could output sub-branches that could in turn be cascaded into other JSON nodes? Storing away index parameters for each parent branch makes the graph much more complicated than it has to be.
NDkg wrote: “I now have no problem getting a WorkItem per testobject, but it's dumping the same transform values into each WorkItem… the JsonInput parameters look to process/iterate correctly, but the result values do not reflect that.”
Hi, I was able to reproduce your TOP graph and test it out using the JSON file you supplied. It looks like there was a bug in the evaluation of the multiparameter fields, which is why they were returning the same values in each work item. This is now fixed in the next daily build!
I've also updated the Array Retrieve “Query” field to evaluate with respect to the upstream work item. This makes it consistent with how Generic Generator's “Item Count” field works (which also evaluates with respect to the upstream work item). These parms are an exception to the rule – they must be evaluated before any work items are created (because these parms actually dictate how many work items will be created), so the most logical behaviour for them is to evaluate with respect to the parent work item. All other parameters evaluate with respect to the work items created on the current node.
With that being said, I'd like to make nested querying easier. The planned solution is to add a wildcard operator to the query language. This would reduce your graph to two nodes: a File Pattern node to import the JSON file, and a single JSON Input node that uses the Array Retrieve operation with these parameters:
would be a valid query. This would remove the need for multiple JSON nodes in a row to make a single nested query. Is there a particular reason you would prefer the subtree approach?
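To illustrate the intended semantics, the wildcard fan-out can be sketched in plain Python – this is only an illustration of the behaviour, not the actual query implementation, and the exact syntax is an approximation:

```python
import json

def query(node, path):
    """Resolve a dotted query path like ["models", "*", "transform"]
    against parsed JSON. "*" fans out over every element of a list
    (or every value of a dict), collecting all matches."""
    if not path:
        return [node]
    head, rest = path[0], path[1:]
    if head == "*":
        children = node if isinstance(node, list) else list(node.values())
        return [m for child in children for m in query(child, rest)]
    return query(node[head], rest)

data = json.loads('{"models": [{"transform": 1}, {"transform": 2}]}')
print(query(data, ["models", "*", "transform"]))  # fans out over the array
```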
One last thing! Is it at all possible for you to post a sample of your JSON so I can use it for testing?
I'll prepare a sample JSON file and post it later. One thing I noticed while testing the new daily build: it allows nested queries, with some workarounds, using the Attribute Copy node. The query is a little bit more complex than the example above, as each model has a type property that defines what other properties are available. So, I would query
Then filter out, say, all models of type "Scene" to generate one work item per scene. For this I used a Partition node with @type="Scene", expanded the partition, and fed it into a new JSON node to read the scene properties like:
However, before feeding it into a JSON node, I needed the Attribute Copy as discussed earlier. This node does not work after a partition and expand; a workaround is to put the Attribute Copy before the partitioning. If the JSON node did not consume that attribute and instead passed the JSON filename downstream, there would be no need to copy it back each time.
Just this simple case of generating work items for each scene requires quite a complex setup with the existing nodes. And this is before even fetching all the data needed for each scene, like curves and points, each being a model of a specific type in the JSON file. Depending on the type, it will be fed into a different HDA processor, like a mountain generator, to add a mountain to the scene.
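In plain Python, the per-scene fan-out I'm after boils down to this kind of filtering (a sketch of the intent – the property names match my file, the sample values are made up):

```python
import json

# Made-up sample mirroring my file's shape: a flat list of models,
# where the "type" property decides what other properties exist.
data = json.loads("""
{"models": [
  {"type": "Scene",  "name": "level_01"},
  {"type": "Cavern", "name": "cave_a", "Parent": "folder_1"},
  {"type": "Scene",  "name": "level_02"}
]}
""")

# One work item per Scene -- this is what the partition-and-expand
# combination is emulating in the TOP graph.
scenes = [m for m in data["models"] if m["type"] == "Scene"]
for s in scenes:
    print(s["name"])
```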
Hi there! The node has been updated with the changes and it will be available in the next daily build of Houdini 17.5. Here's a summary of the changes:
- Introduced the concept of a wildcard operator to the query syntax - Improved the node UI to make it more clear how it gathers inputs - Can now generate without any upstream items - The index of created work items is now set to the parent item's index - For the Array Retrieve operation, added an option to not split results into separate work items - For the Array Retrieve operation, will now attach an array index of the object if it is in an array - For the Array Retrieve operation, can now use the wildcard operator in the field parameter to gather all anonymous objects in an array (such as an array of integers)
The node documentation was updated as well and goes into more detail.
Much more complicated queries should be possible with the JSON node now, and should eliminate a lot of the overhead work that you've been having to do.
Thanks for the update. I tested the latest build, but I'm struggling to extract my data using the JSON node. I prepared and attached a stripped-down version of my JSON data file. It is first divided into packages, and then each package contains a number of models. The models are in a hierarchy as defined by the Parent property. In the sample, the Scene contains one UserFolder, which contains one Cavern. The Cavern has a list of 2D vertices.

I am trying to make a TOP network that creates a base geometry for each scene work item, and one curve geometry for each cavern work item. Each user folder work item would merge all contained geometry, in this case the curve for the cavern. The merged curves will be partitioned together with the base geometry for the corresponding scenes, and modify the base geometry to generate caverns (and a ton of other stuff not included in the sample).

Even with the new version of the JSON node, this is quite a nightmare to untangle. For example, I have 2D vectors. I can make one query to get an array of x values, and another query to get an array of y values. I am now trying to figure out how to combine these attribute arrays into vectors. I am adding a ROP Geo with a wrangle node, but have no idea how to access the x and y TOP attribute arrays…
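For reference, the combination I want is just a zip in plain Python (the values here are made up):

```python
# Two parallel attribute arrays, as produced by the two separate queries.
xs = [0.0, 1.5, 3.0]
ys = [2.0, 2.5, 4.0]

# Pair them element-wise into one list of 2D vertices.
verts = list(zip(xs, ys))
print(verts)  # [(0.0, 2.0), (1.5, 2.5), (3.0, 4.0)]
```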
I checked the release notes to make sure, and downloaded 17.5.211, so I have the right version. Though I am still doing a lot of workarounds to get the data in. Using the source custom file path as `@directory`/`@filename` eliminated the need to copy in the JSON file each time, but otherwise the complexity is more or less the same. To extract the vertices, I found a way now, generating a CSV file for each vertex list (by the way, the CSV Output node doesn't seem to put the temp files in the PDG temp folder, instead cluttering up $HIP) and using a Table Import in the ROP Geo. Still, an example of how the JSON node could be used to parse the sample file in a more straightforward way would be very helpful. (I also noticed that re-cooking a Geo node doesn't really re-cook it unless I manually delete the output files, which may be a bug, as the whole idea of TOPs is to keep track of dependencies for me…)
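The CSV detour looks roughly like this in plain Python (the column layout is my own convention, nothing the nodes mandate):

```python
import csv
import io

# One vertex list per cavern; each row is written as x,y so the
# table importer can read the columns back in as point positions.
verts = [(0.0, 2.0), (1.5, 2.5), (3.0, 4.0)]

buf = io.StringIO()  # stand-in for the per-work-item output file
writer = csv.writer(buf)
writer.writerow(["x", "y"])  # header row naming the columns
writer.writerows(verts)
print(buf.getvalue())
```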
Thanks for the help, Brandon! The JSON Input now processes as expected, and the wildcards in the Query are a fantastic addition. I am still struggling a bit with their outputs – it might be my lack of understanding of partitioning – but how would I go about partitioning multiple JSON inputs? So far I've only been able to get a single resulting work item.