Just bumping this: it would be extremely helpful if we could access the full data of drop events inside Houdini (e.g. including web URLs), not just files.
I'd also like to get Shotgun drag/drop working here too.
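In the meantime, file drops can at least be intercepted with Houdini's externaldragdrop.py script hook, though as noted above it only ever receives file paths, never URLs or the full drop data. A minimal sketch (the filtering logic is purely illustrative):

```python
# Sketch of a scripts/externaldragdrop.py hook. Houdini calls dropAccept()
# with the list of dropped file paths; returning True tells Houdini the
# drop was handled. The .hip filtering below is illustrative only.
def dropAccept(files):
    """Handle .hip-style files ourselves; let Houdini process the rest."""
    hips = [f for f in files if f.endswith((".hip", ".hiplc", ".hipnc"))]
    for path in hips:
        print("custom handler would open:", path)
    return bool(hips)
```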
Found 20 posts.
Technical Discussion » Event Trigger for python callback on drag and drop
- mattebb
- 263 posts
- Offline
PDG/TOPs » Making TOPs less confusing
kenxu
Ok, some notes from our earlier meeting on what to do about the usability issues. This is not the whole list, but just what's relevant to the discussion on this thread.
That's great to hear, thank you very much!
I think one of the stumbling blocks for TOPs is how different it is to the rest of Houdini. The more consistent these potential changes/solutions are with other aspects of Houdini, the easier it will be for people to start picking it up.
cheers
PDG/TOPs » Making TOPs less confusing
mattebb
At the very least there should be a local scheduler added by default when you create a TOPnet (like the copnet inside /img). Seeing this unusual floating scheduler node immediately would go some way towards communicating how TOPs works differently to other Houdini networks.
I just found that this is actually a bug when creating a TOP network inside a ROP network. Creating it inside SOPs or OBJ does indeed make a local scheduler by default, so it seems like a simple oversight, which is good! I've logged the bug with support.
PDG/TOPs » Making TOPs less confusing
I've recently used a bit of TOPs for the first time and while I can see it's got a lot of potential, it's been hard. I'm a pretty experienced Houdini user, but figuring out how TOPs works has been rather confusing for me. I've talked about it with people I work/have worked with, and their responses have generally been along the lines of “seems like it could be cool, but I had a look and I'm not touching it again”.
This is a shame because IMO it's small generalist studios without established pipelines or lots of technical R&D staff who can potentially get the most value out of TOPs, however it's just not very accessible to them.
I'm confident that the tech behind it is solid, but the usability and discoverability are letting TOPs down right now. On top of this there are quite a few inconsistencies with the way the rest of Houdini works, which adds to the confusion. Here are a few ideas on things that can and should be improved, and potential solutions.
Discoverability
TOPs should have its own top-level context, but that's been mentioned before already. At the moment, if you put down a TOP network and start poking around in the tab menu adding nodes, nothing works. It's very confusing! Turns out you need to add a scheduler, which is non-obvious and inconsistent (other networks in Houdini don't work like this).
At the very least there should be a local scheduler added by default when you create a TOPnet (like the copnet inside /img). Seeing this unusual floating scheduler node immediately would go some way towards communicating how TOPs works differently to other Houdini networks.
Discoverability (Execution)
Also on discoverability, it's very unclear at a glance how to make the TOP network actually do anything. Other networks in Houdini have a visual UI element which triggers a cook, but there's nothing like that for TOPs. These are the most important actions, needed to actually make TOPs work, but they're hidden away in right-click menus and hotkeys. No other networks in Houdini work this way either (you never RMB to cook a SOP or render a ROP), which makes it even less likely that people will stumble across it.
TOPs really needs a consistent, visual, obvious way to trigger a cook. For SOPs this is the display flag (kinda), and ROPs have the Render button which is easy to find, right at the top of every ROP node. TOP nodes could have something similar on every TOP, which would be easy to understand for people already familiar with ROPs (e.g. see the image attachment).
Terminology
The idea of ‘dirtying a graph’ is a highly technical programming/graph theory term which means absolutely nothing to a huge proportion of Houdini users.
Can we please change ‘Dirty’ to something like ‘Reset’ or ‘Clear’? E.g. ‘Reset and Cook Selected Node’?
UI/Parameter Layout
Usually Houdini's parameter panes are organised top to bottom, with the most important, often-used parameters at the top, and lesser-used parameters (or ones dependent on higher ones) further down. This is good because when you're trying a new node, you have a sense of what's important and what you should start poking at.
Many of the TOPs, on the other hand, have the ‘Work Item Generation’ and/or ‘Cache Mode’ parameters always at the top. These seem rather obscure to me - maybe they were more important in earlier versions of TOPs, but with the ‘Automatic’ mode it seems like something that rarely if ever needs to be used in the course of day-to-day work. These parameters could be moved to the bottom, or to a consistent secondary tab, so they're not always the first thing you see when you select a TOP node.
There are also a few cases where things could be clearer with better parameter naming. One thing that I didn't understand immediately was ‘Evaluate Using’ on the ROP Fetch, which didn't mean much to me. Perhaps this could be changed to something like ‘Generate Work Items’: ‘One per Single Frame’ / ‘One per Frame Range’
Anyway, that's just a few things I came across initially, will mention more as I get deeper. I also want to make clear that I do like TOPs and the potential for what it can do. Some of the other UI bits like the visualisation of work items is fantastic, I just would love to see it more accessible and easier to get into and make use of.
Edited by mattebb - June 26, 2019 09:11:16
PDG/TOPs » Batching/wedging failure with Redshift
No for loops, and just a single GPU and only one concurrent process (the ‘single’ tick box on the local scheduler). I'm just using TOPs since I wanted to give it a try, and since it seems like it might streamline the workflow a bit for me, not for any parallel wizardry.
Re. the original problem, come to think of it, is there any reason why output files are a requirement of batching in the first place? I can imagine there could be situations where you want to run a ROP (maybe not SideFX built-in ROPs, but custom pipeline tools) as a batch across multiple frames but not necessarily generate output files, or even an output file per frame.
PDG/TOPs » Batching/wedging failure with Redshift
Update: I just took a look in that python script (rop.py) that's erroring out and I think I've found the issue. It seems to be looking for an output file parameter on the ROP, but the script only has a hardcoded list of parameter names, which obviously just covers the relevant parameters found on built-in SESI ROPs.
Redshift's output parm name is RS_outputFileNamePrefix, so it clearly doesn't get found, and errors out. I did a hacky workaround by adding a spare parameter ‘vm_picture’ to the Redshift ROP, channel referencing Redshift's output path value, and now it all seems to be working.
Ideally:
a) In the short term there should be a better error message that says the ROP node you are fetching is unsupported, rather than just the python error.
or
b) There should probably be a well documented, better way of accessing this output path information from ROP nodes (maybe even a parameter on the ROP Fetch that you could channel reference to). Many vfx facilities will have their own collection of ROP hdas to do various things in the pipeline and that hardcoded python script won't have any knowledge of those parameter names either.
thanks
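If it helps anyone, the failure mode can be sketched like this; the built-in parm names below are assumptions on my part, and the actual hardcoded list inside rop.py may well differ:

```python
# Hypothetical stand-in for the output-parm lookup rop.py appears to do.
# The names in this list are assumed; the real script's list may differ.
KNOWN_OUTPUT_PARMS = ["vm_picture", "sopoutput", "picture"]

def find_output_parm(parm_names):
    """Return the first recognised output-path parm name, or None."""
    for name in KNOWN_OUTPUT_PARMS:
        if name in parm_names:
            return name
    return None

# A stock Redshift ROP only has RS_outputFileNamePrefix, so the lookup
# returns None and parm.name() raises the AttributeError. Adding a spare
# 'vm_picture' parm (channel-referencing RS_outputFileNamePrefix) makes
# the lookup succeed.
```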
PDG/TOPs » Batching/wedging failure with Redshift
I'm doing some wedged rendering with Redshift and thought it would be a good opportunity to try TOPs for the first time. I was able to get it working on a basic level with a Wedge TOP (3 workitems) connected to a ROP Fetch TOP, referencing my Redshift ROP (900 frames each, so 2700 workitems).
The problem is that each workitem seems to be a separate command, and therefore a separate Houdini process. I'd like to batch each wedge together into the one Houdini process, so I don't suffer the Houdini startup and scene cook time penalty each frame.
I'm new to TOPs and find it all a bit confusing, but it seems like the ‘All Frames in One Batch’ option on the ROP Fetch could do what I need. When I generate the work items they look right (each work item's command represents a frame range, not individual frames), however when I cook the TOP, all my work items fail with this python error:
Loading .hip file C:/myfilepath/waveforms_v04.hiplc.
Traceback (most recent call last):
  File "C:/myfilepath/pdgtemp/47396/scripts/rop.py", line 500, in <module>
    cooker.cookBatchFrames(args)
  File "C:/myfilepath/pdgtemp/47396/scripts/rop.py", line 253, in cookBatchFrames
    """.format(args=args, index_expr=index_expr, parm_name=parm.name())
AttributeError: 'NoneType' object has no attribute 'name'
[Redshift] Closing the RS instance. End of the plugin log system.
I've tried it with fetching a Mantra ROP and it seems to work ok. I'm using Houdini 17.5.173.
thanks
Edited by mattebb - June 9, 2019 02:42:54
Technical Discussion » How to use xnoise in the OpenCl node?
I was wondering the same thing, then I realised there's a good example inside the Gas Turbulence DOP - in the OpenCL pathway, that is.
I know this is an old topic but just posting for the benefit of anyone else who googled this like I did.
Edited by mattebb - April 1, 2019 07:44:21
Houdini Lounge » Houdini Engine for Nuke
Has anyone given a Houdini Engine Nuke plugin any consideration?
Some of the compers here would find it pretty darn useful for manipulating 3D geo (e.g. projections) in the Nuke 3D scene.
Houdini Lounge » Please Help us Beta Test the New Sidefx Website!
It's pretty nice! I like that things are given a bit more space, rather than being crammed in, but I think it's gone a bit too far in this direction.
Even on this HDish monitor the big banner on the main page takes up half the screen. I think the size of the banners on the other pages (e.g. customer support) works better. On the main page there's also a lot of extra padding sitting below the preview text of those articles (e.g. Avengers/Houdini Engine/etc) which could be tightened up.
My impression of the main pages that you get to after clicking on film or games or whatever is that all the info seems quite vague and hidden. I think the sort of people looking at Houdini would want more concrete, straight-up examples of how Houdini works and what it can do - e.g. more concrete info (text/images) showing why Houdini is different to other 3D apps.
If I was considering Houdini for the first time, I'd be more impressed by seeing in-situ screenshots and animations of Houdini producing great things - e.g. showing the power of procedural workflows, rather than assorted renders and quite general marketing text.
My 2c!
Technical Discussion » Alligator Noise formula
Hi,
Here at work we've got a proprietary renderer, and there have been some requests from Houdini users to get Alligator noise implemented in its shading system, to make it much easier for us to interchange look dev between Mantra and our other renderer.
After a bit of hunting, the R&D team couldn't find any definitive info on what Alligator noise consists of, in order to replicate it. Since many of the other noise functions (e.g. Worley/Perlin/simplex) are all quite public, we were wondering if there was any public documentation on Alligator noise? Or if not, is that something you guys at SideFX would consider?
thanks!
Technical Discussion » Crowds - more detailed info re. agents/clips?
Hi,
We've been trying to figure out the nitty gritty of what's actually going on behind the scenes in the new H14 crowd tools, in order to do some customisations for our pipeline here. It's great that a lot of data is accessible easily via attributes, but there are still some things that are a bit opaque and hard to fully understand.
We already have animations baked out into our own format that we have previously been reading in and applying to our characters in Houdini. These animations are already available and checked in to our own asset library, and it would be a bit annoying to have to then re-generate additional files on disk, so we'd like to be able to have a setup that's ‘live’, reading in animation with our own tools, without having to bake out additional clip files.
So a few questions:
* I notice that when using the Input: Scene mode on the Agent SOP and pressing ‘Reload’, it seems to take the bone transformations from the rig object subnet and ‘bake’ them to a clip internally. I don't see any output files, so is this doing the same thing as an agent bake, but storing it in memory? What actually happens when you press Reload?
* If this is the case, is it possible to somehow bake multiple animation clips in memory in advance, from the one agent rig subnet, so that we can switch between clips without having to have multiple rig subnets - one for each animation?
Technical Discussion » creating thousands of nodes, memory / speed?
Hi, I'm working with Carsten.
In the end, it looks like it was just the sheer number of nodes - nothing was ever getting cooked. We need to instantiate hundreds (or perhaps thousands) of HDAs which themselves contain a few other nodes. It turns out that one of those internal nodes was a moderately complicated VOPSOP containing ~75 VOP nodes. It seems like these internal VOP nodes also contribute to the total node memory budget, so in the end we had hundreds of thousands of nodes in total, including all sub-children.
To reduce the total number of nodes, we compiled this VOPSOP to a VEX Sop, which reduced our time spent creating, and memory usage, to a third of what it was originally.
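Rough numbers for why this blows up so quickly (the counts below are the approximate figures from the description above, not measurements):

```python
# Back-of-the-envelope node counts for nested HDAs (figures assumed from
# the description above).
hda_count = 1000       # hundreds to thousands of HDA instances
nodes_per_hda = 5      # a few nodes inside each HDA (assumed)
vops_inside = 75       # VOP nodes inside the complicated VOPSOP

total_with_vopsop = hda_count * (nodes_per_hda + vops_inside)
total_with_vex = hda_count * nodes_per_hda  # after compiling to a VEX SOP

print(total_with_vopsop)  # 80000
print(total_with_vex)     # 5000
```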
Work in Progress » Spherical Harmonics in VOPs
I've done a bit of experimenting, and implemented some VOPs which can be used to generate and evaluate spherical harmonics inside a VOP net. I've written up some info on what this is all about here: http://mattebb.com/weblog/spherical-harmonics-in-vops/ [mattebb.com] and you can download the OTL and a hip here: http://mattebb.com/projects/houdini/houdini_sh_otl_hipnc.v001.zip [mattebb.com]
Or just check the video - hope it's understandable!
https://vimeo.com/40133531 [vimeo.com]
I have ideas for some more practical uses for these tools in mind; will update here if I can get time to do it.
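For anyone curious what these VOPs are computing, here are the first two real SH bands in plain Python, using the standard real spherical harmonics constants (the OTL's internal layout may of course differ):

```python
import math

# Real spherical harmonics basis, bands l = 0 and l = 1, for a unit
# direction (x, y, z). Ordering here is [Y00, Y1-1, Y10, Y11].
def sh_basis_l1(x, y, z):
    c0 = 0.5 * math.sqrt(1.0 / math.pi)    # Y_0^0 constant
    c1 = math.sqrt(3.0 / (4.0 * math.pi))  # band-1 constant
    return [c0, c1 * y, c1 * z, c1 * x]
```

Projecting a function on the sphere onto these basis values, then summing coefficient times basis, is the core of what the generate/evaluate pair does.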
Work in Progress » pathtracer in vops
I've been doing some silly experiments with the intersect VOP in spare time lately. I previously made a little ray trace renderer, tracing rays from the points of a grid into the scene, shading, and storing the result in Cd point attribute of the original grid. I wanted to go a step further though so I upgraded it to a conceptually simpler, prettier looking, but slower pathtracer.
more info and hip file on vimeo here:
http://vimeo.com/21436831 [vimeo.com]
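The heart of a setup like this is one intersection test per grid point; here's a pure-Python ray/sphere stand-in for what the Intersect VOP does against real scene geometry (illustrative only):

```python
import math

# Nearest ray/sphere intersection. The ray direction is assumed to be
# normalised (quadratic 'a' coefficient is 1) and the ray origin is
# assumed to be outside the sphere.
def intersect_sphere(orig, direc, center, radius):
    """Return the nearest hit distance t >= 0, or None on a miss."""
    ox, oy, oz = (orig[i] - center[i] for i in range(3))
    b = 2.0 * (ox * direc[0] + oy * direc[1] + oz * direc[2])
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t >= 0.0 else None
```

A path tracer then just repeats this from each hit point along randomly sampled bounce directions, accumulating the shaded result into Cd.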
Work in Progress » silver ring (rapid prototyped)
Nothing really visible on the edges; the straight edges (especially in those extruded lines) came through very sharp and clear. It looks a fair bit better in real life than it does in the large image above - it's quite small, so you don't notice the imperfections as much.
On the flat areas, there's a light texture from the layering process that gives it a very faint striped matte appearance. Rather than get these areas polished, I really liked how it looked - the patterned lines from the process matched the rest of the design really well.
Work in Progress » silver ring (rapid prototyped)
hey jesse! After looking at the examples on the shapeways site, I wasn't fully confident with the level of quality and detail that they could provide. I ended up using http://www.rapidprototype.com.au/ [rapidprototype.com.au] , based in St Peters, who I was really happy with - fast turnaround too if you're local.
Work in Progress » silver ring (rapid prototyped)
Hi, first-time poster here.
I've been learning Houdini lately with Apprentice, and for my girlfriend's birthday I decided to set myself a challenge: model some jewellery in Houdini to be rapid prototyped/cast in silver - my first completed practical project in Houdini! I tried to keep it as procedural as possible, and used Blender to add a bit of detailing and export to STL. The final print/cast came out great, was lots of fun.
[mke3.net]
I made a little screen recording of the sop network here: http://vimeo.com/18110527 [vimeo.com]