Fco Javier
What other software lets you, at any point during modeling, send just a fragment of the model to any operator so particles can happen, or any other madness? I think that is the essence of Houdini. I don't think they will change it to make it parametric; Houdini already has what it takes to model, in its own way.
A bit of nitpicking: this is possible in Softimage ICE to some extent; however, I don't remember people being so excited by multiple dependent networks all around. On the contrary, the built-in ICE particle system was designed to do as much as possible from a single network, even though it would have been more 'natural' for ICE to use at least two, similar to the Houdini SOP–DOP relation.
I have personal experience here: I was running a few free systems based on ICE, and there was always demand for as compact a structure as possible, despite the complexity of a 'one network for all' solution.
Perhaps that's because those users were animators or modelers, focused on the final result, not on 'beauty of code' or network structure. The Houdini user base seems to be somewhat different…
On the positive side, a widely used procedural approach was post-processing of direct modeling, relying on Softimage 'construction modes': things like shrink wrap executed after modeling as an effective retopology method, or custom symmetry options based on ICE ops, and finally a 'clone' option, where the end result of one object's construction history is used as the starting point of another object. In Houdini that's a full 'object merge', and the option is possible and reliable in other apps as well, like 'instance reference' in Max.
One really nice implementation of post-processing is Blender's ability to do non-destructive beveling by keeping the Bevel modifier at the top of the stack, driven by edge properties created in Edit mode, before any of the modifier stack is evaluated.
An even nicer example of an effective, viewport-based procedural approach, IMO, is Autodesk Fusion 360 [www.youtube.com]; around 8:00 they show how it's possible to jump into various stages of NURBS construction.
Now back to Houdini: while it's probably possible to set up everything mentioned, in practice it's a different story. 'Object merge' is prohibitively slow when used as a post-process of direct modeling, and the Edit node behaves rigidly: it's always added at the end of the tree, and Houdini does not recognize an existing Edit node in the middle of the tree and continue with that one (or at least that was the case the last time I tried, in early H16 or so).
In short, when it comes to direct (viewport-based) modeling, it seems to be a similar story to Maya, where the existing network is simply too wide, too general, to allow imposing effective viewport-based rules and control over it. At the same time, the framework is already adopted by facilities, so they can't change it (that is, simplify it) just for the sake of something that barely exists in H, like direct modeling. So the only way seems to be the Maya path: implement something simple like Silo inside Maya today, basically unrelated to the rest of the app except for a plain in-out connection somewhere in the network. Better than nothing, but also not competitive with a free app like Blender today. (Long ago, when Maya started its long journey toward polygonal modeling, Blender was way behind the commercial apps.)
Anyway, just personally, I don't see a problem there. I already feel confident with three polygonal modelers in other apps and don't see any reason to put a fourth one in the mix; actually, I'm trying to reduce that number. Houdini is able to keep a live connection with imported geometry, so that's it: model in something else, then import or reload in H.