Houdini Evolving

Member
148 posts
Joined: March 2016
Offline
First, I am completely committed to Houdini as my primary platform for non-linear organic modeling and animation, and I am making great progress in this regard. Second, I want to encourage the Houdini developers to continue enabling Houdini for world-class organic modeling with a story from when I first worked at Data General Corporation in the 70s. This was my first job in the computer industry, which culminated in work on the first true 32-bit minicomputer (the subject of the Pulitzer Prize-winning non-fiction book "The Soul of a New Machine"; yes, I worked with Tom West, the protagonist of the book). Now the story:

The very first project I was assigned to was the development of a visual, form-based replacement for keypunch card entry using two-dimensional graphics. This was pretty exciting at the time (lol). We decided to interview keypunch operators, who at that time were typists who had migrated to keypunch, to see what they wanted.

Their feedback was to reproduce the manual keypunch machine graphically on the computer screen, which was obviously a bad idea. The new interface did not require Hollerith codes, since those were exactly what it made obsolete, but the keypunch operators wanted that functionality (we did not provide it) because it was what they knew.

The point is this: you need to listen to your old user base, but with an eye to what novel technological approaches make possible. In the case of Houdini, the key is the non-linear workflow, which is enabled by Houdini's procedural paradigm (and significantly not the other way around; the disruptive technology is the non-linear 3D modeling workflow).

Similarly, non-linear video editing revolutionized post production in the late 80s and early 90s. I believe Houdini will do the same for "soup to nuts" 3D modeling, especially for organics.

So my humble suggestion to the Houdini developers is to listen to the feedback of the existing base, but don't lose sight of making Houdini the industry's go-to 3D application for both VFX and 3D modeling of organics, which I am positive is possible based on my own experience with "soup to nuts" Houdini modeling and animation of 3D organic faces.

At the other extreme, you don't want to just reproduce linear modeling using procedural methods; that is wrong. There are novel, less complex yet more powerful 3D modeling capabilities enabled by Houdini's non-linear procedural technology that go beyond that. I know because I am experiencing this first hand: after many years of linear 3D facial modeling, I have successfully migrated to Houdini's non-linear workflow and am working procedurally in conjunction with the new version 15 3D modeling interface. This includes new Houdini procedural nodes I am developing for my non-linear 3D modeling workflow.

Very Respectfully,

Lars
Lars Wood
Future Vision Guide
Advanced Research And Development
Member
323 posts
Joined: Jan. 2015
Offline
Hi Lars,
maybe you could elaborate a bit more what you are suggesting.
Do you have an example that shows your concept?
I am intrigued.

kind regards

Olaf
Member
148 posts
Joined: March 2016
Offline
Lots of things. To start with, I think it is a huge mistake to populate the shelves with nodes. It sometimes makes the sequence of operations confusing. I no longer use the shelves except for one or two composite operations that should be implemented as nodes in the network editor.

Everything should be available through the network editor, hard stop. The reason is that dumping functionality onto the shelves is a side effect of linear workflow. It doesn't help transition newcomers to Houdini; it only slows them down as they apply the linear workflow they are used to by punching buttons on the shelf.

The network editor in general should be enhanced further. Get people focused on that, and the learning curve for Houdini and the non-linear workflow becomes much easier.

There are other changes having to do with enhancing node creation, but I have to think about this a little more. As an example, years ago I did a lot of microelectronics design using the Mentor Graphics Design Station (or whatever it was called; I can't remember, I am too old), which is a non-linear workflow. You had the ability to create modules (which are like nodes in Houdini) using only a graphical editor and then publish them to a module (node) library.

So you could build complex adders, multipliers, muxes and other components using the graphical interface. Houdini has some of this functionality (maybe more; I haven't dug into it too thoroughly at this point since I am focusing on other things in Houdini), but this is an area that needs to be enhanced. Eventually, get rid of Python programming of nodes altogether and have only a graphical node design interface.

First results of facial organic modeling with Houdini only: http://fvg.com/non-linear-workflow.html [fvg.com]

This took me 1/3 the time of using the linear approach. Amazing.

PS: Please, please provide a download zip archive of the help files!

VR

Lars
Member
453 posts
Joined: Feb. 2013
Offline
Everything should be available through the network editor, hard stop.

I kinda agree with this:

Generally the shelf is a collection of macros. That is a very good concept.
Did you know that you can drag and drop nodes onto the shelf to create shortcuts for their creation? Basically, that is what the shelf is all about: dropping nodes into your network as fast as possible.
Essentially, it is already the answer to the question of how to improve node creation. You just need to understand how it is intended to be used.

I agree that people use it wrong, and SideFX should probably clarify this more. They should create a guide of good practices for using the shelf. But there is no point in ripping this system out.

Nevertheless, I think each shelf item should be provided in the form of a digital asset with a bunch of (on/off) switches. I strongly believe that every shelf tool should also be an HDA. And if a shelf tool cannot be represented as an HDA, then SideFX should create nodes that would make it possible. Essentially this would lead to an expansion of the node system in a meaningful way.
For example, as far as I understand, the entire pyro shelf could be combined into one (complex) HDA. Users could dive directly into tweaking, or rip the HDA apart to figure out how it works and use parts of it for their own more complex needs.
Member
4189 posts
Joined: June 2012
Offline
Interesting: Can you explain the advantages of using Mathematica instead of Vex please.

Thanks!
Member
453 posts
Joined: Feb. 2013
Offline
MartybNz wrote:
Interesting: Can you explain the advantages of using Mathematica instead of Vex please.

Thanks!

Are you sure you posted in the right thread? :wink: Otherwise, who are you referring to with "you", and why are you asking about Mathematica and VEX?
Edited by - March 18, 2016 18:27:27
Member
148 posts
Joined: March 2016
Offline
MartybNz
Interesting: Can you explain the advantages of using Mathematica instead of Vex please.

Thanks!

VEX is a scripting language, whereas Mathematica is an algorithm powerhouse well suited for the rapid design and efficient implementation of extremely complex algorithms of virtually any computational complexity.

Further, Mathematica has seamless massively parallel processing with integrated GPU computing. Mathematica implements Stephen Wolfram's knowledge-based Wolfram Language: https://youtu.be/_P9HqHVPeik [youtu.be]

I am in the process of implementing and testing Mathematica software, whose functionality appears as nodes within Houdini, that automates character facial modeling and rigging from a single reference image. http://fvg.com/technology.html [fvg.com]

I have been working with Mathematica for 27 years and first met Stephen Wolfram at MIT while he was working on the ideas that led to Mathematica.

I first had the notion to fully automate modeling and animation in the 80s, after seeing the movie Gandahar, while a senior scientist working on artificial neural network algorithm research and automated microelectronic circuit design (microelectronic circuit design and Houdini are fundamentally related technologies, since both are node-based design architectures), but the technological infrastructure was not in place at that time to support the objective of automating modeling and animation for feature film production.

Coupling Houdini to the Mathematica kernel is the infrastructure for this. This has been a lifelong personal dream that I am now able to pursue after decades of working on other technologies and applications (yet I always kept my hands in CGI, from subatomic molecular modeling http://fvg.com/science-of-the-unseen.html [fvg.com] to human facial character modeling and animation development http://fvg.com/examples.html [fvg.com]).

Indeed, I recently communicated to Stephen by email my progress automating character modeling and animation by coupling the Houdini and Mathematica platforms. If I am successful, I believe my work will fundamentally change animated feature film production.

Note: I previously connected Mathematica to Maya, but the linear workflow architecture dominates Maya, so it's much more difficult to do what I am doing in the current Maya implementation. Maya needs a much more robust non-linear architecture beyond its current node-based system. Manipulating and orchestrating Maya nodes programmatically deep within Maya can be done (I have done it myself: http://fvg.com/examples.html [fvg.com]), but it is neither trivial (and it is mutable by Autodesk) nor as straightforward as with Houdini, which is architecturally correct for my purposes.

VR

Lars
Member
148 posts
Joined: March 2016
Offline
DASD
MartybNz
Interesting: Can you explain the advantages of using Mathematica instead of Vex please.

Thanks!

Are you sure you posted in the right thread? :wink: Otherwise, who are you referring to with "you", and why are you asking about Mathematica and VEX?

You're correct; I responded without realizing he was replying to a different thread. This should be a new thread with his question as the initial post.

VR

Lars
Member
453 posts
Joined: Feb. 2013
Offline
@FVGDOTCOM

A couple of things that spontaneously come to mind when I see your work:
The ideas are good, but the results will go nowhere without an artist who is up to date on modern workflows, tools and skills.
Here's the gist of it:
Modern characters look more like:
http://cdn.wccftech.com/wp-content/uploads/2015/06/Uncharted-4_drake-looking_1434429044.jpg [cdn.wccftech.com]
and this:
http://icdn2.digitaltrends.com/image/rise-of-the-tomb-raider-press-image-0001-970x546-c.jpg [icdn2.digitaltrends.com]
in games.
And in movies they might look like something like this:
http://d23ipcd5miwp4q.cloudfront.net/wp-content/uploads/2015/11/Zootipia.jpg [d23ipcd5miwp4q.cloudfront.net]

The bunny has (I think) 40,000 groomed individual hairs on its head. Maybe it was 400,000… It doesn't really matter. Anyway, the shape of the head is almost generic at this point. What makes the character interesting are the details, its range of possible detailed expressions and, of course, its animations.

The process might be something like this:
https://www.youtube.com/watch?v=JbQSpfWUs4I [youtube.com]

We are at the point where we can (and do) animate wrinkle maps on the faces of the main characters in games. The basic facial bone animation is done with dozens of bones and many hand-sculpted blendshapes.
There are plenty of automatic rigging tools available, and most studios develop their own scripted rigs. Even so, they constantly have to adjust, improve and fix them up as technology evolves and the requirements of their games change.
For movies it is even more extreme. Every character receives a rig that suits its role and complexity in the movie. Sometimes characters are expected to deform as if they were drawn on paper. Needless to say, that is not easy to achieve in 3D. But it is being done all the time:
https://youtu.be/T3nqmGgnJe8?t=77 [youtu.be]
New technologies are invented to groom and render the characters' hair, to create their complex cloth animations, etc.

Many artists don't start from scratch when developing new characters. Most will start from a basemesh of varying complexity. Some prefer to sculpt from a sphere; others use something like MakeHuman, DAZ3D or the ZBrush character builder. The basic shape is achieved within minutes. The difficult part is to achieve a richly detailed, interesting and unique sculpt that depicts a fascinating, unique character.
The topology your process creates would be a rather bad start for a sculpt. You might as well start from a sphere. But it doesn't matter anyway, because new sculpting tools allow an artist to retopologize automatically and quickly while they create. The sculpt has to be further cleaned up and processed after sculpting anyway.
There is also stuff like https://facerig.com/ [facerig.com], and I suspect this will become more common for games in the foreseeable future.

In short: There is still a lot of good stuff to be developed, but you need to check out what is already out there.
Member
148 posts
Joined: March 2016
Offline
DASD
In short: There is still a lot of good stuff to be developed, but you need to check out what is already out there.

Thanks for your post. I am familiar with everything you wrote, but it is good that you summarized it in one place, and if you don't mind I would like to use your post as a summary of the current state of the art.

The models on my website were mostly generated automatically from images, and they represent the second generation of the technology, which is already being rapidly refined. In contrast, this was the first generation of the technology, which is much rougher: http://fvg.com/the-entertainer.html [fvg.com]. There is a big difference in the generated mesh, and the second-generation automated technology was developed within two weeks of the first, using Mathematica-enhanced code. In the first generation, many basic things went wrong (bad, heavy geometry, flipped normals, etc.). This is corrected in the second generation. My goal is to automate; eventually the technology I am developing may surpass what can be done by hand. That is my goal, i.e. my competition is what you described in your post. I intend to automate all modeling and animation using non-linear machine intelligence. There will be some aspects that are not worth the effort; that is always the case. As an example, I developed algorithms to automate therapeutic small-molecule design, but it turned out only 99% of that was necessary, and the rest was better finished manually by chemists.

The Manhattan Project notwithstanding, new technology paradigms very rarely bump the established paradigm out of the gate. As an example, the telephone was rejected by Western Union, whose investment was in long-distance telegraph. So Alexander Graham Bell focused on an unmet market, which turned out to be communications within a single building, floor to floor. That was the beachhead for the telephone. Eventually Bell bought Western Union when it went bankrupt, since the old Western Union telegraph technology could not compete with the new telephone paradigm.

This subsumption of the telegraph by the telephone took several years. I am old, but I still have time, assuming I don't die for a while (I figure I am good for ten years; my wife says 20+ years, ha ha), and as this gets off the ground, others will come in and continue what I started. There is already interest in moving forward with me from industries orthogonal to the animation industry who want to be players here. Disruptive technology is flanking; it's not generally predicted or expected.

Note: the third-generation technology automates the creation of blend shapes from facial images, and the geometry is created from a single reference image instantaneously using a single Houdini node connected to Mathematica's distributed multi-processing sub-kernels. I work on this tirelessly.

The generation-one technology was used to test automated blend shapes and lip sync from voice with background noise. The Mathematica code filtered the music and separated the voice from an unmastered standard stereo WAV file, processed from a noisy MP3. Houdini + Mathematica == non-linear automation of modeling and animation. I started with faces. That is the beachhead.
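For context, the blend-shape evaluation being automated here is, at its core, linear interpolation over per-vertex deltas. A minimal sketch in Python (the vertex data is invented for illustration; this is not the poster's Mathematica implementation):

```python
# Blend-shape evaluation: result = base + sum_i w_i * (target_i - base).
# Vertex positions below are made-up example data.

def apply_blendshapes(base, targets, weights):
    """base: list of (x, y, z) vertices; targets: shapes with the same
    vertex layout; weights: one float per target shape."""
    result = []
    for vi, (bx, by, bz) in enumerate(base):
        dx = dy = dz = 0.0
        for shape, w in zip(targets, weights):
            tx, ty, tz = shape[vi]
            dx += w * (tx - bx)   # accumulate weighted per-vertex deltas
            dy += w * (ty - by)
            dz += w * (tz - bz)
        result.append((bx + dx, by + dy, bz + dz))
    return result

base = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
smile = [(0.0, 1.0, 0.0), (1.0, 1.0, 0.0)]   # moves every vertex up one unit
out = apply_blendshapes(base, [smile], [0.5])  # half-strength "smile"
```

At weight 0.5, each vertex lands halfway between the base and the target shape; a facial rig simply animates the weight values over time.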

VR

Lars
Member
4189 posts
Joined: June 2012
Offline
Thanks for the posts, Lars. The research you are doing is very interesting. I would surmise it's easily above the level of most TDs and artists.

In regard to the original post, I don't see the business case for removing the shelf tools. A lot of Houdini is used by artists who would otherwise have no way to create fluids, smoke, RBD, etc.; the shelf quickly gets you the effect. Unless I'm misunderstanding the intention.

Cheers.
Member
148 posts
Joined: March 2016
Offline
I am definitely learning a great deal on the Houdini forum. On the shelf tools, it just doesn't feel right to me, and there has to be a better way to do it in the network editor, but I can't put my finger on it yet. As I recall, the shelf was introduced in Houdini 9, and Houdini has been around for much longer than that (I am a little older than Kim Davidson, but not by much, ha ha). One of the initial areas of application for some of the early work I am doing might be previz, but I need to talk to people who do that to confirm.

Thanks for the feedback!

VR

Lars
Member
453 posts
Joined: Feb. 2013
Offline
I'm not saying it's impossible, or that I am against it, or whatever. I am saying it's likely other technology will outpace you.
Here are the basic steps to achieve what you are (probably) trying to do:
1.: A facial recognition system that detects landmarks like the eyes, mouth and nose on multiple pictures or video. (One picture will not have enough accurate information.)

2.: Fine shape recognition. Basically, use the landmarks to analyze lighting conditions in the picture and then extract fine curves and details of the face.

3.: Parametric, or flexible basemesh. A way to mesh the data into good topology.

4.: Parametric rig based on the parametric basemesh.


- At this point you have an automatic face creation system with integrated rigging. This would be useful on its own. Versions of this already exist.

5.: Analysis of multiple pictures (probably video) to determine deformations of fine details. The video or pictures would probably have to contain predefined expressions. The result is a collection of blendshape (and wrinklemap, etc.) corrections to bone animations for the previously created mesh.

6.: Animation based on sound and or mood cues.


- To summarize: nothing that has not already been achieved (and polished) to some degree.
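Step 1's raw landmark output usually has to be normalized for position and scale before it can drive a parametric basemesh (step 3), since every photo frames the face differently. A minimal sketch in Python, with invented landmark coordinates:

```python
import math

# Normalize 2D facial landmarks: center on the centroid and scale to unit
# RMS radius, so landmarks from differently framed photos become comparable.
# The coordinates below are invented example data, not real detector output.

def normalize_landmarks(points):
    n = len(points)
    cx = sum(p[0] for p in points) / n           # centroid
    cy = sum(p[1] for p in points) / n
    centered = [(x - cx, y - cy) for x, y in points]
    rms = math.sqrt(sum(x * x + y * y for x, y in centered) / n)
    return [(x / rms, y / rms) for x, y in centered]

# e.g. two eye corners and a nose tip, in pixel coordinates
raw = [(120.0, 80.0), (180.0, 80.0), (150.0, 130.0)]
norm = normalize_landmarks(raw)
```

After this step, the normalized landmarks can be compared against a canonical template regardless of the original image resolution or face position.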

The tech most likely to outpace you, if you work from video or 2D pictures, is 3D scanning. Essentially they are in the process of breaking through step 2 (which is the only serious hurdle at this point).
Usability of such technology? Varied application in many fields. Certainly the future for many applications and experiences. So, good luck, have fun!
Member
148 posts
Joined: March 2016
Offline
@DASD, thanks so much for your feedback; this is most valuable information to me, as it confirms my working hypothesis.

Regarding others being ahead of me: "sauce for the goose", my friend.

I am known (by those who know me, which are very few people indeed: http://fvg.com/testimonials.html [fvg.com]) for flanking science and technology. I metaphorically parachute into unknown territory, then communicate with the friendly local inhabitants about what's going on and what their biggest challenges are. Finally, I develop an asymmetric science and/or technology that disrupts the existing paradigm (or I get bored), and then move on to parachute somewhere else, repeating the cycle.

There are many examples of this, including the 3D projection used in movie theaters, technology I helped develop with Stewart Screen long ago. Other examples include artificial intelligence machine learning for NP-complete problems considered intractable, superconducting supercomputer hardware design, an evolutionary stem cell/retrovirus hypothesis responsible for all cancers, recently confirmed by others, high-level translation of software to VLSI hardware, and a five-dimensional space-time string theory that eliminates quantum paradoxes without violating Bell's theorem. Lots of other stuff. I am old; you name it, I have probably been there working with the thought leaders.

This is typical: I designed a "smart molecule" to treat hypertension. Smart molecules have "if statements" governing their activity and their ability to modulate their chemistry. I had no chemistry background at the time, since I am completely self-taught, with no formal education in anything I have ever done in science and technology. A senior director of chemistry wrote a letter to the head of the New Jersey Biotechnology Council saying what I was trying to do was completely impossible. He gave me 1-in-a-billion odds of being successful. He even wrote out the zeros for emphasis, i.e. 1 in 1,000,000,000 odds. My smart molecule worked and was patented by the drug company I designed it for.

Many call me the real "Good Will Hunting". https://www.youtube.com/watch?v=N7b0cLn-wHU [youtube.com] BTDTBTTS

Back to the subject: I now pose a very serious question. If nothing were impossible and the sky were the limit, what would you and others want to automate in animation and modeling that everyone in the field believes cannot be done today?

On a final note, I hope to go to SIGGRAPH this year for the first time (in the past I have always sent others to SIGGRAPH and to other conferences around the world in diverse fields of study of interest to me or to the people I have worked for). I usually don't publish or go to conferences (even when the conferences or companies volunteer to pay my way) and prefer that others do so instead. In this case, however, I have submitted a paper on subatomic visualization of molecules, which I hope will be accepted to SIGGRAPH, since my objective to automate animation and modeling is on my life bucket list and I don't have much time left, as I am not getting younger: http://fvg.com/science-of-the-unseen.html [fvg.com]

VR

Lars

PS: @DASD, I think you may have answered my question about what you would want automated in your post. If there are other areas, please let me know. Thanks so much again.
Member
4516 posts
Joined: Feb. 2012
Offline
Automation in 3D doesn't excite me, but real-time fluid dynamics would. Have you done, or are you planning to do, any R&D on that front?
Senior FX TD @ Industrial Light & Magic
Member
148 posts
Joined: March 2016
Offline
pusat
Automation in 3D doesn't excite me, but real-time fluid dynamics would. Have you done, or are you planning to do, any R&D on that front?

Yes, lots of work in lattice gas theory, and an invited presentation at Los Alamos National Laboratory (since thermonuclear-related work is a fluid dynamics problem). The late Brosl Hasslacher invited me to give the talk to the theoretical division: http://www.lanl.gov/org/padste/adtsc/theoretical/index.php [lanl.gov]

Brosl did a lot of work in this area before his untimely death.

https://en.wikipedia.org/wiki/Brosl_Hasslacher [en.wikipedia.org] .

Fundamentally, all these problems are reducible to NP-Complete optimization challenges. In this context I have invented the “Heat Seeker” algorithm, which is a new method for rapidly solving any NP-Complete problem in O(n) time with GPU parallelism. I am applying this algorithm to my work to automate animation and modeling. The algorithm will work for fluid dynamics too.

Thanks so much for your valuable feedback!

Here is a page describing the Heat Seeker algorithm, along with a free download that contains a laboratory for experimenting with it. Alongside Heat Seeker is a conventional algorithm related to quadratic heuristic search for comparison. Heat Seeker will solve any NP-Complete problem under any transformation dynamics. http://fvg.com/heat-seeker-algorithm.html [fvg.com]
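For readers who want a concrete picture of the kind of comparison being described, here is a generic stochastic local search applied to subset-sum, a classic NP-complete problem. This is a conventional baseline sketch under invented example data; it is not the Heat Seeker algorithm, which is only described on the linked page:

```python
import random

# Generic stochastic local search on subset-sum (NP-complete): propose
# toggling one item's membership and keep the move if it does not worsen
# |sum(chosen) - target|. A conventional baseline, not "Heat Seeker".

def local_search_subset_sum(items, target, iters=5000, seed=1):
    rng = random.Random(seed)
    chosen = [False] * len(items)

    def error(sel):
        return abs(sum(v for v, s in zip(items, sel) if s) - target)

    best = error(chosen)
    for _ in range(iters):
        i = rng.randrange(len(items))
        chosen[i] = not chosen[i]          # propose: toggle one item
        e = error(chosen)
        if e <= best:
            best = e                       # accept non-worsening moves
        else:
            chosen[i] = not chosen[i]      # reject: undo the toggle
    return best, chosen

# invented instance: find a subset summing as close to 30 as possible
best, sel = local_search_subset_sum([3, 9, 14, 20, 7], target=30)
```

Like any greedy local search, this can stall in a local minimum; practical solvers add restarts, annealing temperatures, or population-based moves, which is the design space any new NP-complete heuristic is measured against.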

I will provide documentation on how to use the laboratory soon; I only posted it recently.

VR

Lars
Edited by - March 19, 2016 15:23:01
Member
453 posts
Joined: Feb. 2013
Offline
On the spot: make a perfect UV unwrapping process that is compatible with current workflows, and people will love you.
XD
There are techniques such as Ptex and UVW mapping, but we always go back to UV maps (and/or UDIM), because they can be used universally and are performance-efficient.
A lot has been done in the field of automatic unwrapping to UVs, but it's still mostly crap.
Even if you get a good automatic unwrap, the automatic layout will be worthless.
All automatic UV packing algorithms I have tried waste at least 20% more UV space than I would if I did it manually. Usually it's more along the lines of over 50% wasted space.
Doing UVs is a time-consuming, thankless job that only trained professionals can appreciate. It would probably be best for everyone involved to automate the process as much as possible.

I have tried making my own UV unwrap processes with Houdini. I spent many hours on this and even had some success for particular cases. But it's still all crap. The basic Houdini UV unwrapping tools are terrible. Until H15 there wasn't even a dedicated node to do an automatic layout. No matter how great your procedural model is, without procedural UVs you will still have to rework it by hand.

Here are some of the specifics of UV packing: UV space is expensive, so you want to pack all UVs into the tightest possible space. There should be no overlap, unless it is specifically allowed for identical pieces; not all processes support overlapping UVs.
Textures are commonly in a square power-of-2 format (because of commonly used compression and file formats). So artists will always make textures that are 512*512 or 1024*1024 or 2048*2048 (etc.) pixels. Some artists make textures in 512*128 and similar formats, but as far as I know this is not efficiently supported by all game engines and render engines. So most of the time you will pack UVs into a 0-1 square.
Anyway, because of the scale difference between UVs and texture pixels, you get artifacts called pixel bleeding if the UVs are too close to each other. So you need to account for padding space.
But that's still not the whole story. UVs need to be optimized for little distortion and few seams. More seams mean less distortion, but also more wasted space due to padding. If there is too much distortion, the UVs are useless. Now it gets tricky: because all UVs will have some distortion, UVs are technically a bit flexible. So when you pack them and realize that something is inefficient, you could technically bend the UVs a bit or change your seams. In times of 3D painting, seams are only an issue in terms of wasted space due to padding. Oh, and rotation doesn't matter, but you should not flip UVs. Why not? Because it messes with your normal maps unless the (render) engine compensates, and since that is another calculation for the shader, you don't want the engine to do it. The kicker is that small scale differences between pieces don't matter too much.
Oh, and your process should be able to handle geometry that consists of multiple objects, non-manifold geometry, concave geometry, mechanical geometry (with lots of straight lines), organic geometry (with lots of crazy shapes), and all sorts of other fun stuff.
To summarize: it's an optimization and 2D packing problem with many, many variables and very few constants.
I would love a great automatic UV packing system. ^^
I think it's possible, but needs some serious math and code wizardry.
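As a starting point, even a naive "shelf" (row-based) packer makes the trade-offs above concrete: island bounding boxes, padding derived from texture resolution to avoid pixel bleeding, and a measurable wasted-space fraction. A minimal Python sketch with invented island sizes (real packers also rotate and nest islands inside one another):

```python
# Naive "shelf" packing of UV-island bounding boxes into the 0-1 square,
# with padding derived from texture resolution to avoid pixel bleeding.
# Island sizes are invented; production packers also rotate and nest islands.

def shelf_pack(islands, texture_px=1024, padding_px=4):
    pad = padding_px / texture_px            # padding in UV units
    placed, x, y, shelf_h = [], pad, pad, 0.0
    for w, h in sorted(islands, key=lambda s: -s[1]):  # tallest first
        if x + w + pad > 1.0:                # row full: start a new shelf
            x, y = pad, y + shelf_h + pad
            shelf_h = 0.0
        if y + h + pad > 1.0:
            raise ValueError("islands do not fit in one UDIM tile")
        placed.append((x, y, w, h))          # (origin x, origin y, size)
        x += w + pad
        shelf_h = max(shelf_h, h)
    used = sum(w * h for _, _, w, h in placed)
    return placed, 1.0 - used                # placements, wasted fraction

islands = [(0.4, 0.3), (0.3, 0.3), (0.2, 0.25), (0.45, 0.2)]
placed, wasted = shelf_pack(islands)
```

The wasted fraction this reports is exactly the metric the 20-50% complaint above refers to; smarter packers beat shelf packing mainly by filling the gaps each row leaves above its shorter islands.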

Skilled artists will still create their own UV layouts, because there are special effects and techniques that require specific layouts, but there would still be many applications that would greatly benefit from an automated process. For example, light-map UVs are still relevant, and 3D character painting benefits from unique UVs.
Member
4189 posts
Joined: June 2012
Offline
For the state of the art in UVs, check out UVLayout. It's relatively old, but it has solid algorithms for flattening; I'm not sure if its packing is considered leading edge, but it has a wonderful workflow.
http://www.uvlayout.com/index.php?option=com_wrapper&Itemid=38 [uvlayout.com]

Its main issues are a terribly old interface and the fact that, being an external program, it adds a round trip instead of being integrated.

ZBrush is also considered quite automated.
Member
453 posts
Joined: Feb. 2013
Offline
@ MartybNz
Yes, thanks, I am aware of that one (and Roadkill, which is similar). The whole UV packing problem still remains, and the unwrap in those programs is still not that great.
Besides, the process is mostly manual and therefore incompatible with a fully procedural workflow.
I tried ZBrush unwrapping, but I have yet to get good results out of it. It wastes a lot of space and painfully optimizes for fewer seams, even when it is clearly a mistake to do so.
Member
4189 posts
Joined: June 2012
Offline
It would be best to qualify what is "not great" compared to a manually flattened and packed example, then. Any real-world example will do.

Thanks!