lipsync??

Member
4 posts
Joined: Oct. 2006
Hello,

I'm pretty new to Houdini, and to animation in particular.
Where can I find a tutorial for lip sync in Houdini?
I've been trying to follow the help, but with no success…

Thanks in advance!!
Member
7740 posts
Joined: July 2005
What are you trying to do exactly? Are you using the autorig to do some animation?
Member
537 posts
Joined: Dec. 2005
don't know about tutorials, but I do know Houdini is better suited for lip sync than any other program I've seen … this is because of the extensive audio-processing nodes in CHOPs.

There are things like phoneme detection and audio-splitting utilities, as well as very easy ways to gauge volume and frequency.

These different functions can all be combined with Math and Expression operations for different procedural animations, and it's also very straightforward to convert 44,100 Hz (audio) information to 24 fps (animation) using resampling.
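
If it helps to see the resampling idea outside of CHOPs, here's a small plain-Python sketch (numpy only, with synthetic audio standing in for a real file) that computes one RMS loudness value per 24 fps frame from 44,100 Hz samples, which is essentially what a Resample CHOP does when it converts an audio-rate channel to frame rate:

```python
import numpy as np

SR = 44100   # audio sample rate (Hz)
FPS = 24     # animation frame rate

# Stand-in for decoded mono audio; swap in samples read from your own file.
t = np.linspace(0.0, 2.0, 2 * SR, endpoint=False)
samples = np.sin(2 * np.pi * 220 * t) * np.exp(-t)   # decaying 220 Hz tone

# 44100 / 24 = 1837.5, so integer blocks drift slightly over long takes;
# fine for a sketch, use proper resampling for production.
spf = SR // FPS                              # samples per animation frame
n_frames = len(samples) // spf
blocks = samples[: n_frames * spf].reshape(n_frames, spf)
rms = np.sqrt((blocks ** 2).mean(axis=1))    # one loudness value per frame

print(rms[:5])   # a frame-rate channel, e.g. to drive a jaw-open parameter
```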

So, I'd recommend doing some generic CHOPs tutorials and also reading the help on the CHOPs.
Member
4 posts
Joined: Oct. 2006
I have rigged simple figures and the keyframing is OK; I was thinking about the ability to control facial expressions via an audio file. I was reading the Houdini help, and the funny thing is that I get a single entry - TBD

Thanks for the advice guys!
Member
7740 posts
Joined: July 2005
I've never tried it but I think the idea is that you create an audio file where the actor says each phoneme. The Voice Split CHOP takes this audio file and generates the phoneme library that associates a phoneme name with each audio segment. Then you can use this phoneme library with the Voice Sync CHOP, which will try to match your audio dialogue with the phoneme library.

Alternatively, the Phoneme CHOP takes English text and splits it up (evenly) into phoneme channels. For this approach, I think the idea is that you then manually fix up the spacing between the phonemes.

At the end of all this, you get some channel data which you should then be able to use to drive a blend shape setup. I think that's what they mean by “viseme”.
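
For what it's worth, here's roughly how that network might be built with Houdini's Python module. I haven't verified the CHOP type names (“voicesplit”, “voicesync”, “phoneme”) or any of the parameter names and paths, so treat them all as guesses and check the tab menu in your build:

```python
import hou

chopnet = hou.node("/obj").createNode("chopnet", "lipsync")

# Library route: reference recording -> phoneme library -> match dialogue.
split = chopnet.createNode("voicesplit")   # Voice Split CHOP (type name guessed)
sync = chopnet.createNode("voicesync")     # Voice Sync CHOP (type name guessed)
sync.setFirstInput(split)

# Text route: split English dialogue text evenly into phoneme channels.
phon = chopnet.createNode("phoneme")       # Phoneme CHOP (type name guessed)

# Either way, the resulting channels can drive blend shape weights, e.g. via
# a chop() expression on a Blend Shapes SOP (path and parm name illustrative):
head = hou.node("/obj/head/blendshapes")
head.parm("blend1").setExpression('chop("/obj/lipsync/voicesync1/aah")')
```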
Member
537 posts
Joined: Dec. 2005
Depending on the length of the animation (say, shorter shots), an alternate approach would be to not rely on Houdini's detection at all and instead manually specify, or “tell” it, where each phoneme is … but with methods more efficient than keyframing.

To do this, it would probably be good to animate which expression goes where by placing “triggers”, single-sample spikes of certain values, at the specified times; a digital asset could be made to create spikes easily.

These could correspond to the number of the facial/blend pose: look for all the “a”s, and every time that phoneme occurs, put a “trigger” there with, say, a value of two.

Then add all the different triggers together to form one channel using a Math CHOP. The zeros wouldn't add up to anything.

Then use an interpolation algorithm (maybe another digital asset), such as Hermite (this can be constructed using Expression CHOPs, etc.), to create animation channels based on those numbers.
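
As a rough stand-in for what that Math + Expression CHOP chain would compute, here's the same idea in plain Python: sparse pose-number spikes merged into one channel, then eased between with a Hermite-style curve (zero end tangents). The frame numbers and pose ids are made up for illustration:

```python
import numpy as np

n_frames = 120
triggers = np.zeros(n_frames)

# Single-sample spikes: frame -> pose number ("a" pose = 2, "o" pose = 3, ...).
# Summing one such channel per phoneme is the Math CHOP step; zeros add nothing.
for frame, pose in [(10, 2), (35, 3), (60, 2), (90, 1)]:
    triggers[frame] = pose

def hermite(p0, p1, t):
    """Ease from p0 to p1 with zero end tangents (Hermite h00/h01 basis)."""
    return p0 + (p1 - p0) * (3 * t**2 - 2 * t**3)

keys = np.flatnonzero(triggers)          # frames that carry a pose value
channel = np.zeros(n_frames)
for a, b in zip(keys[:-1], keys[1:]):    # interpolate between successive spikes
    t = np.linspace(0.0, 1.0, b - a, endpoint=False)
    channel[a:b] = hermite(triggers[a], triggers[b], t)
channel[keys[-1]:] = triggers[keys[-1]]  # hold the last pose

print(channel[30:40])   # pose value peaking at 3 around frame 35
```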