Houdini / Mantra running on virtual servers

User Avatar
Member
3 posts
Joined: June 2010
Offline
Just wondering if anyone has tried running render blades, running Houdini / Mantra, across virtual servers. My thought is to add virtual servers to increase the rendering throughput. The current rendering jobs do not tax the render nodes to their limits. If the capacity is there on the blades would this concept work? I would be interested to hear from anyone who may have already attempted this or has an opinion to share.

Tony M.
User Avatar
Member
1529 posts
Joined: July 2005
Offline
Heya Tony,

Your premise seems a little suspect.

In my opinion, the real question ought to be ‘Why isn't mantra using all the cores available to it?' - because it should.

Maybe you are network bound? Are you generating IFDs, or rendering inline? Are you running out of RAM and into swap?

Need more details…

G
User Avatar
Member
3 posts
Joined: June 2010
Offline
Thanks for the reply.

Yes, Mantra is using all the cores/threads. We assign one Mantra process per thread. Our blade has 32 cores (64 threads) and 128 GB of RAM. The renders are inline: we pass the renders from Houdini directly to Mantra.

Some of our renders are more difficult than others. The harder renders utilize the hardware more fully: CPU usage is around 75% and memory usage is 30 GB. The easier renders do not push the hardware to its limits: CPU usage is around 35% and memory usage is 18 GB.

The thought process is to virtualize the blade since we have a lot of computing power and memory available.

This logic may not compute but I thought I would ask.

TM.
User Avatar
Member
1529 posts
Joined: July 2005
Offline
Since mantra is generally very good at resource utilization, I suspect something else is happening.

My gut instinct says that since you are launching the renders inline, Houdini is busy ‘cooking' the frame before handing it off to mantra to render (you can confirm that by looking at the Task Manager, or with ‘ps -ef | grep mantra' in a Linux console).

If you find that it is Houdini itself that's bottlenecked, you might look at optimizing the cooking of your scenes, or at adding more licenses.
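If it helps, here's a minimal sketch of that check done from outside Houdini. It's Linux-only (it shells out to `ps`), and the process names are assumptions about a typical inline-render setup — adjust them to match whatever your farm actually launches.

```python
# Sketch: snapshot which side is actually busy -- the houdini/hython
# session cooking the frame, or the mantra process rendering it.
# Linux-only (shells out to `ps`); process names are assumptions.
import subprocess

def cpu_by_process(names=("houdini", "hython", "mantra")):
    """Return {name: summed %CPU} over all matching processes."""
    out = subprocess.run(
        ["ps", "-eo", "pcpu,comm"],
        capture_output=True, text=True, check=True,
    ).stdout
    totals = {name: 0.0 for name in names}
    for line in out.splitlines()[1:]:       # skip the header row
        fields = line.split(None, 1)
        if len(fields) != 2:
            continue
        pcpu, comm = fields
        for name in names:
            if name in comm:
                totals[name] += float(pcpu)
    return totals

# If 'houdini'/'hython' dominates while 'mantra' sits idle, the cook
# is the bottleneck rather than the render itself.
print(cpu_by_process())
```

Run it a few times during a frame and you'll see which phase the wall-clock time is going to.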

Hard to get more specific without knowing other constraints.

Cheers,

G
User Avatar
Member
3 posts
Joined: June 2010
Offline
Yes, I have thought a lot about what you mentioned. I am in the process of adding monitoring hooks to see what is happening with the python, hython, and mantra processes, and with memory and CPU usage.

Will keep you posted on what I find.

Tony.
User Avatar
Member
7726 posts
Joined: July 2005
Online
Why would virtual servers improve the throughput? In theory, wouldn't virtual servers slow you down instead? If you run all the processes within the same OS instance, then it can do efficient file caching. If you have multiple OS instances, then each has to independently file cache. I suppose the virtualization layer may do some caching as well but that just sounds like an extra layer for nothing?

If the current rendering jobs are not using all the capacity on the machine, then you can always try just running more jobs per machine. When you do this though, you need to be careful that the combined cache sizes of all the simultaneous mantra jobs don't push you into swap (see: vm_cacheratio).
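As a rough sketch of that back-of-envelope check: the headroom figure below is an assumption (a reserve for the OS and its file cache), and the 18 GB peak comes from the render numbers earlier in the thread.

```python
# Sketch: estimate how many simultaneous mantra jobs fit in RAM before
# you risk being pushed into swap. The headroom value is an assumption,
# not something prescribed by Houdini/Mantra.

def max_parallel_jobs(ram_gb, peak_mem_per_job_gb, headroom_gb=8):
    """Upper bound on concurrent jobs, from memory alone."""
    usable = ram_gb - headroom_gb          # leave room for OS + file cache
    return max(1, usable // peak_mem_per_job_gb)

# 128 GB blade; the "easier" renders peak around 18 GB each.
print(max_parallel_jobs(128, 18))   # -> 6
```

Memory is only one axis, of course — I/O contention and the combined mantra cache sizes (vm_cacheratio) still need checking before running that many at once.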

Another problem will be increased simultaneous I/O. However, I don't personally see how having separate virtual servers will avoid that problem either.

Consider the locality of your jobs if you haven't already, i.e. see if you can run jobs on the same machines that will reuse data already cached by the OS.