Root » PDG/TOPs » Limiting Simultaneous Wedge Amount - Scheduler Options [SOLVED]
tricecold
I have been fiddling with the scheduler settings, but so far I could not find an option to limit the number of simultaneous wedges. Houdini Max Threads puts a limit on the running task, but it does not actually stop new jobs from being started. Is there a way to limit this behaviour? This limitation makes single-machine use very problematic, as we can fill the RAM very quickly.
BrookeA
There are three key parameters for controlling resource usage when using the local scheduler:

1. Houdini Max Threads (which limits the actual job as you've mentioned)
2. Maximum CPUs to Use – tells PDG the maximum number of CPUs it can use
3. CPUs Per Task – tells PDG how many cores are used per task.

E.g. if you have maximum CPUs set to 6 and CPUs per task set to 2, the maximum number of jobs that will be running at once is 3.
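The arithmetic behind that example can be sketched as a tiny helper. This is illustrative only — it is not the actual PDG scheduler code, just the rule the parameters imply:

```python
def max_simultaneous_jobs(max_cpus: int, cpus_per_task: int) -> int:
    """How many jobs the local scheduler can run at once, per the rule above."""
    return max_cpus // cpus_per_task

# The example from the post: 6 CPUs, 2 CPUs per task -> 3 jobs at once.
print(max_simultaneous_jobs(6, 2))  # 3
```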

Let me know if you have any other questions!

EDIT:

There's also a fourth parameter called “Single”. When this is enabled, it forces only one job to be run at a time.
Tyler Britton2
This is very helpful.
Just to be clear: I have 24 threads on my machine. I was getting 6 jobs at most when rendering a bunch of tasks – I assume because each job was assigned 4 threads. Is that correct? When I switch CPUs Per Task to 2, I would expect a maximum of 12 jobs, but I only seem to get 3. When I set Maximum CPUs to Use to 24, it then opens up and I get my 12 jobs. Why do I need to specify Maximum CPUs to get my 12 jobs? It also looks like I go over 12 jobs when I set Maximum CPUs above 24 – is there a way to automatically limit that number to the number of CPUs my computer actually has?
BrookeA
When the local scheduler is set to its default values, Maximum CPUs to Use is set to <Number of CPUs>/4. This is why you were getting 6 jobs by default.

Now if you increase CPUs Per Task to 2, each job *must* have 2 CPUs available in order to run. So the maximum number of jobs will be 6/2 = 3 jobs.

If you increase Maximum CPUs to 24, you will get 24 CPUs / 2 CPUs Per job = 12 maximum jobs at once.

I recommend using your maximum number of CPUs minus 1, so that you still have one core available for doing other work. You can do this by setting Maximum CPUs to -1.
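The sizing rules in this post can be written down in a short sketch. This is not SideFX code; in particular, treating a negative "Maximum CPUs to Use" value as (total CPUs + value) is an assumption based on the -1 behaviour described above:

```python
import os

# Assumption: a negative Maximum CPUs setting means (total CPUs + value),
# so -1 leaves one core free, as described in the post above.
def effective_max_cpus(setting: int, total_cpus: int) -> int:
    if setting < 0:
        return total_cpus + setting  # e.g. -1 on a 24-core machine -> 23
    return setting

total = os.cpu_count() or 1
print("default cap (total // 4):", total // 4)  # the default described above
print("cap with -1 on 24 cores:", effective_max_cpus(-1, 24))  # 23
```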

I hope that this helps to clarify!
Tyler Britton2
Thanks Brandon, I will start doing this. When there are no jobs left to start but some jobs are still running, will the CPUs from the finished jobs go over to help the jobs still running, or does it not work like that?
BrookeA
It depends – there's a subtlety I left out from my post: the settings that I outlined essentially describe when a job can be run. The actual CPU usage that a job will ultimately end up using depends on the program that is being run. For example, a job from the Mantra Render node can make use of multiple threads, and you can control the number of threads that the actual job uses with the Houdini Max Threads parameter. When jobs complete, the total current CPU utilization will decrease and might allow other jobs to complete quicker.
tricecold
I guess the notation here – CPU as opposed to core or thread – is what confused me. This works just fine.
tricecold
BrookeA
It depends – there's a subtlety I left out from my post: the settings that I outlined essentially describe when a job can be run. The actual CPU usage that a job will ultimately end up using depends on the program that is being run. For example, a job from the Mantra Render node can make use of multiple threads, and you can control the number of threads that the actual job uses with the Houdini Max Threads parameter. When jobs complete, the total current CPU utilization will decrease and might allow other jobs to complete quicker.

OK, I am a little confused. I can understand this working better as a farm tool than on a single machine, and it makes a lot of sense. But it has the opportunity to work for single machines without being limited to a single task – given the latest mega multi-core CPUs – while letting threading keep the cores busy.

Some time ago, I made my own similar wedger just for this case. I was creating hython jobs with the multiprocessing module, which would run n hython instances simultaneously, not limited by any kind of thread limits. Are we doing something similar here?
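A minimal sketch of that multiprocessing approach, assuming a pool of worker processes each launching one hython job. The hython command line is a placeholder, and `run_wedge` is a hypothetical stand-in for the real job:

```python
import multiprocessing as mp

def run_wedge(index: int) -> int:
    # A real wedger would launch hython here, e.g. via subprocess:
    #   subprocess.run(["hython", "wedge.py", "--index", str(index)])
    # For illustration we just return the wedge index as a stand-in.
    return index

if __name__ == "__main__":
    # A pool of 3 workers runs at most 3 wedge jobs at a time; as each
    # finishes, the pool picks up the next index from the queue.
    with mp.Pool(processes=3) as pool:
        print(pool.map(run_wedge, range(10)))
```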

Basically, I am trying to understand the best values. Let's say a single wedged job takes 20 GB of RAM at most, and I don't want to run more than 3 of these at the same time because I have 64 GB of RAM. Then, when a job finishes, the next one gets picked up, etc.

Here is the link if you want to have a look:
https://github.com/tricecold/hythonWedger
BrookeA
tricecold
OK, I am a little confused. I can understand this working better as a farm tool than on a single machine, and it makes a lot of sense. But it has the opportunity to work for single machines without being limited to a single task – given the latest mega multi-core CPUs – while letting threading keep the cores busy.

The local scheduler can be configured to run multiple tasks in parallel while also allowing the jobs to be multithreaded, so that all of the cores stay active. E.g. for the case that you described:

tricecold
Basically, I am trying to understand the best values. Let's say a single wedged job takes 20 GB of RAM at most, and I don't want to run more than 3 of these at the same time because I have 64 GB of RAM. Then, when a job finishes, the next one gets picked up, etc.

For this particular case, you could set the maximum number of CPUs to 3 and CPUs Per Task to 1, which means there will only ever be 3 jobs running at the same time. I'd also turn up the value of Houdini Max Threads so that the jobs take advantage of all the cores on the machine.

As soon as one of the jobs finishes, PDG will begin running the next job – so there are always 3 active jobs.
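The sizing logic behind that recommendation can be written down directly: when RAM is the bottleneck, derive the job cap from memory rather than cores. The helper name here is illustrative, not a PDG API:

```python
def jobs_for_ram(total_ram_gb: int, peak_ram_per_job_gb: int) -> int:
    """Largest concurrent job count that fits in memory, floor of one job."""
    return max(1, total_ram_gb // peak_ram_per_job_gb)

# 64 GB machine, ~20 GB peak per wedge job -> cap at 3 concurrent jobs,
# matching the Maximum CPUs = 3 / CPUs Per Task = 1 setup described above.
print(jobs_for_ram(64, 20))  # 3
```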