OpenCL Ignores My VRAM?

Member
2529 posts
Joined: June 2008
Hi All,

I ran hgpuinfo and discovered that Houdini is not using enough of my GPU VRAM. I have an 8 GB 1070 card, but Houdini only allocates 2 GB, and 2 GB of RAM is not enough to produce a high-quality simulation.

When I try to leverage OpenCL on a production-quality simulation, it always crashes. Now I can see why: it does not allocate enough RAM.

Is there any way to increase this allocation amount?

Even if I could allocate 100% of my VRAM, 8 GB is not enough to do a movie-quality effect, is it?
Edited by Enivob - May 31, 2017 11:37:21

Attachments:
Untitled-1.jpg (78.3 KB)

Using Houdini Indie 20.0
Ubuntu 64GB Ryzen 16 core.
nVidia 3050RTX 8GB RAM.
Member
4189 posts
Joined: June 2012
The max allocation is the largest single contiguous block OpenCL will hand out; the total available is listed above it at 8 GB, though other resources may be using some of that when monitors are connected, while rendering, etc.

Sigh… for the umpteenth time: you need to use OpenCL on the CPU to access enough RAM for so-called production-level effects. Are you sure you're not trolling?
Member
2529 posts
Joined: June 2008
I'm not trolling; this is news to me. I'm still trying to leverage this feature, but Houdini keeps erroring out.

Are you suggesting I have to install something else besides Houdini to use OpenCL on Windows?

Also, in another thread, Jeff mentioned that OpenCL can use up to 4 GB of RAM. So my question is valid. If the developers say one thing and the reporting tool says another (2 GB), something odd is going on…
Edited by Enivob - May 29, 2017 15:29:00
Member
4189 posts
Joined: June 2012
No - one question. Do you understand what we mean when we have repeatedly been saying use OpenCL CPU?
Member
2529 posts
Joined: June 2008
I guess not; I want to use my GPU for acceleration, not my CPU. Isn't that what OpenCL is about?
Edited by Enivob - May 29, 2017 15:29:35
Member
4189 posts
Joined: June 2012
Oh, now it's understandable: frustration through ignorance! OpenCL is for all devices - CPU, GPU, DSP, FPGA - from Nvidia, Intel, AMD and others. It's only one Google search away:

https://en.wikipedia.org/wiki/OpenCL

OpenCL compiles very efficient code targeted at the exact CPU features you have; that is one reason it can be *better* than standard C++.

I really don't want to write up how to turn it on yet again, so hopefully someone else will, or you can just read all the other posts on how to.
Staff
5156 posts
Joined: July 2005
If you'd like to submit your hipfile with a bug report, we can figure out why it's either a) using more than 8GB, or b) using more than 2GB in a single allocation. Newer GPUs have virtualized their allocations so that it's possible to allocate more than 8GB in a single kernel invocation, but the 2GB single allocation limit still might stand.

Using OpenCL on the CPU isn't as bad as it sounds. OpenCL is just an interface to a compute device, and forces you to code in a way that's friendly to multiple cores/shaders/AVX/SSE/etc. Usually it gives a good boost even on a CPU, as the kernel is compiled optimally for that CPU type by the CPU driver, which can include AVX, AVX2, SSE1-4 instructions. When we compile Houdini we can't assume broad support for all of those CPU features.
Member
2529 posts
Joined: June 2008
Ok, so does this feature require me to learn yet another Houdini coding language?
Does it work like a wrangle?
I see OpenCL checkboxes all over the place, but no text input field for code.
Edited by Enivob - May 29, 2017 23:53:58
Staff
5156 posts
Joined: July 2005
The nodes with checkboxes are coded in C++ and OpenCL. Enabling the box runs the node using OpenCL code the devs have written for that node. There is also an OpenCL SOP, which works like a wrangle but uses the OpenCL language rather than VEX.
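For a taste of what goes in that SOP, here is a generic OpenCL C kernel sketch that nudges point positions, one work-item per point. The names and the flat float layout are illustrative only, not the actual parameter signature the SOP generates from its attribute bindings, so don't copy this verbatim:

```c
// Illustrative OpenCL C sketch (hypothetical names, not the
// SOP-generated signature): lift every point's P.y by 0.1.
kernel void liftPoints(const int npoints,
                       global float *P)    // xyz triples, flat
{
    const int idx = get_global_id(0);
    if (idx >= npoints)
        return;                 // guard against padded launch sizes
    P[idx * 3 + 1] += 0.1f;     // y component
}
```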
Member
7 posts
Joined: April 2014
aRtey
Oh now it's understandable, frustration through ignorance!!

There's zero need for the attitude: there's one result when searching for “OpenCL vram”, and there's virtually no documentation of OpenCL in the help file.
It's a perfectly reasonable assumption that ‘Max Allocation’ would be set to use your entire VRAM.
Edited by TheTex - June 4, 2017 02:43:35
Member
4189 posts
Joined: June 2012
TheTex
aRtey
Oh now it's understandable, frustration through ignorance!!

There's zero need for the attitude, there's one result when searching for “OpenCL vram” and there's virtually no documentation of OpenCL in the help file.
It's a super reasonable assumption that ‘Max Allocation’ would be set to use your entire vram.

So, what's the point of your post?
Member
31 posts
Joined: June 2009
goat
Oh now it's understandable, frustration through ignorance!! OpenCL is for all devices - CPU, GPU, DSP, FPGA, Nvidia,
I really don't think it's unreasonable to expect a simple simulation to run via OpenCL on the GPU, especially if there is plenty of memory. The point is that we shouldn't have to fall back to the CPU.
Member
4189 posts
Joined: June 2012
As this post appears to have become the go-to for OpenCL memory issues, and OpenCL has now been expanded across Houdini, the answer is to set the environment variable:

export HOUDINI_OCL_MEMORY_POOL_SIZE=1

The ‘1’ should mean 100%, though as of H17.5.239 on Linux it tops out at an allocation of 50%, i.e. an 11 GB GPU will allocate 6 GB and a system with 64 GB of CPU RAM will allocate 32 GB.
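For anyone landing here from a search, the variable can also be set persistently in houdini.env rather than exported in each shell (the path below assumes a default per-user H17.5 install on Linux):

```shell
# ~/houdini17.5/houdini.env (assumed default location)
# Request the full OpenCL memory pool; note the 50% cap described above.
HOUDINI_OCL_MEMORY_POOL_SIZE = 1
```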

edit: it's funny to look back at this thread and see how the context has been lost… damn interwebs!
Edited by anon_user_37409885 - May 12, 2019 00:04:57