Karma XPU on Intel Arc through Embree

User Avatar
Member
122 posts
Joined: Sept. 2018
Offline
When Intel Arc launched, I wondered whether Intel would also support Embree (the ray tracing library that Karma XPU uses for its CPU devices, and as far as I know Karma CPU as well) on their GPUs. It seems that they do now: https://www.embree.org/ [www.embree.org]

I don't own any Intel Arc GPUs, but based on some tests their ray tracing performance is comparable to NVIDIA's, and they're currently quite affordable, especially the 16 GB version.

I'm wondering if Karma XPU will have an easier time adding Intel Arc support since Embree is already being used.
I don't know how easy that is or how different the implementation is compared to Embree for CPU. But considering Arc has real ray tracing hardware and is really cheap compared to NVIDIA, it might be worth building a low-power render rig with several Arc GPUs that comes out cheaper than a single RTX 4090/4080, if Karma XPU could use their hardware.

Anybody got more insights on this or ideas?
User Avatar
Staff
469 posts
Joined: May 2019
Offline
> I do not know how easy it is or how different the implementation is compared to Embree for CPU

It would be a rewrite of the EmbreeCPU device to get it going with this new GPU-based Embree.
ie SYCL is like CUDA: it has its own way of allocating buffers, copying memory, launching "jobs", etc...
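
To illustrate that point, here is a minimal sketch of what that CUDA-style workflow looks like in SYCL. This is a hypothetical example, not code from Karma, and it only builds with a SYCL/DPC++ toolchain such as Intel oneAPI, not a plain C++ compiler:

```cpp
// Hypothetical SYCL sketch: allocate a device buffer, copy memory,
// launch a "job" (kernel), copy back. Requires a DPC++/oneAPI compiler.
#include <sycl/sycl.hpp>
#include <vector>
#include <cstdio>

int main() {
    sycl::queue q{sycl::default_selector_v};  // picks a GPU if one is available

    const size_t n = 8;
    std::vector<float> host(n, 1.0f);

    // Allocate device memory and copy the input over (cf. cudaMalloc/cudaMemcpy).
    float* dev = sycl::malloc_device<float>(n, q);
    q.memcpy(dev, host.data(), n * sizeof(float)).wait();

    // Launch a kernel over n work-items (cf. a CUDA kernel launch).
    q.parallel_for(sycl::range<1>{n}, [=](sycl::id<1> i) {
        dev[i] *= 2.0f;
    }).wait();

    // Copy the result back and free the device allocation.
    q.memcpy(host.data(), dev, n * sizeof(float)).wait();
    sycl::free(dev, q);

    std::printf("host[0] = %g\n", host[0]);
}
```

Each of those steps (device selection, allocation, transfer, launch) is its own API surface that the EmbreeCPU device code has no equivalent of, which is why the port would be a rewrite rather than a recompile.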

The main problem is that the DPC++ framework (what it's built upon) is not practical for us to use. It requires the user to install a bunch of stuff, and we'd also need to recompile Houdini for each new driver version Intel puts out (ie their software is not backward compatible across driver versions).
There are also other reasons (eg, the lack of a runtime C++ compiler) that would make the XPU port difficult.

Also, it would not be as fast as OptiX either, because it lacks many of OptiX's features (eg it has no scheduler, etc...)

DPC++ seems aimed more at high-performance computing (eg scientific computing, large studios with custom software setups, etc...) than anything practical for the end user at the moment.

So I think it'll be a good 1-2 years (at least) before we could consider it realistically.
But who knows? Intel might sort out those issues sooner than expected, or it might take even longer...
Edited by brians - March 17, 2023 00:58:46
User Avatar
Member
122 posts
Joined: Sept. 2018
Offline
Thanks for the answer!
Too bad... Hopefully Intel can fix these issues soon so that their GPUs become a viable option.