Daryl Dunlap
Buying 3Delight makes no sense. SideFX already has a production CPU engine in Mantra, and SideFX clearly accepts that CPU engines are a legacy product: they have literally stated that they are no longer investing in the Mantra codebase.
So, along with their investment in USD, they invested in a GPU engine. Making a GPU engine takes a very long time; just ask ChaosGroup. Pixar is already a year behind on its initial beta projection for its GPU offering. My point is, SideFX is playing the long game here, with a future-proof design.
GPU is the future, and AI has only just begun its disruption of the 3D/VFX industry.
Oh gosh… Uncle Einstein there must be embarrassed

Just offering you a tiny bit of perspective here: in their present state, CPU renderers offer more flexibility and performance, especially in complex scenarios. 3Delight has proven to be as fast as, if not faster than, GPU-only renderers, and it scales with complexity with zero limitations, including algorithmic ones. So if you think CPU rendering will disappear in the short term, you really need to reframe your thoughts. Not blindly believing marketing is usually a good start: think critically.
Karma, in the form it is distributed, is currently a CPU-only engine, and there is a lot of work to be done just to put it “on par” with Mantra and any other established renderer. SideFX will surely do that; it will just take time. Karma will also use the GPU when they feel it works well enough to be introduced or previewed, and it will most likely not be complete for quite some time, as making a renderer is a damn hard job. I honestly doubt SideFX will make Karma a GPU-only renderer; it will most likely be similar to Cycles in the way it uses the GPU. But I could be wrong on this, and maybe they really will go GPU-only to make it “stand out”, as that certainly has a lot of appeal to some people. BTW, food for thought: guess who gives the data to the GPU renderer? The CPU.
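To make that last point concrete, here is a minimal sketch of the handoff, assuming CUDA; the Triangle struct and the shade() kernel are made up purely for illustration. Nothing happens on the GPU until the CPU has built the scene in host RAM, allocated device memory and copied the data across.

```cpp
// Minimal CUDA sketch: the CPU prepares and uploads the scene before the GPU
// can shade anything. Triangle and shade() are hypothetical placeholders.
#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

struct Triangle { float v0[3], v1[3], v2[3]; };

__global__ void shade(const Triangle* tris, int n, float* image)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        // Real intersection/shading work would go here.
        image[i] = tris[i].v0[0] * 0.5f;
    }
}

int main()
{
    // 1. The CPU builds (or loads) the scene in host RAM.
    std::vector<Triangle> scene(1 << 20);

    // 2. The CPU allocates device memory and pushes the data over the bus.
    Triangle* d_scene = nullptr;
    float*    d_image = nullptr;
    cudaMalloc((void**)&d_scene, scene.size() * sizeof(Triangle));
    cudaMalloc((void**)&d_image, scene.size() * sizeof(float));
    cudaMemcpy(d_scene, scene.data(), scene.size() * sizeof(Triangle),
               cudaMemcpyHostToDevice);

    // 3. Only now can the GPU start rendering.
    int n = (int)scene.size();
    int blocks = (n + 255) / 256;
    shade<<<blocks, 256>>>(d_scene, n, d_image);
    cudaDeviceSynchronize();

    cudaFree(d_scene);
    cudaFree(d_image);
    printf("done\n");
    return 0;
}
```

And whether the scene even fits in device memory is a separate question, which brings me to the next point.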
CPU and GPU will both continue to evolve; they solve different problems very well. Personally, and at present, I think nothing beats a CPU renderer for complex, high-quality output in terms of efficiency, scalability and cost per image. Nevertheless, if (when) GPU power becomes accessible to programmers in a more transparent way, i.e. when you no longer need a special compiler, a totally hardware-vendor-dependent API or a special 3rd-party library, and when a considerable amount of memory becomes available (with today's scene complexity 64GB is the bare minimum; 128/256GB of RAM are needed, FYI…), CPU renderer developers will be more comfortable calling GPU compute for the tasks where it makes sense. And when that happens, a program will easily ALSO use a GPU if one is present on that machine.
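As a rough illustration of both of those constraints, assuming CUDA again (which already makes the vendor-lock point: this needs NVIDIA's nvcc toolchain and only runs on that vendor's hardware), a simple device-memory query shows how far a typical GPU is from the 64-256GB of RAM mentioned above:

```cpp
// Minimal sketch: query how much memory the GPU actually offers and compare it
// with the host-RAM budget a heavy production scene expects.
#include <cuda_runtime.h>
#include <cstdio>

int main()
{
    size_t freeBytes = 0, totalBytes = 0;
    if (cudaMemGetInfo(&freeBytes, &totalBytes) != cudaSuccess) {
        printf("No usable CUDA device found.\n");
        return 1;
    }

    const double gib = 1024.0 * 1024.0 * 1024.0;
    printf("Device memory: %.1f GiB free of %.1f GiB total\n",
           freeBytes / gib, totalBytes / gib);
    printf("Host RAM budget cited above: 64 GiB minimum, 128/256 GiB preferred\n");
    return 0;
}
```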
Last but not least, and more generally, the end user should NOT care whether the CPU or the GPU (or both) is behind the computation of an image. Do you care what hardware delivers an iCloud photo or a Netflix video to you? No. All you should focus on is the amount of time you spend to produce and receive an image or sequence of images (important: this includes setup time, so complex render engines with tons of obscure parameters lose points here), at what quality, and at what actual cost. As a user you should also not be limited in what you can render: you should be able to render anything from a single sphere to a full frame of a super complex, full-CG movie, ideally with the same renderer and without any hardware or parametric/algorithmic worries. Local rendering will become obsolete. Big, expensive boxes with many CPUs and GPUs get old quickly; they were anachronistic in the 90s and are already anachronistic again (history repeats), since all non-preview rendering will be streamed to the user as a service by remote computing (the cloud), which is the true source of unlimited compute power at low cost.