In my case (this may be different for everybody) I want to spend as little time as possible doing the largest number of iterations, and I am willing to sacrifice displacement, motion blur and quite a bit more for the tests, PROVIDED that when I hit the render button at night with Mantra, the next day I can see it with all the bells and whistles.
It is development time vs render time: right now I feel development time is the bottleneck, and engines like VRay and VRayRT address exactly this area.
Especially considering the commoditization of cloud rendering, which is around the corner, and the possibility of sending your render to a practically limitless renderfarm, I think this is going to become even more acute.
Regarding CPU vs GPU, sure enough there have been lots of limitations, but they are falling like dominoes by the day, so I would not assume the limits of a year ago still remain. For example, the texture limits have improved massively.
Again, this may not be everybody's view, but the potential is too big to ignore. Just have a look at the GDC 2013 demos of the NVIDIA VCA with OctaneCloud and you will instantly see what I am after when I ask for GPU rendering.
I would prefer that resources be spent on making Mantra faster. In that example, which of course is not globally representative, the speed gain of 3 min vs 17 seconds IMO does not justify the tradeoffs: no real displacement, no real transformation and deformation motion blur, no volumes, no shading language, no subdivision surfaces...
Sorry, but IMO GPU renderers still have a long way to go to reach the quality and reliability (cross-platform and hardware-agnostic) of a good offline renderer. If the GPU can assist the CPU renderer in a reliable way, that would already be a big success (and you will still hit driver and cross-platform issues).