Thanks for explaining that more. :-)
It seems bizarre. With all their talk about hair rendering, and the volume implied by a sphere (which I'd mentally downgraded to at least a surface), it never occurred to me that backface support might be missing; it seems like it would be needed for lots of uses of spheres even beyond hair. In games alone I could see wanting thin-film bubbles with no significant refraction rising off of something, or bubbles popping out of original-DOOM-style glowing green radioactive waste. Simple environmental stuff that this could make cheap, and that would probably add as much ambience as a complicated path-traced metal shader. One of their own examples is even a torus that can be set to transmissive. In real life it takes a nearly ideal piece of glass for backface effects to be invisible. Most cheaper camera lenses don't manage it, and in just the right light you might catch a reflection of your own eye even if it never affects the captured image.
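(The bubble case really is cheap; here's a hedged sketch of the kind of thing I mean, a two-beam thin-film interference approximation that ignores refraction and multiple internal bounces, with illustrative parameter names:)

    // Hedged sketch: two-beam thin-film reflectance for a soap bubble, per
    // wavelength. Ignores multiple internal bounces and any actual bending
    // of the transmitted ray; names are illustrative, not from any engine.
    #include <cmath>

    float thinFilmReflectance(float r,          // Fresnel amplitude reflectance at each interface
                              float n_film,     // film index of refraction (~1.33 for soapy water)
                              float d_nm,       // film thickness in nanometers
                              float cosThetaT,  // cosine of the angle inside the film
                              float lambda_nm)  // wavelength in nanometers
    {
        // Optical path difference between the front- and back-face reflections.
        float delta = 4.0f * float(M_PI) * n_film * d_nm * cosThetaT / lambda_nm;
        // The front reflection picks up a half-wave phase flip, the back one
        // doesn't, so the two beams sum to r * (e^{i delta} - 1).
        // delta -> 0 gives R -> 0: the "black film" right before a bubble pops.
        return 2.0f * r * r * (1.0f - std::cos(delta));
    }

    // Evaluate at ~650/550/450 nm for R/G/B and you get the familiar swirl.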
Reading a more extensive description on their blog, it looks like they described it that way because that default curve type (LSS) runs in software on all prior models and in hardware on Blackwell, so it's a new hardware primitive but isn't actually new, like you said.
DOTS (disjoint orthogonal triangle strips) had to be used to get hardware intersections on other cards, and it seems to be kinda error-prone.
From their post:

"For someone already using the CUDA-based ray tracing API framework NVIDIA OptiX, LSS is already available as the default linear curve type, and works on all GPUs that OptiX supports. The OptiX version of LSS automatically uses a software fallback on GPUs prior to NVIDIA Blackwell GPUs, and the new hardware-accelerated primitive on GeForce RTX 50 Series GPUs, without needing any code changes."
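Setting that up in OptiX really is minimal, if I'm remembering the API right; a rough sketch of the curve build input (OptiX 7-era API, field and enum names from memory, error handling and buffer uploads omitted):

    // Rough sketch: declaring linear swept-sphere curves in OptiX. On
    // pre-Blackwell GPUs OptiX intersects these in software; on Blackwell
    // the same input should take the hardware path, no code changes.
    // d_vertices / d_widths / d_segmentIndices are assumed already on device.
    OptixBuildInput input = {};
    input.type = OPTIX_BUILD_INPUT_TYPE_CURVES;
    input.curveArray.curveType     = OPTIX_PRIMITIVE_TYPE_ROUND_LINEAR;
    input.curveArray.numPrimitives = numSegments;
    input.curveArray.vertexBuffers = &d_vertices;      // float3 control points
    input.curveArray.numVertices   = numVertices;
    input.curveArray.widthBuffers  = &d_widths;        // per-vertex radii
    input.curveArray.indexBuffer   = d_segmentIndices; // first vertex of each segment
    input.curveArray.flag          = OPTIX_GEOMETRY_FLAG_NONE;
    // ...then feed it to optixAccelBuild(), same as a triangle input.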
I'm guessing most of the cost of accurate rendering is the hair shader itself, now that I think about it more. NVIDIA has a far-field BCSDF hair shader as an example, but I can't see fitting that into a game anywhere, which is what they're talking about much of the time... The math is extremely complex for one effect, but more importantly it's loaded with sin / cos / asin / etc. and looks like it would tie up the special function units on an entire card, unless you were relying on their generative path-tracing neural nets to fill in most of the scene, in which case why use that hair shader in the first place?
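Even just the longitudinal lobe of a Marschner-style hair model gives a feel for it; a hedged sketch of the common Gaussian approximation (not NVIDIA's actual shader):

    // Hedged sketch: the longitudinal scattering term M_p of a Marschner-style
    // hair BCSDF, in its usual Gaussian approximation. Just an illustration of
    // the transcendental density per lobe, not anyone's production code.
    #include <cmath>

    float Mp(float sinThetaI,  // sine of the incident longitudinal angle
             float sinThetaO,  // sine of the outgoing longitudinal angle
             float alpha,      // cuticle tilt shift for this lobe (R, TT, TRT...)
             float beta)       // longitudinal roughness
    {
        // Half angle between incident and outgoing directions.
        float thetaH = 0.5f * (std::asin(sinThetaI) + std::asin(sinThetaO));
        float x = thetaH - alpha;  // lobe center shifted by the cuticle tilt
        // Normalized Gaussian lobe around the shifted half angle.
        return std::exp(-x * x / (2.0f * beta * beta))
             / (std::sqrt(2.0f * float(M_PI)) * beta);
    }

    // And that's one of three-plus lobes, before the azimuthal N_p terms,
    // Fresnel, and absorption, all repeated per light sample.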
I'd have been more impressed if they'd added smooth swept Bézier curves (non-segmented) and Bézier / NURBS patches in hardware. Every once in a while I look at what game engines are doing and get surprised that poly models are still used as heavily as they are. I know patches are nowhere near as easy to render as triangle geometry, but it seems like GPUs should be powerful enough to handle them by now.
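Evaluating a bicubic Bézier patch is trivial; it's the ray intersection search that's hard, which is presumably why hardware still wants triangles. A quick sketch of just the evaluation (de Casteljau, with a minimal float3 stand-in):

    // Sketch: evaluating a bicubic Bezier patch at (u, v) via de Casteljau.
    // float3 here is a minimal stand-in for whatever vector type you'd use.
    struct float3 { float x, y, z; };
    float3 operator*(float3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
    float3 operator+(float3 a, float3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
    float3 lerp3(float3 a, float3 b, float t) { return a * (1.0f - t) + b * t; }

    // Collapse one cubic row of control points down to a single point at t.
    float3 deCasteljau(const float3 p[4], float t) {
        float3 a = lerp3(p[0], p[1], t), b = lerp3(p[1], p[2], t), c = lerp3(p[2], p[3], t);
        float3 d = lerp3(a, b, t), e = lerp3(b, c, t);
        return lerp3(d, e, t);
    }

    // Reduce each row in u, then the resulting column in v.
    float3 evalPatch(const float3 cp[4][4], float u, float v) {
        float3 rows[4];
        for (int i = 0; i < 4; ++i) rows[i] = deCasteljau(cp[i], u);
        return deCasteljau(rows, v);
    }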
Adaptive subdivision of displaced geometry would be pretty cool too. True displacement gets expensive once you turn the dicing rate up high enough to capture high-frequency detail, but if the GPU could cheaply dice just the region it's currently intersecting, needing only the displacement map and the base geometry, it would be a slight speed hit instead of potentially blowing past VRAM limits.
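Something like this recursion is what I'm picturing, run on demand per ray-box hit rather than ahead of time; a hand-wavy CPU-side sketch where Patch, projectedEdgeLength(), sampleHeight(), and emitMicroTriangles() are all hypothetical helpers:

    // Hand-wavy sketch of on-demand adaptive dicing for true displacement:
    // split a patch until its projected size drops under a pixel-ish
    // threshold, displacing only the vertices actually produced. Patch,
    // projectedEdgeLength(), sampleHeight(), and emitMicroTriangles() are
    // hypothetical; only the map and base geometry need to be resident.
    void diceAndDisplace(const Patch& patch, float threshold, int maxDepth) {
        if (maxDepth == 0 || projectedEdgeLength(patch) < threshold) {
            // Leaf: push each vertex out along its normal by the map value.
            Patch leaf = patch;
            for (Vertex& v : leaf.corners())
                v.position = v.position + v.normal * sampleHeight(leaf.uvAt(v));
            emitMicroTriangles(leaf);
            return;
        }
        // Quadtree-style split; only children the ray can touch need dicing.
        for (const Patch& child : patch.split4())
            diceAndDisplace(child, threshold, maxDepth - 1);
    }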