"Has Nvidia been caught crippling game performance with Gameworks for literally NO visual benefits? YES."

How so? Forcing a lower maximum tessellation level does have an image-quality impact, but it's an acceptable trade-off for the performance gained. The real issue, IMO, is that GameWorks has no built-in way to do the same thing. Given the wide range of tessellation performance even across Nvidia's own cards, there should have been an easier way to adjust it.
The HairWorks drama is funny in a way, because it looks like payback directed at AMD for a similar problem with TressFX 1.0 (in Tomb Raider): Nvidia accused AMD of crippling performance on its hardware by doing more work than necessary. (The solution was to tighten the simulation bounds so the hair ran faster... fixed thanks to the game developer, not AMD, and further optimized on Nvidia's driver side.)
"Is Nvidia exercising practices that are bad for the entire ecosystem of game development standards and hurts gamers? YES."

How is it bad for the entire ecosystem, and what standards are you talking about? GW is an optional middleware library, and it uses DirectX (on Windows) to render. There are no "game development standards"; there are graphics API standards.
Boo hoo for the (butt) "hurt[ing] gamers". These are the same people who want to take their ball and go home when they're not on the receiving end of GPU-vendor-specific optimizations, but can't stop crowing when they are.