Poll: Tensor (TPU) Cores on consumer GPUs: Boon or Bust?

Tensor Cores, Boon or Bust?

  • Bust - it has failed so far, and doesn't appear to have a bright future.

    Votes: 22 31.9%
  • Boon - Already providing notable benefits.

    Votes: 5 7.2%
  • Future Boon - questionable today, but expect future benefit.

    Votes: 42 60.9%

  • Total voters
    69
  • Poll closed.
I am leaning towards bust, mostly because Tensor cores are highly specialized and, judging from the dev feedback, nobody quite knows how to integrate them or what to make of them.
That could change in the future, I suppose, but I doubt it. I am mostly basing this on those specialized PhysX cores, which simply disappeared, whereas PhysX itself stuck around.
The original PhysX was not cores on a GPU but a separate PCI card from Ageia.
And fun fact: it supported more effects than Nvidia's PhysX.

PhysX convinced, and probably still convinces, some people to get Nvidia hardware, so it was a success.
 
DXR is cool tech but still has a long way to go before mainstream adoption. One expensive card can run it at acceptable frame rates at 4K in one game, and that's partial RT, not even a full scene.
DLSS is an utter joke.

Future boon if they keep pouring money into it. At this point it's far from ready.

Most gamers are at 1080p/1440p; 4K is probably barely 1-2% of the discrete base. So judging it by 4K performance isn't representative of real-world use and application. Even a 2060 can do RT well enough at 1080p, and that really should be the standard baseline of performance, not 4K. People spend $1,200 on a 2080 Ti for performance, not 4K--there are some who do, but they aren't anywhere near the majority.

I consider myself a hardcore FPS gamer on 1440p/144 Hz, and if I had a 2080 Ti I would continue doing what I do now: run competitive MP FPS games on very low settings for max fps, and SP games with the eye candy turned on. I suspect most gamers do what I do.

Also current generation RT excels in SP games and Cyberpunk will likely be the game that serves as the de facto benchmark in the near future.
 
Adding more SM blocks would make much more sense for consumer cards than the inclusion of Tensor cores. Not only would shader performance be better, but the additional SM blocks would also carry additional RT cores, so DXR performance would be better than it is too. However, I really doubt that, even if Nvidia were to actually make powerful Tensor-less GPUs, they would increase anything.

DLSS would be much better if it were also exposed as a normal, generic scaling algorithm. While they're at it, they should also add the point scaling that so many 4K users want...
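For reference, the "point scaling" being asked for is plain nearest-neighbor upscaling with no filtering, which keeps integer-scaled images (e.g. 1080p shown on a 4K panel) pixel-sharp. Below is a minimal CUDA sketch of what such a pass looks like, purely illustrative: the kernel name, the RGBA8 pixel format, the integer scale factor, and plain global-memory buffers are my own assumptions, not anything Nvidia actually ships in a driver.

```cuda
// Minimal nearest-neighbor ("point") upscale sketch.
// Assumptions (not from the thread): RGBA8 pixels packed as uchar4,
// an integer scale factor, and simple linear buffers. A real driver-side
// scaler would operate on swapchain surfaces instead.
#include <cuda_runtime.h>

__global__ void pointUpscale(const uchar4* src, int srcW, int srcH,
                             uchar4* dst, int dstW, int dstH, int scale)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= dstW || y >= dstH) return;

    // Each destination pixel copies the single nearest source pixel:
    // no blending, so text and UI elements stay razor sharp.
    int sx = min(x / scale, srcW - 1);
    int sy = min(y / scale, srcH - 1);
    dst[y * dstW + x] = src[sy * srcW + sx];
}

// Example launch: 1080p -> 4K, an exact 2x factor in each dimension.
void upscale1080pTo4K(const uchar4* d_src, uchar4* d_dst)
{
    dim3 block(16, 16);
    dim3 grid((3840 + block.x - 1) / block.x,
              (2160 + block.y - 1) / block.y);
    pointUpscale<<<grid, block>>>(d_src, 1920, 1080, d_dst, 3840, 2160, 2);
}
```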
 
The original PhysX was not cores on a GPU but a separate PCI card from Ageia.
And fun fact: it supported more effects than Nvidia's PhysX.

PhysX convinced, and probably still convinces, some people to get Nvidia hardware, so it was a success.

Yes, the original PhysX implementation was a separate card with a processor, i.e. a "core", from Ageia.
The point I was trying to make is that the dedicated hardware was deprecated in favor of a more "generic" software implementation.
 
Tensor cores are not for gamers. They are for people using deep learning for computer vision or similar tasks, plenty of whom are using consumer GPUs because paying $10k for an enterprise-grade card is too expensive.
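For context on what Tensor cores actually accelerate: they are fixed-function units for small mixed-precision matrix multiplies (FP16 inputs with FP16/FP32 accumulation), which is the inner loop of the deep-learning workloads mentioned above. Here is a rough sketch of a single tile multiply using CUDA's public WMMA API, just to show the shape of the operation; the 16x16x16 tile is the standard fragment shape, while the chosen matrix layouts and the assumption that M, N, K are multiples of 16 are mine.

```cuda
// Rough Tensor-core tile multiply via CUDA's WMMA API (sm_70+).
// Assumptions: A is MxK row-major, B is KxN column-major, C is MxN
// row-major, and M, N, K are multiples of 16.
#include <mma.h>
#include <cuda_fp16.h>
using namespace nvcuda;

__global__ void wmmaGemmTile(const half* A, const half* B, float* C,
                             int M, int N, int K)
{
    // One warp computes one 16x16 output tile.
    // Launch with blockDim.x a multiple of 32, e.g. dim3(128, 4).
    int warpM = (blockIdx.x * blockDim.x + threadIdx.x) / warpSize;
    int warpN = blockIdx.y * blockDim.y + threadIdx.y;
    if (warpM * 16 >= M || warpN * 16 >= N) return;

    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> aFrag;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> bFrag;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> accFrag;
    wmma::fill_fragment(accFrag, 0.0f);

    // March along K in 16-wide steps; each mma_sync runs on Tensor cores.
    for (int k = 0; k < K; k += 16) {
        wmma::load_matrix_sync(aFrag, A + warpM * 16 * K + k, K);
        wmma::load_matrix_sync(bFrag, B + warpN * 16 * K + k, K);
        wmma::mma_sync(accFrag, aFrag, bFrag, accFrag);
    }
    wmma::store_matrix_sync(C + warpM * 16 * N + warpN * 16, accFrag, N,
                            wmma::mem_row_major);
}
```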
 
Tensor cores are not for gamers. They are for people using deep learning for computer vision or similar tasks, plenty of whom are using consumer GPUs because paying $10k for an enterprise-grade card is too expensive.

If you think the price of hardware is the key factor in the enterprise, it is very clear to me that you have no clue.
Hardware (even Nvidia's top-of-the-line stuff like DGX) is dirt cheap compared to the cost of the software...

And you are speaking against known facts too... DLSS is for gamers... ignorance is a piss-poor foundation, FYI.
 
If you think the price of hardware is the key factor in the enterprise, it is very clear to me that you have no clue.
Hardware (even Nvidia's top-of-the-line stuff like DGX) is dirt cheap compared to the cost of the software...

And you are speaking against known facts too... DLSS is for gamers... ignorance is a piss-poor foundation, FYI.

I am not talking about large enterprise or those who have their own cloud - clearly cost is not going to matter for those willing to pay tens of thousands to train a single large-scale model, or those making money off of such groups.

On the other hand, there are plenty of startups who prefer to buy workstations with consumer-grade cards for their engineers for use in development because this is less expensive in the medium-term than paying for cloud VMs. Ditto for academic use - plenty of people are buying consumer-grade cards for their PhD students' and postdocs' exclusive use within their lab.

DLSS is a minor part of the overall computational cost within a video game, and other aspects of graphics will generally have the largest effect on performance.
 
Most gamers are at 1080p/1440p; 4K is probably barely 1-2% of the discrete base. So judging it by 4K performance isn't representative of real-world use and application.

I think it was 1% six years ago when I was playing Tomb Raider at 4K60 on CrossFire 7970 GHz cards over DisplayPort, before HDMI 2.0 was standardized. Nowadays you can get a cheap 4K TV from your local big box store for a few hundos with HDMI 2.0 and game at 4K60 with low or medium settings, no RTX, and a low-to-midrange card. Not everyone is a competitive FPS twitch gamer that needs 240 Hz. With that being said, it's still probably only like 5% lol.
 
Yes, the original PhysX implementation was a separate card with a processor, i.e. a "core", from Ageia.
The point I was trying to make is that the dedicated hardware was deprecated in favor of a more "generic" software implementation.
Yes, but the fact that the original PhysX was never part of any GPU makes this case very different from either Tensor cores or RT cores.

If this kind of tech had actually been integrated, not only by Nvidia but also by ATI/AMD, and had become a standard DirectX feature, then we would have truly amazing physics in games.
As it went, however, everyone deemed it unnecessary and went with the "generic approach", and because of that games today have poor physics that is often non-existent apart from some basic ragdolls and other simple rigid bodies placed here and there, which is pretty much the level of Half-Life 2 but less utilized in actual gameplay.

Tensor cores are not there for games specifically. They can be used for games though, and because of that they will be used there. AI and graphics generation will get there. It might take some time though... and besides, current hardware is pretty slow to do any really advanced stuff with it in real time.
 
I think it was 1% six years ago when I was playing Tomb Raider at 4K60 on CrossFire 7970 GHz cards over DisplayPort, before HDMI 2.0 was standardized. Nowadays you can get a cheap 4K TV from your local big box store for a few hundos with HDMI 2.0 and game at 4K60 with low or medium settings, no RTX, and a low-to-midrange card. Not everyone is a competitive FPS twitch gamer that needs 240 Hz. With that being said, it's still probably only like 5% lol.

The Steam hardware survey is a pretty good indicator of where most gamers are at, and like I said, 4K is basically irrelevant:

[Attached image: Steam Hardware Survey screenshot]
 
DLSS is probably just an initial stepping stone for both ray tracing and VR, for performance reasons. The other thing about Tensor cores is that, sure, they are good for machine learning, which has various applications both inside and outside of gaming and game development. The other aspect of DLSS is the latency reduction: it can be as much as 5 ms less latency than no AA, which reduces input lag and naturally makes mGPU a bit easier. Sure, it's not perfect, but Rome wasn't built in a day. To consider it a bust is an oversight for something that is only an initial first step, one which could improve and I imagine will improve in future generations of GPUs.

On that note, there is nothing to say Nvidia didn't add it just to test the waters and see where it could lead, perhaps toward a discrete co-processor for Tensor-core-based machine-learning upscaling, similar in spirit to the original PhysX card. Imagine where that might go with a dedicated chip that the GPU renders into and which then outputs to the main display!? Truth be told, we have no idea how good it might become going forward.
 
4K is largely irrelevant because of lack of screen support with all the bits you want at reasonable prices at the moment.

4K/32 inch panels generally don’t go above 60-75hz at the moment. At least at sub $1k prices.

Without at least 120hz support not many people are going to buy it for gaming. Especially for competitions.

Then there is graphics card support. To get a 60fps minimum with a modern title you need a 2080ti. This is a freaking expensive card.

You can drive one of these screens ok on a second tier card like a 1080ti/2080/vega64 but you won’t get your minimum 60fps with all the trimmings turned on.
 
4K is largely irrelevant because of lack of screen support with all the bits you want at reasonable prices at the moment.

4K/32 inch panels generally don’t go above 60-75hz at the moment. At least at sub $1k prices.

Without at least 120hz support not many people are going to buy it for gaming. Especially for competitions.

Then there is graphics card support. To get a 60fps minimum with a modern title you need a 2080ti. This is a freaking expensive card.

You can drive one of these screens ok on a second tier card like a 1080ti/2080/vega64 but you won’t get your minimum 60fps with all the trimmings turned on.

Has competitive gaming really taken off that much? There have been a few other posters on here who also got the LG OLEDs and switch between 1080p120 and 4K60 depending on the game.
 