RTX Tensor Cores are Unlocked

AlphaAtlas

[H]ard|Gawd
Staff member
Joined
Mar 3, 2018
Messages
1,713
Lambda Labs ran an RTX 2080 Ti and the Titan RTX through a set of TensorFlow benchmarks and compared them to Pascal and Volta-based cards. On average, the RTX cards are on par with the pricey Titan V and the Volta-based Tesla V100, but are significantly faster than the 1080 Ti and the Pascal Titan. The Titan RTX, for example, is about 46.8% faster than the 1080 Ti in FP32 deep learning workloads, and a whopping 209% faster in FP16. Interestingly, the tester notes that the 2080 Ti and Titan RTX are both utilizing their tensor cores in the benchmarks. This is a departure from some of Nvidia's previous compute-focused features, which were locked out by GeForce drivers or physically disabled in the gaming cards. Assuming the 2080 and 2070 are unlocked too, and that these benchmarks are somewhat representative of deep learning inference performance, this theoretically makes the RTX lineup particularly good at running desktop neural network jobs, like upscaling images, game textures, or video frames.
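For context, the FP16 speedups come from the tensor cores accelerating half-precision matrix math. A minimal TensorFlow sketch of the kind of operation they pick up (the shapes and eager-style code here are my own illustration, not taken from the benchmark suite):

```python
import numpy as np
import tensorflow as tf

# A large half-precision matrix multiply: on Volta/Turing, the CUDA
# libraries can dispatch this to the tensor cores when the matrix
# dimensions are multiples of 8. (Sizes here are arbitrary examples.)
a = tf.constant(np.random.randn(4096, 4096), dtype=tf.float16)
b = tf.constant(np.random.randn(4096, 4096), dtype=tf.float16)
c = tf.matmul(a, b)  # FP16 math: the path behind the big FP16 numbers above
```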

All models were trained on a synthetic dataset to isolate GPU performance from CPU pre-processing performance and reduce spurious I/O bottlenecks. For each GPU/model pair, 10 training experiments were conducted and then averaged. The "Normalized Training Performance" of a GPU is calculated by dividing its images/sec performance on a specific model by the images/sec performance of the 1080 Ti on that same model... All benchmarking code is available on Lambda Labs' GitHub repo.
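The normalization itself is just a per-model ratio against the 1080 Ti. A quick sketch of that calculation (the throughput numbers below are made up for illustration; the real ones are in the benchmark results):

```python
# Hypothetical images/sec for a single model, e.g. ResNet-50.
images_per_sec = {
    "GTX 1080 Ti": 200.0,   # baseline card
    "RTX 2080 Ti": 290.0,
    "Titan RTX":   300.0,
}

baseline = images_per_sec["GTX 1080 Ti"]

# Normalized Training Performance: each GPU's images/sec divided by the
# 1080 Ti's images/sec on the same model (the 1080 Ti is 1.00 by definition).
for gpu, throughput in images_per_sec.items():
    print(f"{gpu}: {throughput / baseline:.2f}")
```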
 
They're used to smooth over the craptastic ray tracing result, so barring some extravagant driver gymnastics they should stay generally available.
 
Ohh Procedural Textures / Music, and Dynamic Bones acceleration please
 
You realize that at this point no one cares, correct?
The point is that dropping ray tracing or not doesn't change the number of tensor cores on the card. It's just a software feature that uses the tensor cores. Complaining about the ray tracing in this context makes zero sense.
 
The point is that dropping ray tracing or not doesn't change the number of tensor cores on the card. It's just a software feature that uses the tensor cores. Complaining about the ray tracing in this context makes zero sense.
Yes, I get your point, and as it stands no one truly cares if there is no product that can handle it... we can talk about quantum computing for comparison.

edit: Of course it's not the same fucking thing, if any moron is watching, but the point is, if there is no product for the consumer it might as well be in the quantum computing space... i.e. unobtanium at the moment
 
Don't worry, at some point Nvidia will hobble them in some way via the drivers or firmware in future cards, just like they did with the Titan line to make their professional line of cards look more worth the price tag.
 
Don't the tensor cores power the DLSS feature? So why wouldn't they be unlocked?
 
So we finally have the cut-down RTX 2070 released.

Unfortunately it will be another six months before we see the next chip down, since NVIDIA is pulling emergency duty getting rid of GP104/106/GDDR5X.
 
https://developer.nvidia.com/rtx
https://en.wikipedia.org/wiki/GeForce_20_series
https://www.hardwarezone.com.sg/fea...turing-architecture/rt-cores-and-tensor-cores

You might want to look into that statement, because as far as I can tell, you're wrong. There are CUDA, Tensor, and RT cores.

Marketing Kool-Aid.

Games were not ready at launch... and the RT stuff felt rushed in general, for one very good reason.

All the game developers are under strict NDAs right now and can't tell you that the real RT stuff is coming with the PS5 and the next Xbox, both powered by AMD.

If I had to guess... Nvidia caught wind of what the developers working on the next gen of console games were being read in on. So, as fast as possible, they hacked together drivers to tick as many tensor-powered marketing bullet points as they could. RTX and DLSS are inventions of NV's marketing dept. Yes, tensor cores will be put to good use in the next gen of games for traced lighting engines... and the DLSS stuff is interesting if it pans out in any real way.

My point is there is no such thing as an "RT" core. The "RT" cores are nothing but the cluster-addressing units of the tensor cores. In order to take Google's TensorFlow layout and enable FP16 and FP8 math, the work needs to be split, which requires some form of extended control unit. If the NV marketing guys want to call those "RT" cores, so be it. I'm sure one day, when some developer releases a game with both DLSS and RTX, we can see if turning both on makes the poor RTX cards cry real tears or just go nuclear. ;) lol (It's not coincidental that the number of "RT" cores on an RTX part is the number of tensor cores divided by 8.)
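For what it's worth, the divide-by-8 arithmetic does check out against NVIDIA's published GeForce 20-series core counts, since each Turing SM carries 8 tensor cores and 1 RT core. A quick check:

```python
# (tensor cores, RT cores) from NVIDIA's published GeForce 20-series specs.
cards = {
    "RTX 2070":    (288, 36),
    "RTX 2080":    (368, 46),
    "RTX 2080 Ti": (544, 68),
}

for name, (tensor_cores, rt_cores) in cards.items():
    print(f"{name}: {tensor_cores} / 8 = {tensor_cores // 8} (RT cores: {rt_cores})")
```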
 