Lambda Labs ran an RTX 2080 Ti and the Titan RTX through a set of TensorFlow benchmarks and compared them to Pascal- and Volta-based cards. On average, the RTX cards are on par with the pricey Titan V and the Volta-based Tesla V100, but significantly faster than the 1080 Ti and the Pascal Titan. The Titan RTX, for example, is about 46.8% faster than the 1080 Ti in FP32 deep learning workloads, and a whopping 209% faster in FP16. Interestingly, the tester notes that the 2080 Ti and Titan RTX both use their tensor cores in these benchmarks. That is a departure from some of Nvidia's previous compute-focused features, which were locked out by GeForce drivers or physically disabled on the gaming cards. Assuming the 2080 and 2070 are unlocked too, and that these benchmarks are reasonably representative of deep learning inference performance, this theoretically makes the RTX lineup particularly good at running desktop neural network jobs, such as upscaling images, game textures, or video frames.
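For context, here is a minimal TensorFlow 2.x sketch of how FP16 compute is typically enabled so that the Turing tensor cores get used. This is not Lambda Labs' benchmark code; the model, batch size, and image shape are illustrative assumptions.

```python
import tensorflow as tf
from tensorflow.keras import mixed_precision

# Ask Keras for FP16 compute with FP32 variables; on Turing cards, eligible
# matmuls and convolutions are then routed through the tensor cores.
mixed_precision.set_global_policy("mixed_float16")

# Illustrative model and shapes; ResNet-50 at 224x224 is a common benchmark choice.
model = tf.keras.applications.ResNet50(weights=None,
                                       input_shape=(224, 224, 3),
                                       classes=1000)
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01),
              loss="sparse_categorical_crossentropy")

# Synthetic inputs, mirroring the "synthetic dataset" approach quoted below,
# keep CPU preprocessing and disk I/O out of the GPU measurement.
images = tf.random.uniform((64, 224, 224, 3))
labels = tf.random.uniform((64,), maxval=1000, dtype=tf.int32)
model.fit(images, labels, batch_size=32, epochs=1)
```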
All models were trained on a synthetic dataset to isolate GPU performance from CPU pre-processing performance and reduce spurious I/O bottlenecks. For each GPU/model pair, 10 training experiments were conducted and then averaged. The "Normalized Training Performance" of a GPU is calculated by dividing its images/sec performance on a specific model by the images/sec performance of the 1080 Ti on that same model... All benchmarking code is available on Lambda Labs' GitHub repo.
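To make the quoted normalization concrete, here is a small sketch of the arithmetic; the throughput figures are placeholders, not Lambda Labs' measured numbers.

```python
def normalized_training_performance(gpu_images_per_sec, gtx_1080_ti_images_per_sec):
    """Throughput relative to the 1080 Ti baseline: 1.0 means 1080 Ti speed."""
    return gpu_images_per_sec / gtx_1080_ti_images_per_sec

# Placeholder numbers for illustration only: a GPU pushing 450 images/sec on a
# model where the 1080 Ti manages 300 images/sec scores 1.5x (i.e. 50% faster).
print(normalized_training_performance(450.0, 300.0))  # -> 1.5
```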