Which is faster at compute, RTX 2080 or GTX 1080 Ti?

Discussion in 'nVidia Flavor' started by M76, Jan 11, 2019.

  1. M76

    M76 [H]ardForum Junkie

    Messages:
    8,324
    Joined:
    Jun 12, 2012
    That's the question.

    The 1080 Ti has about 20% more CUDA cores (3584 vs 2944), but the 2080 has a slightly higher boost clock (1710 MHz vs 1582 MHz).

    The RTX 2080 seems like a downgrade on paper, even if it performs slightly faster in games.
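To put rough numbers on the cores-vs-clock trade-off, here's a back-of-the-envelope sketch using the specs above. It assumes the usual peak-FP32 formula (cores × 2 FLOPs per cycle for FMA × boost clock); real compute workloads rarely hit these theoretical peaks:

```python
def peak_fp32_tflops(cuda_cores: int, boost_mhz: int) -> float:
    """Theoretical peak FP32 TFLOPS: 2 FLOPs (one FMA) per core per cycle."""
    return cuda_cores * 2 * boost_mhz * 1e6 / 1e12

# Specs from the post above.
gtx_1080_ti = peak_fp32_tflops(3584, 1582)  # ~11.3 TFLOPS
rtx_2080    = peak_fp32_tflops(2944, 1710)  # ~10.1 TFLOPS

print(f"GTX 1080 Ti: {gtx_1080_ti:.1f} TFLOPS")
print(f"RTX 2080:    {rtx_2080:.1f} TFLOPS")
```

So on raw FP32 the extra cores outweigh the clock bump, which is why the 2080 can look like a downgrade on a spec sheet.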
     
  2. CAD4466HK

    CAD4466HK Gawd

    Messages:
    799
    Joined:
    Jul 24, 2008
    I would say it depends on your workload. For example, if your project utilizes FP16, then the 1080 Ti is not your card. If you use more than 8GB of VRAM, then the 2080 is a bad choice. You really can't say one is faster than the other at compute, since there are so many different variables to consider. In some projects the 1080 Ti will crush the 2080, and vice versa.

    These are not the best benches, but they give an idea of the point I'm trying to make:

    https://www.anandtech.com/show/1334...x-2080-ti-and-2080-founders-edition-review/15

    https://www.pugetsystems.com/labs/h...V-TensorFlow-Performance-with-CUDA-10-0-1247/
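To illustrate the FP16 point with rough numbers, a sketch of theoretical peaks. The ratios are assumptions based on the published architecture specs, not benchmark results: consumer Pascal (GP102) runs FP16 at 1/64 the FP32 rate, while Turing (TU104) runs FP16 at 2× FP32, even before Tensor Cores are considered:

```python
def peak_tflops(cores: int, boost_mhz: int, fp16_ratio: float):
    """Return (FP32, FP16) theoretical peak TFLOPS for the given FP16:FP32 ratio."""
    fp32 = cores * 2 * boost_mhz * 1e6 / 1e12  # 2 FLOPs (FMA) per core per cycle
    return fp32, fp32 * fp16_ratio

# Assumed ratios: Pascal consumer FP16 = 1/64 of FP32; Turing FP16 = 2x FP32.
ti_fp32, ti_fp16 = peak_tflops(3584, 1582, 1 / 64)  # GTX 1080 Ti
r8_fp32, r8_fp16 = peak_tflops(2944, 1710, 2.0)     # RTX 2080

print(f"GTX 1080 Ti: FP32 {ti_fp32:.1f} / FP16 {ti_fp16:.2f} TFLOPS")
print(f"RTX 2080:    FP32 {r8_fp32:.1f} / FP16 {r8_fp16:.1f} TFLOPS")
```

The 1080 Ti wins on paper at FP32, but the 2080's FP16 throughput is two orders of magnitude higher, which is exactly why the answer flips depending on the workload.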
     
  3. Rizen

    Rizen [H]ardForum Junkie

    Messages:
    9,187
    Joined:
    Jul 16, 2000
    You'll have to look at benchmarks; they're different architectures, so you can't compare them based on core counts or clock speeds alone.