RTX 2080Ti & 2080 DC Performance

pututu

[H]ard DC'er of the Year 2021
Though it's still very early, I thought I'd start this thread where folks can chime in if they find any information on how well these Turing cards perform in the DC world rather than in games. Better still, those lucky [H]'ers who can afford one or more, please share your DC performance results.

Other than the qualitative discussion of how fast (or just meh) these Turing cards are, it's best to link to BOINC/project/forum sites where actual numbers are being reported, or sites that give a good indication of how well these RTX cards might perform in DC generally.

AnandTech's review has a page with some DC-like performance comparisons. All cards ran at Nvidia "reference" stock settings.

Some notable ones. Not sure about the rest.

From FAHBench, I'm guessing. Unfortunately PPD does not scale linearly with ns/day, and I forget how to convert ns/day to PPD.
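For reference, a rough sketch of how an ns/day figure maps to PPD under the usual F@H quick-return bonus formula. All the per-WU constants here (WU length, base credit, k factor, deadline) vary by project, so the numbers below are placeholders, not real assignments:

```python
import math

def fah_ppd(ns_per_day, wu_length_ns, base_credit, k_factor, deadline_days):
    # Days needed to finish one WU at the benchmarked simulation speed.
    elapsed_days = wu_length_ns / ns_per_day
    # Quick-return bonus: credit is multiplied by sqrt(k * deadline / elapsed).
    bonus = max(1.0, math.sqrt(k_factor * deadline_days / elapsed_days))
    return base_credit * bonus / elapsed_days  # credit earned per day

# Placeholder WU constants, for illustration only. The sqrt() bonus is why
# PPD grows faster than ns/day rather than scaling linearly with it.
print(fah_ppd(ns_per_day=200, wu_length_ns=50, base_credit=10_000,
              k_factor=0.75, deadline_days=3))
```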

N-body simulation (astrophysics). RTX 2080 slower than Vega 64?

CUDA benchmark from Geekbench. No RTX results in yet.

OpenCL benchmark from Geekbench. No RTX results in yet.

WUProp@home. No RTX results in yet.

I"m sure there are other sites but please add links or info to this thread for future reference. Please no games benchmarking :)
 
BURP is maybe the only BOINC project that could potentially use the ray tracing, since it renders with Blender. If only they had work. Otherwise my 1080 Ti still looks strong enough ...
 
Amicable Numbers run time on the RTX 2080 is about 141 secs on this rig, which translates to about 4.2M PPD.

For the 1080 Ti, WUProp shows an average run time of about 360 secs or more at best, but I'm not sure whether that is with a single task or multiple tasks running per card.
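A back-of-envelope comparison from those two numbers, purely as a sketch; the credit-per-task value is just what the 4.2M PPD at 141 s implies, not an official figure, and the 1080 Ti time may not be for a single task:

```python
credit_per_task = 4_200_000 * 141 / 86_400   # ~6,855, implied by the RTX 2080 figures above
for card, secs in {"RTX 2080": 141, "GTX 1080 Ti (WUProp avg)": 360}.items():
    print(f"{card}: ~{credit_per_task * 86_400 / secs / 1e6:.1f}M PPD")
# RTX 2080 ≈ 4.2M PPD, GTX 1080 Ti ≈ 1.6M PPD
```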

 
Some interesting results from the links shown in post #1.


OpenCL benchmark (Geekbench): 2080 +30%, 2080 Ti +63% over the 1080 Ti. Not sure about the clock speeds.

CUDA benchmark (Geekbench): 2080 +73% over the 1080 Ti.


Collatz average time from WUProp: ~354 secs per task for the 2080 vs. 594 secs for the 1080 Ti (a ~40% reduction in run time).

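One thing to watch: the Geekbench figures are quoted as higher scores while the Collatz figure is a reduction in run time, and those aren't the same percentage. A quick sketch of the same Collatz speed-up expressed both ways:

```python
t_1080ti, t_2080 = 594, 354                 # Collatz seconds per task, from WUProp
time_reduction  = 1 - t_2080 / t_1080ti     # ≈ 40% less time per task
throughput_gain = t_1080ti / t_2080 - 1     # ≈ 68% more tasks per day
print(f"{time_reduction:.0%} shorter run time = {throughput_gain:.0%} higher throughput")
```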
 
What a beast of a GPU. Looks like I need to save some pennies for a 2080ti. Oh, and wait for them to actually be available somewhere...
 
Well, I decided I couldn't wait for the 2080 Tis to become available and come down in price, so I just ordered two RTX 2080 FE cards directly from Nvidia. I am throwing them on PG PPS Sieve first, but will hit all Nvidia GPU projects eventually. I will run some test WUs on Moo, Amicable, and Enigma right away as well, so we can get some average completion times. These cards are each going into boxes with an i7-4770K at stock (3.7 GHz on all cores), 16 GB of DDR3-2133, and a Samsung 850 Pro 120 GB SATA SSD, running Win 10 Pro x64. I won't be going nuts with testing, as I have a new position at work and am quite busy these days, but I will at least check WU run times in these projects at 100% power and 70% power.
 
First result:

Primegrid PPS Sieve

2:32 per WU, running two WUs on the GPU

123% max power and max temp setting of 88 °C
+115 core
+650 mem
Custom fan curve running at 85%, GPU temp 78 °C
Very reasonable fan noise

GPU core is holding steady at 1920 MHz with these settings.
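A rough PPD estimate from that result, assuming PPS Sieve still pays a fixed ~3,412 credits per task (going from memory, so treat this as a sketch):

```python
secs_per_task, concurrent = 2 * 60 + 32, 2       # 2:32 each, two WUs at once
credit_per_task = 3_412                          # assumed fixed PPS Sieve credit
tasks_per_day = concurrent * 86_400 / secs_per_task
print(f"~{tasks_per_day * credit_per_task / 1e6:.1f}M PPD")   # ≈ 3.9M
```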
 
GPUGRID is a no-go with the 2080 Ti: computation errors. Not surprised though; every time a next-gen card comes out, this happens.
 
Care to try this out on Einstein? It is currently the Formula BOINC (FB) sprint project. On my 1080 Ti at a 55% power limit, it is taking about 510-515 seconds per task (running two per GPU) for LATeah1026xxxxxxx tasks.

AMD cards do well in this project, judging by the number of top hosts running it: https://einsteinathome.org/community/stats/hosts

Per this recent post, CUDA is not well optimized there: https://einsteinathome.org/content/cuda-1#comment-157834. In fact, there are quite a number of BOINC projects where the CUDA or OpenCL software is not optimized for Nvidia unless someone writes an optimized app.

Anyone know if there is an optimized Einstein app for Nvidia?
 
Once Einstein lets me download more WUs, I'll do it. The funny thing is I reached my 10M goal shortly before the sprint was announced, so I aborted the rest of my Einstein WUs, and now the project doesn't like me.
 
Amicable Numbers 10^20 2.17 OpenCL_nvidia

1:34

Same settings as above
 
Einstein

Gamma-Ray Pulsar Binary Search #1 on GPUs 1.20

Single WU on GPU

6:17

Edit: Updated true crunching time to 6:17
 
No, one task on the GPU.
I got this mixed up earlier in my post #13. The non-optimized OpenCL (not CUDA) in the Einstein code is likely the cause of the sub-par performance. At least based on the OpenCL benchmark, the 2080 Ti should be 63% faster than the 1080 Ti. We need the optimized apps.
 
I re-checked, and it's really 6:17 for one Einstein WU on the 2080 Ti.

Trying two per GPU now.
 
Care to share your .xml file so I can run two WUs on my GPU?
You just need to go into the Einstein web preferences and change the GPU work share (GPU utilization factor) to 0.5, then run an update on the project.
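If you'd rather set it locally instead of through the web preferences, a BOINC app_config.xml in the Einstein project folder does the same thing. A minimal sketch, assuming the gamma-ray pulsar GPU app is still named hsgamma_FGRPB1G and a default Windows BOINC data directory; adjust both to your setup:

```python
from pathlib import Path

# Two tasks per GPU <=> gpu_usage 0.5. App name and path are assumptions.
APP_CONFIG = """\
<app_config>
  <app>
    <name>hsgamma_FGRPB1G</name>
    <gpu_versions>
      <gpu_usage>0.5</gpu_usage>
      <cpu_usage>1.0</cpu_usage>
    </gpu_versions>
  </app>
</app_config>
"""

project_dir = Path(r"C:\ProgramData\BOINC\projects\einstein.phys.uwm.edu")
(project_dir / "app_config.xml").write_text(APP_CONFIG)
# Then in BOINC Manager use Options -> Read config files, or restart the client.
```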
 
Einstein, running two per GPU: 11:33 per WU.
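Putting the Einstein numbers from this thread side by side (a sketch; the 1080 Ti figure is the one quoted earlier, at a 55% power limit and running two per GPU):

```python
runs = {                                   # (seconds per task, concurrent tasks per GPU)
    "RTX 2080 Ti, 1 per GPU": (6 * 60 + 17, 1),
    "RTX 2080 Ti, 2 per GPU": (11 * 60 + 33, 2),
    "GTX 1080 Ti @55% PL, 2 per GPU": (512, 2),
}
for name, (secs, n) in runs.items():
    print(f"{name}: ~{n * 86_400 / secs:.0f} tasks/day")
```

So running two per GPU buys roughly 9% more throughput on the 2080 Ti, but it still trails the power-limited 1080 Ti here, which fits the non-optimized OpenCL theory.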
 
I have to keep up with all you ballers on the team somehow!
 
I fund my DC habit with a brokerage account I opened at the beginning of the year with my annual bonus, strictly with the intention of upgrading my DC fleet this year. When the fund makes money above the amount I originally put in, I get to buy new hardware. The record stock market this year has given me the opportunity to purchase lots of hardware, which I didn't expect, but has been a happy surprise. I will be converting this fund into my son's college account at the end of this year, so my gravy train is almost over.
 
I threw the first 2080 in a box that already had an R9 280X. Here is an Aida64 comparison of their performance at stock speeds on Windows 10 Pro. The Haswell board uses a PCIe switch to get x16 to both GPUs.

Sorry about the largish picture. I was too lazy to edit it.

[Aida64 GPGPU benchmark screenshot: RTX 2080 vs. R9 280X]


Here is the same test on my 1080 ti:

[Aida64 GPGPU benchmark screenshot: GTX 1080 Ti]
 
Here is a quick PG PPS Sieve comparison on Win10. All GPUs are set to 80% power and have two WUs running per GPU:

(mm:ss)

RTX 2080: ~3:58

GTX 1080 ti: ~5:50

GTX 1070 ti: ~8:19


:wideyed::wideyed::wideyed:
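Relative throughput implied by those times, as a quick sketch:

```python
times = {"RTX 2080": 3 * 60 + 58, "GTX 1080 Ti": 5 * 60 + 50, "GTX 1070 Ti": 8 * 60 + 19}
base = times["GTX 1080 Ti"]
for card, secs in times.items():
    print(f"{card}: {base / secs:.2f}x the 1080 Ti")   # 2080 ≈ 1.47x, 1070 Ti ≈ 0.70x
```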
 