2080 Ti / 2080 Ownership Club

I wouldn't make a purchase based on a hack. It's entirely likely Nvidia will remove the "feature" just like with the PhysX trick.

Good point. Dunno, it seems like there's not much difference in performance (at least gaming at UHD), and AMD is cheaper besides... As you say, not the deciding factor to be sure, but it sure would be nice if true, as my ultrawide is a FreeSync monitor.
 
Ah, thanks for the advice. I had heard that the more modern processors would improve minimum frame rates and possibly smooth out frame times. I get quite a few spikes when I'm playing games, even if the games are frame limited to 60 Hz and the GPU isn't at 100% usage. Either way, I suspect it's only a matter of time before I cave and do an overhaul. The other draws for me are an NVMe SSD (jumping from my current 480 GB to 1 TB in the process), DDR4 RAM, and modern mobo features like USB 3.1 and USB-C.

I had read about a trick that would allow Nvidia GPUs to output variable refresh rates to a FreeSync monitor if the system also had an AMD processor. That also has me thinking about going with AMD, if true.

If you’re interested in ray tracing, keep an eye on that. DICE said they are optimizing for 12-thread CPUs.

I would hold out for 7nm if I were you.

https://hardforum.com/threads/intel...aming-but-66-pricier.1969646/#post-1043881927
 
There are a lot of people on the GeForce forums who are having instability issues with artifacts, blue screens, and dead cards. I don't think it's about me being unlucky. I'm guessing there's a bad batch or an inherent flaw with the FE cards. Almost all of the issues are limited to people with FE cards.
I just updated to the latest Nvidia driver to see if it was better. Not 5 minutes into testing, I got a blue screen. Maybe this was the real reason for the delay.
 
So I tried setting Windows power mode to Performance, and I also set MSI Afterburner to Kernel Mode. So far so good.

Played 30 minutes of Left 4 Dead (which blue screened after 5 minutes previously), Superposition 1080p Extreme + 8K, Heaven at 1080p max for 2 runs, FurMark for 5 minutes, and about 1 hour of F.E.A.R. 2. No crashes.

I usually don't like "home remedy" troubleshooting, but it looks like things are working now.

I am still kind of worried, though, as there are a ton of threads on the Nvidia forums with people having crashes and blue screens. Could be that the drivers are bunk, otherwise there is a serious QC issue.
 
No problems at 2,100 MHz core and 8,000 MHz VRAM (+1000) over the weekend. Pretty awesome!!!
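For context, +1000 takes the 2080 Ti's displayed memory clock from 7,000 to 8,000 MHz, which is a healthy bandwidth bump. A quick back-of-the-envelope check (a sketch, assuming Afterburner displays half the effective GDDR6 data rate and the stock 352-bit bus):

```python
# Rough memory-bandwidth estimate for a 2080 Ti at +1000 memory offset.
# Assumes Afterburner's displayed clock is half the effective GDDR6 data rate.
bus_width_bits = 352                      # stock 2080 Ti memory bus
displayed_mhz = 8000                      # 7000 stock + 1000 offset
gbps_per_pin = displayed_mhz * 2 / 1000   # 16 Gbps effective per pin

bandwidth_gbs = gbps_per_pin * bus_width_bits / 8
print(f"{bandwidth_gbs:.0f} GB/s")        # ~704 GB/s, up from 616 GB/s stock
```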

Once upon a time when I had SLI, one thing I noticed was that if the cards throttled for any reason, my frame times would go apeshit. I sometimes wondered if that was the reason some reviews had erratic charts.

Good that things are working for you.
 
Finally got a solid weekend of gaming in between BLOPS4 and ARK (5-hour session last night - just like old times), which helped me stabilize and tweak my OC. Still working on the memory (currently +300). OC Scanner recommended +175 for the core clock OC, but I dialed it back 15 to +160 due to BLOPS4 crashes (which I now think are just buggy software - since the system is stable - fixed by killing BLOPS4 and restarting).

Stock clock benches linked here.

2080 Ti FE on air, custom fan curve.

OC:
+100 Core Voltage, 123% Power Limit, 88C Temp Limit, +160 Core Clock, +300 Memory Clock

Time Spy 1.0 - 13,578 (stock: 12,473) 8.9% gain
https://www.3dmark.com/3dm/29441858

Time Spy Extreme 1.0 - 6,286 (stock: 5,797) 8.4% gain
https://www.3dmark.com/3dm/29442080

Fire Strike Ultra 1.1 - 8,590 (stock: 7,895) 8.8% gain
https://www.3dmark.com/3dm/29442262

Fire Strike Extreme 1.1 - 15,292 (stock: 14,581) 4.9% gain
https://www.3dmark.com/3dm/29442424
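Sanity-checking those gains against the stock scores:

```python
# Percent gain of each overclocked 3DMark score over its stock baseline.
runs = {
    "Time Spy":            (12473, 13578),
    "Time Spy Extreme":    (5797, 6286),
    "Fire Strike Ultra":   (7895, 8590),
    "Fire Strike Extreme": (14581, 15292),
}
for name, (stock, oc) in runs.items():
    print(f"{name}: +{(oc - stock) / stock * 100:.1f}%")
# Time Spy +8.9%, TS Extreme +8.4%, FS Ultra +8.8%, FS Extreme +4.9%
```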
 
Can anyone with a 2080/2080 Ti share their experiences with FFXV at 4K?

Wanted to revisit this as I just reinstalled the game with the massive UHD texture pack on Steam. This thing is a beast - it weighs in at something like 152 GB and makes me REALLY want to upgrade to a 1 TB SSD.

What I found when I installed it, however, was a bit disappointing. Even the 2080 Ti with a healthy OC isn't good enough to run this thing at a solid 60 fps at UHD resolution with maxed-out settings across the board. At that, I was getting somewhere between 48-50 fps. Now, this could partially be because of my CPU - I've read that FFXV is extremely demanding on the CPU, and I did see a lot of spikes to over 80-90% CPU usage, leading me to believe that one of the cores may have been bottlenecking this game. I've read that it's just not well optimized for CPU usage, but I really have no way to say for sure. Most review sites put the game at around 60-63 fps at UHD and "ultra" settings, but some of the ones that actually stated specifics did mention they turned off the Nvidia features (ShadowLibs, HairWorks, TurfEffects, etc.).

I tried messing around with the settings to see where the worst drains on frames were, and it's definitely the Nvidia features. The VXAO feature was the worst, leeching a solid 6-7 fps all by itself. Second was TurfEffects, which drained another 3-5 fps, and the others weren't bad. ShadowLibs was probably 1-2 fps, and HairWorks seemed pretty much negligible (though it may have been worse if I were fighting a mob with hair...). Most of this "testing" was done just running around fighting random mobs in the desert area next to Hammerhead at the beginning of the game. For all I know it could easily be worse in certain areas.
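Tallying the midpoints of those per-feature costs against the ~48-50 fps baseline shows why shutting the big ones off lands right around 60 fps:

```python
# Back-of-the-envelope fps budget from the per-feature costs above.
# Midpoints of the observed ranges; actual impact varies by scene.
baseline_fps = 49   # ~48-50 fps with everything maxed
cost_fps = {"VXAO": 6.5, "TurfEffects": 4.0, "ShadowLibs": 1.5}

recovered = sum(cost_fps.values())
print(f"Estimated fps with those three off: {baseline_fps + recovered:.0f}")
# -> ~61 fps, consistent with the steady 60 fps reported below
```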

I ended up disabling all of the Nvidia effects except HairWorks, and this gave me a nice constant 60 fps. I honestly couldn't see the impact of the others anyway - though again, that could have been just because of the area I was in.

Disabling TAA gives a huge boost to frame rates, but it comes at too steep a visual quality price - the aliasing in this game is particularly nasty, especially at UHD resolution. I couldn't do without it, but I may try to see if I can find another AA injector or something that will penalize my frames less while looking nearly as good.

Overall, I have to say I'm disappointed that the 2080 Ti hasn't allowed me to just crank everything "to 11" and still get 60 fps at UHD resolution, but it's still a huge improvement over my Titan X Pascal. When I played the game through with that, in order to keep a relatively steady 60 fps I was forced to play at 75% rendering resolution (with UHD set) and still had to disable most of the Nvidia features. If I wanted UHD at 100% render, I had to disable TAA and all Nvidia features, and lower a few other settings as well to get near 60 fps.

As always, YMMV.

Note: I am also playing with HDR enabled. I don't know if that impacts frame rates.

Edit: one other note - I found it remarkable that this game, with the huge UHD texture pack of course, was using almost ALL of my 11 GB of video RAM. I've never seen another game even come close. Insane.
 
There is a big chance it is your CPU. Everything I have read says FFXV is very tough on CPUs. I think the more cores the better for this game. Also, the 4770 might be holding back the 2080 Ti in general. Overclock that CPU some more and see if it makes a difference.
 
Just wanted to add my experience to this. I had been waiting for my 2080Ti to come in to finally start this game. (Also beat the main story at lvl 99 4 days later. Judgement-free zone pls)

I'm running a 2950X (in Game Mode) and seemed to have a slightly better experience than you did. Not by much. With everything maxed, I average about 50-53 fps in more demanding areas like Lestallum, Galdin Quay, and the open forest-y vistas. Here's what I changed to give myself a solid 60 fps 99% of the time:

- Turned off all the NV effects aside from HairWorks and ShadowLibs.
- Turned shadows from Highest to High.
- Turned motion blur off.
- Turned off the Steam overlay - this sometimes contributes to stuttering.
- Set the in-game limiter to 60 fps - this had a very positive effect for me.

- I'm investigating using a ReShade for an AA solution, but just need to find some time to try and test it.

With these settings, I get a mostly solid 60 fps. There are a few battles that tanked my performance to under 20 fps when there were tons of particle effects and enemies onscreen. I just fought a battle in one of the royal tomb dungeons that involves 4 Red Giants, several Flans, and some of the snake ladies (Nagarani?), and it looked like a slideshow for a few moments. In Altissia, I average about 55 fps.

I was also surprised to see nearly all my VRAM get used. More surprising, however, was seeing 21 GB of my 64 GB of system RAM in use.

Kinda disappointing that I couldn't just crank everything. I have another 2080 Ti coming in the next few weeks, so I'll be curious to see how this game scales with NVLink. I also have yet to put the waterblocks on, as I'm waiting for the second GPU.
 
You mentioned CPU usage, how is your GPU usage?

This is a DLSS game, which should give you a significant boost. It just needs to be enabled....
 
Thanks for sharing your settings. I don't know much about how a TR CPU performs in games compared to a more standard CPU like an 8700K or something, but I assume it's similar.

I'm curious about the in-game limiter - did you find that it helped even if you weren't hitting 60 fps for some reason? I usually set my frame limit in RivaTuner (by application) and turn off in-game limiting. I guess I could try it the other way as well.

I downloaded SweetFX and tried a mod that was supposed to increase sharpness with TAA enabled (found on the optimization website I used), but it didn't work for me. It did increase sharpness, but it completely threw off my color map or something, making menus disappear. I suspect this is because it wasn't built for HDR color mapping (which I was using).

GPU usage is pegged - usually 98% or better. I thought about DLSS, but I didn't know that could be enabled at this time. Is that enabled in the control panel or something? There is no option for it in the game unlike the other Nvidia features.

I'm also unsure about the quality impact of DLSS. I don't really understand exactly how it works, but I thought it kind of renders the game at a lower resolution and applies some really fancy AA to sharpen everything to *look* like it's running at full UHD. Is that true? If so, how does the quality compare to straight up UHD rendered at 100%? Agree that, with DLSS, I would suspect I could just max everything and get a solid 60fps.
 
Ah, I meant it needs to be enabled by nVidia; we can’t use it yet. Most review sites put it on par with or better than FXAA. That is for DLSS. There is also supposed to be DLSS 2x, which renders at native res, upscales it, and then samples back down. We can’t use such things yet, but you can watch videos from reviewers running the benchmark. It uses the tensor cores for AI super scaling (DLSS stands for Deep Learning Super Sampling). Currently those cores, which are a decent portion of the die, are sitting idle....

From what we know, once nVidia enables it, I think it’ll get you to the performance you want.
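As a purely conceptual sketch of the idea being described - the render and upscale functions below are toy stand-ins, not Nvidia's actual API:

```python
import numpy as np

def render(resolution):
    """Toy stand-in for the game's raster pass (random pixels here)."""
    w, h = resolution
    return np.random.rand(h, w, 3)

def nn_upscale(image, target_res):
    """Toy stand-in for the tensor-core network: a naive nearest-neighbour
    resize, where the real network infers detail it was trained on."""
    w, h = target_res
    ys = np.arange(h) * image.shape[0] // h
    xs = np.arange(w) * image.shape[1] // w
    return image[np.ix_(ys, xs)]

def dlss_frame(target_res=(3840, 2160), scale=0.5):
    # DLSS as described above: render below target res, reconstruct at target.
    low_res = (int(target_res[0] * scale), int(target_res[1] * scale))
    return nn_upscale(render(low_res), target_res)

print(dlss_frame().shape)  # (2160, 3840, 3): UHD output from a cheaper render
```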
 
Just got my Triple fan in, I guess I can have my membership now.
 

This is my setup before and after... Had to do family stuff so I wasn't able to do any tweaking, but it boosted to 2025.
 

I'm curious about the in-game limiter - did you find that it helped even if you weren't hitting 60 fps for some reason?

I found that the in-game limiter helped with frame-pacing issues in more complex areas. I didn't see an overall increase in my average fps. I used to notice quite a bit of stutter in Galdin Quay, but now it's all but gone. I haven't tried using RivaTuner, but that may yield a similar result to what I am seeing.

Regarding a ReShade, I've decided to wait until the game gets updated with the DLSS 2x mode before I start tinkering with AA modes. I've done everything in the game aside from two secret dungeons and about 15 hunts, so I'm going to move on to something else while I wait.
 
Well, just about an hour ago, the LED light on my 2080 Ti came on while booting out of sleep mode. And this is the working card (outside of what was a broken LED). That raises the question: is the LED not coming on a software or hardware glitch? Outside of the LED issue, it's been working fine. I am worried about it being hardware, but with other people on the forums having their LED turn off or turn different colors, I'm hoping that it's just a bug in the driver.

*EDIT*

So, when the monitor went into power savings mode, the logo disappeared. The computer wouldn't resume from that point. I've had this happen a few times with this card. But a reboot worked, and the logo came back on at that point.
 
Gremlins.
 
Got the RTX 2080 today - benched my MSI Lightning Z GTX 1080 Ti prior - so everything was the same...some interesting results - a reduction in DX11 performance with Fire Strike! Is this widely known? I did not OC either card - just pushed the sliders for Core Voltage and Power Limit % to the max (as outlined below)...

Pic:
http://sk3tch.com/images/hardocp/2080_rig.jpg

MSI Lightning Z GTX 1080 Ti Results:
Intel i7-5820K CPU @ 4.4GHz (Corsair H110i 280mm GT Liquid CPU Cooler), G.SKILL Ripjaws 4 series 64GB 2133MHz DDR4, MSI NVIDIA GeForce GTX 1080 Ti Lightning Z (No OC; Core Voltage +100, Power Limit 116%), Samsung 850 PRO SSDs (128GB, 512GB, 512GB) on an ASRock X99 Extreme4 motherboard in a Corsair Obsidian Series 450D mid tower - powered by a SeaSonic X-1250 1250W PSU.

Time Spy 1.0 - 9,436
https://www.3dmark.com/3dm/29486537

Time Spy Extreme 1.0 - 4,402
https://www.3dmark.com/3dm/29486980

Fire Strike Ultra 1.1 - 7,316
https://www.3dmark.com/3dm/29487342

Fire Strike Extreme 1.1 - 13,444
https://www.3dmark.com/3dm/29487563

NVIDIA RTX 2080 FE Results:
Intel i7-5820K CPU @ 4.4GHz (Corsair H110i 280mm GT Liquid CPU Cooler), G.SKILL Ripjaws 4 series 64GB 2133MHz DDR4, NVIDIA GeForce RTX 2080 (No OC; Core Voltage +100, Power Limit 124%), Samsung 850 PRO SSDs (128GB, 512GB, 512GB) on an ASRock X99 Extreme4 motherboard in a Corsair Obsidian Series 450D mid tower - powered by a SeaSonic X-1250 1250W PSU.

Time Spy 1.0 - 9,989
https://www.3dmark.com/3dm/29490385

Time Spy Extreme 1.0 - 4,627
https://www.3dmark.com/3dm/29490672

Fire Strike Ultra 1.1 - 6,546
https://www.3dmark.com/3dm/29490913

Fire Strike Extreme 1.1 - 12,207
https://www.3dmark.com/3dm/29491498
 
That’s interesting. I do know it’s been noted RTX cards are better at DX12.

At first I read 2080ti. Heh. For a 2080 that’s not a huge red flag...
 
The 2080 only has 2,944 CUDA cores; the 1080 Ti has 3,584.

It's widely known that an AIB 1080 Ti is faster than a 2080 in DX11 workloads; it has more brute processing capability, since both are limited to roughly 2-2.1 GHz.

The 2080 is faster at DX12 due to async compute improvements (stuff AMD introduced with the R9 290X in 2013...).

Now, it isn't 20% slower in DX11 like its core count would suggest, because of a slightly more efficient SM structure (64 cores per SM instead of 128) and better memory compression. So even in DX11 there's about a 10% IPC improvement.
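Putting rough numbers on that using the scores posted above:

```python
# Core-count deficit vs. the observed DX11/DX12 deltas from the scores above.
cores_1080ti, cores_2080 = 3584, 2944
print(f"Core deficit: {(1 - cores_2080 / cores_1080ti) * 100:.0f}%")  # ~18%

scores = {  # (1080 Ti, 2080 FE)
    "Fire Strike Ultra (DX11)":   (7316, 6546),
    "Fire Strike Extreme (DX11)": (13444, 12207),
    "Time Spy (DX12)":            (9436, 9989),
    "Time Spy Extreme (DX12)":    (4402, 4627),
}
for name, (gtx, rtx) in scores.items():
    print(f"{name}: {(rtx / gtx - 1) * 100:+.1f}%")
# DX11 lands only ~9-11% behind (not the full ~18%), DX12 ~5-6% ahead -
# consistent with the per-SM efficiency argument above.
```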
 
Generic statement: Max voltage is not always your friend.

Good point. Even if it's within the standard limits NVIDIA places on the card? What do you use for OC? It has been so long since the 580 days (when things were last unlocked) that I've lost my GPU OC skills, lol.
 
Sup fellow 2080ti owners! I wanted to ask you guys...

Have you noticed higher than advertised boost clocks even when running stock with no overclocking? I keep seeing clock speeds of 1900-2030 MHz; verified it in GPU-Z too. Just found it to be peculiar.
 
Reference? How do we know that the difference isn't all due to IPC improvements?

I don't understand the point of this question. Turing has much better low-level API performance than Pascal. You can call that an IPC improvement or not. However, descriptively, the cause is the deployment of async compute improvements, something similar to what AMD deployed with Hawaii. Pascal could run some things asynchronously, but it was not true async like AMD had.
 
You made a claim, where's the link?

What are you looking for, Nvidia white papers? Or just reviewer site explanations? What would it take to satisfy you about a claim you (dislike? disagree with?) Are you always this obtuse, or just trolling?
 
Anything other than off-topic trolling to support your claim?
 
Have you noticed higher than advertised boost clocks even when running stock with no overclocking?

Yes - that's GPU Boost.
 
Turing also does hybrid processing. It has dedicated paths for FP16, so the CUDA cores don’t waste FP32 paths on FP16 work. Games like BFV can benefit greatly. It varies game to game.

So you can’t really compare Pascal to Turing CUDA core counts.
 
Thanks. I just read up a bit more on GPU Boost 3.0. I messed around with overclocking a bit the day before yesterday. I decided to leave everything at stock, since all my content was running smooth at 4K anyway.
 
I would at least put power to max (123%?).
 
Shunt mod round 2.

Faiiiiillllllllll. Stupid nVidia. I need a longer cat5 wire lol. Still won’t come out of safe mode. Resistance too low...

 
Ok, a 5” wire did the trick. Down to ~98% power limit from 125% (130 max, which it would occasionally hit). I don’t see the power limit flag trip in Afterburner anymore. #worthit #readyforrtx
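For anyone wondering why a shunt mod changes the reported power limit, here's the basic math - a sketch with illustrative resistor values, not measurements from this card:

```python
# The card senses current as the voltage drop across a tiny shunt resistor.
# A conductor soldered in parallel lowers the effective resistance, so the
# same real current produces a smaller drop and the card reports less power.
# Values below are illustrative, not measured.

r_shunt = 0.005   # ohms - a typical current-sense shunt
r_mod = 0.015     # ohms - the added parallel wire/resistor

r_effective = (r_shunt * r_mod) / (r_shunt + r_mod)  # parallel combination
scale = r_effective / r_shunt

print(f"Effective shunt: {r_effective * 1000:.2f} mOhm")       # 3.75 mOhm
print(f"Card now reports {scale * 100:.0f}% of actual power")  # 75%
# If the parallel resistance is too low, the reading scales down so far the
# card can misbehave - hence the "resistance too low" failure above.
```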
 
So, net result of selling my MSI GTX 1080 Ti Lightning and buying an RTX 2080 FE:

$859.12 (RTX 2080 FE) - $704.77 (eBay sale price of $849.99 with free shipping, minus eBay/PayPal fees and shipping) = $154.35 to "upgrade" to the RTX 2080... I'm gambling it all that DLSS and RT will be game changers... plus I get bored with GPUs. :)
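The same math as a quick check (fee total derived from the figures quoted):

```python
# Net cost of the 1080 Ti -> 2080 FE swap, from the figures above.
cost_2080_fe = 859.12    # RTX 2080 FE purchase price
sale_price = 849.99      # eBay sale, free shipping
net_from_sale = 704.77   # after eBay/PayPal fees and shipping

print(f"Fees + shipping: ${sale_price - net_from_sale:.2f}")    # $145.22
print(f"Upgrade cost:    ${cost_2080_fe - net_from_sale:.2f}")  # $154.35
```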
 
Have you noticed higher than advertised boost clocks even when running stock with no overclocking?
Yeah, mine jumps to 2100 then settles at 2025. I haven't tried manually doing anything, as I've been pleased with what it has been doing.
 