2080 Ti @ only 8x ??? Any thoughts or advice

SixFootDuo

Supreme [H]ardness
Joined
Oct 5, 2004
Messages
5,825
I only have 24 lanes: 4 lanes for the south bridge, 8 lanes for two x4 NVMe drives, and 8 lanes for the 2080 Ti, with 4 lanes left over. I really hate that I only have enough lanes for one x4 NVMe without dropping the GPU to x8.

Is this hurting my performance?
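
Here's my rough lane accounting, just as a sanity check (my own numbers, so correct me if I've botched it):

Code:
# sketch of my lane budget as I understand it (not a spec sheet)
total_lanes = 24      # what this platform gives me to work with
south_bridge = 4      # link to the chipset
nvme = 2 * 4          # two x4 NVMe drives
gpu = 8               # the 2080 Ti dropped to x8
used = south_bridge + nvme + gpu
print(f"used {used} of {total_lanes}, {total_lanes - used} left over")  # used 20 of 24, 4 left over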
 
Thanks. I have 2 WD Black NVMe's in RAID 0 ... both at x4 each. I'm stuck at x8 with my 2080 Ti. Doesn't look like that much of a loss.

I think I have this system sold.

I might make the move to AMD this fall, especially if they beat Intel in gaming / single-core performance, which is a real possibility. I hope AMD has more than 24 PCIe lanes.
 
Thanks. I have 2 WD Black NVMe's in RAID 0 ... both at x4 each. I'm stuck at x8 with my 2080 Ti. Doesn't look like that much of a loss.

I think I have this system sold.

I might make the move to AMD this fall, especially if they beat Intel in gaming / single-core performance, which is a real possibility. I hope AMD has more than 24 PCIe lanes.

Do you do some kind of work-related stuff that benefits from RAID 0?
 
You just got me to check mine; also stuck @ x8 with 2 x NVMe drives (one is a Black) on a Z370 board. Crappy :)

Is this just a platform limitation? Does running 2x NVMe drives on a Z370 board limit the GPU to x8?
 
It can vary by board, but I don't think there were many Z370 boards set up for two M.2 drives at PCIe 3.0 x4 plus a full x16 for the GPU simultaneously. There was a ton of PCIe lane discussion when those boards came out, with most Z370 boards being the same as (or similar enough to) Z270 except for supporting higher-core-count CPUs.
 
You just got me to check mine; also stuck @ x8 with 2 x NVMe drives (one is a Black) on a Z370 board. Crappy :)

Is this just a platform limitation? Does running 2x NVMe drives on a Z370 board limit the GPU to x8?


4 lanes for the south bridge, 4 lanes per NVMe drive, and 8 lanes for the GPU. It's a stupid limitation; Intel fears big data companies would use lower-cost hardware to save money. The truth is, they wouldn't.

AMD has 28 lanes, but 8 of them are software-based or something.
 
4 lanes for the south bridge, 4 lanes per NVMe drive, and 8 lanes for the GPU. It's a stupid limitation; Intel fears big data companies would use lower-cost hardware to save money. The truth is, they wouldn't.

AMD has 28 lanes, but 8 of them are software-based or something.

Sounds like it may be time to switch; I was just about to buy a block and water-cool this thing too.
 
Do you do some kind of work-related stuff that benefits from RAID 0?

For me, that is beside the point. When I spend $3,500 on a new system, it seriously pisses me off that I have to decide between RAID 0 and x16 for my GPU. I understand your logic, that x16 on the GPU may be better than RAID 0. Why am I being forced to choose between the two? I would like both.

If I can get an AMD 4xxx this fall that does indeed beat Intel in gaming / single-thread performance, then I will finally make the move to AMD. I am a gamer first, not a kid that drops everything else for the sake of a few more cores.

My 2 x WD Black 500GB NVMe's read and write in the 6,000 MB/s range in RAID 0, and yes, while there is a bit more latency with smaller files, RAID 0 still has more benefits than drawbacks.

I was even lucky enough to have Kevin OBrien, the Lab Director for StorageReview.com, respond to me and tell me that the pros of NVMe RAID 0 still far outweigh the cons.

I reached out here on the HardOCP forums about RAID 0 and all I got back were rainy-day responses. Not one person was able to share any positives, which told me I was asking the wrong people. Surely if anyone understood the pros and cons, at least a few people would have spoken to both. So I had to look elsewhere.

Of course the StorageReview.com people, especially the director, are going to have the best knowledge in this area.
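
For the record, here's the rough sequential math I'm going on (assuming roughly 985 MB/s per PCIe 3.0 lane and the ~3,400 MB/s a single WD Black is rated around; real-world results obviously vary):

Code:
# back-of-the-envelope sequential throughput (rated figures, best case only)
per_lane = 0.985             # GB/s per PCIe 3.0 lane after 128b/130b encoding
x4_ceiling = 4 * per_lane    # ~3.94 GB/s hard cap for each x4 drive
single_drive = 3.4           # GB/s, roughly what one WD Black reads at
raid0 = 2 * single_drive     # ~6.8 GB/s best case, large sequential transfers only
print(f"x4 ceiling ~{x4_ceiling:.2f} GB/s, single drive ~{single_drive} GB/s, RAID 0 up to ~{raid0} GB/s")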
 
I have an i7-9700K and a Gigabyte Z390 Aorus board with both M.2 slots populated with Gen 3 x4 NVMe drives, and various tools (CPU-Z, GPU-Z, etc.) tell me the GPU is running at x16. Based on what I'm seeing, Z370 and Z390 have the same number of PCIe lanes, so I'm not sure why you're being limited to x8 on the GPU unless you have something else in another PCIe slot. My M.2 drives aren't running in RAID, but I'm not sure that would matter. With both M.2 slots populated, my board disables some of the SATA ports, so perhaps it's able to maintain x16 on the GPU because it steals resources from elsewhere. *shrug*
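
If anyone wants to double-check outside of the Windows tools, on Linux the kernel exposes the negotiated link width in sysfs; a quick sketch (the device address below is only an example, find yours with lspci first):

Code:
# read the negotiated PCIe link width/speed for one device from Linux sysfs
from pathlib import Path

dev = Path("/sys/bus/pci/devices/0000:01:00.0")  # example address; substitute your GPU's
for attr in ("current_link_width", "max_link_width", "current_link_speed"):
    print(attr, (dev / attr).read_text().strip())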
 
For me, that is beside the point. When I spend $3,500 on a new system, it seriously pisses me off that I have to decide between RAID 0 and x16 for my GPU. I understand your logic, that x16 on the GPU may be better than RAID 0. Why am I being forced to choose between the two? I would like both.

If I can get an AMD 4xxx this fall that does indeed beat Intel in gaming / single-thread performance, then I will finally make the move to AMD. I am a gamer first, not a kid that drops everything else for the sake of a few more cores.

My 2 x WD Black 500GB NVMe's read and write in the 6,000 MB/s range in RAID 0, and yes, while there is a bit more latency with smaller files, RAID 0 still has more benefits than drawbacks.

I was even lucky enough to have Kevin OBrien, the Lab Director for StorageReview.com, respond to me and tell me that the pros of NVMe RAID 0 still far outweigh the cons.

I reached out here on the HardOCP forums about RAID 0 and all I got back were rainy-day responses. Not one person was able to share any positives, which told me I was asking the wrong people. Surely if anyone understood the pros and cons, at least a few people would have spoken to both. So I had to look elsewhere.

Of course the StorageReview.com people, especially the director, are going to have the best knowledge in this area.

I think you're missing the point. Regardless of what storagereview.com says, if you aren't actually using the increased speed of RAID 0 (or PCIe 4.0 in AMD systems) for anything other than e-peen benchmarks, it's no surprise that you're getting "rainy day responses." Meanwhile you're actually losing some performance from your video card...

Do you do some kind of work-related stuff that benefits from RAID 0?

So back to this question... I'm not saying that it's impossible that you have a use case for 6000MB/s over 3500MB/s, but it's unlikely given that you're a "gamer first."
 
Again, it depends on how the board is designed and the lanes available. Some of the PCIe lanes are flexible and get used for SATA and/or M.2 depending on what you populate. If you want all the lanes, it's HEDT time.

Market segmentation at work.

My X570 Unify can run 3 NVMe drives at full x4 and the GPU at x16, but only has 4 SATA ports, for example. X570 and Ryzen have a few more lanes to play with than Z370 and Coffee Lake.
 
I think you're missing the point. Regardless of what storagereview.com says, if you aren't actually using the increased speed of RAID 0 (or PCIe 4.0 in AMD systems) for anything other than e-peen benchmarks, it's no surprise that you're getting "rainy day responses." Meanwhile you're actually losing some performance from your video card...



So back to this question... I'm not saying that it's impossible that you have a use case for 6000MB/s over 3500MB/s, but it's unlikely given that you're a "gamer first."

I'm not running RAID on my NVMe drives, and I'm hit with the same limitation :/.

I have an i7-9700K and a Gigabyte Z390 Aorus board with both M.2 slots populated with Gen 3 x4 NVMe drives, and various tools (CPU-Z, GPU-Z, etc.) tell me the GPU is running at x16. Based on what I'm seeing, Z370 and Z390 have the same number of PCIe lanes, so I'm not sure why you're being limited to x8 on the GPU unless you have something else in another PCIe slot. My M.2 drives aren't running in RAID, but I'm not sure that would matter. With both M.2 slots populated, my board disables some of the SATA ports, so perhaps it's able to maintain x16 on the GPU because it steals resources from elsewhere. *shrug*

Is it possible that you're running your NVMe drives in SATA mode? That would free up the PCIe lanes.
 
I'm not running RAID on my NVMe drives, and I'm hit with the same limitation :/.

That's different because each situation is unique. The OP built a "gaming first" box, and seemingly inexplicably is choosing faster storage over better performance in his video card. I don't actually know what YOU are doing with YOUR computer ;).

And as TheHig pointed out, it's market segmentation. I don't like it either, but I don't run either company.
 
That's different because each situation is unique. The OP built a "gaming first" box, and seemingly inexplicably is choosing faster storage over better performance in his video card. I don't actually know what YOU are doing with YOUR computer ;).

And as TheHig pointed out, it's market segmentation. I don't like it either, but I don't run either company.

Yeah, that's why I was asking. Zero benefit in gaming with RAID 0 vs. a small benefit from going back to x16 on the GPU; not sure why that compromise would be made unless work-related stuff was involved.
 
Yeah, that's why I was asking. Zero benefit in gaming with RAID 0 vs. a small benefit from going back to x16 on the GPU; not sure why that compromise would be made unless work-related stuff was involved.

The compromise would be that he would need to run one of his NVMe drives in SATA mode, or remove it and not use it at all.
 
I get that. I'm just saying the choice is obvious (to me) if you're going for 100% gaming performance.

Hmm, I never thought I'd have to choose between a drive and a GPU running at full speed :) How times have changed. On the plus side, this lends more strength to AMD's argument that their platform is better overall, even for gamers.
I want to get rid of any SATA drives and use NVMe exclusively. Fewer cables, less mess in my case. My main use is gaming right now, but that does change from time to time.
 
Hmm, I never thought I'd have to choose between a drive and a GPU running at full speed :) How times have changed. On the plus side, this lends more strength to AMD's argument that their platform is better overall, even for gamers.
I want to get rid of any SATA drives and use NVMe exclusively. Fewer cables, less mess in my case. My main use is gaming right now, but that does change from time to time.

Does a SATA M.2 drive, rather than an NVMe M.2, also lower the primary PCIe x16 slot speed? I feel like you have to read the fine print on every single motherboard, because each one has a different implementation of how the PCIe lanes get assigned when certain peripherals are in use.
 
Is it possible that you're running your NVMe drives in SATA mode? That would free up the PCIe lanes.
That seems like a logical assumption (I should see what the motherboard manual says). CrystalDiskInfo says both M.2 drives are using the NVMe interface and PCIe 3.0 x4 transfer mode, while it shows my actual SATA SSDs using the SATA interface and SATA/600 transfer mode.
 
Hmm, I never thought I'd have to choose between a drive and a GPU running at full speed :) How times have changed. On the plus side, this lends more strength to AMD's argument that their platform is better overall, even for gamers.
I want to get rid of any SATA drives and use NVMe exclusively. Fewer cables, less mess in my case. My main use is gaming right now, but that does change from time to time.

I'm on Z370 and can't use all of my SATA ports because I have a PCIe sound card installed. Honestly it's not a problem; I have more than enough storage, and when I move to NVMe I'm going to upgrade some of the SSDs from 1TB to 2TB so I can have less stuff in general.
 
Does a SATA M.2 drive, rather than an NVMe M.2, also lower the primary PCIe x16 slot speed? I feel like you have to read the fine print on every single motherboard, because each one has a different implementation of how the PCIe lanes get assigned when certain peripherals are in use.

I think you lose 2 SATA ports if you use it in SATA mode; I have to read the manual :)
 
For what it's worth, I run my old Titan Xp at x8 so I can have an x4 NVMe Samsung 970 EVO Plus running in the second PCIe x8 slot. If I didn't do that, my options would limit that SSD to 1/4 of its performance.

And I like the idea of my CPU talking directly to my OS SSD, not going via the DMI bus. I figure I'd notice load hitching and other storage-related issues far more than a 1% performance difference in games. (y)
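
For rough context (assuming, as I understand it, that DMI 3.0 gives you roughly a PCIe 3.0 x4's worth of bandwidth shared by everything hanging off the chipset):

Code:
# rough shared bandwidth budget behind the chipset (DMI 3.0 ~ PCIe 3.0 x4)
per_lane = 0.985                # GB/s per PCIe 3.0 lane
dmi_budget = 4 * per_lane       # ~3.94 GB/s shared by SATA, USB, NICs and chipset M.2 slots
print(f"chipset uplink budget ~{dmi_budget:.2f} GB/s, shared")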
 
As I recall, some years ago a few folks did some testing of x16 vs. x8 and found it made no noticeable difference to GPU performance during gaming. Has something changed in today's GPU designs such that it now matters? Most mid-level mobos I've bought the past 2-3 years came set up as x16/x8 for SLI.
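
Just for scale, the theoretical per-direction link bandwidth (assuming PCIe 3.0 and roughly 985 MB/s per lane; actual games move nowhere near this much data):

Code:
# theoretical PCIe 3.0 link bandwidth per direction at x8 vs x16
per_lane = 0.985   # GB/s per lane after 128b/130b encoding
for width in (8, 16):
    print(f"x{width}: ~{width * per_lane:.1f} GB/s")  # x8 ~7.9 GB/s, x16 ~15.8 GB/s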
 