NVIDIA GeForce GTX 1060 Founders Edition Review @ [H]

And do you have any games that are not straight out of the AMD Gaming Evolved program to compare to?
 
I got a 1080 and it's true. I think Nvidia cards were already maxed out to their potential even before AMD cards started seeing great results in DX12 and Vulkan.
The problem is, right now AMD's got nothing that competes with the GTX 1080, not even under DX12 or Vulkan.
Perhaps someday AMD will come out with something better. And perhaps someday Nvidia will get better at DX12 and Vulkan.
It could be a lot of fun to watch, and might bring us, the end users, some better stuff at better prices.
 
"Why do people seem to ignore the actual performance numbers on ROTR? The GTX 1060 is faster under DirectX 12 with the same lead over the 480 as it has in DirectX 11. The 480 gains nothing in DirectX 12 in that non-AMD game. But fire up one of the AMD titles (all the other ones listed), and suddenly the future belongs to AMD.

Isn't it also a reasonable conclusion to state that AMD games favor AMD hardware? Without a neutral benchmark, I'm going to assume that Nvidia's preemption hardware works fine and the ROTR results have as much validity to the future as AotS and Hitman do."

Let me tell you why the short answer to your question is NO.

ROTR is horribly coded for DX12. If you look at the performance increase even on the Nvidia cards, it is nearly nothing, because the DX12 path was not coded well. So ROTR's poor DX12 implementation shows equally poor gains on both AMD and Nvidia cards.

On the other hand, there are games that have been coded properly and show DX12 gains on AMD. I do not believe there are any that show large gains for Nvidia in DX12, except perhaps Doom under Vulkan, which shows good gains on Nvidia but still better gains on AMD.
Incidentally, the games that have been coded "properly" involved AMD's assistance; totally not a GameWorks scenario. Also, gains don't matter, absolute performance does, and admittedly, in AMD-assisted titles, AMD has a lead tier-for-tier.

Also, I suppose you did not even entertain the thought that ROTR may just be the sort of game that cannot make good use of DX12 on modern systems, whose hardware is far superior to consoles'.
 
Why do people seem to ignore the actual performance numbers on ROTR? The GTX 1060 is faster under DirectX 12 with the same lead over the 480 as it has in DirectX 11. The 480 gains nothing in DirectX 12 in that non-AMD game. But fire up one of the AMD titles (all the other ones listed), and suddenly the future belongs to AMD.

Isn't it also a reasonable conclusion to state that AMD games favor AMD hardware? Without a neutral benchmark, I'm going to assume that Nvidia's preemption hardware works fine and the ROTR results have as much validity to the future as AotS and Hitman do.
Even the developers themselves have stated that the DX12 implementation in ROTR wasn't complete. The fact that AMD GPUs get almost no FPS boost by switching to the DX12 code path kind of proves it. I dunno. For an entirely new low-level API, it sure doesn't look like a game written for DX12.
 
"
ROTR is horribly coded for DX12. If you look at the performance increase even on the Nvidia cards...
That sounds like unsupported speculation to me.
Perhaps it's not badly coded for DX12, but is instead extremely well-coded for DX11. Unless you've seen the actual code, how would you know?
 
A few years ago, whether you purchased an NV or AMD card was largely based on the games you played. For the last few years, it has been about perf/watt or price/perf ratios. Looks like "which games do you play?" will again be the deciding question on whether you get AMD or NV.

I'd really like to see a few more games to compare these to. I really hope you plan on benching Deus Ex:Mankind Divided when it comes out!
Yeah, funny about that. Doom 3 was Nvidia's game. Now with Vulkan, Doom 4 is AMD's game!
 
"Incidentally, the games that have been coded "properly" involved AMD's assistance"

OK great, sure, maybe that's the case, but they are still seeing huge gains in DX12/Vulkan.

Now let's think about that from an outside, unbiased viewpoint:
Don't you think that if Nvidia could do the same thing to gain an advantage in DX12/Vulkan, they would? They have a budget that is 10X AMD's, yet there has not been a single "Nvidia optimized" DX12/Vulkan game. Nvidia is ignoring it because their hardware does not support it as well as AMD's. Nvidia will have this fixed in the next hardware release after Pascal for sure, so this is something that will probably only affect the Pascal generation.

Until Nvidia shows huge DX12/Vulkan gains, even in an "Nvidia HairWorks" sponsored title, I think you can safely assume that Nvidia is ignoring it for a good reason. They are not stupid, and if they could make the same gains as AMD in those APIs, they would have done so well before AMD did.
 
I don't think either card is a decent value. Those of us who like to spend around $300 on GPUs will have to skip this generation or move up a price tier. I'd be paying $30 less than I did almost two years ago, for a minuscule performance upgrade and 2GB more VRAM. I have a 970 with a decent OC. There is nothing in the range I paid for it that is a worthwhile upgrade, unless I want to spend ~$430-450 before tax.

We need an HD 6950, or at least a 560 Ti.

TBH, I wouldn't mind seeing a "modern-day" GTX 560 Ti 448 again (had one back in the day). With a minimal overclock, you could get GTX 570-level performance at a decent discount. There's still a big gap from the GTX 1060 to the GTX 1070 performance-wise, which a "Ti 448-style" model could plug up.
 
TBH, I wouldn't mind seeing a "modern-day" GTX 560 Ti 448 again (had one back in the day). With a minimal overclock, you could get GTX 570-level performance at a decent discount. There's still a big gap from the GTX 1060 to the GTX 1070 performance-wise, which a "Ti 448-style" model could plug up.
Doubt you'll see that. Nvidia is more worried about cannibalizing its own products than about AMD these days. No SLI is purely a decision to protect 1070/1080 sales.
 
Even the developers themselves have stated that the DX12 implementation in ROTR wasn't complete. The fact that AMD GPUs get almost no FPS boost by switching to the DX12 code path kind of proves it. I dunno. For an entirely new low-level API, it sure doesn't look like a game written for DX12.
That's exactly my point. How do we know how well games are coded? Isn't it possible that AotS and Hitman are also poorly coded, but in a way that appears to show favoritism?

People attribute the AMD advantage in these games to DirectX 12, but those game engines have both strongly favored AMD for years. Heck, AotS was originally developed as the first showcase for Mantle. How can anyone look at those results and claim DirectX 12 dominance without also acknowledging that it's a highly optimized AMD game, regardless of API?
 
Doubt you'll see that. Nvidia is more worried about cannibalizing its own products than about AMD these days. No SLI is purely a decision to protect 1070/1080 sales.

Sadly, I agree with you on NVIDIA's planning -- I was hoping for a nicer backup card until the big guns (Titan/Ti/Vega) come out. But as it stands, it looks like a custom AIB RX 480 will have to do.
 
Could we get a 4k performance review on the card? I have a 980 and am interested in getting heat/power draw down. With the additional memory, I was hopeful that I might get improved 4k performance from the 1060 vs the 980, but there is no information on that here, though it looks as if it might be a mixed bag. It does seem possible the 1060 might fare better vs the 980 at higher resolutions.

Also, I'm surprised we don't see more 4k reviews in general. This is increasingly relevant, even for the lower end cards. Not all the games we play are the latest games with nightmare mode enabled. I constantly pick up slightly older games on steam which run just fine with maximum (or close to maximum) graphics on the 980, and I'm always very interested in updated cards that are more 4k friendly than what I'm currently running.

Please do include more 4k (especially for current cards like the 1060/1070/1080); current coverage is spotty at best.

Well, [H] has specifically gone out of their way to look at 4K in the recent past, so I'm not sure what you want. And a 1060 isn't meant for 4K in new games. Neither is the 480. The 480 needs CrossFire to even run BF4 multiplayer at 60fps average, and that's not with all settings maxed. 4K isn't ready for mid-range primetime. It's only just now becoming realistic for high-end cards.

And testing a new game like Far Cry Primal won't tell you anything about how it will run an older game, such as Far Cry 3.
 
I didn't say the 780 was mid-range. You totally misread my post.

No, not really. You are the one saying the midrange hasn't changed much in the past 1-2 years, yet the 1060 is almost 100% faster than the GTX 960. Honestly, you always tell people they misunderstood you if they disagree with you, but maybe it's because nobody knows what you're actually trying to say.

I realize it may not be what you're looking for, but the 1060 is still around 50% faster than your 780 depending on the game.
 
"Could we get a 4k performance review on the card?

Yeah, that's an odd request... The 1060/480 are barely capable of 1440p gaming. 4K is just unwise on these cards.
 
That's exactly my point. How do we know how well games are coded? Isn't it possible that AotS and Hitman are also poorly coded, but in a way that appears to show favoritism?

People attribute the AMD advantage in these games to DirectX 12, but those game engines have both strongly favored AMD for years. Heck, AotS was originally developed as the first showcase for Mantle. How can anyone look at those results and claim DirectX 12 dominance without also acknowledging that it's a highly optimized AMD game, regardless of API?
My understanding (and I am a software developer, although I don't work on graphics) is that the new APIs significantly lower draw call overhead. They also allow each "object" in a given game to issue its own draw calls. In the past, on DX11, it was the driver that executed draw calls when it deemed them necessary. The new approach makes the whole process more asynchronous, multithreaded, and efficient. However, the game has to be coded for it.

My take on ROTR is that it started development as a DX11 game, and DX12 was something they tacked on later, even by the developers' own admission.

id Software has shown what can be done with the Vulkan patch for Doom. Even without utilizing async compute heavily, Vulkan clearly shows the advantage of this approach, and not just on AMD cards.

We will likely see games with poor implementations; that's always been the case even on DX11. But those that push the boundaries will implement it properly. To me this is not too different from when we switched from single-core to multicore. It took games a long time to scale across multiple cores. Now they have even more of an incentive to tap into this newfound low-overhead power, especially if their games also target GCN-based consoles.
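
To make the recording model described above concrete, here is a minimal, hypothetical C++ sketch. The CommandList and Queue types are invented stand-ins, not the real D3D12 or Vulkan interfaces; the only point is the threading shape: DX12-style APIs let each worker thread record its own command list, where DX11 funneled every draw through one driver-owned path.

// Conceptual sketch only: CommandList/Queue are made-up stand-ins for
// GPU objects. Under DX12/Vulkan, each worker thread records draws into
// its own command list with no driver-side lock, and the app submits
// them together in one cheap call.
#include <cstdio>
#include <mutex>
#include <string>
#include <thread>
#include <vector>

struct CommandList {                 // hypothetical stand-in, not ID3D12GraphicsCommandList
    std::vector<std::string> cmds;
    void draw(int objectId) { cmds.push_back("draw object " + std::to_string(objectId)); }
};

struct Queue {                       // hypothetical stand-in for a GPU submission queue
    std::mutex m;
    void submit(const std::vector<CommandList>& lists) {
        std::lock_guard<std::mutex> lock(m);
        for (const auto& cl : lists)
            std::printf("submitted %zu commands\n", cl.cmds.size());
    }
};

int main() {
    const int kThreads = 4, kObjectsPerThread = 1000;
    std::vector<CommandList> lists(kThreads);
    std::vector<std::thread> workers;

    // Each thread records draw calls for its own slice of the scene.
    for (int t = 0; t < kThreads; ++t)
        workers.emplace_back([&, t] {
            for (int i = 0; i < kObjectsPerThread; ++i)
                lists[t].draw(t * kObjectsPerThread + i);
        });
    for (auto& w : workers) w.join();

    Queue gpuQueue;
    gpuQueue.submit(lists);          // one submission instead of 4000 driver round-trips
}

The names and numbers are obviously made up, but the contrast with one-draw-at-a-time driver submission is the gist of why the new APIs lower CPU overhead.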
 
Why do people seem to ignore the actual performance numbers on ROTR? The GTX 1060 is faster under DirectX 12 with the same lead over the 480 as it has in DirectX 11. The 480 gains nothing in DirectX 12 in that non-AMD game. But fire up one of the AMD titles (all the other ones listed), and suddenly the future belongs to AMD.

Isn't it also a reasonable conclusion to state that AMD games favor AMD hardware? Without a neutral benchmark, I'm going to assume that Nvidia's preemption hardware works fine and the ROTR results have as much validity to the future as AotS and Hitman do.

Nope, the games that support DX12 and are coded for it from the start always seem to perform better on AMD. ROTR for Xbox One was coded with async compute from the start, I know, but the PC version was Nvidia sponsored and didn't have shit for support from the get-go. It seems like they held that back and patched it in later. Being an Nvidia owner, I really call bullshit on that. That is holding technology back; now it finally has async for PC as well, and DX12 support. It seems like all these Nvidia-sponsored titles were pushed out with DX11 support first and then patched to DX12. I wonder why; it may have something to do with Nvidia sponsoring the PC title and holding back DX12 support. Meanwhile, titles that support DX12 out of the box seem to be much better. It's more Nvidia gimping it from the start for PC than anything.

It's just that AMD is pushing for better implementations of DX12. If ROTR had async on Xbox from the beginning, why do you think they shipped the PC version without it, on DX11? You can thank Nvidia for that.
 
Nope, the games that support DX12 and are coded for it from the start always seem to perform better on AMD. ROTR for Xbox One was coded with async compute from the start, I know, but the PC version was Nvidia sponsored and didn't have shit for support from the get-go. It seems like they held that back and patched it in later. Being an Nvidia owner, I really call bullshit on that. That is holding technology back; now it finally has async for PC as well, and DX12 support. It seems like all these Nvidia-sponsored titles were pushed out with DX11 support first and then patched to DX12. I wonder why; it may have something to do with Nvidia sponsoring the PC title and holding back DX12 support. Meanwhile, titles that support DX12 out of the box seem to be much better. It's more Nvidia gimping it from the start for PC than anything.

It's just that AMD is pushing for better implementations of DX12. If ROTR had async on Xbox from the beginning, why do you think they shipped the PC version without it, on DX11? You can thank Nvidia for that.

Does AMD get a free pass for penalising their cards 10-30% in DirectX 11 and prior? I currently have a GTX 680, but had a 4850 before that. If I had a previous-gen AMD card today, though, I'd be a bit pissed at AMD for not being able to write decent DX11 drivers that took full advantage of my card.
 
"Could we get a 4k performance review on the card?

Yeah, that's an odd request... The 1060/480 are barely capable of 1440p gaming. 4K is just unwise on these cards.

Perhaps that statement should be qualified. I have been gaming on a GTX 970 at 4K for a bit, quite enjoyably, while waiting to get my GTX 1080. I just could not turn most of the eye candy on.
I preferred 4K on the GTX 970 to 1440p on the same card; the finer dot pitch really improved the look of the game. So I'm sure the 1060 can do 4K gaming, just not with as much eye candy as [H]ardOCP readers probably want.

Now, of course, with my GTX 1080, I have that finer dot pitch and much more eye candy too. Looks great.

What's the unit of measure for eye candy, anyway? Mega-pez per second?
 
"Does AMD get a free pass for penalising their cards 10-30% in DirectX 11 and prior? I currently have a GTX680, but had a 4850 before that.. if I had a previous gen AMD card today though.. I'd be a bit pissed at AMD for not being able to write decent DX11 drivers that took full advantage of my card.."

Yes, great Nvidia tactic there: instead of looking to the future, let's talk about the past.
 
The only reason I'm considering a 1060 is that it has AIB custom cards already. AMD, wtf, I want customs, plz.


You are probably going to have a bit more of a wait either way; all the cards are sold out. But I agree, the custom 480s should be here by now. The card launched a month sooner, and Nvidia's partners for the 1060 have custom versions available day 1? WTF is going on?

Jayz2shilling already did a video comparing an EVGA SC 1060 to a thermally throttled reference RX 480 that was certainly not able to stay at its 1266MHz speed.


AMD needs to stop having non-optimal cards at launch, and they need to ramp up delivery. Also, they need to help bring out as many DX12/Vulkan games as possible; some of the most popular franchises, like GTA 5, which came out ages ago, are still best tested with all they have: DX11. When the new versions of tentpole games launch, it is absolutely imperative that they launch as well-coded games that can actually take advantage of AMD cards, like Hitman/Ashes/Doom. Please, god, no more Tomb Raider.

What engine did Tomb Raider use? (goes to look)

Foundation... OK, we need a new foundation, guys.
 
Unless you've been asleep for the past 3 years, AMD has not gotten any free passes. And they seem to play games just fine in the present. Wow, some of the comments here...
 
I mean, someday, maybe it will be revealed that Nvidia was paying devs to hold back use of new APIs. Until then... we are pretty much right on AMD's own proposed timeline for general use of GPU compute in games. Someone was saying earlier that the consoles are the reason we are even here at all. Probably true. What is also true is that AMD said it would be a couple of years (from the PS4/Xbone launch) before GPU compute would really start becoming a fundamental part of game development. The takeaway was that they simply had to get devs trained up on it and get the tools out there. And now we are seeing that. And as I mentioned in another thread, AMD has rather publicly been gunning its R&D and marketing towards GPU compute and asynchronous methods, whereas that maybe has not been as much the case with Nvidia.

Sure, aggressive dev teams with custom/semi-custom engines, or ones focusing on a single platform, can do this stuff yesterday. But most games are made on middleware and/or in environments where multiple targets/platforms are being developed for. Without good tools and/or familiarity, cutting-edge techniques take longer to trickle into a typical development setup.

*You also have to consider time itself. Tomb Raider's development likely started before there were ANY DX12 tools, when GPU compute was still a little cilantro to sprinkle into DX11. Carmack is a nerd, so I'm sure he made someone lose sleep to get Vulkan into Doom 4. It probably won't be until next year that just about every game ships with a codepath for a new API.
 
You are probably going to have a bit more of a wait either way; all the cards are sold out. But I agree, the custom 480s should be here by now. The card launched a month sooner, and Nvidia's partners for the 1060 have custom versions available day 1? WTF is going on?

Jayz2shilling already did a video comparing an EVGA SC 1060 to a thermally throttled reference RX 480 that was certainly not able to stay at its 1266MHz speed.


AMD needs to stop having non-optimal cards at launch, and they need to ramp up delivery. Also, they need to help bring out as many DX12/Vulkan games as possible; some of the most popular franchises, like GTA 5, which came out ages ago, are still best tested with all they have: DX11. When the new versions of tentpole games launch, it is absolutely imperative that they launch as well-coded games that can actually take advantage of AMD cards, like Hitman/Ashes/Doom. Please, god, no more Tomb Raider.

What engine did Tomb Raider use? (goes to look)

Foundation... OK, we need a new foundation, guys.

There are a bunch of 1060s for order on Newegg with "Ships in 1 to 2 days", so that isn't much of a wait for those wanting an AIB 1060 before custom 480s are available.
 
Why do people seem to ignore the actual performance numbers on ROTR? The GTX 1060 is faster under DirectX 12 with the same lead over the 480 as it has in DirectX 11. The 480 gains nothing in DirectX 12 in that non-AMD game. But fire up one of the AMD titles (all the other ones listed), and suddenly the future belongs to AMD.

Isn't it also a reasonable conclusion to state that AMD games favor AMD hardware? Without a neutral benchmark, I'm going to assume that Nvidia's preemption hardware works fine and the ROTR results have as much validity to the future as AotS and Hitman do.


I think what's reasonable to assume is that you need to build vendor-specific optimizations into your game engine to achieve the best results; after that, the particular mix of effects takes over. But I don't think Tomb Raider is making proper use of GCN hardware the way titles like Hitman/Ashes/Doom are. That, or the way they structured rendering is just based off DX11, with DX12 lipstick slapped on the pig.
 
No, not really. You are the one saying the midrange hasn't changed much in the past 1-2 years...

Once again, not what I said. I said that the current mid-range is not enough of a noticeable improvement over previous generations' high ends to be considered a truly appealing upgrade path. You improve on relative performance, but if you're only after relative performance, you upgraded last gen, and the point is moot.

If the RX 480 and GTX 1060 were 10-15% faster at their current price points, the upgrade would be a no-brainer. Unfortunately, I think I'll be waiting another generation to get that improvement.
 
Does AMD get a free pass for penalising their cards 10-30% in DirectX 11 and prior? I currently have a GTX 680, but had a 4850 before that. If I had a previous-gen AMD card today, though, I'd be a bit pissed at AMD for not being able to write decent DX11 drivers that took full advantage of my card.
Why should anyone be pissed in the first place? Theoretically, they should have known what they were buying when they went in. If the performance was deemed good enough at that time, then it was good enough. In that light, improved efficiencies for AMD under DX12/Vulkan on older cards are a bonus.
 
Once again, not what I said. I said that the current mid-range is not enough of a noticeable improvement over previous generations' high ends to be considered a truly appealing upgrade path. You improve on relative performance, but if you're only after relative performance, you upgraded last gen, and the point is moot.

If the RX 480 and GTX 1060 were 10-15% faster at their current price points, the upgrade would be a no-brainer. Unfortunately, I think I'll be waiting another generation to get that improvement.

It definitely would be if you compare it to your 780 and that Dell U2413 monitor. I certainly can't fault you for wanting more performance though! :)
 
Unless you've been asleep for the past 3 years, AMD has not gotten any free passes. And they seem to play games just fine in the present. Wow, some of the comments here...

I've had multiple AMD cards in the past, but all I keep hearing for the last 6 months is that AMD will be awesome for games *NEXT YEAR* when DX12 takes over...

AMD's hardware async compute seems to give it an advantage in 3 of the 4 DX12/Vulkan titles currently available (Ashes, Doom, Hitman... Tomb Raider is a wash), and it's certainly possible Nvidia's driver-level compute pre-emption might not work as well, or at all.

But that's only 4 games I could buy today if I wanted to... not one of the 70 games in my Steam library supports DX12, and I buy 1-2 new releases per year.

Seems like at least another year before DX12 is going to make big inroads into mainstream gaming.

Edit: Wikipedia's DX12 gaming list (here) apparently lists a couple more indie games with DX12 support, plus 4 or 5 games with broken/beta DX12 support, and then about 11-12 upcoming games with DX12 support, several of which are UWP Microsoft games though.
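
As a toy illustration of why hardware async compute can matter, here's a hypothetical C++ sketch. Nothing in it is real GPU code; busy() just sleeps to stand in for GPU work, and the millisecond figures are made up. Run serially, a 12 ms graphics pass plus a 4 ms compute pass cost roughly 16 ms, while two concurrent queues finish in roughly 12 ms because the compute job fills otherwise idle time.

// Conceptual sketch, not real GPU code: two threads model a graphics
// queue and a compute queue. With hardware async compute, the compute
// job overlaps the graphics job instead of waiting behind it.
#include <chrono>
#include <cstdio>
#include <thread>

using namespace std::chrono;

// Stand-in for a chunk of GPU work that takes `ms` milliseconds.
void busy(const char* name, int ms) {
    std::this_thread::sleep_for(milliseconds(ms));
    std::printf("%s finished (%d ms of work)\n", name, ms);
}

int main() {
    // Serial model: compute waits for graphics to finish, roughly what
    // happens without hardware async queues.
    auto t0 = steady_clock::now();
    busy("graphics", 12);
    busy("compute", 4);
    auto serial = duration_cast<milliseconds>(steady_clock::now() - t0).count();

    // Overlapped model: a second queue runs the compute job concurrently.
    t0 = steady_clock::now();
    std::thread gfx(busy, "graphics", 12);
    std::thread cmp(busy, "compute", 4);
    gfx.join();
    cmp.join();
    auto overlapped = duration_cast<milliseconds>(steady_clock::now() - t0).count();

    std::printf("serial: %lld ms, overlapped: %lld ms\n",
                (long long)serial, (long long)overlapped);
}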
 
I've had multiple AMD cards in the past, but all I keep hearing for the last 6 months is that AMD will be awesome for games *NEXT YEAR* when DX12 takes over...

AMD's hardware async compute seems to give it an advantage in 3 of the 4 DX12/Vulkan titles currently available (Ashes, Doom, Hitman... Tomb Raider is a wash), and it's certainly possible Nvidia's driver-level compute pre-emption might not work as well, or at all.

But that's only 4 games I could buy today if I wanted to... not one of the 70 games in my Steam library supports DX12, and I buy 1-2 new releases per year.

Seems like at least another year before DX12 is going to make big inroads into mainstream gaming.

Not true. There are 12 DX12 games coming in the latter half of 2016 alone... some big AAA games among them.
 
I mean, someday, maybe it will be revealed that Nvidia was paying devs to hold back use of new APIs. Until then... we are pretty much right on AMD's own proposed timeline for general use of GPU compute in games. Someone was saying earlier that the consoles are the reason we are even here at all. Probably true. What is also true is that AMD said it would be a couple of years (from the PS4/Xbone launch) before GPU compute would really start becoming a fundamental part of game development. The takeaway was that they simply had to get devs trained up on it and get the tools out there. And now we are seeing that. And as I mentioned in another thread, AMD has rather publicly been gunning its R&D and marketing towards GPU compute and asynchronous methods, whereas that maybe has not been as much the case with Nvidia.

Sure, aggressive dev teams with custom/semi-custom engines, or ones focusing on a single platform, can do this stuff yesterday. But most games are made on middleware and/or in environments where multiple targets/platforms are being developed for. Without good tools and/or familiarity, cutting-edge techniques take longer to trickle into a typical development setup.

*You also have to consider time itself. Tomb Raider's development likely started before there were ANY DX12 tools, when GPU compute was still a little cilantro to sprinkle into DX11. Carmack is a nerd, so I'm sure he made someone lose sleep to get Vulkan into Doom 4. It probably won't be until next year that just about every game ships with a codepath for a new API.


It's a problem: of the major middleware 3D engines, which has excellent performance on AMD hardware? Unity is kind of lower-end, so who cares.

But UE4? That seems to favor Nvidia. CryEngine? Not sure on that one. The engines that work best with AMD so far are Glacier 2 (who other than the Hitman devs uses that?),

Nitrous (Ashes; who else uses that?),

and id Tech 6 (who other than id uses that?). Maybe their publisher Bethesda could build an Elder Scrolls 6 game off that engine, but for now they seem content to stick with DX (f*cking 9) for Skyrim, with nothing new in sight other than a milking/remake of the same game with updated visuals in glorious DX9...


Actually, I just looked up Deus Ex: Mankind Divided; it's using the Dawn engine, which looks to be based off the Glacier 2 engine used in Hitman, so we should see solid results for AMD in that title.

But EVERY other game uses that shill engine UE4. Who expects them to build in robust support for the kinds of performance boosts GCN can attain? Anyone expecting shader intrinsic support?
 
Does AMD get a free pass for penalising their cards 10-30% in DirectX 11 and prior? I currently have a GTX 680, but had a 4850 before that. If I had a previous-gen AMD card today, though, I'd be a bit pissed at AMD for not being able to write decent DX11 drivers that took full advantage of my card.
It has nothing to do with drivers. It's the level of support in DX11: the architecture itself doesn't get fully utilized until DX12 kicks in. Drivers can only do so much when the API itself won't take advantage of the hardware.
 
Just saw Tom's review, and he says it beats the 480 in every chart. Then in the summary, I don't know what the hell all the double-talk was about.
I think AMD is in one pocket and Nvidia in the other...
 
Once again, not what I said. I said that the current mid-range is not enough of a noticeable improvement over previous generations' high ends to be considered a truly appealing upgrade path. You improve on relative performance, but if you're only after relative performance, you upgraded last gen, and the point is moot.

If the RX 480 and GTX 1060 were 10-15% faster at their current price points, the upgrade would be a no-brainer. Unfortunately, I think I'll be waiting another generation to get that improvement.

See, that part wasn't there before, which is why none of us understood what you were saying. It makes more sense now. This gen is impressive, but mostly on performance per watt, not on straight performance over last gen.
 
Not true. There are 12 DX12 games coming in the latter half of 2016 alone... some big AAA games among them.

None that were designed around DX12. Only DX11 games with random DX12 features tacked on as an afterthought and then called "DX12" in the marketing. It'll be years before we see DX12-only games, at least from any non-Microsoft studios.
 
If Vulkan truly does favor AMD cards, then I'm afraid we won't see much of it. With only 20% of the GPU market running AMD cards, an API favoring such a small minority is unlikely to be widely adopted. It grew out of Mantle and will suffer that same fate of irrelevance if it doesn't benefit the vast majority of gamers.

...but it does benefit all gamers! Unlike GameWorks or some other black-box tricks, Vulkan is an open standard, and ALL hardware can use it. We've already seen gains on all boards switching to it; it just so happens that AMD has the 'better' hardware to take advantage of it right now.
 
Get one here if you want, at $249.99. This is the same one I got from Best Buy this morning. Lifetime warranty with registration to the original buyer, and it looks clean as hell.

GeForce GTX 1060 6GB
 