Petitioning for 970 Refund

"landmine" is a bit of a stretch.

"splinter" would be a more accurate term.

This is what I'm kind of referring to with the overreacting.

How about "surprise"? My goodness, the word-mincing ...
 
Let me know when people on single cards are actually getting stutter. I have gone over 3.6-4 GB of VRAM in a number of games @ 1440p on my old monitor and have yet to experience any stutter, outside of games like FC4 & The Evil Within -- which stuttered for everyone on release.

FYI, SLI has micro-stutter on almost all configs; it is the name of the game. You should do the research beforehand and see that games -- actually playing recent titles, not benchmarks -- are where SLI has its issues, whether it's compatibility, micro-stutter, etc. You have no one to blame but yourself for going multi-GPU and then complaining about the lack of smoothness. Multi-GPU is too niche, devs don't support it properly, and it just isn't there yet. The problem is compounded even more by the issues NV is having with Maxwell specifically in SLI (voltage discrepancy, G-Sync + DSR, etc.).

Now back to beating a dead horse.
 
Oh, like this one?

Sorry, but you should discount many of the sites running so-called "objective tests" on this issue as they clearly are not objective or done properly :p.

Let me say first of all that the GB / GiB discrepancies in so much of the testing and articles were unfortunate, because they left people not knowing exactly where on the boundary the tests fell, not to mention the ridiculous rounding. PCPer had one like this also, but at least other memory usage scenarios were done (3.8 GB, etc.) such that there was no doubt.

But I think in this case the article was referring to 3.6 GiB because I assume they're talking 3.6 out of 4.
 
Considering how US review sites reacted, or failed to react, I trust GoldenTiger over any of them. As previously mentioned, the Germans had no trouble reproducing the issue in multiple titles (with playable frame rates) either.

But as noted earlier in the [H] review they did suggest 980 SLI over 970s due to smoothness.
 
I've done my own tests and it is visible very readily even if you don't run any logging.

I'm not looking to pick on you or anything :), but the problem I have with your test is that you're comparing a 4k rendered resolution + scaler (which adds latency) to a 1440p native resolution. Now 4k is more than twice the resolution of 1440p. You experience some performance hits and conclude that it must be related to the segmented memory. But why? To me the reasonable expectation is that doubling the resolution is going to introduce performance issues. Throw in the fact that SLI is not such a great technology for smoothness and your results don't look particularly peculiar to me.

The thing is, the technical "drawback" of the 970 memory controller that people are criticizing is basically replicated in super-magnification in SLI tech. The latency involved in communicating between GPUs is going to be significantly more than that little memory access. It's like complaining about a paper cut when your leg's chopped off and bleeding out.
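For reference, the raw pixel math behind "more than twice the resolution" (just arithmetic, nothing specific to these cards):

```python
# Pixel counts for the two resolutions being compared
uhd_4k = 3840 * 2160   # 8,294,400 pixels
qhd    = 2560 * 1440   # 3,686,400 pixels

print(uhd_4k / qhd)    # 2.25 -- 4K pushes 2.25x the pixels of 1440p
```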
 
if there was an issue, Kyle would be the first one to call NV on their bullshit
if you have been here long enough you know he is not afraid of NV or anyone else and HAS called NV, ATi, AMD, Intel and others on shit like this

if Kyle says it's not a big deal, it's just not -- end of story
 
if there was an issue, Kyle would be the first one to call NV on their bullshit
if you have been here long enough you know he is not afraid of NV or anyone else and HAS called NV, ATi, AMD, Intel and others on shit like this

if Kyle says it's not a big deal, it's just not -- end of story

IIRC I don't think Kyle's really gone either way about his personal feelings. He's pointed to the review of the 970s where the 980s (and I remember one saying 290x's as well) were suggested for 4k due to stuttering. They basically recognized this issue in an indirect way before we even knew what it really was. Kudos for that.

He's also said if you feel wronged you should pursue a refund, etc.

I've been reading [H] for 13 years and I can't remember ever seeing a bias in the reviews & that is why I continue to come here / use commissioned links.

If your post was responding to mine, I was mainly talking about other sites - the most obvious is Guru3D, where they barely (if at all) utilized the affected RAM and said everything is dandy. I'll trust GoldenTiger over that review. :)
 
Buying a video card for the future is not stupid. Some of us don't have the money to upgrade often, we want hardware that will have the longest lifespan possible.
Also, plenty of the GPUs I have owned have lasted more than two years at popular resolutions. Maybe those GPUs didn't hit 60fps in every title, but every title was playable and it allowed me to skip upgrades for a while.

Sell your old card online and your upgrade will be a lot cheaper, so the money issue is BS. If you didn't blow your load on an SLI setup or $500+ video cards you would be spending less and getting more for your money. If you are buying cards for 4K resolutions, then money isn't an issue for you. It takes a lot of horsepower to get 4K @ 60fps, so that's at least a grand on video cards.
 
yup, only GoldenTiger and the Germans know how to test the 970's VRAM issue...PC Perspective, Hardware Canucks, Tech Report, AnandTech, Guru3D all have no clue how to test hardware
 
Oh, like this one?

I have to say, we may not have gotten along throughout our history on this forum, but I have new-found respect for you.
You've really gone above and beyond for these issues and more than proven yourself.

You're not the golden boy.
You're the GoldenTiger. :cool:

Stay [H] and keep up the good work!
 
I'm not looking to pick on you or anything :), but the problem I have with your test is that you're comparing a 4k rendered resolution + scaler (which adds latency) to a 1440p native resolution. Now 4k is more than twice the resolution of 1440p. You experience some performance hits and conclude that it must be related to the segmented memory. But why? To me the reasonable expectation is that doubling the resolution is going to introduce performance issues. Throw in the fact that SLI is not such a great technology for smoothness and your results don't look particularly peculiar to me.

The thing is, the technical "drawback" of the 970 memory controller that people are criticizing is basically replicated in super-magnification in SLI tech.

While I understand where you're coming from, the issue is still definitely there. I actually was running 4K native on an Acer B326HK when this all broke, and I learned that a portion of the issues I was having were related to this whole 3.5 GB segmentation problem... the monitor got returned before I could confirm exactly which problem was down to which cause, though, since its return period was ending shortly and I wanted to play it safe. I am now back on my 2560x1440 monitor for the time being and plan to go back to 4K later.

On the performance side, though, the issue is primarily the frametimes, despite the small framerate impact. In SLI especially, but even on single cards in some outlets' testing like PCGH's (and even PCPer's SLI article shows extremely high numbers of double-frametime hits, which is practically the same effect you'd see with frame skipping), the frametimes become very, very erratic, and this results in even 50 fps, for example, becoming unplayable due to the hitching, pausing, and stuttering. You may still get 50 frames inside that second, but as testing shows, you get them very unevenly and get stuck on the same image for far more time, and far more often each second, than the normal performance-level variation on other video cards -- and even on the same chip without the disabled SMMs (the GTX 980) -- showing the GTX 970's problems aren't within the realm of typical variation. While most cards will lose some percentage of their framerate going up in resolution, their frame timing remains about the same relative to the longer rendering times, with minimal extra swing. On GTX 970 setups, the swings in frametimes become extremely frequent and far more severe once you've gone into actually using (not just caching into) the last 512 MB of VRAM. The same goes for multi-card setups like SLI/CF, where frametimes otherwise vary at a pretty consistent level regardless of resolution.
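As an aside, for anyone who wants to quantify this kind of hitching in their own logs rather than eyeball it, here is a minimal sketch. It assumes a plain-text log with one frametime in milliseconds per line (the filename is just a placeholder) and counts frames that took at least twice the median, which is roughly the "double-frametime hits" being described:

```python
import statistics

# Hypothetical log: one frametime in milliseconds per line (e.g. exported from FRAPS/FCAT)
with open("frametimes.txt") as f:
    frametimes = [float(line) for line in f if line.strip()]

median_ft = statistics.median(frametimes)
p99 = sorted(frametimes)[int(len(frametimes) * 0.99)]

# Frames that took at least twice the median show up as visible hitches/stutter
spikes = [ft for ft in frametimes if ft >= 2 * median_ft]

print(f"median frametime   : {median_ft:.2f} ms")
print(f"99th percentile    : {p99:.2f} ms")
print(f"spikes >= 2x median: {len(spikes)} of {len(frametimes)} frames "
      f"({100 * len(spikes) / len(frametimes):.1f}%)")
```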

IIRC I don't think Kyle's really gone either way about his personal feelings. He's pointed to the review of the 970s where the 980s (and I remember one saying 290x's as well) were suggested for 4k due to stuttering. They basically recognized this issue in an indirect way before we even knew what it really was. Kudos for that.

He's also said if you feel wronged you should pursue a refund, etc.

I've been reading [H] for 13 years and I can't remember ever seeing a bias in the reviews & that is why I continue to come here / use commissioned links.

If your post was responding to mine, I was mainly talking about other sites - the most obvious is Guru3D, where they barely (if at all) utilized the affected RAM and said everything is dandy. I'll trust GoldenTiger over that review. :)

I have to agree. While I have my pet peeves with things, just like anywhere, I haven't ever seen anything out of this site that made me honestly say "Damn, that's really biased!", and on the whole HardOCP puts forth their best effort in my opinion, even if it's not the same opinion I have on something.



I have to say, we may not have gotten along throughout our history on this forum, but I have new-found respect for you.
You've really gone above and beyond for these issues and more than proven yourself.

You're not the golden boy.
You're the GoldenTiger. :cool:

Stay [H] and keep up the good work!

Thanks Falcon... I know I am very assertive with my posts the majority of the time (especially when I have a very strong opinion on something), and that doesn't come off well in a lot of cases, so it means a lot to me to see this :). I don't often agree with you, but that's not on a personal basis so don't take it that way! As the Brits say... "Cheers!" :cool:.
 
Considering how US review sites reacted, or failed to react, I trust GoldenTiger over any of them. As previously mentioned, the Germans had no trouble reproducing the issue in multiple titles (with playable frame rates) either.

But as noted earlier in the [H] review they did suggest 980 SLI over 970s due to smoothness.

Here is exactly what we said about the 970 and its limitations at larger resolutions.

Gaming at 4K vs. NV Surround 5760x1200

There is something we need to discuss, and that is what cards are more appropriate for what resolutions of gaming. After this evaluation, we strongly feel that if you are going to be gaming on a 4K display you should spend the extra cash and spring for GeForce GTX 980 SLI. While GeForce GTX 970 SLI can deliver an "OK" 4K gaming experience, we did have to make image quality sacrifices in every game. GeForce GTX 970 SLI cannot "max out" 4K gaming. You will simply have a much better gameplay experience going with GeForce GTX 980 SLI.

Now, if you don't want to spend the extra money for two GTX 980 GPUs, or just don't have the cash, then at least AMD Radeon R9 290X CrossFire would be the better alternative at 4K for the same money. Since R9 290X CF is the same price as GTX 970 SLI, all things being equal we'd go for R9 290X CF for 4K. Performance was faster, we were able to have higher in-game settings, and you'll just have a better experience. However, if you do have the extra cash and go for GTX 980 SLI, then that will definitely be faster and better than R9 290X CF. Basically, for 4K, go with 980 SLI if you can first, if you can't due to money, then go with R9 290X CF. We would not go below that for a 4K experience.


if there was an issue, Kyle would be the first one to call NV on their bullshit
if you have been here long enough you know he is not afraid of NV or anyone else and HAS called NV, ATi, AMD, Intel and others on shit like this

if Kyle says it's not a big deal, it's just not -- end of story

I think there certainly is an issue here, but since we use real world gaming to test cards with, the specifications have little interest to us. All we want is the hardware and a driver when it comes to reviewing video cards.

IIRC I don't think Kyle's really gone either way about his personal feelings. He's pointed to the review of the 970s where the 980s (and I remember one saying 290x's as well) were suggested for 4k due to stuttering. They basically recognized this issue in an indirect way before we even knew what it really was. Kudos for that.

He's also said if you feel wronged you should pursue a refund, etc.

I've been reading [H] for 13 years and I can't remember ever seeing a bias in the reviews & that is why I continue to come here / use commissioned links.

If your post was responding to mine, I was mainly talking about other sites - the most obvious is Guru3D, where they barely (if at all) utilized the affected RAM and said everything is dandy. I'll trust GoldenTiger over that review. :)

We get told we have bias almost every single day, one way or another. Others go as far as to say that we are "paid off," etc. What is funny to me is that many times you can find these comments going for Team Red or Team Green on the same day. Anyway....

NVIDIA messed up here, there is no doubt about that. It should have had its specs together from the get-go. If you have an issue with that, I highly suggest you attempt to return the card(s) for a refund. But I take issue with people getting upset about using these cards for high resolution gaming when you should have known going in that these cards were NOT good performers at those resolutions.

That all said, get that GTX 970 traded in and grab an R9 290X card (or two), as those are cheap as hell right now.
 
On the performance side, though, the issue is primarily the frametimes, despite the small framerate impact. In SLI especially, but even on single cards in some outlets' testing like PCGH's (and even PCPer's SLI article shows extremely high numbers of double-frametime hits, which is practically the same effect you'd see with frame skipping), the frametimes become very, very erratic, and this results in even 50 fps, for example, becoming unplayable due to the hitching, pausing, and stuttering. You may still get 50 frames inside that second, but as testing shows, you get them very unevenly and get stuck on the same image for far more time, and far more often each second, than the normal performance-level variation on other video cards -- and even on the same chip without the disabled SMMs (the GTX 980) -- showing the GTX 970's problems aren't within the realm of typical variation. While most cards will lose some percentage of their framerate going up in resolution, their frame timing remains about the same relative to the longer rendering times, with minimal extra swing. On GTX 970 setups, the swings in frametimes become extremely frequent and far more severe once you've gone into actually using (not just caching into) the last 512 MB of VRAM. The same goes for multi-card setups like SLI/CF, where frametimes otherwise vary at a pretty consistent level regardless of resolution.

I think I would feel more comfortable if it wasn't 4k w/ Ultra quality vs 1440p w/ High quality. I know you have to do something to bump the VRAM up but the ideal test would have no changes in rendering settings.

The thing that annoys me is that nvidia could easily make something to allow for a clear test. Like write a program or special driver that simply reserves the first 1 GB of VRAM and can't be evicted by the OS, driver, or whatever. Then run a game that requires 3 GB of VRAM for framebuffers. Run the game with the other application consuming memory, and run the game alone. Both cases, identical settings. Compare the results. That's what we need.
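For what it's worth, you can get a rough version of that test today from user space without a special driver: pin a chunk of VRAM with a compute API and leave the process running while you play. A minimal sketch using PyTorch (my assumption; any CUDA allocator would do, and note the OS/driver can still migrate allocations, so this only approximates a truly non-evictable reservation):

```python
import time
import torch

# Grab roughly 1 GiB of VRAM and hold it until the process is killed.
# The allocation is real device memory, but the driver may still page things
# around under pressure, so treat this as an approximation of the proposed test.
RESERVE_BYTES = 1 * 1024**3
ballast = torch.empty(RESERVE_BYTES, dtype=torch.uint8, device="cuda")
ballast.fill_(1)  # touch the memory so it is actually committed

print(f"Holding {ballast.numel() / 1024**2:.0f} MiB of VRAM; start the game now.")
while True:
    time.sleep(60)
```

Run the game once with this ballast process alive and once without, at identical settings, and compare the frametimes.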
 
I want to ask of the community who have purchased GTX 970s this question. Are you happy with the performance you are getting in your games with the 970? Does it deliver a good gameplay experience for you?

Just like any card, I think this is the most important question to ask yourself. If it doesn't provide this for you, and you have the money, then by all means find something that does satisfy this.
 
I'm not looking to pick on you or anything :), but the problem I have with your test is that you're comparing a 4k rendered resolution + scaler (which adds latency) to a 1440p native resolution. Now 4k is more than twice the resolution of 1440p.
I don't think the frame time variance here could be reasonably attributed to an issue in shading performance or general performance. Increasing the average frame time (through 'natural' means) will always increase relative frame time variance, but the graphs do tend to look very 'night and day' when it comes to exceeding the memory threshold. Not clear-cut, but still compelling.

You experience some performance hits and conclude that it must be related to the segmented memory. But why?
It's a conclusion not found in direct evidence, but I think it's a reasonable conclusion in this instance that the segmentation is partially to blame. In other tests where differences in frame time variance are less pronounced, I think it's very difficult to say what the exact combination of causes is.

This much is certain: based on the architecture and its limitations alone, we know that it's a contributor to reduced performance, and thus greater frame time variance, in any cases where memory throughput is a factor. It's just not clear how much of a factor it is in various cases. It's not something that's reasonably testable.

This is one of those cases where somewhat subjective opinions about how the performance 'feels' at stressful settings is what should guide your purchasing decision. That information has been out there for a while.
 
I want to ask of the community who have purchased GTX 970s this question. Are you happy with the performance you are getting in your games with the 970? Does it deliver a good gameplay experience for you?

Just like any card, I think this is the most important question to ask yourself. If it doesn't provide this for you, and you have the money, then by all means find something that does satisfy this.

Works perfectly fine for me and the ~6-7 people I know that use them. The only issue I have had is the "low utilisation" stability problem -- I had to flash my card to fix it. My friend has also had the G-Sync + DSR + SLI voltage issue.

Other than that, I have played on a 1440p screen with G-Sync, going over 3.6-4 GB of VRAM, with no signs of stutter.

There is a lot of hyperbole being thrown around about the actual in-game performance of these cards. SLI historically has had issues with stutter/micro-stutter and I don't see that changing anytime soon. For those of us actually playing games, we aren't seeing these 'graphs' translate into gameplay. I will leave that to the forum warriors who preach more than they actually play games.

It does bother me that Nvidia appears to have covered up the 'specs' of the card, but for the price I got the GTX 970 at, and its performance, I will give Nvidia a second chance, as I have never had a problem with them in the past and their driver support is far above AMD's.

I can empathize with owners demanding refunds/exchanges over the false advertising, though, and I get it. But for myself I will just keep on using the card until something much better comes along.

But yes outside of typical SLI issues no one I know has any issues with the 970. Fantastic card for the price.
 
GoldenTiger,

Could you clarify why you think Guru3D is not utilizing the last 512 mb of VRAM in their testing? I'm not understanding your explanation.

4 gb = 4,096 mb
3.6 gb = 3,686.4 mb
3.5 gb = 3,584 mb

When they say that they are using almost 3.6 gb (or almost 3,686 mb) VRAM usage how is this not utilizing the last 512 mb of VRAM? 3.6 gb is not 3,600 mb. (though even 3,600 mb is technically still utilizing the last 512 mb.) Am I missing something?

I think that they could have probably pushed it a bit more than "almost" 3.6 gb but I believe this still is using VRAM in the last 512 mb.
 
I want to ask of the community who have purchased GTX 970s this question. Are you happy with the performance you are getting in your games with the 970? Does it deliver a good gameplay experience for you?

Just like any card, I think this is the most important question to ask yourself. If it doesn't provide this for you, and you have the money, then by all means find something that does satisfy this.

I ran 970 SLI and kept wondering why I was getting below-normal SLI performance. I have since sold my 970 SLI setup.

In the last year or so I have owned the setups listed below. The 970/980 are the best Nvidia cards I have owned, but with 970 SLI there was just something not right running games, even at 1920x1080.
I would not buy any more 970 cards no matter what anyone says on the forums.

680 SLI
780 Ti SLI
970 SLI
980 SLI
R9 290 Crossfire
R9 290X Crossfire
 
I want to ask of the community who have purchased GTX 970s this question. Are you happy with the performance you are getting in your games with the 970? Does it deliver a good gameplay experience for you?

Just like any card, I think this is the most important question to ask yourself. If it doesn't provide this for you, and you have the money, then by all means find something that does satisfy this.

I'm absolutely happy with mine. Keeping it.
 
GoldenTiger,

Could you clarify why you think Guru3D is not utilizing the last 512 mb of VRAM in their testing? I'm not understanding your explanation.

4 gb = 4,096 mb
3.6 gb = 3,686.4 mb
3.5 gb = 3,584 mb

When they say that they are using almost 3.6 gb (or almost 3,686 mb) VRAM usage how is this not utilizing the last 512 mb of VRAM? 3.6 gb is not 3,600 mb. (though even 3,600 mb is technically still utilizing the last 512 mb.) Am I missing something?

I think that they could have probably pushed it a bit more than "almost" 3.6 gb but I believe this still is using VRAM in the last 512 mb.

I might be able to clear it up for you. When they say they are using almost 3.6 GB of VRAM they mean actual GB (See: https://en.wikipedia.org/wiki/Gigabyte) instead of GiB (See: https://en.wikipedia.org/wiki/Gibibyte). 1 GB = 1000 MB so they are using ~3600 MB of VRAM. 1 GiB = 1024 MiB, so the 4 "GB" is actually 4 GiB and has 4x1024 MiB or 4096 MiB of VRAM. The cards have eight 512 MiB VRAM modules to make up 4 GiB. Take the 512 MiB, subtract it from 4096 MiB and you get 3584 MiB. With that confusion you can get to "almost 3.6 GB" and not experience any issues because you technically have to get into the last 512 MiB pool which is past the 3584 marker.
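To put numbers on that reading (plain unit conversion, nothing card-specific):

```python
MIB = 1024**2

# "Almost 3.6 GB" read as decimal gigabytes vs. the 3,584 MiB fast-pool boundary
almost_3_6_decimal_gb_in_mib = 3.6 * 1000**3 / MIB   # ~3,433 MiB
fast_pool_mib = 4 * 1024 - 512                       # 4 GiB card minus the slow 512 MiB segment

print(f"3.6 decimal GB = {almost_3_6_decimal_gb_in_mib:.0f} MiB")
print(f"fast pool ends at {fast_pool_mib} MiB")
print("below the boundary" if almost_3_6_decimal_gb_in_mib < fast_pool_mib
      else "into the last 512 MiB")
```

So if the reported figure was in decimal gigabytes, "almost 3.6 GB" never actually touches the slow segment.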
 
I want to ask of the community who have purchased GTX 970s this question. Are you happy with the performance you are getting in your games with the 970? Does it deliver a good gameplay experience for you?

Just like any card, I think this is the most important question to ask yourself. If it doesn't provide this for you, and you have the money, then by all means find something that does satisfy this.

I wonder if that should be a poll question.

1. Yes, so far I've only tested BF4 on ultra settings @ 2560x1440 with two 970s and they performed well.
2. Yes, although I spend 10x more time reading this forum than I do using the cards so I've only had a chance to test BF4 out and I didn't experience the issue since VRAM usage didn't exceed 3 GB.

The way I see it is that even though the specifications aren't accurate, I didn't really buy the cards for X amount of VRAM, n ROPS, or n L2 cache. I took a look at review websites like HardOCP which showed the card performed close to a 290X all while using less power and for a substantially lower cost than the 290X (at the time). Essentially I could buy GTX 780 performance for $150 or so cheaper once it was released (and could get 780 Ti performance for cheaper too). I did originally get a GTX 980 just so I could have a current gen X80 card but ended up selling it and later buying a 970 because I didn't feel the extra performance increase was worth paying $200 more.
 
I might be able to clear it up for you. When they say they are using almost 3.6 GB of VRAM they mean actual GB (See: https://en.wikipedia.org/wiki/Gigabyte) instead of GiB (See: https://en.wikipedia.org/wiki/Gibibyte). 1 GB = 1000 MB so they are using ~3600 MB of VRAM. 1 GiB = 1024 MiB, so the 4 "GB" is actually 4 GiB and has 4x1024 MiB or 4096 MiB of VRAM. The cards have eight 512 MiB VRAM modules to make up 4 GiB. Take the 512 MiB, subtract it from 4096 MiB and you get 3584 MiB. With that confusion you can get to "almost 3.6 GB" and not experience any issues because you technically have to get into the last 512 MiB pool which is past the 3584 marker.
Why do you think they are referring to 1000 mb per gb vs. 1024 mb per gb? Do they indicate this anywhere in the article? How do you know what they "mean"? When companies sell hard drives and flash storage it is common to use 1,000 mb per gb. I have never heard anyone referring to gb of memory on a video card as 1000 mb per gb. Do you think that Guru3D thinks 3.6 gb of memory on this video card is 3,600 mb?

Like I said:
4 gb = 4,096 mb
3.6 gb = 3,686.4 mb
3.5 gb = 3,584 mb
 
To further back this up, look at what HardOCP said in the Watchdogs portion of that review:



We know now one possible reason why HardOCP had to turn the GTX 970 SLI settings down to Medium instead of High, but since the results of what settings were playable still matched 290 CrossFire, no red flags went up. 970 SLI performed so well with all of the other games in the review that no reasonable person would detect that there was a problem with the 970 to go looking for.

If anyone noticed that 970 SLI wasn't working quite as well with Watchdogs as it did with all the rest of the games, they would have wondered what was wrong with Watchdogs. Sure, 20/20 hindsight tells us this was an early detection of the 970 VRAM issue, but neither HardOCP nor its readers could have known that.

I had the same problem with Dragon Age, before we knew of this 3.5 GB bullshit. I even posted a thread here on [H] about the massive stuttering I was getting in DA:I @ 4K with two 970s.

Lo and behold, it never was a driver problem.....

Anyway, yeah, I guess people should take [H] reviews more seriously now, since basically they said 970s in SLI @ 4K stutter a lot.
 
I knew 4K was not going to be a pleasant experience at all, so I never even went near it. Even from benchmarks I knew 4K was barely on the playable threshold on the 970s, and that was before the VRAM issue was known, so this VRAM revelation has a smaller impact on me.

Also, when I change my game settings, I only strive to find the best possible playable settings; I don't really look at why I can't play at higher settings (mainly because I really can't do much about it even if I knew), hence why I never noticed this VRAM issue before. If it stutters, I assume it's out of the card's league, so I turn it down. I am fine with that, since I often start with things that have the least visual impact anyway (more often than not I can't find any difference between the top two texture settings, for example).

My biggest issue with my SLI 970s actually has to do with drivers (I still haven't got DSR or MFAA working with SLI and the Swift yet), and it's getting a little aggravating at this point.
 
Judging from what I'm now reading in this thread, it seems Nvidia correctly predicted that the few people that were savvy enough to find their cut corners would do nothing. Nvidia knowingly withheld information, and we're here forced to argue amongst ourselves whether we're gonna just take it.

Even if we are stuck with Green or Red at the end of the day, that's the fact that's going to piss me off for a good while.
 
Yeah, unfortunately, because I value personal experience much more than business principle, I often have to deal with the devil. I am not the kind of person to switch teams (and gimp my gaming experience as a result) just on business grounds :(.

I used to like nVidia, now I am starting to like their cards only, and even that is pushing it. AMD simply isn't offering a viable alternative for me either (and I am pretty much stuck on nVidia for both my monitor and games I play).
 
Yeah, unfortunately, because I value personal experience much more than business principle, I often have to deal with the devil. I am not the kind of person to switch teams (and gimp my gaming experience as a result) just on business grounds :(.

I used to like nVidia, now I am starting to like their cards only, and even that is pushing it. AMD simply isn't offering a viable alternative for me either (and I am pretty much stuck on nVidia for both my monitor and games I play).

Yea and it looks like G-sync might even be able to work without that special monitor.

I learned from Nvidia's marketing bullshit when they tried to push PhysX. G-Sync is the same bullshit to me.

Just like back in the day when certain motherboards couldn't SLI, but could Crossfire.... why? Because they didn't want to pay the Nvidia royalties for it (even though it would work with a modified driver).

Then this 970 bullshit? Yea..
 
So far the G-Sync thing, from what I can understand, is that it only works on monitors with inherent adaptive sync (meaning, currently, only laptop panels), and G-Sync desktop monitors definitely predate DP 1.2a anyway, so I am not entirely sure what the G-Sync fiasco is really about.

Anyone enlighten me on this? Because I haven't found any evidence of G-Sync working without the module on a DESKTOP monitor.
 
So far the G-Sync thing, from what I can understand, is that it only works on monitors with inherent adaptive sync (meaning, currently, only laptop panels), and G-Sync desktop monitors definitely predate DP 1.2a anyway, so I am not entirely sure what the G-Sync fiasco is really about.

Anyone enlighten me on this? Because I haven't found any evidence of G-Sync working without the module on a DESKTOP monitor.

As of right now there aren't any Adaptive-Sync monitors. If/when they do come out, there is a possibility you will not need the G-Sync module to even use G-Sync, as in it works on that laptop monitor without the module.

Now I am not saying it will work, but what IF, when free/adaptive sync goes mainstream, Nvidia keeps charging extra for the module? Or do they just say fuck it and make G-Sync work without the modules? You will have a lot of pissed off customers.
 
chenw said:
So far the G-Sync thing, from what I can understand, is that it only works on monitors with inherent adaptive sync (meaning, currently, only laptop panels), and G-Sync desktop monitors definitely predate DP 1.2a anyway, so I am not entirely sure what the G-Sync fiasco is really about.

The "fiasco" was born from a conspiracy theorist with delusions of grandeur who has since deleted his web pages because he couldn't deliver on his claims that he could write a hacked Nvidia driver that would allow all DP monitors to do G-Sync. Just because it's popular at the moment to think that everything Nvidia does has deliberate malicious intent towards its customers doesn't make it true.

Go back a year, to January 8 2014, to see Nvidia explain why they made the G-Sync module:

"Laptops, he explained, have a different display architecture than desktops, with a more direct interface between the GPU and the LCD panel, generally based on standards like LVDS or eDP (embedded DisplayPort). Desktop monitors use other interfaces, like HDMI and DisplayPort, and typically have a scaler chip situated in the path between the GPU and the panel. As a result, a feature like variable refresh is nearly impossible to implement on a desktop monitor as things now stand. That, Petersen explained, is why Nvidia decided to create its G-Sync module, which replaces the scaler ASIC with logic of Nvidia's own creation. To his knowledge, no scaler ASIC with variable refresh capability exists—and if it did, he said, "we would know." Nvidia's intent in building the G-Sync module was to enable this capability and thus to nudge the industry in the right direction."

http://techreport.com/news/25878/nvidia-responds-to-amd-free-sync-demo

Nvidia's nudge is working: the first eDP Tcon for 4K desktop monitors was announced on October 22, 2014:

“Traditionally only used between a GPU and embedded display, eDP is now making inroads as the panel interface within a computer monitor,” said Jimmy Chiu, Executive VP of Marketing at Parade Technologies. “As monitors move towards higher resolution such as 4K and beyond, the pin and wire count to support the existing LVDS interface, or even MIPI, is simply not practical."

http://www.paradetech.com/2014/10/parade-announces-dp667/

New technology typically takes a full year after announcement to appear in new monitors. Does anyone care to guess how much longer the monitor industry would have taken to get around to making variable refresh scalers if Nvidia hadn't started it with G-Sync?
 
Why do you think they are referring to 1000 mb per gb vs. 1024 mb per gb? Do they indicate this anywhere in the article? How do you know what they "mean"? When companies sell hard drives and flash storage it is common to use 1,000 mb per gb. I have never heard anyone referring to gb of memory on a video card as 1000 mb per gb. Do you think that Guru3D thinks 3.6 gb of memory on this video card is 3,600 mb?

Like I said:
4 gb = 4,096 mb
3.6 gb = 3,686.4 mb
3.5 gb = 3,584 mb

Educated guess, but from what it seems they saw mid-to-high 3,500 MB and called it "almost 3.6 GB," which would explain why the issues didn't surface. I think a lot of people would have said the same thing. If you saw 3,000 MB of usage you'd probably say 3 GB instead of mathing it out, since it's close enough.
 
I want to ask of the community who have purchased GTX 970s this question. Are you happy with the performance you are getting in your games with the 970? Does it deliver a good gameplay experience for you?

Just like any card, I think this is the most important question to ask yourself. If it doesn't provide this for you, and you have the money, then by all means find something that does satisfy this.

Are returns being allowed in the EU with no questions asked? Why are US customers second-class?
 
Yea and it looks like G-sync might even be able to work without that special monitor.

I learned from Nvidia's marketing bullshit when they tried to push PhysX. G-Sync is the same bullshit to me.

Trashing on G-Sync is the most stupid thing one can do. It's one of the biggest revolutions in display technology as of late (if not the biggest, since LCD is stagnating so horribly) and we can and should thank Nvidia for it. Without Nvidia I can guarantee that we would have been left with shitty fixed-refresh monitors for many, many years to come. Eh, G-Sync even makes upgrading your rig less important, since you can enjoy smooth gameplay without a steady 60+ fps.

That, and the fact it fucking works without any bugs or hiccups (on single GPUs, that is); even with old or OpenGL games it works just fine.

It's the real deal, if you don't have a g-sync monitor/never touched one then you may not understand but rarely has the expression "game-changer" been more appropriate tbh.

And I highly doubt that AMD can touch Nvidia's dedicated G-Sync module with their solution. It may be about as good and cheaper, but I have a hard time believing it will be equally good (and I don't believe for a second that it will be better). I care about the price of things, but I also care about quality/reliability, etc. -- I do go for "high end" gear sometimes (my ROG Swift & my PSU, for example; to most people they would seem WAAAY overpriced). My point is, even though G-Sync is more expensive, I don't mind at all knowing that it is dedicated hardware and better (but maybe the difference isn't worth the premium; we don't know that yet, still have to wait a few more months).
 
I wonder if it's possible to re-enable the disabled L2 cache and ditch that shared memory controller with a firmware update? Of course if the L2 cache is actually damaged then I guess that wouldn't work too well. It depends on how bad that particular module is. If it's just a little slower, then maybe it would work. If it's completely non-functional then it wouldn't help any. But it could be one method that nvidia could use to handle the situation, rather than dealing with product returns.
 
Educated guess, but from what it seems they saw mid-to-high 3,500 MB and called it "almost 3.6 GB," which would explain why the issues didn't surface. I think a lot of people would have said the same thing. If you saw 3,000 MB of usage you'd probably say 3 GB instead of mathing it out, since it's close enough.
I'm going to assume that he was not referring to almost 3,600 MB (i.e., decimal gigabytes), given the nature of the article. In the paragraph after the "almost 3.6 GB" statement he also refers to using the last 512 MB of memory. I find it highly unlikely that a site like Guru3D, writing an article about using more than 3.5 GB of memory, would actually not go above 3.5 GB. I have sent Hilbert, the author of the article, an e-mail to clarify.
 