[AnandTech] Nvidia G-Sync Review

My understanding is that the display drivers need to be able to talk to the G-Sync module, and AMD's drivers naturally would not be able to talk to an Nvidia product. So while the method used by Nvidia is standards-based, it's still a proprietary module that only talks with Nvidia products.

AMD is purportedly working on a "response" to G-Sync, but quite frankly I doubt they're going to just pull something out that's as good or better. If it was that easy, it would have been done years ago.

There is specific hardware in Kepler that is needed for G-Sync.

Just like G-Sync should "have been done years ago..." :rolleyes:
 
What specific hardware in Kepler is required?

There isn't one. LordEC is full of shite again.

The only hardware requirement on the GPU side is DisplayPort 1.2. That is why the requirement is the 650 Ti Boost or higher. The 650 and 650 Ti don't have DP 1.2.

So it has nothing to do with the Kepler chip.
 
There isn't one. LordEC is full of shite again.

The only hardware requirement on the GPU side is DisplayPort 1.2. That is why the requirement is the 650 Ti Boost or higher. The 650 and 650 Ti don't have DP 1.2.

So it has nothing to do with the Kepler chip.

Oh really...
SKYMTL said:
Because the Kepler architecture integrates an output clock generator for the monitor, something Fermi didn't.
 
I see we've graduated from non-answers to vague answers. Who is Michael "SKYMTL" Hoenig?
 
The only hardware requirement on the GPU side is DisplayPort 1.2. That is why the requirement is the 650 Ti Boost or higher. The 650 and 650 Ti don't have DP 1.2.
Bingo.

Oh really...
Yes really. All you've pointed out is that Nvidia's older cards left out a now-standardized feature of DisplayPort that's required for variable refresh rates to work.

Any graphics card can support variable v-blank intervals; it's not specific to Kepler, nor is it proprietary Nvidia tech in any way. It's part of the DisplayPort spec.


Edit 1: Like I've been pointing out to you, over and over, there are already displays that incorporate variable refresh rates. It's being used as a power-saving feature in mobile devices, but it works in exactly the same way as G-Sync (altering v-blank to tell the display when to refresh). The first monitor to support this standard and non-proprietary feature of the DisplayPort spec was announced back in 2012: http://www.paradetech.com/2012/10/p...-on-chip-frame-memory-for-panel-self-refresh/

Edit 2: And here's a 100% non-Nvidia controller for the LCD itself, enabling variable refresh rates (among other features): http://www.paradetech.com/products/...troller-products/dp633-643-653-edp-psr-tcons/
The DP633/643/653 is a family of LCD timing controller (Tcon) products that include an image frame buffer memory and support the PSR (Panel Self Refresh) feature defined in the VESA Embedded DisplayPort (eDP) Standard version 1.3.
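Edit 3: If it helps to see the arithmetic, the refresh rate is just the pixel clock divided by the total pixels per frame (active plus blanking), so stretching v-blank directly lowers the effective refresh rate. A quick sketch in C, with made-up example timings (not pulled from any real display's EDID):

/* Rough sketch: how v-blank length sets the effective refresh rate.
 * The timing numbers below are illustrative only. */
#include <stdio.h>

int main(void) {
    double pixel_clock_hz = 148500000.0; /* 148.5 MHz, a common 1080p pixel clock */
    int h_total = 2200;                  /* active width plus horizontal blanking */
    int v_active = 1080;
    int v_blank_nominal = 45;            /* normal blanking: ~60 Hz */
    int v_blank_stretched = 600;         /* stretched blanking delays the next refresh */

    double rate_nominal = pixel_clock_hz / (h_total * (v_active + v_blank_nominal));
    double rate_stretched = pixel_clock_hz / (h_total * (v_active + v_blank_stretched));

    printf("nominal:   %.1f Hz\n", rate_nominal);   /* ~60.0 Hz */
    printf("stretched: %.1f Hz\n", rate_stretched); /* ~40.2 Hz */
    return 0;
}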
 
I see we've graduated from non-answers to vague answers. Who is Michael "SKYMTL" Hoenig?
LoL

Yes really. All you've pointed out is that Nvidia's older cards left out a now-standardized feature of DisplayPort that's required for variable refresh rates to work.

Any graphics card can support variable v-blank intervals; it's not specific to Kepler, nor is it proprietary Nvidia tech in any way. It's part of the DisplayPort spec.

I thought I asked for a link a while ago.. I have been searching for this "standard" and have been unable to find anything.
If it is indeed a DP 1.2 standard it should be possible through MST, which it isn't.
 
Oh really...

Oh wow, you posted another user's comment so it MUST BE A FACT!

BTW, Fermi architecture works with G-Sync.

Quadro 6000 is G-Sync compatible (http://www.nvidia.com/object/product-quadro-6000-us.html). Imagine that: it has a Fermi GPU.

"G-Sync is designed to work over DisplayPort (since it’s packet based), with NVIDIA manipulating the timing of the v-blank signal to indicate a refresh."

G-Sync is only limited to Kepler on GeForce cards because Fermi cards and lower Kepler cards don't have a DP.

NV will keep G-Sync NV-only by controlling the drivers for G-Sync. Theoretically AMD could release drivers that use the G-Sync module with a little reverse engineering.
 
Okay, so dumb question here. If all this is doing is adjusting VBlank periods on the fly, then this can technically work on a CRT too, right?
 
I thought I asked for a link a while ago.. I have been searching for this "standard" and have been unable to find anything.
If it is indeed a DP 1.2 standard it should be possible through MST, which it isn't.
Re-read my post, I provided multiple links.

I've also mentioned eDP (Embedded DisplayPort) by name, multiple times, and a quick Google search of that term would have taken you directly to pertinent information.

Okay, so dumb question here. If all this is doing is adjusting VBlank periods on the fly, then this can technically work on a CRT too, right?
DisplayPort is a packet-based digital signal. A CRT would have no way to handle it without a specialized DisplayPort controller.

Also, most CRTs detect the refresh rate being sent to them and then lock it in. Any large variation would trigger a full mode-reset and re-detection. You might be able to fool some CRTs into handling a free-running refresh rate by using sync-on-green, but even if you did, you'd run into issues with flicker.
 
Oh wow, you posted another user's comment so it MUST BE A FACT!

BTW, Fermi architecture works with G-Sync.

Quadro 6000 is G-Sync compatible (http://www.nvidia.com/object/product-quadro-6000-us.html). Imagine that: it has a Fermi GPU.

That is a typo... They meant Q-Sync, aka Quadro Sync, not G-Sync.

Re-read my post, I provided multiple links.

I've also mentioned eDP (Embedded DisplayPort) by name, multiple times, and a quick Google search of that term would have taken you directly to pertinent information.
Which has nothing to do with G-Sync or what we are talking about, though it is similar.
 
Which has nothing to do with G-Sync or what we are talking about.
How does it have nothing to do with what we're talking about?

It's part of the DisplayPort spec.
It alters v-blank timings to instruct the display when to refresh.
It uses a controller on the display-side with its own built-in memory to control refresh rates.
This enables the GPU to tell the monitor to hold an image, then refresh on-demand.

This is EXACTLY what G-Sync does.

Edit 1: Like I've been pointing out to you, over and over, there are already displays that incorporate variable refresh rates. It's being used as a power-saving feature in mobile devices, but it works in exactly the same way as G-Sync (altering v-blank to tell the display when to refresh). The first monitor to support this standard and non-proprietary feature of the DisplayPort spec was announced back in 2012: http://www.paradetech.com/2012/10/p...-on-chip-frame-memory-for-panel-self-refresh/

Edit 2: And here's a 100% non-Nvidia controller for the LCD itself, enabling variable refresh rates (among other features): http://www.paradetech.com/products/...troller-products/dp633-643-653-edp-psr-tcons/
The DP633/643/653 is a family of LCD timing controller (Tcon) products that include an image frame buffer memory and support the PSR (Panel Self Refresh) feature defined in the VESA Embedded DisplayPort (eDP) Standard version 1.3.
Quoted just in case you missed the edits...

This is not new tech. This is not something Nvidia invented.
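To make the comparison concrete, here's a toy simulation (my own sketch in C, not anyone's actual Tcon firmware) of what a display-side controller with its own frame memory does: scan out new frames when the GPU delivers them, and self-refresh from local memory when it doesn't.

/* Toy model of a display-side controller with local frame memory.
 * Frame arrival times are invented to show uneven delivery. */
#include <stdio.h>
#include <stdbool.h>

#define MAX_HOLD_MS 33 /* repaint at least every ~33 ms so the image doesn't decay */

int main(void) {
    int last_refresh_ms = 0;
    int frame_times[] = {16, 40, 52, 120, 130}; /* when the GPU sends frames (ms) */
    int n = sizeof frame_times / sizeof frame_times[0], next = 0;

    for (int now = 0; now <= 140; now++) {
        bool new_frame = (next < n && frame_times[next] == now);
        if (new_frame) {
            next++;
            printf("t=%3d ms: new frame -> scan out, copy to frame memory\n", now);
            last_refresh_ms = now;
        } else if (now - last_refresh_ms >= MAX_HOLD_MS) {
            printf("t=%3d ms: no frame yet -> self-refresh from frame memory\n", now);
            last_refresh_ms = now;
        }
    }
    return 0;
}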
 
Is PSR defined only in eDP 1.3 or is it also defined in DP 1.2?
Only defined in eDP 1.3, which is why I was confused since they were making it sound like a DP 1.2 spec.

FYI- There is more to G-Sync than that. Again, they are a similar idea but cannot really be compared.
 
That is a typo... They meant Q-Sync, aka Quadro Sync, not G-Sync.

LOL! A typo? So if it is a typo, because you seem to know more than NV, then it's a typo on dozens and dozens of pages/press releases.

Q-Sync is on Quadro K GPUs (i.e. the Quadro K6000). G-Sync is on Quadro GPUs (i.e. the Quadro 6000).

BTW Q-Sync and G-Sync are completely different modules.
 
Only defined in eDP 1.3, which is why I was confused since they were making it sound like a DP 1.2 spec.

FYI- There is more to G-Sync than that. Again, they are a similar idea but cannot really be compared.
How can they not be compared when PSR and G-Sync BOTH:

Alter v-blank timing to instruct the display when to refresh.
Use a controller on the display-side with its own built-in memory to control refresh rates.
Enable the GPU to tell the monitor to hold an image, then refresh on-demand.

Seems pretty damn comparable to me.


Edit: Turns out Intel has already demonstrated variable refresh rates on their GPUs (using PSR through eDP). They first announced support at IDF in 2011.

It's worth mentioning that eDP is a super-set of DisplayPort and is 100% backwards compatible with standard DisplayPort displays. There's nothing stopping an OEM from implementing some (or all) of the additional eDP features on their card and then sticking a standard DisplayPort connector on it. This appears to be exactly what Nvidia has done, and it doesn't violate anything within the spec.
 
LOL! A typo? So if it is a typo, because you seem to know more than NV, then it's a typo on dozens and dozens of pages/press releases.

Q-Sync is on Quadro K GPUs (i.e. the Quadro K6000). G-Sync is on Quadro GPUs (i.e. the Quadro 6000).

BTW Q-Sync and G-Sync are completely different modules.

http://www.nvidia.com/docs/IO/127059/quadro-sync-user-guide.pdf

http://www.nvidia.com/docs/IO/40049/Quadro_GSync_install_guide_v4.pdf

If you need an add-on card, it means the architecture isn't capable of supporting it natively.

Edit- When you click on the Quadro G-Sync link under features, it takes you to a Q-Sync page, which is why I thought it was a typo.
FYI- Quadro G-Sync has nothing to do with GeForce G-Sync.
Quadro G-Sync has been around since 2006.
 
Quadro G-Sync has been around since 2006.
Just to clarify your point about old vs. new G-Sync:

The old Quadro G-Sync module's primary purpose was for genlock. It synced the graphics card to an external clock source. It did not make the graphics card itself a clock source, and it did not control displays directly in any way. You're meant to use an external clock source and sync both the monitor and the graphics adapter to it.

By contrast, consumer G-Sync uses the graphics card as a clock source and controls the monitor directly.
 
If you need an add-on card, it means the architecture isn't capable of supporting it natively.

HOLY S! What do you think the G-Sync module in the monitor is? It's an ADDON card.

Here let me help you:
For the GeForce model, the DP cable transfers data from the GPU to the G-Sync module in the monitor. In the Quadro G-Sync setup, since most industrial monitors/projectors DON'T run DP but rather BNC, that is what the internal ribbon cable between the Quadro GPU and the G-Sync board is for.

Do you actually read the stuff you post, or just throw stuff at the wall and hope something sticks?

Edit- When you click on the Quadro G-Sync link under features, it takes you to a Q-Sync page, which is why I thought it was a typo.
FYI- Quadro G-Sync has nothing to do with GeForce G-Sync.
Quadro G-Sync has been around since 2006.

I was never talking about Q-Sync in the first place, you brought it into the mix by mis-reading...again.

You got called out for saying G-Sync would never work on Fermi chips and required something special in Kepler chips; you have yet to mention what that something is.

I called BS and told you the only GPU hardware requirement is a DP. You have yet to submit any fact to back up your statement, and you simply try to cloud the argument by telling another user to GOOGLE IT. Really, Google it?

I don't know what your purpose in posting is, other than you are a simple fanboy attempting to crap on a technology you don't know or want to know anything about. All of your arguments against G-Sync are the same illogical and irrational type my 3-year-old throws out in a disagreement. I expect it from a 3-year-old.

Just call G-Sync a smelly poopy-head and leave. You have no other argument than that at this point.
 
HOLY S! What do you think the G-Sync module in the monitor is? It's an ADDON card.

Here let me help you:
For the GeForce model, the DP cable transfers data from the GPU to the G-Sync module in the monitor. In the Quadro G-Sync setup, since most industrial monitors/projectors DON'T run DP but rather BNC, that is what the internal ribbon cable between the Quadro GPU and the G-Sync board is for.

Do you actually read the stuff you post, or just throw stuff at the wall and hope something sticks?

I was never talking about Q-Sync in the first place, you brought it into the mix by mis-reading...again.

You got called out for saying G-Sync would never work on Fermi chips and required something special in Kepler chips; you have yet to mention what that something is.

I called BS and told you the only hardware requirement is a DP. You have yet to submit any fact to back up your statement, and you simply try to cloud the argument by telling another user to GOOGLE IT. Really, Google it?

So the fact that you can't comprehend what the difference is, that is my fault?

I have mentioned what that hardware in Kepler is, try to keep up.
I didn't tell you to google it, I googled SKYMTL for someone asking who he is.

FYI- Since you are having trouble, Q-Sync is basically an updated/upgraded Quadro G-Sync board meant for Kepler. This is not related to the G-Sync that Nvidia announced on October 18th.

Just to clarify your point about old vs. new G-Sync:

The old Quadro G-Sync module's primary purpose was for genlock. It synced the graphics card to an external clock source. It did not make the graphics card itself a clock source, and it did not control displays directly in any way. You're meant to use an external clock source and sync both the monitor and the graphics adapter to it.

By contrast, consumer G-Sync uses the graphics card as a clock source and controls the monitor directly.

Correct, which is why I linked him the Quadro G-Sync pdf.
 
I have mentioned what that hardware in Kepler is, try to keep up.
He's keeping up just fine, you're the one who isn't reading... When asked why it was an Nvidia-specific feature requiring Kepler, all you mentioned is that Kepler contains an output clock generator.

Does Kepler contain an output clock generator? Yes.
Is that a feature exclusive to Kepler? Nope.

Recent Intel GPUs also contain a clock generator in order to support PSR. I can't speak for AMD's cards just yet (only Intel has actually demonstrated variable refresh rates on real working hardware), but it's probable they include one as well.

Like I said before, just because older Nvidia cards leave out the required hardware doesn't mean it's anything new or exclusive. It's just another thing that can be included on any DisplayPort implementation :rolleyes:
 
I think there is a bit of confusion tied up with some speculation, since neither nV nor anyone else has "cracked open" the G-Sync module and specified exactly how it operates.. doubt that will happen anywho..


/rant on

By all appearances, the "new" G-Sync module is using standard DisplayPort technology, part from eDP and part from DirectDrive. They are then "packaging" these into a proprietary module (PLEASE look up what proprietary means before responding) that is ONLY compatible with very specific hardware configurations. This would be akin to, say, Intel creating a new x86 CPU that uses a new CPU socket with, for example, PCIe as the interconnect.. YES, x86 and PCIe are "standards", however the exclusion of ALL other devices EXCEPT those specified makes it proprietary by nature (i.e. similar to Intel's move from Socket 7 to Slot 1, minus the PCIe; it was an attempt, in part, to lock out AMD/Via and Cyrix from offering "drop-in replacements" for S7 parts by creating a new "format" that required a specific set of hardware to operate).. not much different here, except the middleman appears to be a hardware module enabling standard protocols behind, most likely, a hardware device lock. Think DRM for hardware.

I can't and won't take credit for ALL the homework (links/sources) discussed in this thread, however I can say I was amongst the very first to look beyond the initial "PRE-views" and dive deeper into the available VESA specifications well over 2 months ago when G-Sync was first announced.. many so-called "sources" tend to refer back to the very same documentation and cited links I provided back in October. IIRC, DP 1.2 bumps the AUX channel from 1 Mbps to 720 Mbps. I have a bit of history when looking into these types of things.. dating back to ATI's Rage MAXX Win2K fiasco, onward to Via's implementation of Intel's AGP specification and then on to AGP 3.0. I am hardly infallible, however I feel with a good amount of certainty that I have a good grasp of what goes on beyond packaging/marketing.

G-Sync is a half step forward and a half step to the left.. it appears to be pushing VESA standards into an as-of-yet undelivered market using proprietary hardware. This is good and bad: it has the potential to cause additional fragmentation, BUT it could be the catalyst that brings eDP/DD to the desktop space.. similar to what PhysX has done, just, I certainly hope, at a MUCH faster pace. The major roadblock being display manufacturers.. what incentive do THEY have to include such tech outside of a niche segment? AMD/Intel/Nv should ALL be pushing for the implementation of such tech in the mainstream. The additional controllers/modules required NEED to come in at a lower cost.

/rant off

More Reading Material: http://www.home.agilent.com/upload/cmc_upload/All/CEATEC07_Agilent_Seminar.pdf
http://www.eetasia.com/STATIC/PDF/200910/EEOL_2009OCT12_OPT_EMS_AN_01.pdf

DP 1.2 Presentation
http://dl.cubieforums.com/LapdockMaterials/DisplayPort技术概述.pdf
DP 1.2a
http://graniteriverlabs.com/wp-content/uploads/2012/05/DisplayPort-CTS-Presentation_4282012_GRL.pdf

iDP (internal DisplayPort)
http://www.vesa.org/wp-content/uplo...t-DevCon-Presentation-iDP-Dec-2010-rev-2-.pdf
eDP
http://www.vesa.org/wp-content/uploads/2010/12/DisplayPort-DevCon-Presentation-eDP-Dec-2010-v3.pdf

DirectDrive
http://www.atmel.com/Images/doc8103.pdf
www.atmel.com/Images/doc2569.pdf
http://www.zilog.com/force_download...jeTloY0hCdWIzUmxjeTlzWTJSZllYQnVkQzV3WkdZPQ==
 
Yes, the G-Sync module is very much proprietary. Whether that ends up mattering in the grand scheme of things remains to be seen.
 
He's keeping up just fine, you're the one who isn't reading... When asked why it was an Nvidia-specific feature requiring Kepler, all you mentioned is that Kepler contains an output clock generator.

Does Kepler contain an output clock generator? Yes.
Is that a feature exclusive to Kepler? Nope.

Recent Intel GPUs also contain a clock generator in order to support PSR. I can't speak for AMD's cards just yet (only Intel has actually demonstrated variable refresh rates on real working hardware), but it's probable they include one as well.

Like I said before, just because older Nvidia cards leave out the required hardware doesn't mean it's anything new or exclusive. It's just another thing that can be included on any DisplayPort implementation :rolleyes:

I'm reading and comprehending just fine, thank you very much.
All my points from previous posts still stand.

It may not be a feature exclusive to Kepler, but how it communicates and interacts with the data certainly is. This isn't PSR.
 
So the fact that you can't comprehend what the difference is, that is my fault?

Actually I am not the one who is failing to comprehend. You are the one who seems slightly confused.

I have mentioned what that hardware in Kepler is, try to keep up.

And I showed where you were wrong. The synchronization has NOTHING to do with the GPU. The GPU is just in charge of creating the frames. The module syncs it. You stated the output clock generator is why Fermi will not be compatible with the new G-Sync. It is not compatible because Fermi lacks a DP and driver support from NV.

I gave the example of the Fermi-GPU-based Quadro cards, which work with the G-Sync 2 module, to show the synchronization has nothing to do with the GPU. The G-Sync modules do all the work. Not once did I say G-Sync 2 is the new G-Sync or that they use the exact same tech. So stop asserting I made a "Finkle is Einhorn" comparison.

I simply and easily proved that your Kepler-only theory and its proposed backing are BS. It is because of the DP. How else will the data packets get to the G-Sync module?

FYI- Since you are having trouble, Q-Sync is basically an updated/upgraded Quadro G-Sync board meant for Kepler. This is not related to the G-Sync that Nvidia announced on October 18th.

Again I am not the one having issues. I know very well what Quadro Sync and G-Sync 2 are and what they can do. You brought Quadro Sync into the convo by not reading and then made the rest of the assertions all on your own.



I am still waiting for you to prove why the new G-Sync technology is not an advancement and why it will fail.
 
It may not be a feature exclusive to Kepler, but how it communicates and interacts with the data certainly is. This isn't PSR.
Proof? Because as far as I can tell, this is just PSR with an Nvidia-designed display-side controller.

We know the G-Sync module watches the value stored in v-blank to control the refresh rate, which is exactly what PSR does. There's nothing encrypted, there's no additional data-stream, it's straight manipulation of a data value that is specifically designed to be manipulable.

There isn't even really anything to reverse engineer. We already know Intel graphics chips can alter this value in real-time, since they support PSR. A couple of minutes of their driver team testing values to determine what scale factor is in use, and Intel could have a G-Sync-ready graphics chip.
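The driver-side half of that is conceptually tiny, too. A sketch (every helper name below is hypothetical; this is not anyone's actual driver code):

#include <stdbool.h>

/* Hypothetical driver hooks -- not a real Nvidia/Intel API */
extern bool frame_rendered(void);          /* has the GPU finished the next frame? */
extern void extend_vblank(void);           /* keep the display idling in v-blank */
extern void end_vblank_and_scanout(void);  /* release the new frame to the panel */

void present_loop(void) {
    for (;;) {
        while (!frame_rendered())
            extend_vblank();       /* display waits in v-blank; no stale re-draws, no tearing */
        end_vblank_and_scanout();  /* refresh exactly when the frame is ready */
    }
}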
 
How can it be PSR if DP 1.2 doesn't support PSR?
eDP and DP are interoperable.

A DP 1.2 device can inform an eDP controller that it doesn't support eDP's extended instructions. The eDP controller will drop back to normal DP 1.2 protocols.

Similarly, eDP displays support a super-set of DP 1.2 standards. They're perfectly happy operating with the more limited DP 1.2 protocol.

Nvidia can add a clock generator and variable v-blank to their DisplayPort implementation if they like; it doesn't break from the spec. A display that doesn't support variable v-blank will report itself as DP 1.2 and the card will use a fixed v-blank interval.

Note that this would still FULLY qualify as a DP 1.2 display controller, and could be labeled as such. It would also be a partial eDP implementation.
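As a rough sketch of that negotiation in C (the register address and capability bit below are placeholders I made up; the real register map lives in the VESA specs):

#include <stdint.h>

/* Simplified, hypothetical DPCD layout for illustration only */
#define SINK_CAPS_ADDR      0x0070
#define CAP_VARIABLE_VBLANK 0x01

extern uint8_t aux_read(uint32_t dpcd_addr);  /* the driver's AUX-channel read */
extern void enable_variable_vblank(void);     /* eDP-style on-demand refresh */
extern void use_fixed_refresh_timing(void);   /* plain DP 1.2 behavior */

void configure_timing(void) {
    if (aux_read(SINK_CAPS_ADDR) & CAP_VARIABLE_VBLANK)
        enable_variable_vblank();    /* sink advertises the extension: use it */
    else
        use_fixed_refresh_timing();  /* DP 1.2-only sink: fall back gracefully */
}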
 
So, what Unknown-One and crackmonkey are saying is that GSYNC will work with AMD?

That is what I am asking.

You say it is a standard that GSYNC is implementing on the monitor end of DP. So all AMD needs to do is update a software driver and they should be able to meet the other standard on the GPU end of the DP? If they can't do it in software, maybe a hardware upgrade (new PCB or GPU) and they can do it? Surely they just need to implement the other end of the DP variable refresh standard? Send the frames with the timings on the signal side.

Crackmonkey, you mentioned they would have to reverse engineer it. If nVidia is implementing a standard, why would reverse engineering be necessary?

Thanks,
 
So all AMD needs to do is update a software driver and they should be able to meet the other standard on the GPU end of the DP?

The answer is pretty complex, since the full implementation of Nvidia's G-Sync setup is yet to be understood. In theory, and only in theory, AMD (or Intel) should be able to 'reverse engineer' the protocol, if that's necessary, and implement it. They may be able to do it in software, or they may have to make a change in the BIOS, or maybe it'll take a hardware change. Maybe they'll have to wait till they can license it, if that's even possible; we don't know what's been patented yet either.
 
So, what Unknown-One and crackmonkey are saying is that GSYNC will work with AMD?

That is what I am asking.

You say it is a standard that GSYNC is implementing on the monitor end of DP. So all AMD needs to do is update a software driver and they should be able to meet the other standard on the GPU end of the DP? If they can't do it in software, maybe a hardware upgrade (new PCB or GPU) and they can do it? Surely they just need to implement the other end of the DP variable refresh standard? Send the frames with the timings on the signal side.

Crackmonkey, you mentioned they would have to reverse engineer it. If nVidia is implementing a standard, why would reverse engineering be necessary?

Thanks,

The answer is pretty complex, since the full implementation of Nvidia's G-Sync setup is yet to be understood. In theory, and only in theory, AMD (or Intel) should be able to 'reverse engineer' the protocol, if that's necessary, and implement it. They may be able to do it in software, or they may have to make a change in the BIOS, or maybe it'll take a hardware change. Maybe they'll have to wait till they can license it, if that's even possible; we don't know what's been patented yet either.

From what is known/understood, there is no real "protocol" they need to "reverse engineer".. the module most likely is using the DP AUX channel for "authentication" (i.e. a handshake), sort of like Digital Rights Management for HDMI. The G-Sync module is (most likely) relying upon the aforementioned DisplayPort specifications, HOWEVER it undoubtedly is using the "handshake" as an artificial barrier to "ensure compatibility and minimize support issues" (i.e. vendor/product lock). In short, the module (using eDP/DD) will work when a G-Sync-authorized device is detected and the driver reports back proper authentication.

a sort of

IF GPU = NV
THEN WORK
ELSE FAIL

routine

This sounds similar to the Batman AA fiasco some time back.. using an in-place "standard" (DX AA), however locked to a specific vendor's hardware.. my best guess would be that instead of paying any sort of "fee" for an "entrance key" (again, doubtful nV would even offer one, as it would remove "exclusivity"), we may see AMD/Intel push for display manufacturers to use the in-place DP 1.2 specifications.. i.e. DirectDrive-enabled monitors with eDP, which would work with ANY supported 1.2 device.
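In slightly less hand-wavy C (pure guesswork on my part; every name below is invented to illustrate the speculated lock):

#include <stdint.h>

#define VENDOR_ID_NVIDIA 0x10DE /* Nvidia's PCI vendor ID, reused here as a stand-in */

extern uint32_t read_vendor_id_over_aux(void); /* hypothetical AUX-channel query */
extern void enable_gsync_mode(void);
extern void run_as_plain_dp12_monitor(void);

void gsync_module_init(void) {
    if (read_vendor_id_over_aux() == VENDOR_ID_NVIDIA)
        enable_gsync_mode();          /* "IF GPU = NV THEN WORK" */
    else
        run_as_plain_dp12_monitor();  /* "ELSE FAIL" (no G-Sync, just plain DP) */
}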
 
From what is known/understood, there is no real "protocol" they need to "reverse engineer".. the module most likely is using the DP AUX channel for "authentication" (i.e. a handshake), sort of like Digital Rights Management for HDMI. The G-Sync module is (most likely) relying upon the aforementioned DisplayPort specifications, HOWEVER it undoubtedly is using the "handshake" as an artificial barrier to "ensure compatibility and minimize support issues" (i.e. vendor/product lock). In short, the module (using eDP/DD) will work when a G-Sync-authorized device is detected and the driver reports back proper authentication.

a sort of

IF GPU = NV
THEN WORK
ELSE FAIL

routine

This sounds similar to the Batman AA fiasco some time back.. using an in-place "standard" (DX AA), however locked to a specific vendor's hardware.. my best guess would be that instead of paying any sort of "fee" for an "entrance key" (again, doubtful nV would even offer one, as it would remove "exclusivity"), we may see AMD/Intel push for display manufacturers to use the in-place DP 1.2 specifications.. i.e. DirectDrive-enabled monitors with eDP, which would work with ANY supported 1.2 device.

Thanks to both of you for your answers, this is what I was looking for.
 
And I showed where you were wrong. The synchronization has NOTHING to do with the GPU. The GPU is just in charge of creating the frames. The module syncs it. You stated the output clock generator is why Fermi will not be compatible with the new G-Sync. It is not compatible because Fermi lacks a DP and driver support from NV.

I gave the example of the Fermi-GPU-based Quadro cards, which work with the G-Sync 2 module, to show the synchronization has nothing to do with the GPU. The G-Sync modules do all the work. Not once did I say G-Sync 2 is the new G-Sync or that they use the exact same tech. So stop asserting I made a "Finkle is Einhorn" comparison.

I simply and easily proved that your Kepler-only theory and its proposed backing are BS. It is because of the DP. How else will the data packets get to the G-Sync module?

You didn't prove anything... All you did was show that you don't understand what Quadro G-Sync II is/does.

If you have an issue with Fermi not working with G-Sync, take it up with Nvidia. Pretty sure SkyMTL isn't making this up...
 