How NVIDIA made the 9600 GT gain extra performance... secretly

How would the clocks being misread improve performance? Am I missing something here? Regardless of the clock's true speed, the card still performed the way it did... 650 MHz, 725 MHz, 783 MHz... it doesn't really matter what the speed is, IMO.
 
Very interesting. I wouldn't call it "shady", but "clever" lol. If this is true then we'll need a double check by the [H] staff on their review.

Being misread = misleading clocks. For example: one program would say the clock is 10, another would say the clock is 12. The product could be advertised as running at a clock of 10. Simply put, it could be running faster than the user/reviewer thinks, and they assume that it's all fine. The upside is that this could mean higher OCs, lol.
 
Very interesting. I wouldn't call it "shady", but "clever" lol. If this is true then we'll need a double check by the [H] staff on their review.

It won't matter; they didn't use an NVIDIA chipset. It ran at "true" stock frequencies.
 
That is interesting for sure.

I wonder if the crystal will vary from brand to brand.
 
Why not just mark them at that clock speed? I don't see why they would cover up something that would make their product seem better.
 
I'll be cranking up my PCI-e frequency...

I noticed the discrepancy myself... you have to hack RivaTuner to get it to work with a 9600 GT. When I saw my core clock running at 810 MHz, as reported by RivaTuner, I just thought it was an error and that RivaTuner needed to be formally updated by its author for the 9600 GT.
 
It won't matter; they didn't use an NVIDIA chipset. It ran at "true" stock frequencies.

From the article: "Also it could send a message to customers that the card performs considerably better when used on an NVIDIA chipset? Actually this is not the case, the PCI-Express frequency can be adjusted on most motherboards, you will see these gains independent of Intel/AMD CPU architecture or Intel/NVIDIA/AMD/VIA chipset."
 
From the article: "Also it could send a message to customers that the card performs considerably better when used on an NVIDIA chipset? Actually this is not the case, the PCI-Express frequency can be adjusted on most motherboards, you will see these gains independent of Intel/AMD CPU architecture or Intel/NVIDIA/AMD/VIA chipset."

Sorry, but I'm guessing that [H]ard tests with everything @ stock ;)
 
I'll be cranking up my PCI-e frequency...

I noticed the discrepancy myself... you have to hack RivaTuner to get it to work with a 9600 GT. When I saw my core clock running at 810 MHz, as reported by RivaTuner, I just thought it was an error and that RivaTuner needed to be formally updated by its author for the 9600 GT.

What exactly is the advantage of running a higher PCIe frequency ? :eek: I am searching for one legitimate explanation
 
What exactly is the advantage of running a higher PCIe frequency ? :eek: I am searching for one legitimate explanation

On "normal" VGA cards, when you increase the PCI-Express bus frequency you increase the theoretical bandwidth available between card and the rest of the system, but do not affect the speed the card is running at. On the GeForce 9600 GT, a 10% increase in PCI-Express frequency will make the card's core clock run 10% faster!

As I understand it, you get more bandwidth... I run my PCIe @ 115...
been doing it that way for a while now...
I honestly can't say if it matters in games, but 3DMark scores are a bit better... :)
 
What exactly is the advantage of running a higher PCIe frequency ? :eek: I am searching for one legitimate explanation


I was only half-joking. According to the w1zzard article,

"The [automatic] increase of 25 MHz on the PCI-Express bus frequency yields an increase of 25% or 162.5 MHz over the stock clock (assuming a 650 MHz clock board design). With a final clock of 812.5 MHz you can bet this card [9600 GT] will perform much better, when used by an unsuspecting user, on an NVIDIA chipset motherboard with LinkBoost. "


I have a 650i chipset... I don't think I have LinkBoost. Even so, one could do it manually in the BIOS. But I'll stick with normal overclocking, as w1zzard suggests in the article.
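
Running the numbers from that quote myself (just a quick Python sketch, assuming the 650 MHz reference clock and the linear PCI-E scaling the article describes):

stock_pcie = 100.0    # MHz, the PCI-E spec default
boosted_pcie = 125.0  # MHz, what LinkBoost reportedly pushes it to
stock_core = 650.0    # MHz, reference 9600 GT core clock
boosted_core = stock_core * (boosted_pcie / stock_pcie)
print(boosted_core, boosted_core - stock_core)  # 812.5 and 162.5, matching w1zzard

So the 812.5 MHz figure checks out, at least arithmetically.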
 
Interesting article. Perhaps we'll see a new trend of raising PCI-E frequencies to boost gains on future Nvidia cards.:D
 
Very interesting! Thanks for the read. I will have to remember this when I eventually upgrade to the 9xxx series...

Too bad it makes no difference at all on an 8xxx series card, if I read correctly.
 
Interesting. So that explains why a 9600 GT is not far off an 8800 GT!!

No, it doesn't... because most people don't have LinkBoost and don't overclock the PCIe bus, and they still see a well-performing card.

I think the source of this is that they tried to cost-reduce the PCB by removing the 25 MHz crystal and using the divided-down PCIe clock instead. This was then not properly communicated to the driver guys, which results in the faulty calculation of the clock speed in the panel.

It happens, those are big companies and all big companies make little mistakes due to lack of communication.

IOW, don't attribute to malice what can be explained by ... etc.
 
No, it doesn't... because most people don't have LinkBoost and don't overclock the PCIe bus, and they still see a well-performing card.

I think the source of this is that they tried to cost-reduce the PCB by removing the 25 MHz crystal and using the divided-down PCIe clock instead. This was then not properly communicated to the driver guys, which results in the faulty calculation of the clock speed in the panel.

It happens, those are big companies and all big companies make little mistakes due to lack of communication.

IOW, don't attribute to malice what can be explained by ... etc.

Dude. No.

The entire issue IS the crystal. It's not the standard 25 MHz; it's a 27 MHz crystal that messes up the PCI-E clock calculations. No matter what mobo you run it on, the 27 MHz crystal changes the voltage math, so it automatically is pumping more juice through the PCI-E line.

It changes the core clock, because the PCI-E calculations are skewed, all because of the 27 MHz crystal. Technically it's not overclocking, but yeah, yeah it is.

These cards are overclocked. NVIDIA is hiding that.
This has nothing to do with LinkBoost. LinkBoost was brought up because it does the exact same thing.
 
Admittedly, I'm still kinda confused. If I go out and plop down my money for a 9600GT right now, I will get a card that runs better than nVidia says it does... I haven't paid any more, so why do I care that it's OCed? Free OC = complain? If it were actually underclocked, I'd see the problem. But OCed...
 
I just don't see why Nvidia would do something like this intentionally. They could have just clocked the cards higher in the first place and gotten the same results. I'm no Nvidia fanboy, but I don't see the benefit to doing this. Seems more logical to make sure EVERYONE gets the boost, not just people who happen to overclock their PCIe bus.
 
No matter how the GPU clock is being calculated, the real question is: why would reviewers be testing video cards on PCIe buses that are running out of spec?
 
This is all speculation based on a program which is not intended by its author to be used with the 9600 GT (yet). I know for a fact RivaTuner does not support mobile GPUs and therefore reports wrong stuff on them too; you don't see anyone running wild, screaming that NVIDIA is OC'ing mobile GPUs... RivaTuner's programmer is prolly in the process of updating RivaTuner to work on the 9600, but for now I'd take such news with a huge grain of salt ;)
 
Tehquick has a point; it could just be an error from the RivaTuner hack. I can see it both ways: the crystal at 27 MHz could put it off track, or the software could be the bad link here. But regardless, the 9600 GT is still a damn good card for the money.
 
Dude. No.

The entire issue IS the crystal. It's not the standard 25 MHz; it's a 27 MHz crystal that messes up the PCI-E clock calculations. No matter what mobo you run it on, the 27 MHz crystal changes the voltage math, so it automatically is pumping more juice through the PCI-E line.

It changes the core clock, because the PCI-E calculations are skewed, all because of the 27 MHz crystal. Technically it's not overclocking, but yeah, yeah it is.

These cards are overclocked. NVIDIA is hiding that.
This has nothing to do with LinkBoost. LinkBoost was brought up because it does the exact same thing.

Go re-read the article please.

As many of you know, the PCI-Express bus clock frequency which connects the card to the rest of the system is running at 100 MHz by default. What NVIDIA did is to take this frequency and divide it by four to get their reference frequency for the core clock.

On systems where the PCI-E bus runs at 100 MHz, this results in the 25 MHz used for the default clock indeed. But once the PCI-E clock is increased beyond that, the GPU will magically run at higher clock speeds, without any changes to the graphics configuration, BIOS, driver or any software.

On "normal" VGA cards, when you increase the PCI-Express bus frequency you increase the theoretical bandwidth available between card and the rest of the system, but do not affect the speed the card is running at. On the GeForce 9600 GT, a 10% increase in PCI-Express frequency will make the card's core clock run 10% faster!

With a standard PCI-E clock (100 MHz) the card is running at "correct" speeds; it is not overclocked. It is only when paired with an NVIDIA nForce chipset that has LinkBoost that the PCI-E bus gets boosted automatically. No LinkBoost, no overclock, and the PCI-E bus is humming along at 100 MHz as it is supposed to.
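
If that's right, the whole clock chain is trivial to sketch in Python (my own illustration; the multiplier of 26 isn't stated in the article, it's just what 650 / 25 implies for the reference design):

pcie = 100.0           # MHz, stock PCI-E bus
reference = pcie / 4   # 25 MHz internal reference, per the article
core = reference * 26  # 26 is implied by 650 MHz / 25 MHz (my assumption)
print(reference, core) # 25.0 650.0 -> exactly the rated clock

Bump pcie to anything above 100 and core rises in lockstep, which is the whole story here.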
 
This is all speculation based on a program which is not intended by its author to be used with the 9600 GT (yet). I know for a fact RivaTuner does not support mobile GPUs and therefore reports wrong stuff on them too; you don't see anyone running wild, screaming that NVIDIA is OC'ing mobile GPUs... RivaTuner's programmer is prolly in the process of updating RivaTuner to work on the 9600, but for now I'd take such news with a huge grain of salt ;)



w1zzard wrote ATiTool and GPU-Z. He knows video cards inside and out.

I wouldn't be too quick to dismiss what he's saying.


As far as him operating the PCI-e bus out of spec, he did so to prove his theory, Kyle. The point of overclocking is to run things out of spec, isn't it?
 
Be that as it may, w1zzard didn't write RivaTuner; Unwinder did, and AFAIK, Unwinder hasn't released an updated RivaTuner that supports the 9600 GT properly.
 
Be that as it may, w1zzard didn't write RivaTuner; Unwinder did, and AFAIK, Unwinder hasn't released an updated RivaTuner that supports the 9600 GT properly.



Well, according to the article, w1zzard seemed to display a fair amount of knowledge about how RivaTuner derives its core clock measurements, so even though he didn't write it, it's a fair bet he knows how it works.


So according to you, there are basically only two possibilities... he is either lying about his observations or he's incorrect.


Which one is it?
 
With a standard PCI-E clock (100 MHz) the card is running at "correct" speeds; it is not overclocked. It is only when paired with an NVIDIA nForce chipset that has LinkBoost that the PCI-E bus gets boosted automatically. No LinkBoost, no overclock, and the PCI-E bus is humming along at 100 MHz as it is supposed to.

If you don't want to call it an overclock, whatever. The card is running out of spec because of the crystal. This behavior is not exclusive to a LinkBoost-capable mobo. It is exclusive to that card, because of the modified crystal.

The core clock is increased. It is running out of spec.

What exactly do you call that?
 
If you don't want to call it an overclock, whatever. The card is running out of spec because of the crystal. This behavior is not exclusive to a LinkBoost-capable mobo. It is exclusive to that card, because of the modified crystal.

The core clock is increased. It is running out of spec.

What exactly do you call that?

Did you even read the article? The 27 MHz crystal is for the RAM, not the GPU.

Please also note that RivaTuner's monitoring clock reading is wrong. It uses 27 MHz for its calculation, which is incorrect. When the PCI-E bus is 100 MHz, the core clock is indeed 650 MHz on the reference design. A RivaTuner update is necessary to reflect GPU clock changes caused by the PCI-E clock properly, though.

Jeez man, at least read what you are commenting on :rolleyes:
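
If RivaTuner really plugs 27 MHz into a formula that should use 25 MHz, the misread is just a constant factor. A hypothetical Python illustration (the multiplier value is my assumption, implied by 650 / 25 on the reference card):

multiplier = 26              # implied by 650 MHz / 25 MHz on the reference card
true_core = 25 * multiplier  # 650 MHz, the actual clock
reported = 27 * multiplier   # 702 MHz, what assuming 27 MHz would yield
print(reported / true_core)  # 1.08, a constant 8% inflation

That 8% factor would also explain the 810 MHz reading mentioned earlier in the thread, if that particular card was a factory-overclocked 750 MHz model (750 * 1.08 = 810).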
 
If you don't want to call it an overclock, whatever. The card is running out of spec because of the crystal. This behavior is not exclusive to a LinkBoost-capable mobo. It is exclusive to that card, because of the modified crystal.

The core clock is increased. It is running out of spec.

What exactly do you call that?


Well, maybe it isn't running out of spec. What if NVIDIA intended for it to run faster on mobos with LinkBoost and just didn't tell anyone?

I mean... 9600 GTs are highly overclockable, much more so than 8800 GTs.

If you put a non-overclocked 9600 GT on a board with LinkBoost, it should be okay if you use nTune, because nTune knows how high to push the bus frequency before instability occurs... at least it's supposed to.

w1zzard thinks this might give the card an unfair advantage when reviewers benchmark the card, but he also speculates this might be practical for the casual user who owns a 9600 GT and a mobo with LinkBoost.

That's the point of the article, isn't it?
 
Did you even read the article? The 27 MHz crystal is for the RAM, not the GPU.



Jeez man, at least read what you are commenting on :rolleyes:





So how is it the core frequency is being increased by an increase in the PCI-E bus frequency?

Even though the driver uses 25 MHz for its core speed calculations, the 27 MHz crystal affects the core when the PCI-E bus speed is pushed over 100 MHz, something that can happen automatically if one has an NVIDIA chipset motherboard with LinkBoost.

I don't think this is necessarily "out of spec" anyway, on a non-OC'd 9600 GT. The 9600 GT OCs very well... and maybe that was for a reason.

Maybe on an SSC 9600 GT, if one increased their bus speed, one could have problems. That I could understand.
 
So how is it the core frequency is being increased by an increase in the PCI-E bus frequency?

Even though the driver uses 25 MHz for its core speed calculations, the 27 MHz crystal affects the core when the PCI-E bus speed is pushed over 100 MHz, something that can happen automatically if one has an NVIDIA chipset motherboard with LinkBoost.

I don't think this is necessarily "out of spec" anyway, on a non-OC'd 9600 GT. The 9600 GT OCs very well... and maybe that was for a reason.

Maybe on an SSC 9600 GT, if one increased their bus speed, one could have problems. That I could understand.

No it doesn't. According to the article, the GPU core speed is based on the PCI-E bus using a divider, e.g. 100 MHz / 4 = 25 MHz. THAT is what the GPU core speed is based on. The 27 MHz crystal has absolutely zero effect on the GPU core speed (again, according to the article). Increasing the PCI-E bus speed increases the GPU core speed not because of the crystal (which has nothing to do with anything when it comes to the GPU's core speed), but because 120/4 != 25, etc... It is PCI-E bus divided by 4, times some multiplier == GPU core speed. If the system isn't running on an nForce chipset w/ LinkBoost, the PCI-E bus will be 100 MHz, meaning the GPU will be running at its reference speed.
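
To put that relationship in code (a minimal Python sketch of what the article describes; the multiplier varies per board design):

def core_clock_mhz(pcie_mhz, multiplier):
    # GPU core clock per the article: (PCI-E bus / 4) * multiplier
    return (pcie_mhz / 4.0) * multiplier

print(core_clock_mhz(100, 26))  # 650.0 -> reference card on a stock bus
print(core_clock_mhz(125, 26))  # 812.5 -> same card behind LinkBoost

No crystal anywhere in that equation.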
 
They probably discovered it was cheaper to use the 27 MHz oscillator on the card than a 25 MHz oscillator. For all we know, the 27 MHz part was more standard and therefore readily available, or they may have already had several hundred thousand of them and needed to use them.

Of course, it could also be a software bug not reading the frequencies correctly.
 
No it doesn't. According to the article, the GPU core speed is based on the PCI-E bus using a divider, e.g. 100 MHz / 4 = 25 MHz. THAT is what the GPU core speed is based on. The 27 MHz crystal has absolutely zero effect on the GPU core speed (again, according to the article). Increasing the PCI-E bus speed increases the GPU core speed not because of the crystal (which has nothing to do with anything when it comes to the GPU's core speed), but because 120/4 != 25, etc... It is PCI-E bus divided by 4, times some multiplier == GPU core speed. If the system isn't running on an nForce chipset w/ LinkBoost, the PCI-E bus will be 100 MHz, meaning the GPU will be running at its reference speed.



"It is PCI-E bus divided by 4 times some multiplier == GPU core speed." I think what he was trying to imply that the "some multiplier" or other variable is affected by the 27 mhz crystal when the bus goes passed 100 MHz.....or why even mention the crystal at all [other than it was causing an error in core speed reporting when using riva tuner]? They have to be related somehow. Even though it seems w1zzard didn't make it crystal clear (pardon the pun) what the relationship is, he seems to imply that there is one....at least that's what I got from the article.


The 25 Mhz base frequency is a driver limitation, not a physical one.
 
"It is PCI-E bus divided by 4 times some multiplier == GPU core speed." I think what he was trying to imply that the "some multiplier" or other variable is affected by the 27 mhz crystal when the bus goes passed 100 MHz.....or why even mention the crystal at all [other than it was causing an error in core speed reporting when using riva tuner]? They have to be related somehow. Even though it seems w1zzard didn't make it crystal clear (pardon the pun) what the relationship is, he seems to imply that there is one....at least that's what I got from the article.


The 25 Mhz base frequency is a driver limitation, not a physical one.

The only reason he talks about the 27 MHz crystal is because that is the only one he could find on the card, and thus couldn't explain where 25 MHz was coming from. That is why on the second, third, and fourth pages he stops talking about the 27 MHz crystal, because it isn't relevant. Look on the second page where he provides a chart showing how the PCI-E bus affects the core speed. (100 / 4) * 29 == 725. (105 / 4) * 29 == (ready for it?) 761. (115 / 4) * 29 == 834, etc... The 27 MHz crystal is completely, 100% irrelevant. It has nothing to do with the GPU core clock speed at all. It is PCI-E bus / 4 * 29. Stock, that PCI-E bus is 100 MHz, meaning the card will be running at its proper, rated speed, and will match what the driver is reporting.
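
For what it's worth, those chart values do line up (a quick Python check, assuming the 725 MHz factory-overclocked board from the chart, i.e. a multiplier of 29):

for pcie in (100, 105, 115):
    print(pcie, round(pcie / 4 * 29))  # 725, 761, 834 -> matches the chart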
 
The only reason he talks about the 27 MHz crystal is because that is the only one he could find on the card, and thus couldn't explain where 25 MHz was coming from. That is why on the second, third, and fourth pages he stops talking about the 27 MHz crystal, because it isn't relevant. Look on the second page where he provides a chart showing how the PCI-E bus affects the core speed. (100 / 4) * 29 == 725. (105 / 4) * 29 == (ready for it?) 761. (115 / 4) * 29 == 834, etc... The 27 MHz crystal is completely, 100% irrelevant. It has nothing to do with the GPU core clock speed at all. It is PCI-E bus / 4 * 29. Stock, that PCI-E bus is 100 MHz, meaning the card will be running at its proper, rated speed, and will match what the driver is reporting.



OK, I see. The 27 MHz crystal led him to see that the GPU core timing is based off the PCI-E bus and not the crystal. Very good then.

What I also didn't know was that 780i boards run a 125 MHz default PCI-E bus... that's why w1zzard seems to think this is a potentially "shady trick" for NVIDIA reviews.

Viper John:

"Simply plugging the card into a motherboard that runs a higher default PCIe bus frequency, such as
a reference 780i (125Mhz default PCIe), will cause the 9600GT to run higher core clock speeds and
give higher benchmark scores, than plugging it into a motherboard than runs a 100Mhz default PCIe
bus frequency. The user/tester doesn't have to do anything else and may not even notice the core
clock speed difference...just the higher benchmark scores."
 
That makes more sense. So we're not really talking about nVidia trying to pull a fast one to make the 9600GT look better, but rather nVidia trying to pull a fast one to make the 780i look better.
 
What I also didn't know was that 780i boards run a 125 MHz default PCI-E bus... that's why w1zzard seems to think this is a potentially "shady trick" for NVIDIA reviews.

Yeah, that is the LinkBoost thing nVidia introduced with the nForce 590. I didn't know they continued to carry it forward, as I've never heard anything else about it. I have a 590 and I'm pretty sure LinkBoost (and the "SLI memory") settings default to "off" (even though I have both an nvidia card and the EPP profile needed for the memory), so I'm not sure who this will really affect...
 
Just NVIDIA getting caught cheating once again in benchmarks. And yes, it's not fair, because they are secretly overclocking the card while the other cards remain at their default speed.
 
Just NVIDIA getting caught cheating once again in benchmarks. And yes, it's not fair, because they are secretly overclocking the card while the other cards remain at their default speed.

Er, what? It isn't cheating... What is the difference between LinkBoost overclocking your card and BFG, XFX, eVGA, MSI, etc. doing it? I'd guess that most reviews of the 9600 GT were run @ stock frequencies anyway, and I doubt a whole lot of people overclock the PCI-E bus (since it doesn't really help with anything).

Besides, how can NVIDIA "secretly overclock" the card? They are the ones that define what "stock" is anyway :confused:
 