What's your GPU-Z ASIC score?

The latest GPU-Z (0.5.8) has an interesting new feature that scores your GPU. My Gigabyte 7970 scored 70.9%. This score is apparently based on how much leakage the chip has, so the higher the percentage, the less leakage there is. I'm not sure how they figure this out, but I think it has to do with the amount of voltage applied.

You can read up about it here: http://www.techpowerup.com/159098/TechPowerUp-GPU-Z-0.5.8-Released.html

Btw, you can find this new feature by right-clicking to open the context menu.
 
my 460:

[GPU-Z screenshots: Leakage469.png, 35c547p.png]
 
"ASIC quality reading not supported on this card" :(

Both of my 6950s, CrossFire disabled or enabled.
 
"ASIC quality reading not supported on this card" :(

Both of my 6950s, CrossFire disabled or enabled.

From TechpowerUp;

We've found the ways in which AMD and NVIDIA segregate their freshly-made GPU ASICs based on the electrical leakages the chips produce (to increase yield by allotting them in different SKUs and performance bins), and we've found ways in which ASIC quality can be quantified and displayed. Find this feature in the context menu of GPU-Z. We're working on implementing this feature on older AMD Radeon GPUs.
 
Not possible on a 6990. Returns the error 'ASIC data cannot be read on this card'.
 
How do you run this test? I can't right-click on anything.

NEVER MIND, ANSWERED MY OWN QUESTION
 
Not possible on a 6990. Returns the error 'ASIC data cannot be read on this card'.

I'm guessing that if the ASIC is super low quality, the program can't make an accurate decision.

So far it looks like AMD uses low quality GPUs.
 
I'm guessing that if the ASIC is super low quality, the program can't make an accurate decision.

So far it looks like AMD uses low quality GPUs.

Well AMD's "low quality" stuff is currently b*tchslapping NV's best. So what you are telling us is NV quality must be far lower than that of AMD. Better watch it... stuff like that will get you off the NV payroll. ;)


Oh and mine are 75.4 and 66.9. Both do the CCC max at 1.175V +20%.
 
Gigabyte 7970 ASIC quality is 65.5%.

Oh well, it hits CCC max easily and I've benched it at 1300MHz. Can't be too bad.
 
All the Nvidia results are invalid because people have been reporting scores of over 100%. W1zzard is working on fixing that. It seems Nvidia ASIC values are arbitrary and don't follow the same metric that AMD places on their dies.

P.S: SonDa5 is begging to be banned once again. His thread crapping and trolling got old 2 locked threads of his ago.
 
Both my Sapphires are 65.5%, and they both hit 1200/1700 on the default 1.17V, rock solid.

Actually, on AMD's 7970s it looks like it's reversed: the low-scoring cards (60-70%) are the most overclockable, while cards over 80% can't even reach a stable 1100. Just look over at overclocker.net; people are exchanging their high-ASIC 7970s hoping to get ones with lower 60-70% scores.

I think it's all BS and totally random; people read too much into useless numbers! Or it might have some truth to it, looking at mine: both 65.5% and OC to 1200/1700 at default voltage. Or pure luck!!!
 
Sounds like W1zzard needs to fine-tune this tool. Great idea if it works.
 
So, practically, what does this actually tell you? How likely a voltage increase is to work? I don't really understand the purpose of this.
 
So, practically, what does this actually tell you? How likely a voltage increase is to work? I don't really understand the purpose of this.
The following is only true for Tahiti (7970/7950):

Lower % = higher-leakage chip = easier to overclock because it can take a higher voltage. But a higher-leakage chip requires a higher voltage to operate at the same frequency (a low-leakage chip may require only 1.05V to operate at 925MHz while a high-leakage chip requires 1.125V), so power consumption and heat are increased.
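To make the tradeoff concrete, here's a toy model (illustrative numbers only, not real Tahiti data) of why the leakier die burns more power at the same 925MHz:

```python
# Toy model of the Tahiti tradeoff described above.
# Assumptions (made up for illustration): dynamic power scales roughly
# with V^2 * f, and a leakier die needs more voltage for the same clock.

def dynamic_power(volts, mhz, k=1.0):
    """Rough dynamic-power estimate: P ~ k * V^2 * f (arbitrary units)."""
    return k * volts ** 2 * mhz

chips = [
    {"asic": 0.85, "vcore": 1.050},  # hypothetical low-leakage die
    {"asic": 0.65, "vcore": 1.125},  # hypothetical high-leakage die
]

for chip in chips:
    p = dynamic_power(chip["vcore"], 925)
    print(f"ASIC {chip['asic']:.0%}: {chip['vcore']}V @ 925MHz -> power {p:.0f} (a.u.)")
```

Even before counting the leakage current itself, the high-leakage die draws roughly 15% more dynamic power here just from the extra voltage it needs.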
 
Hmm, sounds interesting; I just wish GPU-Z 0.5.8 actually worked for me. All I get is a blank screen, even though 0.5.6 works fine.

Never mind, I guess I had to re-enable SLI to get it to work, even though it only sees the second card in SLI and not the primary card. But my old-ass 8800GTs aren't supported anyway.. :D
 
Seems like more leakage would be a bad thing. Though I'm not even sure what "leakage" means in this case. Some sort of capacitance issues?
 
Seems like more leakage would be a bad thing. Though I'm not even sure what "leakage" means in this case. Some sort of capacitance issues?

No, it's just voltage leakage in the processor/GPU. A good example is the old Phenom I processors: they had massive voltage leakage, which kept them from being overclocked and made them run hotter than hell in the process. But how accurate this little feature is, it's hard to say.
 
It measures the conductive quality of the GPU.

Using Ohm's law it should work. The problem may be that the program was designed with a few reference chips, and every chip is probably unique, so the reference values are not accurate. I'm thinking the program creates a load on the chip with a known voltage value, and based on how close the voltage sensor reading is to the reference load voltage, a calculation is used to produce the ASIC score.

Since it is software controlled by predetermined voltage values, the scores are probably going to be off, and some GPU materials may cause odd voltage readings that the software isn't programmed to account for.

I think the way the factory bins the GPUs is that they probably apply a very fine, high-quality voltage input at one conductive point on the GPU and complete the circuit through the GPU at another exit contact point. A highly sensitive meter determines the ASIC quality of each GPU. In short, I think the factory uses electrical measuring instruments to determine the exact ASIC quality of each GPU for binning. That will be tough to do in software.
 
It measures the conductive quality of the GPU.

Using Ohm's law it should work. The problem may be that the program was designed with a few reference chips, and every chip is probably unique, so the reference values are not accurate. I'm thinking the program creates a load on the chip with a known voltage value, and based on how close the voltage sensor reading is to the reference load voltage, a calculation is used to produce the ASIC score.

That's what I was thinking as well. More than likely it's based on the reference voltage given for each GPU. So, for example, the 7970 comes in two different voltages, one being the reference voltage from AMD and the other slightly lower than that, which could throw it off massively. It's the same problem I've seen with CPU-Z's TDP numbers being way off, since they're based on the reference numbers the manufacturers give, which usually aren't correct anyway. It's also why you see numbers over 100%, which technically shouldn't be possible.

I guess the easy way to check it would be to have someone undervolt their GPU and then overvolt it and see what the percentage is for each one.
 
Seems like more leakage would be a bad thing. Though I'm not even sure what "leakage" means in this case. Some sort of capacitance issues?

If leakage isn't a bad thing, they should change the name... I'm pretty sure I would want as little "leakage" as possible :( Call it... erm... porousness... :D
 
No, all GPU-Z does is read a hexadecimal value already embedded in the chip by the fab during the validation/binning process.
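A rough sketch of what that mapping might look like (purely hypothetical: W1zzard hasn't published the formula, and the raw-code range below is made up for illustration):

```python
# Hypothetical sketch only -- the real fused value, its range, and the
# exact formula GPU-Z uses are not public. Assume a higher raw code
# encodes a leakier die.

def asic_quality_percent(raw_code, raw_min=0x10, raw_max=0xF0):
    """Map a fused leakage code onto a 0-100% scale, where a higher
    percentage is read as a lower-leakage die."""
    clamped = max(raw_min, min(raw_code, raw_max))
    return 100.0 * (raw_max - clamped) / (raw_max - raw_min)

print(f"{asic_quality_percent(0x58):.1f}%")  # -> 67.9% (made-up example)
```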

P.S: Someone mentioned undervolting.
The flip side of high leakage is that a low-leakage chip can be undervolted and still work at the specified frequencies. Many people have undervolted their 7970s to 0.9V (I think someone got theirs as low as 0.85V but don't quote me on that) and it runs perfectly fine at the stock 925MHz. As you can deduce, power consumption and thus heat are greatly reduced.
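Back-of-the-envelope, assuming dynamic power scales roughly with V² at a fixed clock (voltages taken from this thread):

```python
# Dynamic power scales roughly with V^2 at a fixed clock, so dropping
# from the 1.175V mentioned earlier in the thread to a 0.9V undervolt:
ratio = (0.90 / 1.175) ** 2
print(f"~{ratio:.0%} of stock dynamic power")  # ~59%, i.e. roughly a 40% cut
```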

Semiconductor manufacturing is never an exact science. There are always ranges, and engineers fine-tune their designs around that range, because no two silicon wafers are identical. You can even have great variation from the center of a wafer to the edge of the same wafer.
 
No, all GPU-Z does is read a hexadecimal value already embedded in the chip by the fab during the validation/binning process.

P.S: Someone mentioned undervolting.
The flip side of high leakage is that a low-leakage chip can be undervolted and still work at the specified frequencies. Many people have undervolted their 7970s to 0.9V (I think someone got theirs as low as 0.85V but don't quote me on that) and it runs perfectly fine at the stock 925MHz. As you can deduce, power consumption and thus heat are greatly reduced.

Semiconductor manufacturing is never an exact science. There are always ranges, and engineers fine-tune their designs around that range, because no two silicon wafers are identical. You can even have great variation from the center of a wafer to the edge of the same wafer.

Very interesting post. Many thanks for your input. :)
 
This score is apparently based on how much leakage the chip has, so the higher the percentage, the less leakage there is. I'm not sure how they figure this out, but I think it has to do with the amount of voltage applied.

Is this for certain? I was actually wondering whether a higher reading means higher or lower leakage, but the news page doesn't actually seem to state this.
 
No, all GPU-Z does is read a hexadecimal value already embedded in the chip by the fab during the validation/binning process.

Before they create a hexadecimal value, I think they have to have some type of electrical instrumentation test the GPU/chip to get an actual raw ASIC score, which is used for binning.

This ASIC test is a new feature in GPU-Z. I think it works using the voltage sensors under load. There is a little 3D rendering program included in GPU-Z now that will also detect the exact PCI speed rating. I have a GTX 470 and it is not compatible with the ASIC testing, but I think it works with the rendering test. The little rendering program creates an exact load, and based on the sensor changes a calculation is done to determine the ASIC quality. That is how I think the ASIC testing part works.
 
Before they create a hexadecimal value, I think they have to have some type of electrical instrumentation test the GPU/chip to get an actual raw ASIC score, which is used for binning.
Do you not understand what validation and binning means?

This ASIC test is a new feature in GPU-Z. I think it works using the voltage sensors under load. There is a little 3D rendering program included in GPU-Z now that will also detect the exact PCI speed rating. I have a GTX 470 and it is not compatible with the ASIC testing, but I think it works with the rendering test. The little rendering program creates an exact load, and based on the sensor changes a calculation is done to determine the ASIC quality. That is how I think the ASIC testing part works.
I don't think you understand the feature at all.
 
It is a fact that the ASIC score is not related to the quality of the card. My card reads 77.7% and it overclocks great. Others with scores near 100% hit much worse overclocking "walls", whilst others with high scores had great OC potential, and vice versa with low scores. Look at older threads on this forum about ASIC and see for yourself.
 