NV fudge

I'm really beginning to hate NVIDIA. I had such high hopes when I switched from ATI. First all of the BS with the 7-series cards, and now this (if it's true). I sooooooo want a refund, but EVGA won't allow it.

OK. So how does one prove this article wrong or right?
 
Outrageous, I'm gonna throw away all my NVIDIA hardware right now!
 
It has actually been talked about many times; it's just that this new benchmark is shedding more light on the subject. Almost all the reviews of NV cards talk about it as having 16-bit only (I actually believed it was 24-bit). It's probably the reason for the shimmering effect; maybe, don't know, don't care. I buy cards based on picture quality; to me that's the big deal. Many go for speed. To me a graphics card should be judged on picture quality first, then everything else.
 
$BangforThe$ said:
It has actually been talked about many times; it's just that this new benchmark is shedding more light on the subject. Almost all the reviews of NV cards talk about it as having 16-bit only (I actually believed it was 24-bit). It's probably the reason for the shimmering effect; maybe, don't know, don't care. I buy cards based on picture quality; to me that's the big deal. Many go for speed. To me a graphics card should be judged on picture quality first, then everything else.

Really? Wow. I've got a video card here that has EXCELLENT picture quality I'll sell you. But it can only do 10-15 FPS max in most games. :D
 
I've been following the development of Rydermark 2006. It's a highly anticipated next-gen benchmarking program.

If this proves to be true, it would explain many things. But it being the Inquirer... let's wait and see before starting the lynching, fair deal?
 
vanilla_guerilla said:
Amazing that no one has mentioned this before.
LOL. The truth has to be distorted to match Fuad the moron's postings.

And why does $bang have to spam every forum with theinq links?
 
pxc said:
LOL. The truth has to be distorted to match Fuad the moron's postings.

And why does $bang have to spam every forum with theinq links?


Well, my dad worked at Hormel, and they make the best Spam. If it's true, it is interesting news.
 
So no one noticed this... until now?
(And still doesn't?)
Not even Futuremark during their driver validations?
And how did NVIDIA get WHQL drivers from Microsoft then?

Terra - This one is a dud...
 
Terra said:
So no one noticed this... until now?
(And still doesn't?)
Not even Futuremark during their driver validations?
And how did NVIDIA get WHQL drivers from Microsoft then?

Terra - This one is a dud...

Well, NVIDIA is in bed with all those listed, so they just let it slide.

j/k
 
Wow... so this would mean that there is false advertising and such on all these 7-series video cards?
 
I'm not right up on the tech here, but:

If NVIDIA isn't compliant, where's the WHQL coming from (and don't tell me ATI wouldn't be shouting about this)?

If you need the 24/32-bit thingies to do normal maps, and the NV stuff only supports 16-bit thingies, how have NVIDIA cards been doing normal maps for the last few years?

I call shens.
 
MartinX said:
If you need the 24/32-bit thingies to do normal maps, and the NV stuff only supports 16-bit thingies, how have NVIDIA cards been doing normal maps for the last few years?
I have a better question: why do you need to ask questions about Fuad's article? I'll tell you why... Fuad is an ignorant moron. He has no clue what he writes about. There may or may not be a problem, but Fuad mixes up even simple things like internal precision vs texture formats.

(Edit) The fine folks at B3D are having a laugh at how stupid Fuad is, bringing up the same points as above: http://www.beyond3d.com/forum/showthread.php?t=31786

FWIW, supplying higher than the minimum requested precision (24-bit instead of a requested 16-bit, for example) is allowed by the DX spec. Remember the mixed-mode path in HL2? It gained speed by running some shaders at 16-bit precision (instead of the 24-bit precision NV hardware doesn't support); otherwise, by default, they would run at 32-bit precision with a huge performance hit.
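To make that concrete, here's a rough sketch (mine, not anything from the article or the thread; the shader source and helper name are made up for illustration, while the D3DX call and the ps_2_0 target are real) of what asking for partial precision looks like in a DX9 shader:

```
// Sketch only: a DX9 pixel shader that marks some math as FP16-acceptable via
// 'half'. The spec sets a floor, not a ceiling, so a driver that runs these at
// 24-bit or 32-bit precision is still compliant.
#include <d3dx9.h>

static const char kShader[] =
    "sampler baseMap : register(s0);\n"
    "float4 main(float2 uv : TEXCOORD0) : COLOR\n"
    "{\n"
    "    half4 color = tex2D(baseMap, uv);   // FP16 is fine for this fetch\n"
    "    half3 boosted = color.rgb * 1.5;    // and for this simple math\n"
    "    return float4(boosted, color.a);\n"
    "}\n";

bool CompilePs20(LPD3DXBUFFER* outCode)
{
    // Roughly what HL2's mixed-mode path boiled down to per shader: compile for
    // ps_2_0 and let 'half' mark the spots where 16-bit precision is acceptable.
    LPD3DXBUFFER errors = NULL;
    HRESULT hr = D3DXCompileShader(kShader, sizeof(kShader) - 1,
                                   NULL, NULL, "main", "ps_2_0",
                                   0, outCode, &errors, NULL);
    if (errors) errors->Release();
    return SUCCEEDED(hr);
}
```

Running the `half` parts at higher precision is legal; a driver silently dropping plain `float` work down to 16-bit behind the app's back would be the opposite situation, and that's the only version of this story that would actually be a scandal.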
 
pxc said:
I have a better question: why do you need to ask questions about Fuad's article? I'll tell you why... Fuad is an ignorant moron. He has no clue what he writes about. There may or may not be a problem, but Fuad mixes up even simple things like internal precision vs texture formats.

(Edit) The fine folks at B3D are having a laugh at how stupid Fuad is, bringing up the same points as above: http://www.beyond3d.com/forum/showthread.php?t=31786

FWIW, supplying higher than the minimum requested precision (24-bit instead of a requested 16-bit, for example) is allowed by the DX spec. Remember the mixed-mode path in HL2? It gained speed by running some shaders at 16-bit precision (instead of the 24-bit precision NV hardware doesn't support); otherwise, by default, they would run at 32-bit precision with a huge performance hit.

That's three articles on the Inq, in the last three days, by three different authors about three different topics, where they've just flat-out made shit up to be controversial.

That pretty much sucks.
 
It isn't just that Fuad nub who is stupid at the Inquirer; I have come to the uber-leet conclusion that most of the writers at the Inquirer haven't got a fuxing clue as to what they are writing.

Maybe that's why it is just one huge laughable website, 'cause all their writers are retarded and have weird names that sound as though they live in the desert and haven't even got electricity, never mind a fuxing computer.

The Inquirer is the biggest bullshitting "tech" website there is, closely followed by Uncle Tom. :(
 
EVIL-SCOTSMAN said:
It isn't just that Fuad nub who is stupid at the Inquirer; I have come to the uber-leet conclusion that most of the writers at the Inquirer haven't got a fuxing clue as to what they are writing.

Maybe that's why it is just one huge laughable website, 'cause all their writers are retarded and have weird names that sound as though they live in the desert and haven't even got electricity, never mind a fuxing computer.

The Inquirer is the biggest bullshitting "tech" website there is, closely followed by Uncle Tom. :(

Well, his name is FUaD. Take out the 'a' and you have FUD... hmmm.
 
ryan_975 said:
Well, his name is FUaD. Take out the 'a' and you have FUD... hmmm.

Dude, great minds think alike. I was gonna call him Fud, but I don't know if it has the same meaning in the US as it does in the UK; in the UK it's the part of the female body that we spend the first 9 months of our lives trying to get out of, and the rest of our lives trying to get back into.

I dunno if it means the same in the States, but whatever, he is still a Fud. :D
 
EVIL-SCOTSMAN said:
Dude, great minds think alike. I was gonna call him Fud, but I don't know if it has the same meaning in the US as it does in the UK; in the UK it's the part of the female body that we spend the first 9 months of our lives trying to get out of, and the rest of our lives trying to get back into.

I dunno if it means the same in the States, but whatever, he is still a Fud. :D

Hmm. I always thought FUD meant Fear, Uncertainty, and Doubt.
 
pxc said:
I have a better question: why do you need to ask questions about Fuad's article? I'll tell you why... Fuad is an ignorant moron. He has no clue what he writes about. There may or may not be a problem, but Fuad mixes up even simple things like internal precision vs texture formats.

(Edit) The fine folks at B3D are having a laugh at how stupid Fuad is, bringing up the same points as above: http://www.beyond3d.com/forum/showthread.php?t=31786

FWIW, supplying higher than the minimum requested precision (24-bit instead of a requested 16-bit, for example) is allowed by the DX spec. Remember the mixed-mode path in HL2? It gained speed by running some shaders at 16-bit precision (instead of the 24-bit precision NV hardware doesn't support); otherwise, by default, they would run at 32-bit precision with a huge performance hit.
Depends how you look at it. To be DX9 compliant, hardware needs to be able to run at 24-bit precision or higher (High Quality Cinematic Computing); whether or not YOU CAN run at 16-bit is not what Fuad is saying. He's just trying to play by Microsoft's rules.

Now, before anyone goes off on a tangent... I've had my own fair share of experiences with Fuad.
I emailed him regarding his X1900 XTX vs. 7900 GTX article, in which the 7900 GTX came out faster in almost all the tests. I tried to tell him that using High Quality AF on ATI cards is not the same as on NVIDIA cards, and that if he did use High Quality AF he would have to take that into consideration in his conclusion (like HardOCP and other places do).

He didn't want to listen. So yeah, he's not the brightest individual... but then again, he's a reporter, not a techie.
 
ElMoIsEviL said:
Depends how you look at it. To be DX9 compliant, hardware needs to be able to run at 24-bit precision or higher (High Quality Cinematic Computing); whether or not YOU CAN run at 16-bit is not what Fuad is saying. He's just trying to play by Microsoft's rules.
No, it doesn't "depend" how you look at it. Standard shader precision is 24-bit. Texture formats are a different thing; various depths and formats are allowed depending on the operation and framebuffer format. Fuad doesn't have a clue what he's writing about.

It takes more work than it's worth to twist what he wrote into something that resembles reality. But as an ATI partisan, you look to seize on anything you can bash NVIDIA with, even if it makes no sense. I read your post on the B3D thread. :rolleyes:
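To spell that distinction out with a minimal sketch (mine, purely illustrative; the sizes and variable names are made up, while the D3D9 calls and format names are real): texture and surface formats are picked per resource when the app creates them, and they say nothing about what precision the shader ALUs use internally.

```
// Sketch only: one plain 8-bit-per-channel texture and one FP16-per-channel
// render target. Neither choice dictates whether the pixel shader does its
// arithmetic at 16, 24 or 32 bits; that is a separate, internal property.
#include <d3d9.h>

void CreateSampleTextures(IDirect3DDevice9* device)
{
    IDirect3DTexture9* ldrTexture = NULL;   // ordinary 8-bit-per-channel texture
    device->CreateTexture(512, 512, 1, 0, D3DFMT_A8R8G8B8,
                          D3DPOOL_MANAGED, &ldrTexture, NULL);

    IDirect3DTexture9* fp16Target = NULL;   // FP16-per-channel render target
    device->CreateTexture(512, 512, 1, D3DUSAGE_RENDERTARGET,
                          D3DFMT_A16B16G16R16F, D3DPOOL_DEFAULT,
                          &fp16Target, NULL);

    // Conflating these per-resource formats with shader precision is exactly
    // the mix-up being complained about above.
    if (ldrTexture) ldrTexture->Release();
    if (fp16Target) fp16Target->Release();
}
```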
 
Banko said:
I think it is pretty wrong, considering that when the 6-series came out, everything would run in FP32 and only in FP16 if you specified it.


QFT! Even still, unless you specify FP16 it runs at 32. But then again, this is the Inq this is coming from, so... take it with a grain of salt.


-Proxy
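For what it's worth, that's something anyone can sanity-check against the caps their WHQL driver reports. A sketch of mine (not Banko's or Proxy's; the helper name and messages are arbitrary, while the caps fields and macros are real):

```
// Sketch only: query which shader model the installed driver exposes. A device
// reporting ps_3_0 is required by the SM3.0 spec to do full-precision math at
// FP32; FP16 only applies where a shader explicitly allows partial precision.
#include <d3d9.h>
#include <stdio.h>

void ReportShaderModel(IDirect3DDevice9* device)
{
    D3DCAPS9 caps;
    if (FAILED(device->GetDeviceCaps(&caps)))
        return;

    if (caps.PixelShaderVersion >= D3DPS_VERSION(3, 0))
        printf("ps_3_0 exposed: full precision is FP32 by spec\n");
    else if (caps.PixelShaderVersion >= D3DPS_VERSION(2, 0))
        printf("ps_2_0 exposed: full precision is at least FP24 by spec\n");
    else
        printf("pre-DX9 pixel shaders only\n");
}
```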
 
pxc said:
I have a better question: why do you need to ask questions about Fuad's article? I'll tell you why... Fuad is an ignorant moron. He has no clue what he writes about. There may or may not be a problem, but Fuad mixes up even simple things like internal precision vs texture formats.

(Edit) The fine folks at B3D are having a laugh at how stupid Fuad is, bringing up the same points as above: http://www.beyond3d.com/forum/showthread.php?t=31786

B3D is full of raging Nvidiabots these days, like "Vysez", so that means nothing.

INQ should clarify what the heck exactly they're talking about, if anything, though.
 
Sharky974 said:
B3D is full of raging Nvidiabots these days, like "Vysez", so that means nothing.
I see fanboys of all types at B3D. But it's a pretty consistent opinion that Fuad doesn't know what he's talking about. :p
 
DX9 allows partial precision of 16-bit when there isn't a need for the 24-bit "full" precision or Shader Model 3.0's 32-bit precision. We will have to see if there is any image quality degradation before we start accusing NVIDIA of anything.

On another note: hmm, why is bang banned?
 
coldpower27 said:
DX9 allows partial precision of 16-bit when there isn't a need for the 24-bit "full" precision or Shader Model 3.0's 32-bit precision. We will have to see if there is any image quality degradation before we start accusing NVIDIA of anything.

On another note: hmm, why is bang banned?
He spammed the forum with Inquirer links?
 
pxc said:
No, it doesn't "depend" how you look at it. Standard shader precision is 24-bit. Texture formats are a different thing; various depths and formats are allowed depending on the operation and framebuffer format. Fuad doesn't have a clue what he's writing about.

It takes more work than it's worth to twist what he wrote into something that resembles reality. But as an ATI partisan, you look to seize on anything you can bash NVIDIA with, even if it makes no sense. I read your post on the B3D thread. :rolleyes:
I thought we were talking about shader precision. I'm no ATI partisan (I don't see where I mentioned Microsoft's DX9 specs requiring 32-bit texture precision). Currently I prefer ATI's X1K over NVIDIA's 7x00 series, but then again I always prefer what is more advanced (technology-wise). So if we're not talking about shader precision (which I believed is what Fuad was saying), then why the hell is this even news?

Also, it's worth noting that only DX10 cards will support full FP32 (HDR). Current 3D cards run their high-quality HDR with FP16 precision.

So if we're not talking about shader precision and not talking about HDR precision, then I would assume you're talking about texture formats. Texture formats are technically recommended to be 32-bit (24-bit on older ATI hardware). Using 16-bit texture formats was a way for people to gain extra FPS back in the day (especially in OpenGL). Why should we still be using 16-bit texture formats when we're trying to achieve true cinematic rendering?
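As an aside, here's roughly how an application deals with those FP16 HDR surfaces (a sketch of mine, not from anyone in the thread; the helper name and the adapter/display-format choices are just examples, while the API call and format constants are real). Note that it's a resource-format question, not a shader-precision one:

```
// Sketch only: ask whether the 64-bit FP16-per-channel surface format, typically
// used for "HDR at FP16" rendering, is usable as a render target. Filtering and
// blending support for it varies by GPU generation, so apps query it rather than
// assume it.
#include <d3d9.h>

bool SupportsFp16RenderTarget(IDirect3D9* d3d)
{
    HRESULT hr = d3d->CheckDeviceFormat(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL,
                                        D3DFMT_X8R8G8B8,          // display format
                                        D3DUSAGE_RENDERTARGET,
                                        D3DRTYPE_TEXTURE,
                                        D3DFMT_A16B16G16R16F);    // FP16 per channel
    return SUCCEEDED(hr);
}
```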

I assume this is what you're talking about. I'm pretty sure Fuad meant lowering shader precision (like what the GeForce FX compiler would do).

But then again, pxc... you've always been one of those big NVIDIA supporters. :) I mean, with the poster in your room and all.
 
Sable said:
But then take a look at this thread on AnandTech where Gstanfor shows the picture has been Photoshopped. You can even see it in the original images around the edges of the water.

*ROFL*
That's one way of giving your benchmark a piss-poor rep before it's even out. :D

Terra...
 
Several points were brought up on Ace's and B3D. Who is this "Rydermark" maker, just to begin with?

This looks like a huge hoax being pulled on theinq. If you are going to pull such an inconsistent hoax, FUaD is your guy because he's so clueless. Or worse, this could be something theinq is in on to pull in clicks. Theinq is a joke anyways, so it doesn't matter which of the two it is.
 