JayteeBates
This is why I read the [H]. Thanks for the great follow-up and real game testing!
Also, reiterating what Kyle said:
From ASUS's page, I have one question.
Isn't the NF200 chip optimized for NVIDIA? Maybe it's not stated like that, but I wouldn't be surprised if the chip had something like "AMD detected, cripple performance!" Haha, just kidding, but the results make me want to question everything, just like how some games are optimized for NVIDIA or AMD. I'd be interested in seeing the test rerun on the AMD side on a board without the Lucid chip.
Wasn't the Lucid chip originally an NVIDIA-only thing? I don't believe it was present on the early Socket 775 Intel boards where one could run CrossFireX.
So what I take away from this is that NVIDIA's drivers use the CPU more than ATI's? That is really interesting. I'm wondering if they are doing some shader work (or post-processing on frames) in the driver/CPU instead of on the GPU. Not saying that's necessarily wrong, just interesting to wonder why they would need a better CPU and ATI would not.
Surround is software driven... Eyefinity is hardware driven... so yes.
Not to be a drag, but that NF200 "decelerator" is actually hurting the AMD setup and giving it less bandwidth.
It works like this:
The NF200 takes 16 lanes of the PCIe bus (from the chipset) and splits them into 32 lanes down to the graphics cards. So no matter how wide each card's slot is, the cards together are still communicating with the chipset at x16 speed.
This hurts the AMD setup the most: where the 6990 would have had 16 lanes available, in reality it now has 8, or 4 lanes per GPU. The 6970 had 16 lanes before; now it has 8.
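To put rough numbers on that (a back-of-envelope sketch only; it assumes PCIe 2.0 at roughly 500 MB/s usable per lane and a single x16 uplink behind the NF200, and the real topology varies by board):

```python
# Back-of-envelope sketch of the NF200 uplink bottleneck described above.
# Assumptions (not measured): PCIe 2.0 at ~500 MB/s usable per lane,
# 16 upstream lanes from the chipset, fanned out by the NF200 into 32
# downstream lanes shared by however many GPUs are installed.

PCIE2_MBPS_PER_LANE = 500  # approx. usable bandwidth per PCIe 2.0 lane

def effective_uplink_lanes_per_gpu(upstream_lanes: int, gpus: int) -> float:
    """Lanes each GPU effectively gets to the chipset when all GPUs
    transfer at once, regardless of its physical x16 slot width."""
    return upstream_lanes / gpus

# A 6990 (two GPUs) plus a 6970 = three GPUs behind one x16 uplink.
for gpus in (1, 2, 3):
    lanes = effective_uplink_lanes_per_gpu(16, gpus)
    print(f"{gpus} GPU(s): ~x{lanes:.1f} each, "
          f"~{lanes * PCIE2_MBPS_PER_LANE:.0f} MB/s to the chipset")
```

Card-to-card traffic that stays below the NF200 shouldn't have to cross the uplink, so the squeeze would mostly be on chipset-to-GPU transfers.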
Good Redux. Pretty much ended up like I said it would. Use a good CPU overclock like a gamer would, don't put the third 580 in a x4 slot, and it walks all over the AMD setup.
That's what really surprised me: for anyone who doesn't have a 4.5GHz+ machine, your 3-way SLI rig is being gimped.
On the plus side, Tri-Fire isn't, and runs fine at slower CPU speeds.
Zarathustra[H];1037202845 said: I've been running my i7-920 at stock speeds (2.67GHz) with my GTX 580 since I got it.
My reasoning for this was that my CPU load never exceeded 35% while playing games, so I figured I was always GPU-limited anyway, especially since I run at 2560x1600...
Now I am going to have to try overclocking and see what happens (this won't happen for a while, though, as I need a new case, mobo, and cooler).
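Worth noting: a low overall CPU figure doesn't rule out a CPU limit. If the game (or the driver) is bottlenecked on one or two threads, an i7-920 with 8 hardware threads can be fully CPU-bound while the overall reading sits well under 50%. A quick sketch of the arithmetic (the per-thread numbers are hypothetical, just to illustrate):

```python
# Illustration: averaged CPU load can hide a single-thread bottleneck.
# Thread count matches an i7-920 (4 cores / 8 threads); the per-thread
# loads below are hypothetical.

hardware_threads = 8
# Suppose the render thread is pegged and two helper threads are half busy:
per_thread_load = [100, 50, 50, 10, 10, 5, 5, 5]  # percent (hypothetical)

average = sum(per_thread_load) / hardware_threads
print(f"Overall CPU load reported: {average:.0f}%")  # ~29% -- looks idle
print(f"Busiest thread: {max(per_thread_load)}%")    # 100% -- CPU-bound
```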
Zarathustra[H];1037203381 said: Vega, I have a question suited to your very particular level of knowledge.
I currently have a single 1.5GB GTX 580 (which doesn't seem to like to overclock AT ALL).
I plan on going SLI. Would I gain anything by selling it and going with two 3GB versions instead, if I only plan on playing at 2560x1600 on one screen? (And occasionally maybe three screens in a PLP setup using SoftTH.)
In other words, is there any game/setting (including heavy AA) where the 1.5GB memory limitation becomes an issue on a single 2560x1600 monitor?
And if I sell, maybe a more cost-effective method would be to get three HD 6950s, flash them to 6970s, and Tri-Fire them...
... maybe someday if I win the lottery.
Good Redux. Pretty much ended up like I said it would. Use a good CPU overclock like a gamer would, don't put the third 580 in a x4 slot, and it walks all over the AMD setup.
Zarathustra[H];1037203381 said: Vega, I have a question suited to your very particular level of knowledge.
I currently have a single 1.5GB GTX 580 (which doesn't seem to like to overclock AT ALL).
I plan on going SLI. Would I gain anything by selling it and going with two 3GB versions instead, if I only plan on playing at 2560x1600 on one screen? (And occasionally maybe three screens in a PLP setup using SoftTH.)
In other words, is there any game/setting (including heavy AA) where the 1.5GB memory limitation becomes an issue on a single 2560x1600 monitor?
And if I sell, maybe a more cost-effective method would be to get three HD 6950s, flash them to 6970s, and Tri-Fire them...
You predicted an anomalous result? Seriously, if one setup LOSES performance significantly while the other gains significantly from a system change that should increase performance for both, I don't see how you can draw any credible conclusions.
I'm using tri-SLI 580s with 1.5GB on a 2560x1600 monitor, and it seems fine and/or overpowered for the games I've tried so far.
I think it will depend on the game; I'm sure the newer the game and the higher the texture resolution, the less AA you'll be able to apply. I was playing Company of Heroes over the weekend with 16xQ antialiasing and fantastic framerates.
I figure by the time I feel like I need more memory for 2560x1600, the Next Big Thing in GPUs will be out, and I'll get that anyway.
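For a rough sense of where a 1.5GB card starts to hurt, the fixed render-target cost at 2560x1600 can be estimated from first principles (a simplified sketch: 4 bytes per pixel for color and for depth/stencil, MSAA multiplying both, textures and geometry not counted):

```python
# Rough render-target footprint at 2560x1600; textures and geometry
# come on top of this. Simplified model: 32-bit color plus 32-bit
# depth/stencil, both multiplied by the MSAA sample count, plus a
# resolved front/back buffer pair.

def render_target_mb(width: int, height: int, msaa_samples: int) -> float:
    pixels = width * height
    bytes_per_px = 4  # 32-bit RGBA; depth/stencil assumed the same size
    msaa_targets = pixels * bytes_per_px * 2 * msaa_samples  # color + depth
    resolved = pixels * bytes_per_px * 2                     # front + back
    return (msaa_targets + resolved) / 2**20

for samples in (1, 4, 8):
    print(f"{samples}x MSAA: ~{render_target_mb(2560, 1600, samples):.0f} MB "
          "of render targets")
```

Even at 8x MSAA that's under ~300 MB of render targets, which suggests it's really texture resolution (and how aggressively an engine streams) that pushes past 1.5GB, consistent with the point above about newer games and higher-res textures.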
Wow, that's really useful information... well, actually, no it's not.
The majority don't care what scales faster with 4.8GHz+ systems, for the simple reason that we don't use them; this article was a huge waste of time to satisfy the fanboys.
Even after the update, the CrossFire solution is still the best option for most users.
Oh, and I use a GTX 580.
Considering you can get a 4.8GHz CPU/mobo for $350 before the heatsink, I'm pretty damn sure there will be plenty of people using them. Most of my friends in the hobby have sold their first-gen Core i7/Phenom stuff to switch to 2500K setups because the cost to swap was so low.
Wow, that's really useful information... well, actually, no it's not.
The majority don't care what scales faster with 4.8GHz+ systems, for the simple reason that we don't use them; this article was a huge waste of time to satisfy the fanboys.
Even after the update, the CrossFire solution is still the best option for most users.
Oh, and I use a GTX 580.
Am I the only one who gets the impression that on HardOCP, when an NVIDIA configuration beats an AMD configuration the entire subject is dropped until the next generation, BUT when an AMD configuration beats an NVIDIA configuration the topic gets revisited over and over again until NVIDIA takes the lead?! Shouldn't all these revisits be 3x 580 versus 3x 6970 anyway (as a 3x 6970 configuration should be marginally faster than a 6990 plus a 6970)?
I couldn't care less which one happens to be on top this week, but it's getting a bit silly :/