I've seen the same issues in 5980x1080 (bezel corrected). Otherwise the game runs very, very smooth. It's surprising, actually. A menu or two is stretched, and subtitles stretch all the way across the screen, but otherwise it's okay. I've certainly seen worse support than this (BL1 comes to...
I'm just speculating here, but I don't see why not. Just SLI the second card but keep all your monitors hooked up to card 1.
I see it as a huge advantage over AMD that NV allows 3-monitor hookups using 2xDVI and HDMI. DP adapters are a huge pain.
I suppose. And the last two digits are different as well; AFAIK there was never any differentiation in the tens place in the old generations, just 9700 and 9800, nothing like 9870 or the like.
Still makes me wonder what they'll do in a few years once they have to roll over again.
And you could softmod the old ones with the right-angle memory layout! Man, good times.
Makes me wonder if they are really going to call the next series 7000 - could lead to confusion in the future once they release a 7500 series part, not to mention when we hit the 9000 series again...
There is some info in another thread about it, who knows if it is at all reliable. I think we can count on a simple die shrink and expansion. I would love a 7870 with the power of the 6970 and half the thermal envelope... or of course a 7970 with twice the power of the 6970.
Looking forward to the continuation of this review with the AUSUM (what an awful name) switch flipped and some overclocking.
I agree with the above posters that the 6990 doesn't really make sense compared to 2x 6950 unlocked or 2x 6970. But I would love to see 2x 6990 running 6x Eyefinity or 3x...
I ask because the tearing I get in some games, especially Oblivion, is completely out of control and distracting. I don't need to pull 120 frames and waste electricity on frames my monitor can't display, and the tearing kills me.
Has no one figured out a way to hack this in?
I have been trying for a long time to get vsync going on my 6970 Eyefinity setup to no avail. What happened with this? I've heard mumbling of AMD dropping the ball on this but is there any way to force it on with Eyefinity?
CFX in the 5000 generation was just totally FUBAR. My 5970 was faster with CFX off, the microstutter was too much. I've heard the 6000 series fixed this but I'll hold onto my money for a while.
That release is nonsense, the 6970 only has 2 RAMDACs, unless AMD updated it under the table and didn't tell anyone (unlikely) that isn't going to work. Could anyone test 3 monitors using both DVIs and HDMI to confirm?
The whole thing was dumbed down and consumerist. GTA4? The iPad?
How about programmable GPUs, ARM's rise, multi-core processors, heterogeneous multicore (Fusion/Clarkdale/Sandy Bridge), SSDs, DDR memory, on-die memory controllers, broadband, stereoscopic/multimonitor gaming, fuck, any meat...
Kind of mixing our generations, aren't we? Sega responded with the Saturn and Nintendo responded with the N64. The Xbox and Wii were some 6 years apart, and the PS1 and Xbox were about 7 years apart... who writes this junk?
No card has more than 2 RAMDACs, so you have to use DP for any displays over that. 3x HDMI would be useless because you could never use more than 2 of them. Honestly ATI never should have put in 2x DVI ports because HDMI can be converted so easily to DVI. ATI should have done 1x DP, 2x mDP...
That's very very good news... I hated my 5970 for that reason, it didn't even feel like the 100fps I was getting in TF2 Eyefinity was smooth... I get a better experience in 1280x720 on my 5450. Seems like hyperbole but it's not, having a lower frame rate (~35) but steady is far far better in my...
Do you think the smoothness you're seeing is because of the improved frame rates, or has microstutter been minimized? Could you run something that gets ~30fps and let us know if it feels smooth... like single-card 30fps?
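For anyone who wants to put a number on it rather than eyeball it: here's a rough sketch of how you could compare average FPS against a "felt" FPS from a per-frame times log (FRAPS can export one). All the names here (`average_fps`, `effective_fps`) are mine, and the 4ms/16ms alternation is just a made-up AFR-style pattern, not real data.

```python
# Sketch: estimate microstutter from a list of per-frame render times
# in milliseconds (e.g. from a FRAPS frametimes log). With alternate
# frame rendering, frames tend to come out in fast/slow pairs, and
# perceived smoothness tracks the slower frame of each pair.

def average_fps(frame_times_ms):
    # what a benchmark counter reports
    return 1000.0 * len(frame_times_ms) / sum(frame_times_ms)

def effective_fps(frame_times_ms):
    # keep only the worse frame of each consecutive pair
    worst = [max(a, b) for a, b in zip(frame_times_ms[::2], frame_times_ms[1::2])]
    return 1000.0 / (sum(worst) / len(worst))

# A "100 fps" run that actually alternates 4 ms / 16 ms frames:
stuttery = [4.0, 16.0] * 50
print(round(average_fps(stuttery)))    # the counter says ~100 fps
print(round(effective_fps(stuttery)))  # but it feels closer to ~62 fps
```

If the two numbers are close, frame pacing is even; a big gap between them is basically microstutter in a single figure.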
It seems to only affect some people... a lot of those people are likely multi-monitor users. After all, tri-crossfire is overkill for almost all single-monitor setups.
I have an i5 750 with a Hyper 212+, no problems getting it to 4GHz, but I run it at 3.5GHz now so that turbo boost still works... that way the processor downclocks correctly when not in use and saves power...
Makes me wonder how many people they even have on the driver team... AMD needs to seriously do some hiring, their drivers feel like an afterthought, always.
I've had enough experience with eyefinity + crossfire in the 5 series (former 5970 owner) to know that it's not a great idea. I often got a stuttery, herky-jerky mess, through every driver, beta, custom, and otherwise, that I could find.
I've heard things have improved for the 6000-series...
See http://support.amd.com/us/kbarticles/Pages/AMDCatalyst1012ahotfix.aspx
Great news, anyone with a Cayman card that can do some benchmarks?
5x1 is a great addition too, I'm sure WSGF is going nuts right now.
SLI, as in Scan-Line Interleave from the 3dfx days, used to be the best way to do things before post-process effects and customized shaders... now splitting a scene line by line isn't the most efficient way to do things anymore, a lot of the work would have to be replicated between cards...
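The old 3dfx split can be sketched as a toy model (names and the trivial `render_line` workload are my own, purely illustrative): each card independently renders its alternating set of scanlines and the halves are merged at the end.

```python
# Toy model of 3dfx-style scan-line interleave: two "GPUs" each render
# the scanlines assigned to them, and the frame is merged afterwards.

HEIGHT, WIDTH = 8, 4

def render_line(y):
    # stand-in for rasterizing one scanline; each line is independent,
    # which is exactly why the interleave split used to work so well
    return [y * WIDTH + x for x in range(WIDTH)]

def gpu_render(gpu_id, num_gpus):
    # GPU n handles lines n, n + num_gpus, n + 2*num_gpus, ...
    return {y: render_line(y) for y in range(gpu_id, HEIGHT, num_gpus)}

def merge(parts):
    frame = {}
    for part in parts:
        frame.update(part)
    return [frame[y] for y in range(HEIGHT)]

frame = merge([gpu_render(0, 2), gpu_render(1, 2)])
# A full-frame post-process pass (blur, bloom, etc.) needs neighbouring
# lines too, so neither card alone has the data it needs -- which is the
# replicated work that makes line-by-line splitting inefficient today.
```

The moment a shader needs the whole framebuffer, each card would have to redo (or fetch) the other card's lines, which is the duplication being complained about above.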
I would love to see some serious eyefinity + CF reviews this round. I really hope they fixed the stuttering issues. FRAPS used to report anywhere from 50-80 FPS in eyefinity in BF:BC2 with my 5970, but it never felt like it was more than 30. It actually felt smoother with Catalyst AI disabled...
I personally ordered a Sapphire because it, unlike the other brands, comes with a mini-DP to DP adapter. And all the boards this round are reference boards, so there shouldn't be any difference between them.
2-year warranty is enough for me. I'll be upgrading by then.
This should probably be a different thread, we're getting off-topic here. But yes. AMD/ATI introduced very significant idle power improvements in the 5000-series.
Not entirely. Power usage these days is much improved from back in the day, but a tiny chip like the 5450 uses less power, period. There is much less logic in the chip and far fewer stream processors, which means far less leakage. Not to mention, most low-end cards run so cool...
As stated above, PhysX is a proprietary physics API that some games use to accelerate physics calculations. Cloth floating in the wind, things exploding, and so on. It can run on the CPU also but it's intentionally crippled by NVidia so people will buy their cards. Honestly shouldn't be a big...
Well, yeah, sort of, if you buy a top-end enthusiast gaming card to do your 2d browsing / HD video duties... a $30 5450 will do the same thing and uses far less power.
Xfire does almost nothing to min FPS in my experience. A 6970 is overkill at 1080p. Crossfired 6970s or a 6990 is throwing your money out the window.
The 6970 is a very, very fast card. Unless you're running Eyefinity or 2560x1600, there's almost nothing it can't do well.
You mean 5760x1200, right?
Anyway, I would imagine 6950CF to be your best bet. Better scaling and overall far better performance than 5870CF, tons of memory, cheaper than 570 SLI.
I'm sure somebody did, which is why they did this.
Video cards have always been absurd IMO, because the average user who buys top-end video cards is pretty savvy. Generally, savvy enough to see through the very intense marketing bullshit both manufacturers make. I mean really, who are...