WBurchnall
I think ATI did a great job with the 5000 series, but anyone who thinks that the fat Fermi chip with 40% more transistors isn't going to kick the 5870's butt (despite its ECC, etc.) is kidding themselves. The only question is how cost-competitive it's going to be.
I would beg to disagree a bit.
A. Different manufacturing maturity. The 5870 is ATI's third 40nm design by now, while Fermi is NVIDIA's first chip on the 40nm process and it's taking forever to get made. I'm not sure what reliability issues might exist there. Good warranties are great, but not if you have to keep sending your card away for 2-4 week repairs...
B. Did you look at the slides? Did you? This slide here is of some importance and I'd recommend you look at it again. ATI is estimating only 1.5 single-precision teraflops for Fermi, based on a 1.5 GHz shader clock, which is nearly half of the 5870's 2.7 teraflops. Let alone the still-in-the-making 5870 X2. Even if the shader actually runs at 2.5 GHz, the teraflops would only be roughly equal at best:
http://www.hardocp.com/image.html?image=MTI1NTQ3MDM3NXlvcUhUU1k4TzlfMV8xNl9sLmpwZw==
The 5870 has more than 300% of the shaders Fermi has. The 5870 has nearly twice the teraflops of what Fermi will theoretically have at a 1.5 GHz shader clock. The 5870 supports 1/3rd more maximum threads at a time. Not to mention, Fermi's white paper of doom still hasn't stated the memory bandwidth, the number of texture units, or the ROPs.
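The teraflops figures above are just shader count times clock times two FLOPs per clock (one multiply-add per ALU). A quick sketch of that arithmetic, assuming the 5870's 1600 shaders at 850 MHz and the rumored 512 Fermi cores at a 1.5 GHz shader clock:

```python
# Back-of-envelope single-precision throughput.
# Assumes 2 FLOPs per ALU per clock (one fused multiply-add).
def sp_teraflops(alus, clock_ghz, flops_per_clock=2):
    return alus * clock_ghz * flops_per_clock / 1000.0

hd5870 = sp_teraflops(1600, 0.85)  # 1600 shaders at 850 MHz
fermi = sp_teraflops(512, 1.5)     # 512 cores at a rumored 1.5 GHz

print(f"HD 5870: {hd5870:.2f} TFLOPS")  # 2.72
print(f"Fermi:   {fermi:.2f} TFLOPS")   # 1.54
```

Even at a 2.5 GHz shader clock, Fermi only works out to about 2.56 TFLOPS, which is where the "equal at best" figure comes from.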
C. One is a substantially larger die generating substantially more heat, which may need to be clocked slower to compensate. A slower clock with more transistors does not necessarily equate to better performance than a faster clock with fewer transistors.
D. How many of those +40% transistors are GPGPU-targeted, such as the ton of CUDA cores and the hardware behind that fairly impressive double-precision performance, mostly used by scientific applications and not gaming?
E. The wattage with 40% more transistors could be substantially higher. It wouldn't be unreasonable to argue up to 40% higher.
So suffice it to say, the only question is not how cost-competitive it will be, but rather how competitive it will be in gaming performance in the first place.
I'm personally a bit worried about nVidia, since I haven't seen any gaming performance stats despite their supposedly working Fermi chips/boards. Supposedly, the first attempt yielded at least 7 working chips. So why hasn't a single game benchmark been shown?
Surely there have to be tons of engineers at nVidia who are more than qualified to run the Crysis benchmarking tool or run a timedemo through HL2. I half expect them to throw two Fermi chips on one board so their estimated 1.5 teraflops goes up to 3.0 teraflops and they can say "See, we beat the 5870! And we have GPGPU!!"*
*In a dimly lit room with the lights flashing on and off due to a localized power outage*