Pics of R680

defiant007 (2[H]4U)
Joined: Feb 27, 2006 · Messages: 3,497
http://www.techpowerup.com/?47123

[Attached images: three photos of the R680 prototype board]
 
Two GPUs on one PCB, but what's that thing in the middle of them? Physics? Hrmm.
 
How well would that run with no RAM?

What? You want RAM? That's the TurboCache version! lol :p
No, seriously, if you want RAM you'll have to buy the add-on card that comes with the actual video RAM :rolleyes:
 
I can't believe that they're doing the dual-GPU card again. I love my V5-5500 AGP and, in fact, I continue to use it to this very day in my emulation system. When the time comes for a rebuild, this card would be a pretty nice upgrade, hopefully close to the $299 retail price I got the V5-5500 for. The price would need to be about that low if AMD/ATI want to push the Crossfire angle. Maybe $399 for the 1 GB version, $499 for the 2 GB ultrahypersonicomgwtfbbq model. :)
 
How feasible would a shared memory pool be (a la C2D L2 cache)? You could do away with redundant texture data is what I'm thinking.
 
People need to start reading.

The blurb with the pictures in the link says that the card is not a working card.

The chip in the middle is a "PLX chip" for communication between the GPUs.
 
Hmm..after checking out PLX's website, they seem to make various bridge-interconnect chips etc. http://www.plxtech.com/products/expresslane/switches.asp

As for what it may do:
What is a PCIe Switch?

A PCI Express switch provides serial, high-speed, point-to-point connections between multiple I/O devices and a microprocessor (or root complex), allowing peer-to-peer communication, fan-out, or aggregation of endpoint traffic to the host. PCI software views a PCIe switch as a hierarchy of PCI-to-PCI bridges connected back-to-back. Within each switch, one port is designated as the upstream port, connecting to a processor or root complex, and the rest are downstream ports connecting to various I/O devices. All PCI Express control transactions and interrupts automatically flow to the upstream port.
When would you use this?

Most processors and root complexes offer limited PCI Express ports. A PCIe switch allows the host to connect to multiple PCIe I/Os or endpoints, effectively creating additional PCIe ports.

So I'm thinking that the card will be simulating crossfire in one slot. I don't know.
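The upstream/downstream layout described above can be sketched in a toy model (hypothetical class and port names, purely illustrative; a real switch is enumerated by firmware and the OS, not application code):

```python
# Toy model of a PCIe switch: one upstream port toward the root complex,
# downstream ports fanning out to endpoints (here, the two GPUs).
class PCIeSwitch:
    def __init__(self, upstream, downstreams):
        self.upstream = upstream
        self.downstreams = set(downstreams)

    def route(self, src, dst):
        # Peer-to-peer: traffic between two downstream ports stays inside
        # the switch; anything addressed elsewhere exits the upstream port.
        if src in self.downstreams and dst in self.downstreams:
            return f"{src} -> {dst} (peer-to-peer, stays on card)"
        return f"{src} -> {self.upstream} (up to root complex)"

# R680-style layout: one x16 slot link, two GPUs behind the bridge.
switch = PCIeSwitch("x16 slot", ["GPU0", "GPU1"])
print(switch.route("GPU0", "GPU1"))  # GPU-to-GPU traffic never leaves the card
print(switch.route("GPU0", "host"))  # host-bound traffic goes upstream
```

That peer-to-peer path is what would let the two GPUs talk to each other without going through the motherboard's chipset, which is presumably the point of putting a bridge chip between them.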
 
How feasible would a shared memory pool be (a la C2D L2 cache)? You could do away with redundant texture data is what I'm thinking.

Not really feasible. GPUs don't have large L2 caches, let alone an L2 that acts as a shared bridge between multiple cores.

It has digital VRMs for both cores, however, which makes me a bit skeptical, seeing as their last digital VRM effort (the X1950 Pro) genuinely sucked. That looks like an interesting PCIe bridge, to say the least. And that's definitely a prototype board.
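On the memory-duplication point the question was getting at: without a shared pool, each GPU keeps its own copy of every texture, so capacity doesn't add up. Toy arithmetic with assumed sizes:

```python
# Assumed sizes, just to illustrate: AFR-style multi-GPU mirrors texture
# data into each GPU's private memory pool, so capacity is not additive.
per_gpu_mb = 512
n_gpus = 2
total_physical_mb = per_gpu_mb * n_gpus   # what the box advertises: 1024 MB
effective_texture_mb = per_gpu_mb         # every texture exists in both pools
print(f"{total_physical_mb} MB on the card, "
      f"{effective_texture_mb} MB usable for unique textures")
```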
 
It looks really nice... except for the R600 fan. :(

Hopefully they can get this out before NVIDIA's two-GPU solution.
 
I'm not good with AMD, can someone tell me what number this will be? 4000 series?
 
That's a long card...
I wonder if it's going to have two mini-fans, or something like a heat pipe connecting both cores.
 
This better not simulate Crossfire on one card. I hate Crossfire and SLI: they don't work in all games, you have to wait for a profile, sometimes they give worse performance, and when there is an improvement, it isn't that much!
 
I have a few questions:

- Do you need a Crossfire motherboard as well as special drivers, or just the drivers? I should be alright with my IP35 Pro.

- Will this baby be the speed of 2 x 3870s, or will it be like SLI, where two NVIDIA cards never give a true 100% speed increase from the second card, more like 80%, and some games (WoW, for example) don't even notice the second card?

- Is there a chance this card could be faster than two 3870s?
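A quick back-of-the-envelope on that second question, with made-up framerates just to show what 80% scaling means in practice:

```python
# Hypothetical numbers: effective FPS from N GPUs at a given scaling factor.
def multi_gpu_fps(single_fps, n_gpus, scaling=0.80):
    # 100% scaling would be single_fps * n_gpus; real drivers lose some of
    # the second GPU's work to synchronization and profile quality.
    return single_fps * (1 + (n_gpus - 1) * scaling)

print(multi_gpu_fps(50, 2))        # 80% scaling: 90.0 FPS, not 100
print(multi_gpu_fps(50, 2, 1.0))   # perfect scaling would be 100.0 FPS
print(multi_gpu_fps(50, 2, 0.0))   # no profile: second GPU idle, still 50.0
```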
 
I think they will use lower speed memory than is on a single 3870.
 
The article at B3D talks about the X2 using 0.7 ns GDDR4!! :) Damn fast!

Probably costs a lot of money, and it possibly says 'ES' on it somewhere.

I mean ATi could do it, but it doesn't make economic sense for them to be throwing the best memory they can possibly get on this card. Especially since these cards aren't as bandwidth-limited as, say, the G92.
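For a sense of what 0.7 ns memory means: the cycle time implies roughly a 1.43 GHz memory clock. Assuming a 256-bit bus (the single 3870's width; whether the X2 keeps it per GPU is a guess), the math works out like this:

```python
# Back-of-the-envelope bandwidth for 0.7 ns GDDR4 on an assumed 256-bit bus.
cycle_ns = 0.7
clock_ghz = 1 / cycle_ns              # ~1.43 GHz memory clock
transfers_per_clock = 2               # double data rate
bus_bits = 256
bandwidth_gb_s = clock_ghz * transfers_per_clock * bus_bits / 8
print(f"{clock_ghz:.2f} GHz clock -> {bandwidth_gb_s:.1f} GB/s")
```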
 
I mean ATi could do it, but it doesn't make economic sense for them to be throwing the best memory they can possibly get on this card. Especially since these cards aren't as bandwidth-limited as, say, the G92.

In fact, wasn't that their problem with the 2900XT? They loaded it up with lots of uber-fast RAM, gave it an extremely wide memory bus width (512-bit), and it ended up being expensive and power-hungry, but it still had disappointing performance. Then they pared down the RAM and the bus width, die-shrunk it, focused on optimizing the core and ended up with the inexpensive and much-more-loved 3870, which performs on-par with the 2900XT despite its less-impressive specs, especially in the memory department. While faster and better RAM is always nice, I think both NVIDIA and AMD learned from the 2900XT debacle and have consequently downplayed its importance.
 
In fact, wasn't that their problem with the 2900XT? They loaded it up with lots of uber-fast RAM, gave it an extremely wide memory bus width (512-bit), and it ended up being expensive and power-hungry, but it still had disappointing performance. Then they pared down the RAM and the bus width, die-shrunk it, focused on optimizing the core and ended up with the inexpensive and much-more-loved 3870, which performs on-par with the 2900XT despite its less-impressive specs, especially in the memory department. While faster and better RAM is always nice, I think both NVIDIA and AMD learned from the 2900XT debacle and have consequently downplayed its importance.

The 3870 has much faster RAM than the HD 2900 had. I'm not sure the actual 512-bit memory interface on the HD 2900 XT made much difference, other than performance at high resolutions, above 16x10. The die shrink is what really cut costs and made the chip run cooler.
 
In fact, wasn't that their problem with the 2900XT? They loaded it up with lots of uber-fast RAM, gave it an extremely wide memory bus width (512-bit), and it ended up being expensive and power-hungry, but it still had disappointing performance. Then they pared down the RAM and the bus width, die-shrunk it, focused on optimizing the core and ended up with the inexpensive and much-more-loved 3870, which performs on-par with the 2900XT despite its less-impressive specs, especially in the memory department. While faster and better RAM is always nice, I think both NVIDIA and AMD learned from the 2900XT debacle and have consequently downplayed its importance.

The 1 GB cards used GDDR4 and drew about 30 watts less power under load.
 
I can't wait to see how this thing performs, and the card length doesn't bother me at all. I have a P182, so if worst comes to worst I can just take the middle hard drive cage out and lose my "wind tunnel".
 