ASUS Motherboards Qualified And True PCIe 3.0 Ready

So they had to limit it to 1x in order to get a measurable improvement out of switching to 3.0?

Sounds about right.

Today, I feel even the highest-end video cards wouldn't see much benefit from 3.0, even if they supported it.

Down the line this will likely change.
 
What the truck is a "Ah-Zeus"? It's "Eh-Suss" dagnabit! :mad:

And one thing that struck me as odd: they said "tried and true qualified PCIe gen3 performance..." instead of specification. So is it true PCIe 3.0 or not?
 
Is there even a card out there that comes close to maxing out an x8 PCIe 2.0 slot?
 
Quote:
Originally Posted by DesertCat
But... this one goes to 11!

Now that is funny! I wonder how many others will get it?

[image: eleven.jpg]

[image: nigel-tufnel-it-goes-to-11-11-11-holiday-obama-shepardfairey-spoof.jpg]
 
Quote at the end:
So there's a real world performance difference between PCI Express Gen 2 and Gen 3 bandwidths.

Here, let me fix that for you:
So there's a real world performance difference between PCI Express Gen 2 x1 and Gen 3 x1 bandwidths when testing something that was known to be faster than Gen2 x1, and isn't meant to be run in a x1 slot.

When we artificially hobble the interface, SURE we're going to see a difference. Just like we'd see a difference if we put it in an electrical x1 slot instead of a x8 slot. What we've really shown is that the card with a x8 connector really only needs a Gen 2 x2 connector... Even on a PCI Express Gen 1 system, it would only need a x4 connector.

Show us a card that maxes out PCI Express Gen 2 x16 and benefits from moving to Gen 3, and then we'll talk.
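
To put rough numbers on the hobbling argument, here's a quick back-of-the-envelope Python sketch (mine, not from the article; it assumes the nominal 8b/10b and 128b/130b line encodings and ignores packet overhead) of why a card that's fine on a Gen 3 x1 link also fits a Gen 2 x2 or Gen 1 x4 link:

Code:
# Nominal PCIe transfer rates (GT/s) and line encodings per generation.
GT_PER_S = {"gen1": 2.5, "gen2": 5.0, "gen3": 8.0}
ENCODING = {"gen1": 8 / 10, "gen2": 8 / 10, "gen3": 128 / 130}

def lane_mb_s(gen):
    """Approximate usable MB/s per lane: GT/s * encoding efficiency / 8 bits."""
    return GT_PER_S[gen] * ENCODING[gen] * 1000 / 8

def link_mb_s(gen, lanes):
    """Approximate usable MB/s of a whole link."""
    return lane_mb_s(gen) * lanes

# If a card isn't bottlenecked by Gen 3 x1 (~985 MB/s), then any link
# offering at least that much bandwidth is "enough" for it.
need = link_mb_s("gen3", 1)
for gen, lanes in [("gen2", 1), ("gen2", 2), ("gen1", 4), ("gen2", 8)]:
    bw = link_mb_s(gen, lanes)
    print(f"{gen} x{lanes}: {bw:6.0f} MB/s -> {'enough' if bw >= need else 'too slow'}")

Which comes out to ~985 MB/s for Gen 3 x1, sitting between Gen 2 x1 (500 MB/s) and Gen 2 x2 / Gen 1 x4 (1000 MB/s).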
 
This article says " PCI-Express 3.0 compliant, complete with Gen3 switches and electrical components"

So is it true PCIe 3.0 or not?

Dictionary.com says "compliant" = "Meeting or in accordance with rules or standards"
 
Trust 'em. If ASUS comes out saying this shit is 3.0 ready, and says at the end of the video that they released a new Ivy Bridge compatible UEFI... I'm in, I'm fucken in.
 
Can it handle Crysis....?

OK, OK,... I resurrected a long dead, stinking, beat up, anally violated miniature pony....

My bad....
 
I think everyone is missing the point of having a faster specification and working hardware out today. First, you need working motherboards before you can start attaching cards to them. Second, if your per-lane speed doubles and you have 40 lanes on a board, you can suddenly act as if you had 80. Remember that motherboards are limited space-wise and can only have so many physical slots, so a device that needed 8 PCIe lanes can now fit into an x4 slot, provided it works with the new standard and signaling. What this means is, once again, faster devices in a smaller space. Congrats, ASUS, on prepping the world.
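
A rough illustration of that lane-budget point (my own Python sketch, not from the post; the ~500 MB/s and ~985 MB/s per-lane figures are the nominal Gen 2 and Gen 3 numbers after line encoding, and the 3.5 GB/s device is hypothetical):

Code:
# Approximate usable bandwidth per lane after line encoding (MB/s).
GEN2_LANE = 500   # 5 GT/s with 8b/10b
GEN3_LANE = 985   # 8 GT/s with 128b/130b

def min_link_width(required_mb_s, per_lane_mb_s):
    """Smallest standard link width that meets a device's bandwidth need."""
    for width in (1, 2, 4, 8, 16):
        if width * per_lane_mb_s >= required_mb_s:
            return width
    return None  # doesn't fit even in x16

# Hypothetical device that sustains ~3.5 GB/s (say, a fast RAID HBA):
need = 3500
print("Gen 2 slot width needed: x%d" % min_link_width(need, GEN2_LANE))  # -> x8
print("Gen 3 slot width needed: x%d" % min_link_width(need, GEN3_LANE))  # -> x4

# Same idea at the platform level: 40 Gen 3 lanes carry roughly what
# 80 Gen 2 lanes would, so the same board can host more fast devices.
print("40 Gen 2 lanes: %d MB/s total" % (40 * GEN2_LANE))
print("40 Gen 3 lanes: %d MB/s total" % (40 * GEN3_LANE))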
 
Zarathustra[H];1037830291 said:
So they had to limit it to 1x in order to get a measurable improvement out of switching to 3.0?

Yeah, that was worth a laugh, leaving 94% of the standard 2.0 bus unused to prove that 3.0 is faster.

But this is normally the case for new buses: they initially offer no speed benefit because the additional capacity is all excess. It's just a matter of time before higher speed devices come into existence, starting with video cards.
 
Anyone else think it was poor testing procedure to leave that card hanging off the daughterboard like that? It was bending down a LOT.

I mean, it worked, but...still.
 
PCIe devices do most of their reads and writes directly to system RAM.
 
Zarathustra[H];1037830469 said:
Not that I know.

That being said, its probably better that new bus speeds come out BEFORE we need them :)

Indeed it is. We hadn't maxed out AGP before PCIe was introduced.
 
Seems odd the video wasn't hosted by * Lee or * Wong, as that's who I always have to deal with for ASUS support.
 
Quote:
When we artificially hobble the interface, SURE we're going to see a difference. Just like we'd see a difference if we put it in an electrical x1 slot instead of a x8 slot. What we've really shown is that the card with a x8 connector really only needs a Gen 2 x2 connector... Even on a PCI Express Gen 1 system, it would only need a x4 connector.

Show us a card that maxes out PCI Express Gen 2 x16 and benefits from moving to Gen 3, and then we'll talk.

You do realize it will take GPU designers, and probably RAID card designers, some time to actually create a design that will use Gen 3, right?

It's not like you can make new chips overnight. But sure, why not? For web surfing and posting on [H], you don't NEED anything near your sig rig. Not even the CrossFire HD4870 X2 GPU. But you do have the capacity for more than just web surfing...

Likewise, don't think the only use of PCIe is just gaming. There WILL be applications (and no, not the executable kind) for more PCIe bandwidth, and it's only forward-looking as it is. The reality? When the AGP --> PCIe switch was happening, no GPU needed PCIe. But what do you know now?

http://www.tomshardware.com/reviews/pcie-geforce-gtx-480-x16-x8-x4,2696-3.html

I'll also note: 3DMark Vantage is almost unaffected by GPU PCIe bandwidth. Case in point? The Vaio Z2's external HD 6650M PMD. Perfect Vantage and 3DMark 11 scores. Horrid actual game performance (though there is speculation the 11.2-era drivers may be to blame).
 