ASUS P6T6 WS Revolution Sneak Peek

Yeah, this P6T6 WS Revolution without the NF200 would be perfect it sounds like.
Isn't that just the P6T? Or are you sayin' you want a different solution that actually creates extra PCIe lanes, and have a PCIe-only board?

I'm not so gung-ho to remove the legacy ports. I like doing a bit of hardware hacking, and without traditional serial or parallel ports, it's a little more difficult. Plus, I also love my clackity old IBM Model M keyboards. I'm sure that I sound like some old coot who insists carburetors are the only way and this new-fangled EFI is for the punks, but ...

I'm not too sure I understand the problem with the heat in the NF200. What's the concern? Why was it interesting for Kyle to run the board at 60°C, when he also concedes that nobody who can competently assemble a case would see such temps.
 
The heat is what really killed it for me.
I've been thinking about this. While we all always strive for the coolest possible configuration, I don't want one important fact to be cast aside. This board took the heat like a trooper and did awesome under stress testing. So much so that Kyle wrote "The P6T6 WS Revolution proved to be the pinnacle of [H] Torture Testing stability we have seen yet."

It's going to be real easy for people to write this board off and not accept this fact when reading the review. They are only going to read between the lines as usual. :rolleyes:
 
Isn't that just the P6T? Or are you sayin' you want a different solution that actually creates extra PCIe lanes, and have a PCIe-only board?
No, the P6T has PCI and PATA and floppy.. things I haven't used in years. I could make the P6T work but I'd have to modify the chipset cooling so I could actually use the first PCI-E slot. I love the layout of some of the P45 boards where you have two PCI-E x1 or x4 slots and then your first PCI-E x16 slot with two slots separating the two x16 slots. The trend these days is too focused on triple GPU adapters and I don't think the market is quite there for this yet.

Actually, I wouldn't mind having a serial port or two back. I do some work over serial ports still and not all USB to serial adapters are created equally. I do some RF work and some of these adapters throw out hash like you would not believe.
 
Ya know, all this talk has me looking at the P6T WS Professional again. I need to download the manual and see if it has the same overclocking capabilities as the Revolution.
 
Ya know, all this talk has me looking at the P6T WS Professional again. I need to download the manual and see if it has the same overclocking capabilities as the Revolution.


Been following this thread for a while now.

I appreciate you 'thinking out loud' Tobit on what your plans are and everyone else as well. I'm in the same boat trying to decide which board to go with the new core. I'm torn between the revolution, evga, or the rampage xtreme.

I have to decide here soon...uggg..decisions decisions!
 
Yeah, this P6T6 WS Revolution without the NF200 would be perfect it sounds like.

If I could grab Asus and force them to make the board I want I would do the following:

1) Start with the Asus P6T Deluxe
2) Remove the IDE, Floppy, PS/2, PCI slots, and 1 of the NICs (you'll see why below)
3) Leave the space below the top x16 slot empty and put an x1 slot below that using the lane freed up by removing the 2nd NIC.

This would give us a legacy free board set up with x16, x8, x8, x4, x1 PCI-E slots with none of them being eaten up by the heatsink of the first video card. We would still have the option to config it to x16, x16, x4, x1, x1 to run 2-Way SLI or Crossfire and would only lose one of the x1 slots to a heatsink. Still a very workable situation. No NF200 anywhere to screw things up.
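Back-of-the-envelope, that layout fits the X58's lane budget exactly. Here's a quick sanity check (my own numbers; I'm assuming the X58 IOH exposes 36 PCIe 2.0 lanes and that the x1 slots hang off southbridge lanes freed up by dropping the NIC):

/* Rough lane-budget check for the hypothetical layout above.
   Assumption (mine): the X58 IOH provides 36 PCIe 2.0 lanes; the x1
   slot(s) hang off the southbridge, so they don't count against the IOH. */
#include <stdio.h>

int main(void) {
    const int ioh_lanes = 36;               /* assumed X58 IOH lane budget */
    const int layout_a[] = {16, 8, 8, 4};   /* x16/x8/x8/x4 off the IOH */
    const int layout_b[] = {16, 16, 4};     /* the 2-way SLI/CrossFire config */

    int used_a = 0, used_b = 0;
    for (int i = 0; i < 4; ++i) used_a += layout_a[i];
    for (int i = 0; i < 3; ++i) used_b += layout_b[i];

    printf("x16/x8/x8/x4 : %d of %d IOH lanes\n", used_a, ioh_lanes);
    printf("x16/x16/x4   : %d of %d IOH lanes\n", used_b, ioh_lanes);
    return 0;
}

Either configuration accounts for all 36 IOH lanes, with no NF200 in the picture.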

If I really had my way though, it would have no onboard NICs or sound at all either. While they have gotten better, onboard NICs and sound still can't hold up to a standalone card.
 
So the NF200 chip so far is very good at heating up the NB chip via the heatpipe, and 3-way SLI doesn't work worth a darn yet. No figures on the latency, but my confidence in this board is somewhere near rock-bottom. There's no way I'll buy a mainboard which has an utterly useless (but hot!) component on it which could possibly affect performance of mission-critical apps in a very negative manner.

This is not a workstation board, as far as I can tell, nor one I'd put into a server.
 
ASUS sent Kyle a BIOS update last night which, supposedly, corrects the 3xSLI issue. Kyle is testing this sometime today. The heat is in no way causing any instability issues. In fact, it is one of the only X58 boards so far that has stood up to Kyle's torture testing above 60C. In a mission critical environment where I do not care about anything but stability, this is the board I would want (until something else torture tests better).

Kyle Bennett said:
The “Workstation” moniker conjures up images of a product that is going to be bulletproof in the stability department, and that is exactly what we got.
 
Don't forget that in a mission-critical environment cooling is an essential issue, not just for an individual system, but also for the equipment around it, plus the AC which has to work harder :)

I find my AMD 770-based Gigabyte mainboard to be plenty stable too, and it's really stripped down aside from the features it needs.
 
So the NF200 chip so far is very good at heating up the NB chip via the heatpipe, and 3-way SLI doesn't work worth a darn yet.
What video boards are you planning to use in your three-way SLI setup?

No figures on the latency, but my confidence in this board is somewhere near rock-bottom. There's no way I'll buy a mainboard which has an utterly useless (but hot!) component on it which could possibly affect performance of mission-critical apps in a very negative manner.
Sorry; maybe I've been in the server room for too long. How does the NF200 negatively affect the performance of "mission-critical apps"? Which apps are those?
This is not a workstation board, as far as I can tell, nor one I'd put into a server.
It might be a workstation board, for some definition of workstation. But I don't think anyone in the server market is worrying about it.

The heat is in no way causing any instability issues.
Is everyone in this thread wrong to worry about it, then? At least a few folks--like Elledan above--have said the unquantified heat that the NF200 generates is a deal breaker.
In fact, it is one of the only X58 boards so far that has stood up to Kyle's torture testing above 60C. In a mission critical environment where I do not care about anything but stability, this is the board I would want (until something else torture tests better).
Again, I'm not sure I understand this test, or, therefore, your reaction to it. In a mission-critical environment, my biggest worry isn't heat--it's issues more like software and driver stability and compatibility.

No, the P6T has PCI and PATA and floppy.. things I haven't used in years.
How do you install RAID drivers?

The trend these days is too focused on triple GPU adapters and I don't think the market is quite there for this yet.
I'm wondering--what percentage of P6T6 boards sold do you think will be run in a tri-SLI setup?
 
Is everyone in this thread wrong to worry about it, then? At least a few folks--like Elledan above--have said the unquantified heat that the NF200 generates is a deal breaker.
Nope, not at all. However, I think based on how the testing was done, that it's a bit premature to totally discount the board because of this. I'd like to have some more real world numbers once this board is inside of a well ventilated chassis.

How do you install RAID drivers?
Only OSes I've come across that are totally dependent on floppies for RAID have been WinXP, 2K, and NT. My Unix workstations do not have this limitation nor does Vista. I've never had a need to run RAID on XP machines, only Unix.

I'm wondering--what percentage of P6T6 boards sold do you think will be run in a tri-SLI setup?
Less than 20%
 
Again, I'm not sure I understand this test, or, therefore, your reaction to it. In a mission-critical environment, my biggest worry isn't heat--it's issues more like software and driver stability and compatibility.
My point is that, given how stable this board remained without any performance issues during Kyle's torture test, it will be suitable for mission critical work so long as, as you state as well, we have a stable BIOS and drivers. In a properly ventilated chassis, I don't think that the temperatures are going to reach any critical levels for it to be an issue. Again, I think it is premature to discount this board because it runs "hotter" than others. We do not have real world motherboard temperatures yet.. only numbers from torture testing.
 
Sorry, one more post rather than an edit as I don't want this point to be missed. In Kyle's bottom line, he stated "The ASUS P6T6 WS Revolution looks to be a great motherboard both in terms of stability and performance" and he didn't reflect on the hotter than normal temperatures in his conclusion. This is good enough for me to give the board a fair shot in the real world.
 
What video boards are you planning to use in your three-way SLI setup?
It's doubtful I'd be using SLI in any form or shape. I took it more as an indication that something isn't totally right with the NF200 chip setup.

Sorry; maybe I've been in the server room for too long. How does the NF200 negatively affect the performance of mission "critical apps"? Which apps are those?
CUDA apps, for example. You know, GPGPU. If the communication between the GPU and CPU is slowed down due to latency, performance drops off with it. If one is already running the apps on a 1U, 4 GPU Tesla unit (or multiple), which connects via an external PCIe cable, then every ns of latency counts.
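If I were vetting a board like this, the first thing I'd run is something like the sketch below (my own quick test, nothing from the review): time a loop of tiny host-to-device copies with CUDA events, so per-transfer latency rather than bandwidth dominates the measurement. Any extra latency a bridge chip like the NF200 adds to the path should show up when you compare slots or boards. Buffer size and iteration count are arbitrary.

/* Minimal latency probe: tiny host->device copies timed with CUDA events.
   Host-only CUDA runtime code; build with nvcc, or a C compiler plus -lcudart. */
#include <cuda_runtime.h>
#include <stdio.h>

int main(void) {
    const size_t bytes = 4;        /* tiny payload, so latency dominates */
    const int iters = 10000;
    void *host = NULL, *dev = NULL;

    cudaMallocHost(&host, bytes);  /* pinned host memory */
    cudaMalloc(&dev, bytes);

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start, 0);
    for (int i = 0; i < iters; ++i)
        cudaMemcpy(dev, host, bytes, cudaMemcpyHostToDevice);
    cudaEventRecord(stop, 0);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    printf("average host->device latency: %.2f us\n", ms * 1000.0f / iters);

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    cudaFree(dev);
    cudaFreeHost(host);
    return 0;
}

Run it once in a slot wired straight to the X58 and once in a slot behind the NF200, and any difference is right there in the numbers.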
It might be a workstation board, for some definition of workstation. But I don't think anyone in the server market is worrying about it.
To be honest I'm not too familiar with applications of SLI in the workstation market either :)
 
Yeah, for CUDA, this might not be the board for Elledan. I can agree on that. ;)
 
CUDA apps, for example. You know, GPGPU. If the communication between the GPU and CPU is slowed down due to latency, performance drops off with it. If one is already running the apps on a 1U, 4 GPU Tesla unit (or multiple), which connects via an external PCIe cable, then every ns of latency counts.
Oh, got it -- I had forgotten about your CUDA posts. Thanks for the reminder! I guess we don't know what the latency is, but I suppose it would be surprising if there were none at all.
 
Yeah, for CUDA, this might not be the board for Elledan. I can agree on that. ;)

I haven't heard of any complaints using CUDA apps on nForce 100 switch chips, or on boards such as 780i or 790i which also use the nForce 200 chip. Nor have I had any problems folding with my 9800GX2 which uses the nForce 200 chip as well. Where has anyone seen any issues with CUDA apps on nForce 200? nVidia created CUDA and they made this chip, why would we think they'd be incompatible?
 
I will defer to Elledan. I have no knowledge of CUDA whatsoever so what he said made sense to me at the time. :)
 
I haven't heard of any complaints using CUDA apps on nForce 100 switch chips, or on boards such as 780i or 790i which also use the nForce 200 chip. Nor have I had any problems folding with my 9800GX2 which uses the nForce 200 chip as well. Where has anyone seen any issues with CUDA apps on nForce 200? nVidia created CUDA and they made this chip, why would we think they'd be incompatible?

As I said, without some hard evidence to back up any latency claims I can not in good faith tell my clients to use a board like this, or even use it for our own testing. Added latency wouldn't make this board incompatible with CUDA, it just would make the PCIe subsystem less efficient and thus impact performance.

When you're using $8k external 1U, 4 GPU Tesla units, you don't want a mainboard of a few hundred bucks to reduce performance by even a few tenths of a % :)
 
As I said, without some hard evidence to back up any latency claims I can not in good faith tell my clients to use a board like this, or even use it for our own testing. Added latency wouldn't make this board incompatible with CUDA, it just would make the PCIe subsystem less efficient and thus impact performance.

When you're using $8k external 1U, 4 GPU Tesla units, you don't want a mainboard of a few hundred bucks to reduce performance by even a few tenths of a % :)

I have an $8k Precision 690 with an nForce 100 chip based riser board, what's your point? If Dell was confident enough to use it in their workstations, don't you think it's been tested?

That's completely flawed logic, LOL.

So you're telling me you don't think they've tested the two of them together? As in, they've never tested the 9800GX2 or the Quadro FX4700x2 with CUDA apps?
 
So you're telling me you don't think they've tested the two of them together? As in, they've never tested the 9800GX2 or the Quadro FX4700x2 with CUDA apps?
No--I'm telling you that your assertion that two things are compatible just because the same company made them is inductive and completely flawed. Plus, we're not talking about compatibility--ElleDan is worried about optimal functionality.
 
No--I'm telling you that your assertion that two things are compatible just because the same company made them is inductive and completely flawed. Plus, we're not talking about compatibility--ElleDan is worried about optimal functionality.

I wasn't making an assertion that they were compatible because they were made by the same company, I was making an assertion that the company that made them had tested them both.

Besides, if he's connecting an external box with 4 GPUs through a single PCIe x16 connector, why isn't he concerned with its latency issues?
 
I wasn't making an assertion that they were compatible because they were made by the same company, I was making an assertion that the company that made them had tested them both.
Why would they test them both?
 
The nForce 200 MCP and CUDA come from two different teams. The nForce 200 MCP is also manufactured under contract by someone else. NVIDIA doesn't even make them. Those are also integrated into motherboards which are, again, not built by NVIDIA directly. Specifically, the ASUS P6T6 WS Revolution was designed and built by ASUS, who procured the nForce 200 MCPs for use on the board. They may have purchased them from NVIDIA, but it seems highly unlikely they did a ton of CUDA testing on the board in question.
 
I wasn't making an assertion that they were compatible because they were made by the same company, I was making an assertion that the company that made them had tested them both.

Besides, if he's connecting an external box with 4 GPUs through a single PCIe x16 connector, why isn't he concerned with its latency issues?

A multiplexing chip (NF200) adds potentially a lot more latency than a bit more wire. Just look at the latency involved in an Ethernet connection, for example. Over 90% of the latency on a LAN already is due to the network stack and NIC. The wire adds virtually no latency.

And I'm not just worried about latency with NF200, but also about reliability. It's another component which I won't be using in the first place (no need for triple SLI or such). Until I know more about it, I simply can't just gamble that it'll be fine. That's not how things work when it's your future which is potentially on the line.

Oh, and read my sig... I happen to be a girl :p
 
The nForce 200 MCP and CUDA come from two different teams. The nForce 200 MCP is also manufactured under contract by someone else. NVIDIA doesn't even make them. Those are also integrated into motherboards which are, again, not built by NVIDIA directly. Specifically, the ASUS P6T6 WS Revolution was designed and built by ASUS, who procured the nForce 200 MCPs for use on the board. They may have purchased them from NVIDIA, but it seems highly unlikely they did a ton of CUDA testing on the board in question.

That just about sums up my worries :)
 
Why would they test them both?

If they tested CUDA with either the 9800GX2 or FX4700x2, then they tested both, since they use an nForce 200 chip to bridge the two GPUs.

The nForce 200 MCP and CUDA come from two different teams. The nForce 200 MCP is also manufactured under contract by someone else. NVIDIA doesn't even make them. Those are also integrated into motherboards which are, again, not built by NVIDIA directly. Specifically, the ASUS P6T6 WS Revolution was designed and built by ASUS, who procured the nForce 200 MCPs for use on the board. They may have purchased them from NVIDIA, but it seems highly unlikely they did a ton of CUDA testing on the board in question.

Well, if we want to go that route, what chips does nVidia build directly? They're built to nVidia specs, right? nVidia is also offering the nForce 200 to X58 motherboard manufacturers as a solution, so I would imagine they are offering some sort of reference design, but who knows really.

A multiplexing chip (NF200) adds potentially a lot more latency than a bit more wire. Just look at the latency involved in an Ethernet connection, for example. Over 90% of the latency on a LAN already is due to the network stack and NIC. The wire adds virtually no latency.

And I'm not just worried about latency with NF200, but also about reliability. It's another component which I won't be using in the first place (no need for triple SLI or such). Until I know more about it, I simply can't just gamble that it'll be fine. That's not how things work when it's your future which is potentially on the line.

Oh, and read my sig... I happen to be a girl :p

So with Quadro Plex, how are those 4 GPUs connected to a single PCIe x16 slot? Since you are concerned with the engineering of how an nForce 200 chip is hung off the X58 PCIe interface, shouldn't you also be concerned with how their 1U box of GPUs is engineered? Or are you perfectly content to say that nVidia engineered that correctly but then question Asus/nVidia's engineering of the nForce 200 chip?
 
So with Quadro Plex, how are those 4 GPUs connected to a single PCIe x16 slot? Since you are concerned with the engineering of how an nForce 200 chip is hung off the X58 PCIe interface, shouldn't you also be concerned with how their 1U box of GPUs is engineered? Or are you perfectly content to say that nVidia engineered that correctly but then question Asus/nVidia's engineering of the nForce 200 chip?

I'm not questioning anyone or anything. I just want some assurances. With the 1U Tesla unit I know at least that it was engineered to work perfectly fine together and that if anything doesn't work well about it, I know who to blame for it. NF200 may be a perfect chip, but until I see some real evidence of this, I'm not going to gamble. I hope you can understand this.
 
I'm not questioning anyone or anything. I just want some assurances. With the 1U Tesla unit I know at least that it was engineered to work perfectly fine together and that if anything doesn't work well about it, I know who to blame for it. NF200 may be a perfect chip, but until I see some real evidence of this, I'm not going to gamble. I hope you can understand this.

I understand where you're coming from, but at the same time, nVidia could tell you you're using Tesla on a non-certified board with whatever self-built system you use. If you're really concerned about support, you'll need to purchase a certified system/platform.

http://www.nvidia.com/object/tesla_compatible_platforms.html

Of course, the Asus P6T6 WS Revolution is on the list for Tesla C1060 and has been tested with 3 of them. ;) So is the Dell Precision 690 which uses the nForce 100 based riser. Might want to email them and ask about the S1070.
 
That's actually very good news. :cool:

I'm thinking that they only tested S1070 with servers because it's a rackmount unit. With the P6T6 WS being labeled a workstation board, they tested it with the 10 series "workstation" Tesla unit.
 
Well, if we want to go that route, what chips does nVidia build directly? They're built to nVidia specs, right? nVidia is also offering the nForce 200 to X58 motherboard manufacturers as a solution, so I would imagine they are offering some sort of reference design, but who knows really.

NVIDIA has no reference X58 designs. The nForce 200 MCP was first featured on the 780i SLI reference designs. It is the chip that gave them three full speed PCI-Express x16 slots and PCI-Express 2.0 support. In fact the nForce 200 MCP is the only difference the 780i SLI chipset had over the 680i SLI chipset. When a motherboard manufacturer wants to use the nForce 200 MCP, they just need chip specifications and the chips themselves. They probably source these things from NVIDIA who had them manufactured under contract by someone else.
 
NVIDIA has no reference X58 designs. The nForce 200 MCP was first featured on the 780i SLI reference designs. It is the chip that gave them three full speed PCI-Express x16 slots and PCI-Express 2.0 support. In fact the nForce 200 MCP is the only difference the 780i SLI chipset had over the 680i SLI chipset. When a motherboard manufacturer wants to use the nForce 200 MCP, they just need chip specifications and the chips themselves. They probably source these things from NVIDIA who had them manufactured under contract by someone else.

Sorry, worded that wrong. The point I was trying to make was that nVidia has diagrammed how the nForce 200 chip should fit into the solution.

Also, finally found all of the SLI diagrams for X58: http://www.nvnews.net/articles/sli_intel_x58_chipset/index.shtml
 
Great find! If those diagrams are correct, then the NF200 isn't creating new lanes and is just multiplexing.
 
If 3x SLI catches on, we are going to need a new form factor to accommodate more PCI-E slots. :rolleyes: I refuse to run onboard LAN and audio.
 
Great find! If those diagrams are correct, then the NF200 isn't creating new lanes and is just multiplexing.

How would it create new lanes anyway? The number of actual lanes is fixed by the chipset. No additional chips can change this number. Ergo NF200 is a multiplexer (as I said a few days ago :) ).
 
How would it create new lanes anyway? The number of actual lanes is fixed by the chipset. No additional chips can change this number. Ergo NF200 is a multiplexer (as I said a few days ago :) ).

And it's what I've been suspecting all along, but others in the thread were insistent that the architecture created new lanes.

The diagrams aren't perfectly accurate; the X58 chipset has 4 more lanes that aren't shown. When this board is using 3xSLI, it's using 16 from the chipset, 16 and 16 from the NF200, and 4 lanes in the last slot.
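To put rough numbers on what the multiplexing means in practice (my own back-of-the-envelope math, assuming PCIe 2.0 moves roughly 500 MB/s per lane per direction): each of the two slots behind the NF200 can burst at full x16 speed, but when both are loaded they have to share the single x16 uplink back to the X58.

/* Back-of-the-envelope bandwidth math for the NF200-as-multiplexer reading.
   Assumption (mine): PCIe 2.0 is roughly 500 MB/s per lane per direction. */
#include <stdio.h>

int main(void) {
    const double mb_per_lane = 500.0;              /* PCIe 2.0, per direction, approx. */
    const double uplink      = 16 * mb_per_lane;   /* one x16 link: NF200 -> X58 IOH */
    const double slot_burst  = 16 * mb_per_lane;   /* each NF200 slot is wired as x16 */

    printf("one NF200 slot, other idle : %.0f MB/s peak\n", slot_burst);
    printf("uplink shared by both slots: %.0f MB/s total\n", uplink);
    printf("per slot with both loaded  : %.0f MB/s sustained\n", uplink / 2.0);
    return 0;
}

So the extra slots are real, but the aggregate bandwidth back to the CPU is no bigger than a plain x16, which is exactly the "multiplexing, not creating new lanes" point.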
 
Any idea if anyone is going to test this?

I think Kyle said he would be doing SLI/Crossfire testing but I doubt that will show/prove anything about latency/subsystem issues.
 