ASUS Maximus IV Extreme & 2x GTX590

Discussion in 'nVidia Flavor' started by zerounleashednl, Apr 5, 2011.

  1. zerounleashednl

    zerounleashednl n00bie

    Messages:
    31
    Joined:
    Dec 20, 2010
    As you may know, this motherboard has four 16x PCI-E slots and can support up to three way SLI or CrossfireX.

    • single @x16 -> native Intel Chipset
    • dual @x8 -> native Intel Chipset
    • triple @ x8, x16, x16 -> combo Intel Chipset & NF200 chip

    Now, all other socket 1155 motherboards will automatically downgrade both PCI-E slots to x8 bandwidth when you run a dual SLI or CrossFireX setup.

    But what happens when you SLI two GTX 590's? You will probably use dual @x8 from the native Intel chipset, but will this provide enough bandwidth or throughput for these cards?
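    A bit of back-of-the-envelope arithmetic helps frame the question. This is just a sketch using the standard PCIe 2.0 figure of roughly 500 MB/s of usable bandwidth per lane per direction; it says nothing about whether a real game actually saturates the bus:

```python
# Back-of-the-envelope PCIe 2.0 bandwidth per slot (sketch, not a benchmark).
# PCIe 2.0 signals at 5 GT/s per lane; after 8b/10b encoding that leaves
# roughly 500 MB/s of usable bandwidth per lane, per direction.
MB_PER_LANE = 500

def slot_bandwidth(lanes):
    """Theoretical one-way bandwidth of a slot, in MB/s."""
    return lanes * MB_PER_LANE

print(slot_bandwidth(16))      # 8000 MB/s for a single card at x16
print(slot_bandwidth(8))       # 4000 MB/s per slot at x8/x8
# A GTX 590 carries two GPUs behind its own bridge chip, so an x8 slot
# leaves roughly half of that per GPU:
print(slot_bandwidth(8) // 2)  # 2000 MB/s of host bandwidth per GPU
```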

    :confused:
     
  2. AthlonXP

    AthlonXP Pick your own.....you deserve it.

    Messages:
    20,199
    Joined:
    Oct 14, 2001
    Are you looking into doing this? I would honestly say stay away from dual GTX 590's in that setup.
     
  3. zerounleashednl

    zerounleashednl n00bie

    Messages:
    31
    Joined:
    Dec 20, 2010
    Yes, I'm considering a setup like this. Would you stay away because of the limitations of the Asus Maximus or because of the GTX 590 fuss? :(
     
  4. Dan_D

    Dan_D [H]ardOCP Motherboard Editor

    Messages:
    47,858
    Joined:
    Feb 9, 2002
    Three 3GB GTX 580's would be a better solution or even dual 6990's, etc.
     
  5. zerounleashednl

    zerounleashednl n00bie

    Messages:
    31
    Joined:
    Dec 20, 2010
    But why would it be better? :confused:

    Putting 3x 580's on this mobo will leave no room to breathe... so no overclocking for me then... :(

    A workaround would be 2x 590's, and I'd even gain one GPU. Yeah, I know the fourth GPU won't scale as well... but there would be some decent room between the cards.

    My only concern is that both slots will downgrade to x8 in SLI; will that be enough to fully utilize the 590's?
     
    Last edited: Apr 5, 2011
  6. Lord_Exodia

    Lord_Exodia [H]ardness Supreme

    Messages:
    6,971
    Joined:
    Sep 29, 2005
    SLI on two dual-GPU cards will not confuse the board. It will still allocate x16 to each slot thanks to the extra NF200 chip on that motherboard. My board drops to x8 per GPU when you combine two cards; the board you're looking at allows x16 (x2). I would not put a dual-GPU card in a slot that runs only at x8, as I feel that bottlenecks everything a little too much. Full x16 speed for a dual card or bust.

    Now, to echo what Dan_D is saying: 3x 3GB GTX 580 or even 2x GTX 580 3GB would be better, depending on what resolution you're gaming at. If you're gaming on a single monitor, there are few scenarios where a 590 will run out of VRAM, as you only get 1.5GB of usable VRAM in SLI, even quad. If you're gaming on NVIDIA Surround with 3x 30-inch or even 3x 24-inch monitors, then the 3GB GTX 580s, or even any 2GB Cayman multi-card combo, are a better buy. They are also less explosive than the GTX 590s :p

    Also, they will let you overclock and leave enough room to add other cards if they have full-coverage water blocks on them ;) You're planning on going extreme, why not go all out?
     
  7. zerounleashednl

    zerounleashednl n00bie

    Messages:
    31
    Joined:
    Dec 20, 2010
    Haha.. yeah I know what you mean... ;) But I'm not sure about using water... :eek:

    But I understand why to get the 3GB 580's... At the moment I'm only using one 3D 1080p monitor, but it may be more in the (near) future... :p

    I'm using the board right now, maybe I can throw in an old 8400GTS and see if the NF200 kicks in!
     
  8. Dan_D

    Dan_D [H]ardOCP Motherboard Editor

    Messages:
    47,858
    Joined:
    Feb 9, 2002
    True. I didn't add the qualifier about monitor resolutions. If the monitor resolution isn't all that high, then the OP would be just as well served by dual GeForce GTX 580's or a single Radeon HD 6990. Hell, 2x 6950's overclocked would probably be even better. Going all out is great; I've been known to do it myself. However, if there will be no benefit in the near future from doing so, why waste the money?
     
  9. spaceman

    spaceman [H]ardForum Junkie

    Messages:
    13,729
    Joined:
    Jan 7, 2005
    Hell, a single 3GB 580 would max out that monitor for a good long time. If you only have a 1080p monitor, what the heck are you thinking?

    I agree with Dan.

    I would go with a single 6990. Multi monitor support, enough vram, and you have your most flexible solution.
     
  10. zerounleashednl

    zerounleashednl n00bie

    Messages:
    31
    Joined:
    Dec 20, 2010
    Hahaha... benchmarking my man, benchmarking! It's just a hobby... :D

    But... no SLI @x16, check the photos...

    1. one videocard in slot 1 @ x16 (intel chipset)
    2. one videocard in slot 1 @ x8 (intel chipset) and one in slot 2 @x16 (nf200)
    3. one videocard in slot 1 @ x8 (intel chipset) and one in slot 3 @x8 (intel chipset)
    4. one videocard in slot 1 @ x8 (intel chipset) and one in slot 4 @x16 (nf200)

    [images: screenshots of the slot speeds for each configuration above]
     
    Last edited: Apr 5, 2011
  11. hdgamer

    hdgamer Gawd

    Messages:
    816
    Joined:
    Sep 28, 2009
    You will notice absolutely no difference between PCIe x16, x8, and even x4, especially at that resolution. I would also stay away from the GTX 590, especially in SLI. That is overkill, and not even good overkill lol. At least get two 6990's. Less heat and less chance of a fire.
     
  12. Lord_Exodia

    Lord_Exodia [H]ardness Supreme

    Messages:
    6,971
    Joined:
    Sep 29, 2005
    Strange, but I was mistaken. Scratching my head, I went to ASUS' website to check the specs: http://asusrog.com/Spec/MB_Maximus_spec.html

    It runs single at x16, dual at x8/x8, and Tri-SLI at x16/x16/x8. ???!!! Umm, okay, LOL. So I wonder: if you have two GTX 580 3GBs and a GTX 260 or so for PhysX, would it run the two GTX 580 cards at x16 each, since having three cards triggers the NF200 to kick in? Strange way of getting it to work, but maybe worth a shot? You don't have a third card, do you? However, you will probably not notice any difference using a card at x8 vs. x16 unless it's a dual-GPU card. For some reason I think that would be a big bottleneck, kind of like running one single-GPU card at x4 and the other at x16, but in all the testing I've seen that shows the buses and how cards saturate the bandwidth, I can't remember anyone doing it with a dual-GPU card like a 6990.
     
  13. AthlonXP

    AthlonXP Pick your own.....you deserve it.

    Messages:
    20,199
    Joined:
    Oct 14, 2001
    Agreed, I have 1 GTX 590 and that is plenty for what I do. Might also be an option.
     
  14. AthlonXP

    AthlonXP Pick your own.....you deserve it.

    Messages:
    20,199
    Joined:
    Oct 14, 2001
    Just want to point out that this fire stuff mostly happened to people who overclocked their cards way past what they were intended to. But to be honest, if you want stupid fast, then grab a 6990+6970 or SLI GTX 580's.
     
  15. LEVESQUE

    LEVESQUE Gawd

    Messages:
    776
    Joined:
    Sep 4, 2008
    Just a little fix: 2x 580 1.5GB are not on the same level as 6990+6970, and are severely memory limited. But 2x 580 3GB "might" be better, since it's still 3 GPUs against 2.
     
  16. nmanley

    nmanley Limp Gawd

    Messages:
    424
    Joined:
    Oct 27, 2005
  17. Lord_Exodia

    Lord_Exodia [H]ardness Supreme

    Messages:
    6,971
    Joined:
    Sep 29, 2005
    Thanks for providing a link to the video. I noticed that the NF200 benches slightly slower than native Intel x8/x8 by a tiny margin most of the time, per his results at the end of the video. I'm sure those were synthetic benches; Sir James D Tech has visited the forum, and when I asked him about it he was kind enough to confirm that he does canned benches most of the time. I think the NF200 may be best used in Tri-SLI scenarios, and I'd like to see more before I decide, but from this video alone I don't think anyone is missing much by just using Intel native x8/x8.
     
  18. zerounleashednl

    zerounleashednl n00bie

    Messages:
    31
    Joined:
    Dec 20, 2010
    .... and finally, one videocard in slot 1 @ x8 (Intel chipset), one in slot 2 @x16 (NF200) and one in slot 4 @x16 (NF200)

    [image: screenshot of the triple-card slot speeds]

    Slots 1 and 3 are native Intel chipset and slots 2 and 4 are on the NF200, which is basically a PCI Express switch and is itself connected to the Intel chipset through 16 PCIe lanes...

    So it makes sense that when two videocards are connected through the NF200 chip (which is x16 itself) and you measure the speed, it would basically be two times x8 :rolleyes:
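    A minimal sketch of that sharing, assuming (as described above) that the NF200's single x16 uplink to the chipset is split evenly between the cards sitting on the switch:

```python
# Sketch: per-card host bandwidth behind a PCIe switch such as the NF200.
# The switch offers x16 links downstream, but all CPU-bound traffic funnels
# through its single uplink, so that uplink is what the cards really share.
def effective_host_lanes(uplink_lanes, cards_on_switch):
    """Host-facing lanes each card effectively gets, assuming an even split."""
    return uplink_lanes // cards_on_switch

print(effective_host_lanes(16, 2))  # 8 -> two cards on an x16 uplink measure like x8
print(effective_host_lanes(16, 1))  # 16 -> a lone card keeps the full x16
```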
     
  19. Spooony

    Spooony 2[H]4U

    Messages:
    2,075
    Joined:
    Mar 9, 2011
    This is my 3dmark score with 3x GTX 580s.
    http://3dmark.com/3dm11/946255

    Don't worry about the NF200 chip. Its 5 percent performance loss gets nullified by the CPU running at a higher frequency, as well as the GPUs.
     
  20. zerounleashednl

    zerounleashednl n00bie

    Messages:
    31
    Joined:
    Dec 20, 2010
    But I'm still wondering: will x8 be enough for a 590? :confused:
     
  21. Spooony

    Spooony 2[H]4U

    Messages:
    2,075
    Joined:
    Mar 9, 2011
    The NF200 takes the lanes coming from the CPU and doubles them. It's a duplexer.
     
  22. Spooony

    Spooony 2[H]4U

    Messages:
    2,075
    Joined:
    Mar 9, 2011
    Remember, PCI-e is packet based. The bandwidth figure is theoretical and will never be reached; no GPU will fill it. The things bringing PCI-e down are efficiency and overhead. The data gets wrapped in overhead and sent to the other side, and depending on the quality of your board, about 75-80 percent of the data will arrive at the other end. A junk board will have an efficiency between 70 and 75 percent. Hence the ~3 percent performance difference between x8, x16 and x4. It's the efficiency more than the bandwidth making the difference.
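    Those figures can be sketched like this. The 8b/10b line encoding is a real property of PCIe 1.x/2.0; the 70-80 percent packet-efficiency numbers are the board-quality estimates from the post above, not measured values:

```python
# Sketch: theoretical vs. effective PCIe 2.0 throughput for one slot.
# The 8b/10b line encoding is part of the PCIe 1.x/2.0 spec; the packet
# efficiency figures (0.70-0.80) are the board-quality estimates above.
RAW_GT_PER_LANE = 5.0         # PCIe 2.0: 5 GT/s per lane
ENCODING_EFFICIENCY = 8 / 10  # 8b/10b line encoding

def effective_throughput_mb(lanes, packet_efficiency):
    """Approximate usable MB/s, per direction, for a slot."""
    raw_mb = lanes * RAW_GT_PER_LANE * 1000 / 8   # raw transfer rate in MB/s
    return raw_mb * ENCODING_EFFICIENCY * packet_efficiency

print(effective_throughput_mb(8, 0.80))   # ~3200 MB/s on a good board at x8
print(effective_throughput_mb(8, 0.70))   # ~2800 MB/s on a "junk" board at x8
print(effective_throughput_mb(16, 0.80))  # ~6400 MB/s at x16
```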
     
  23. vjcsmoke

    vjcsmoke [H]ardness Supreme

    Messages:
    4,523
    Joined:
    Dec 5, 2006
    590's suck anyway.
    They have very little OC headroom, and their 1.5GB of memory per GPU starves the cards at higher resolutions + anti-aliasing settings.
    3GB 580 SLI or unlocked 6950s in tri-fire are the way to go.
     
  24. zerounleashednl

    zerounleashednl n00bie

    Messages:
    31
    Joined:
    Dec 20, 2010
    Thx Spooony, I never realized the quality of the board would play a role.

    About the performance differences between x4, x8 and x16: are there any good articles/blogs about it (that I can understand :rolleyes:)?
     
  25. HohnyF

    HohnyF Gawd

    Messages:
    652
    Joined:
    Nov 30, 2010
    Yup. It may depend from game to game, but generally that's exactly where high memory bandwidth is very beneficial.
     
  26. spacing guild

    spacing guild [H]ard|Gawd

    Messages:
    1,595
    Joined:
    Dec 9, 2010
    They've been halving x16 PCI-e slots in SLI for generations now. Is there no way to keep all x16 PCI-e slots @ x16 at all times? :confused: I guess I am unaware of some sort of inherent chipset limitation?
     
  27. zerounleashednl

    zerounleashednl n00bie

    Messages:
    31
    Joined:
    Dec 20, 2010
    It's OK to halve x16 to x8 for normal SLI because a single GPU will not use all the bandwidth offered by x8. But two GPUs (like on a 590) double this demand, and therefore it could be possible that all the x8 bandwidth is consumed... but no one seems to have any real numbers on this... (including myself :rolleyes: )
     
  28. spacing guild

    spacing guild [H]ard|Gawd

    Messages:
    1,595
    Joined:
    Dec 9, 2010

    I know x8 is enough bandwidth for almost all cards (try telling that to forum-goers when PCI-e x16 first came out... vicious pack of ignorant dogs. "16x is gonna be sooo great, doubles your bandwidth over AGP, dude! Double the speed of your video card!"). But my point is... why can't they just make it x16 all the time for all your PCI-e slots? Minus the smaller x1 and x4 slot varieties, of course...
     
  29. omniscence

    omniscence [H]ard|Gawd

    Messages:
    1,311
    Joined:
    Jun 27, 2010
    The Maximus IV Extreme has slightly weird PCIe wiring.

    The first 8 lanes from the CPU go straight to the first x16 PCIe slot. The second 8 lanes also go to the first slot if no other slot is populated. However, if there is a card in the third x16 slot, these 8 lanes go not to the first but to the third slot. If there is a card in the second or fourth slot, these 8 lanes go to the NF200, which is connected with 16 lanes to the second and 16 lanes to the fourth slot. I'm not sure if slot 3 or slots 2 & 4 take precedence. It is not possible to use all four slots.

    So if slots 1 and 3 are populated, it runs at x8/x8 from the processor; if 1, 2 and 4 are populated, it runs x8/x16/x16, but with both x16 ports connected through the NF200 with only 8 lanes to the CPU. In any case, slots 2 and 4 can communicate with each other over 16 lanes but have to share 8 lanes to the processor. I doubt that this is more effective than the Gigabyte UD7 design, which runs three cards at x16/x8/x8, all through the NF200.
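    The routing described above can be captured in a tiny sketch. The cases simply encode the description in this post (the open question of slot 3 vs. slots 2 & 4 precedence is left out, and slot numbers refer to the four x16 slots):

```python
# Sketch of the Maximus IV Extreme lane routing described above.
# Input: the set of populated x16 slots (1-4). Output: a dict of
# slot -> link width, plus the width of the NF200's uplink to the
# CPU (0 if the switch is unused).
def mive_lanes(populated):
    populated = set(populated)
    slots = {}
    uplink = 0
    on_switch = populated & {2, 4}    # slots hanging off the NF200
    if on_switch:
        if 1 in populated:
            slots[1] = 8              # first 8 CPU lanes stay on slot 1
        for s in sorted(on_switch):
            slots[s] = 16             # NF200 gives each of its slots x16...
        uplink = 8                    # ...but they share one x8 uplink to the CPU
    elif populated == {1}:
        slots[1] = 16                 # all 16 CPU lanes to slot 1
    elif populated == {1, 3}:
        slots[1] = slots[3] = 8       # x8/x8 straight from the CPU
    return slots, uplink

print(mive_lanes({1, 3}))     # x8/x8, no switch involved
print(mive_lanes({1, 2, 4}))  # x8/x16/x16, but only x8 upstream of the NF200
```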
     
  30. sirmonkey1985

    sirmonkey1985 [H]ard|DCer of the Month - July 2010

    Messages:
    18,616
    Joined:
    Sep 13, 2008

    So in other words, it doesn't matter that both PCI-e slots on the NF200 are x16, since it's all funneled down to x8 to the CPU.. :p Sounds awesome.....
     
  31. zerounleashednl

    zerounleashednl n00bie

    Messages:
    31
    Joined:
    Dec 20, 2010
    omniscence said: "The Maximus IV Extreme has a little bit weird PCIe wiring. [...] I'm not sure if slot 3 or slot 2&4 take precedence. It is not possible to use all four slots."
    In a picture it looks like this:

    [image: diagram of the PCIe lane routing]

    Source: http://www.techreaction.net/2011/01...us-maximus-iv-extreme-battle-of-the-titans/3/
     
  32. zerounleashednl

    zerounleashednl n00bie

    Messages:
    31
    Joined:
    Dec 20, 2010
    Guys look at this: http://www.guru3d.com/article/triple-monitor-gaming-on-geforce-gtx-590-and-radeon-6990/16

    The NF200 chip is on the ASUS MIVE, so it should work...? :confused: Still no one with this setup...?
     
  33. omniscence

    omniscence [H]ard|Gawd

    Messages:
    1,311
    Joined:
    Jun 27, 2010
    The question is whether the GPUs have to be connected to the NF200 for quad-SLI to work. If they do, the MIVE is a bad board for 2x 590 because of the 8-lane upstream limitation.
     
  34. Ultima99

    Ultima99 2[H]4U

    Messages:
    3,914
    Joined:
    Jul 31, 2004
    Get the 590's since they have extra utility. If you go camping you can start the fire with no matches!

    Seriously, 580's, 6990, etc. No 590.
     
  35. zerounleashednl

    zerounleashednl n00bie

    Messages:
    31
    Joined:
    Dec 20, 2010
    I got word from ASUS telling me that two 590's on the Maximus IV Extreme is supported! :D
     
  36. Michael Turbo

    Michael Turbo n00bie

    Messages:
    54
    Joined:
    Jan 15, 2011
    The NF200 allows discrete GPUs to communicate with each other without sending packets across the entire length of the PCI-e bus. It also broadcasts CPU data meant for 2-4 GPUs by replicating one set of data and delivering it to multiple targets. These functions vastly free up existing bandwidth on the express bus, which is why they work so well on P55/P67 chipsets whilst offering negligible benefit on fully-equipped 32-lane alternatives. There are also 16 lanes of bandwidth between the two GTX 580 GPUs by virtue of the onboard NF200 chip built into the 590, so 8 lanes between the CPU and the 590 is plenty. However, under the circumstances, inter-GPU communication would be optimal with both cards hooked to the bridge. Unfortunately, the Gigabyte alternative is a better-suited design in this case.
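    A toy model of the broadcast point: without the switch's broadcast feature, the CPU pushes one copy of shared data per GPU over its own link; with it, one copy goes to the switch, which replicates it downstream. The numbers here are purely illustrative, not measurements:

```python
# Sketch: upstream (CPU-side) traffic for data that every GPU needs,
# with and without a switch that can replicate packets downstream.
def upstream_traffic_mb(data_mb, num_gpus, broadcast):
    """MB crossing the CPU-side link to deliver data_mb to every GPU."""
    return data_mb if broadcast else data_mb * num_gpus

print(upstream_traffic_mb(100, 4, broadcast=False))  # 400 MB over the CPU link
print(upstream_traffic_mb(100, 4, broadcast=True))   # 100 MB over the CPU link
```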
     
  37. zerounleashednl

    zerounleashednl n00bie

    Messages:
    31
    Joined:
    Dec 20, 2010
  38. zerounleashednl

    zerounleashednl n00bie

    Messages:
    31
    Joined:
    Dec 20, 2010
    What's the difference between the MIVE and the Gigabyte?
     
  39. Michael Turbo

    Michael Turbo n00bie

    Messages:
    54
    Joined:
    Jan 15, 2011
    The Gigabyte routes all 16 native PCI-e lanes directly into the NF200. All four of the express slots have an x16 or x8 link with every GPU in the array. On the Maximus you effectively halve CPU bandwidth when occupying slots 3 and 5, and lose some of the NF200 features when occupying slots 1 and 3. If you selectively ran single-GPU cards in triple SLI, it'd likely be a negligible impact. With quad-SLI I fear the impact may be more than marginal, and a proper NF200 design (for quad GPU, that is) is necessary to stretch out the cards' huge potential.
     
  40. Michael Turbo

    Michael Turbo n00bie

    Messages:
    54
    Joined:
    Jan 15, 2011
    It warrants a mention that in single SLI (2 GPUs) the Maximus will perform fractionally better, all things such as CPU and GPU clocks being equal. Three cards and above tip the scales the other way, but I bet it's close regardless. It's the unique situation of the OP, however, that leads me to believe this board in particular is not the best fit for the task.