What MB has the most PCIe x8 compatible slots?

iansilv

Limp Gawd
Joined: Jun 10, 2004
Messages: 335
I was just thinking about WHS, MyMovies and those new upcoming SSD PCI-E drives from OCZ- wanted to see how many I could get on to a single MB for maximum storage capacity.
 
Wow! So theoretically, space not being an issue, I could put in 7 PCIe SSD drives and have the fastest solid-state storage system possible. That's a lot of dough...
 
http://www.magma.com/products/pciexpress/
You could also go nuts with that, but the price will be ABSURD.

Otherwise I'd either go with the Asus that Fox suggested, or the model down. It's about $100 less, and you lose a single PCIe slot; not that big a deal if you ask me. (For $100 I'd take the cheaper board; they've got an open box for about $200 less as well.)
http://www.newegg.com/Product/Product.aspx?Item=N82E16813131358R
http://www.newegg.com/Product/Product.aspx?Item=N82E16813131358

::edit

In response to your 7 PCIe SSD thought: honestly, you can do the same thing with SATA. You won't really lose or gain anything going that route (because even the best current SSDs aren't saturating SATA II completely), and boards with 8, 12, even 16 SATA ports are less expensive than that Asus. Samsung has a clip on YouTube about the maxed-out expansion of that kind of system (I think they used a pair of Areca controllers with onboard memory, but the concept is similar).
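To put rough numbers on the "not saturating SATA II" point: SATA II's usable data rate after 8b/10b encoding is about 300 MB/s, and the ~250 MB/s sequential-read figure below is an assumed ballpark for a top SSD of the era, not a measurement of any specific drive:

```python
# Back-of-envelope: does a single fast SSD saturate SATA II?
SATA2_LINE_RATE_GBPS = 3.0   # SATA II line rate: 3.0 Gb/s
USABLE_FRACTION = 0.8        # 8b/10b encoding leaves 80% for payload data

# 3.0 Gb/s -> MB/s of usable bandwidth
sata2_usable_mbs = SATA2_LINE_RATE_GBPS * 1000 / 8 * USABLE_FRACTION

ssd_seq_read_mbs = 250       # assumed top-end SSD sequential read (illustrative)

headroom = sata2_usable_mbs - ssd_seq_read_mbs
print(f"SATA II usable: {sata2_usable_mbs:.0f} MB/s, "
      f"SSD: {ssd_seq_read_mbs} MB/s, headroom: {headroom:.0f} MB/s")
```

So a single drive still has headroom on the bus; it's only once you stripe several that the interface becomes the limit.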

And yes, this has the potential to easily get into the tens-of-thousands-of-dollars range.
 
If you really want insane amounts of space, just look into SAS expanders. With the two SAS expanders and my Areca 1680ix-24, I have enough ports for 72 drives (and room to expand from that) before I even count the motherboard ports. With enough rack space, you could have over 900 drives attached to a single board (using Areca cards and SAS expanders). The sky is the limit when your wallet allows for it. Also, the open box Asus P6T6 is a pretty good deal. I bought one for my desktop.
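The 72-port math works out like this (the 24-drive-ports-per-expander figure appears later in the thread; treating all 24 of the 1680ix-24's internal ports as usable for drives is an assumption):

```python
# Port math for an Areca 1680ix-24 plus two SAS expanders
card_ports = 24                # internal ports on the 1680ix-24 (assumed all usable)
expanders = 2
drive_ports_per_expander = 24  # 36 phys total, 24 wired for drives

total_drive_ports = card_ports + expanders * drive_ports_per_expander
print(total_drive_ports)
```

Scale the expander count up across multiple cards and the 900+ drive figure follows the same arithmetic.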
 
Cool. But I am looking at this not for insane amounts of space, but for a bunch of space with insane speeds; that's why I want the PCIe SSD drives. Super Talent actually has a 2TB model coming soon, so line up 7 of them and you have 14 TB of space. But: absurd cost.
 
Do realize that with a decent RAID card, you get the same speeds as you would with those PCIe SSDs, as they are essentially a RAID card and a bunch of SSDs crammed onto a single PCB.
 
Yeah, I think so. To be honest, I want to see these PCIe drives tested by Kyle before I make any serious decisions. Apparently they are shipping this month...
 
Isn't the substantial difference that these new PCIe SSD drives completely bypass the SATA connection?
 
If you really want insane amounts of space, just look into SAS expanders. With the two SAS expanders and my Areca 1680ix-24, I have enough ports for 72 drives (and room to expand from that) before I even count the motherboard ports. With enough rack space, you could have over 900 drives attached to a single board (using Areca cards and SAS expanders). The sky is the limit when your wallet allows for it. Also, the open box Asus P6T6 is a pretty good deal. I bought one for my desktop.

hahahahahahahahaha.
but the thing is, TCO

Isn't the substantial difference that these new PCIe SSD drives completely bypass the SATA connection?

OK, it's annoying me:
it's "PCIe", there is a C in there.

that aside:
Why would this matter? Current PCIe SSDs are not somehow faster than current SATA SSDs, and none of them max out either bus's abilities. It's the same chips/logic/whatever you want to call it, just on a different interface. As Fox said, though, the PCIe version packs several SSDs onto one card, which is what gives you better speed; the interface has nothing to do with it.

And I really hoped this thread wouldn't turn into a n00b whining about some nonsense "I need 90000 jiggaXXXXXs of bandwidth" because the marketing goons have gotten to them :(

Basically, the more SSDs (or mechanical disks, or tapes, or whatever) you have, the faster the device *can* be, assuming you have a good way to control all of it, plus a few other factors. No single device is bottlenecked by its existing bus (no current HD is bottlenecked by SATA II, although groups of them may be, which is why multiple disks run on multiple channels). The PCIe thing from OCZ is four SSDs in RAID 0, so yes, there is some advantage; but if you just took those four SSDs and tied them to a proper RAID controller, you'd get the same thing. (Of course OCZ doesn't want to say this, because then they couldn't sell stuff at a profit.)
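The "more drives = more potential speed" point can be sketched as simple striping arithmetic (the per-drive throughput and efficiency factor are assumed illustrative numbers; real RAID 0 scaling is rarely perfectly linear):

```python
# Idealized RAID 0 sequential-throughput scaling sketch
def raid0_throughput(n_drives, per_drive_mbs=200, efficiency=0.9):
    """Aggregate sequential throughput for an n-drive stripe.

    per_drive_mbs and efficiency are assumed for illustration,
    not measurements of any particular SSD or controller.
    """
    return n_drives * per_drive_mbs * efficiency

# Four SSDs striped on one PCB (the OCZ device's layout) vs. the same
# four SSDs on a separate RAID controller: identical arithmetic.
print(raid0_throughput(1))  # single SSD
print(raid0_throughput(4))  # four-way stripe
```

Which is exactly why the PCIe card and a RAID controller plus four bare SSDs end up in the same place.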

Honestly, the best route, if you want to do this "right", is a proper storage controller and however many disks you want for the speed or capacity desired. Especially since then you won't be looking at a 6-7TB array that's a JBOD of 6-7 RAID 0 arrays (holy crap, that's painful just to write); you'll have some actual redundancy and so on. Basically, the OCZ thing is neat; it's a little widget for gaming PCs, and as a single device there's nothing wrong with it. But rather than running a group of them as a storage array, put logical thought back into the equation and do it cleanly the first time. You'll probably save some money in the process (and you don't have to buy a $250-$400 motherboard to accomplish your end goal).
 
Last edited:
Isn't the substantial difference that these new PCIe SSD drives completely bypass the SATA connection?
No, it's just a RAID card and 4 SSDs on a single PCB. Technically they would be using SATA as well.

hahahahahahahahaha.
but the thing is, TCO
Like I said, the sky is the limit when your wallet allows for it. While I don't plan on having 72 drives myself, 50TB+ isn't an issue for me.
 
Like I said, the sky is the limit when your wallet allows for it. While I don't plan on having 72 drives myself, 50TB+ isn't an issue for me.

I wish I could do 50TB+ but my wallet won't allow it yet :p lol
 
I wish I could do 50TB+ but my wallet won't allow it yet :p lol
It's all about priorities. It's very convenient being a 21-year-old college student with no bills other than school (which isn't cheap, I might add). :D
I wish I could understand what honorable, non-commercial purpose a home user has with 50TB+ :eek:
If I say home-office, does that legitimize it? I mean, home users don't have a server rack in their bedroom. :p
 
I was just thinking about WHS, MyMovies and those new upcoming SSD PCI-E drives from OCZ- wanted to see how many I could get on to a single MB for maximum storage capacity.


1x SuperMicro X8DTH-6 with 7 PCIe gen2 slots
7x LSI SAS9200-8E
100x LSI-based 36-port SAS expanders
3200x SAS/SATA/SSD disks
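A rough check of that parts-list arithmetic (assuming 32 of each expander's 36 ports are free for drives, with the rest used for uplink; these splits are assumptions, not a wiring diagram):

```python
# Rough disk-count arithmetic for the 7-slot SuperMicro build
hba_cards = 7                  # LSI SAS9200-8E, one per PCIe slot
expanders = 100                # 36-port LSI-based SAS expanders
drive_ports_per_expander = 32  # assumed: 36 phys minus 4 reserved for uplink

disks = expanders * drive_ports_per_expander
print(f"{expanders} expanders across {hba_cards} HBAs -> {disks} drive ports")
```

Of course, having the ports is not the same as being able to drive that many disks.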
 
holy batman "way off" indeed :eek:

also, that controller is JBOD only :eek:

I guess I should have said "3,000+" disks, "100+" expanders.

I'm glad that I left out the last line of: "Then you can deliver all those disks, via 1Gb iscsi..."

..(long rant, deleted)..

~900 disks would be with SAS edge expanders only.
A (SAS-1) fanout expander on each card/domain would allow you to use more than two edge expanders per card/domain, and support many more disks.

But obviously the CPUs could never drive this many disks, and boot-up would probably take 3 hours. We also don't know how many disks the onboard SAS controller can handle. 8? 20? 32? 128?

Summarized rant:
...512 endpoints....sas 1.1....sas-2...smp...ssp...stp...39 phy....subtractive ports....jbod....target...direct...raid...scam...diagram...112....blah...blah...blah
 
Depending on the RAID card, the SAS expanders support between 16 and 256 disks. Mine only does 128, but I don't think that will be a problem anytime soon. Don't forget that while the expander may have 36 ports, only 24 of those are for disks.
 