TRIM For RAID0 Coming!!!!!

That sounds special. :)

Because I have plenty of room (around 80% unused space), it's never been a problem, but I'm sure it'll be beneficial to most users.

Now, if Intel would just fix the problem limiting the SATA6 ports to just two... :D
 
Hippie - You will have to wait for the Ivy Bridge X79 which will give you 10, count em 10 6Gb/s SATA ports.
 
Hippie - You will have to wait for the Ivy Bridge X79 which will give you 10, count em 10 6Gb/s SATA ports.
Aren't those the MBs that just hit the marketplace?

If so, look again at the number of SATA6 ports.
 
It's not really a chipset feature. It's just software; the CPU does all the work anyway.
 
Hippie - You will have to wait for the Ivy Bridge X79 which will give you 10, count em 10 6Gb/s SATA ports.

Who the hell cares? When you saturate that many ports on a CPU-controlled, software-based solution, you're never gonna meet the bandwidth potential.

I'll take a hardware add-in card any day over software.
 
Who the hell cares? When you saturate that many ports on a CPU-controlled, software-based solution, you're never gonna meet the bandwidth potential.

I'll take a hardware add-in card any day over software.

For RAID0 there is little, if any, advantage to hardware RAID. As long as you operate below the saturation point, the ROC (RAID-on-chip) gains apply to parity configurations only.
 
Who the hell cares? When you saturate that many ports on a CPU-controlled, software-based solution, you're never gonna meet the bandwidth potential.

I'll take a hardware add-in card any day over software.

Gibberish...


And the X79 chipset only gives you 2 SATA 6Gbps ports.
 
Who the hell cares? When you saturate that many ports on a CPU-controlled, software-based solution, you're never gonna meet the bandwidth potential.

I'll take a hardware add-in card any day over software.

So that's not entirely true. In fact, most large storage vendors service hundreds of disks, even in RAID0, using a single CPU.

With that being said, 10 SSDs doing 550MB/s reads = 5.5GB/s, which will eat a lot of controller/PCIe bandwidth. That is why Intel and a lot of the storage guys say that PCIe generations are not keeping up with the bandwidth needs of modern devices.
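A rough back-of-envelope in Python (the ~500MB/s and ~985MB/s usable-per-lane figures for PCIe 2.0/3.0 below are approximations I'm assuming, not exact spec numbers):

# Back-of-envelope: aggregate read bandwidth of striped SSDs vs. approximate PCIe lane capacity.
ssd_count, ssd_read_mb_s = 10, 550
pcie2_lane_mb_s, pcie3_lane_mb_s = 500, 985   # rough usable MB/s per lane, one direction

aggregate = ssd_count * ssd_read_mb_s          # 5500 MB/s = 5.5 GB/s
print(f"array read rate: {aggregate / 1000:.1f} GB/s")
print(f"PCIe 2.0 lanes needed: {aggregate / pcie2_lane_mb_s:.1f}")   # ~11 lanes
print(f"PCIe 3.0 lanes needed: {aggregate / pcie3_lane_mb_s:.1f}")   # ~5.6 lanes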
 
Meh, too late now to make a 2nd 80GB X25-M drive a cost-effective option for me... Newer/faster 120GB drives are cheaper and less hassle/risk. Two 120GB M4s or Samsung 830s aren't any cheaper than the 256GB drives either. Looks like non-news for early adopters and recent adopters; those with drives <64GB and those that purchased 3rd-gen drives like a Vertex 2 around a year ago will probably see the biggest upside. I've seen good deals on older Vertex 2s, bordering on $1/GB.
 
With that being said, 10 SSDs doing 550MB/s reads = 5.5GB/s, which will eat a lot of controller/PCIe bandwidth. That is why Intel and a lot of the storage guys say that PCIe generations are not keeping up with the bandwidth needs of modern devices.

PCI-E 3.0 can handle that, but it would take an x16 slot to do it. However, it really isn't a big problem; PCI-E can handle hundreds of traditional spinning disks quite comfortably. Use an x16 slot for read cache.

The problem is that data out of the box, over the wire, is limited to 40GbE per stream for now, which is still just below your 5.5GB/s theoretical rate for 10 SSDs in RAID0. Broadcom is selling 100GbE chips, but the costs are ridiculous and the stability probably isn't there yet. It really doesn't matter if you can read at 5.5GB/s when you max out at roughly 5GB/s over InfiniBand or super-expensive 40GbE. Hell, 10GbE is still really expensive.
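Putting the wire speeds into the same units (a quick sketch; the ~80% usable-after-overhead factor is just an assumption for illustration):

# Convert Ethernet line rates (Gb/s) to rough usable GB/s and compare to the 5.5 GB/s array.
array_gb_s = 5.5
for line_rate_gbit in (10, 40, 100):
    usable_gb_s = line_rate_gbit / 8 * 0.8     # bits -> bytes, ~80% left after protocol overhead (assumption)
    verdict = "below" if usable_gb_s < array_gb_s else "above"
    print(f"{line_rate_gbit}GbE ~ {usable_gb_s:.1f} GB/s usable ({verdict} the array's read rate)")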

Also, your theory is flawed. Nobody is ever going to run 10 SSDs in RAID0, especially not in the storage space, unless it's part of a smart cache layer like ZFS's, where if one of the 10 fails it really doesn't matter: you lose some IOPS until you replace it, but otherwise the actual data is fine.
 
Seriously, does anyone use that much bandwidth in their home? Unless you're running Fibre Channel or iSCSI with bonded 10Gbps NICs, a 10-disk RAID0 is not gonna feel any different loading software than a single 6Gb/s SSD.

Honestly it's nuts, but it would be nice. And to other posters talking about hundreds of drives on CPU controllers: sure, but that is not Windows and it's not Intel ICH software, because those don't support hundreds of drives, so don't use that comparison.
 
And to other posters talking about hundreds of drives on CPU controllers: sure, but that is not Windows and it's not Intel ICH software, because those don't support hundreds of drives, so don't use that comparison.

It's the same bus, though. The SAN heads from EMC/NetApp etc., what do you think they're using for their motherboard interconnects? Here's a hint: it's PCI-E. Also, the low-end EMC Clariions run Windows, or at least they did as of the CX3; 480 disks was the max supported.
 
Seriously, does anyone use that much bandwidth in their home? Unless you're running Fibre Channel or iSCSI with bonded 10Gbps NICs, a 10-disk RAID0 is not gonna feel any different loading software than a single 6Gb/s SSD.

Honestly it's nuts, but it would be nice. And to other posters talking about hundreds of drives on CPU controllers: sure, but that is not Windows and it's not Intel ICH software, because those don't support hundreds of drives, so don't use that comparison.

Users who rebuild large databases and do lots of video editing can easily utilize those data transfer rates.
 
Users who rebuild large databases and do lots of video editing can easily utilize those data transfer rates.

Given the transport infrastructure is in place. And it's probably very rare; people like that are not waiting on 10 SATA ports from Intel. They already use Areca, LSI, Adaptec, etc.

It's just marketing hype. Just like when they were gonna add SAS support, which would have been fantastic for the 0.07% of people who actually want to shell out for 15K drives and enterprise SSDs for their enthusiast rig.
 
Given the transport infrastructure is in place.
I don't know why it wouldn't be. Having the ability to RAID0 two or more SSDs while keeping TRIM enabled would help a lot of end-users. It's hardly the majority of the market, but the technology is exciting and will be very useful to a select group that requires high speed and endurance over time, which, without TRIM, is not possible.

Many times, a single SSD is simply not enough, and very large HDD arrays in RAID0, while fast, are highly unstable, regardless of the controller being used.

And it's probably very rare; people like that are not waiting on 10 SATA ports from Intel. They already use Areca, LSI, Adaptec, etc.
Not necessarily; many individuals do not have hardware RAID setups with SSDs, but do use the Intel fakeRAID controller to put SSDs or HDDs in RAID0.

A hardware RAID controller is not necessary for only two SSDs in RAID0, but without TRIM, performance can decrease quickly with heavy use, e.g. database rebuilding and media creation.

The main reason for this is so that end-users can enable TRIM with two or more SSDs in RAID0. It's very cost-effective, easy to use, and has minimal setup hassle compared to a hardware RAID controller.
 
Many times, a single SSD is simply not enough, and very large HDD arrays in RAID0, while fast, are highly unstable, regardless of the controller being used.
You're saying spinners in RAID0 are unstable, yet you welcome doing the same thing with drives that are on a clock the second you start writing to them?

wtf?
 
You're saying spinners in RAID0 are unstable, yet you welcome doing the same thing with drives that are on a clock the second you start writing to them?

wtf?

I'm saying that having many HDDs in RAID0 is more unstable than having two SSDs in RAID0.
Either way, a backup should definitely be in place and in use.

The point was, however, that two SSDs with TRIM in RAID0 will be more stable, reliable, and faster than, say, ten HDDs in RAID0.

Two SSDs in RAID0 will be far more reliable than a large HDD array in RAID0, and will be much faster.
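A quick back-of-envelope that illustrates the point (a sketch; the per-drive annual survival rates are made-up placeholders, not real failure statistics):

# RAID0 has no redundancy, so the array survives only if every member drive survives.
# Per-drive survival rates here are illustrative placeholders, not measured data.
def raid0_survival(per_drive_survival: float, n_drives: int) -> float:
    return per_drive_survival ** n_drives

print(f"2 SSDs at 99% each:  {raid0_survival(0.99, 2):.1%} chance the array survives the year")
print(f"10 HDDs at 97% each: {raid0_survival(0.97, 10):.1%} chance the array survives the year")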
 
And 10 HDDs in RAID10 are going to read at the same speed as 10 disks in RAID0, so why would I run RAID0 at all?

RAID0 is just ridiculous; anyone who runs it for anything other than "lol, look at the numbers" is a fool.
 
And 10 HDDs in RAID10 are going to read at the same speed as 10 disks in RAID0, so why would I run RAID0 at all?

RAID0 is just ridiculous; anyone who runs it for anything other than "lol, look at the numbers" is a fool.

That's your opinion; RAID0 has many viable uses.
As I said above, as long as a backup of the files is kept, what is the harm in using RAID0?

Also, 10 disks in RAID0 will have twice the storage space of 10 disks in RAID10, just FYI, and for large databases, and even more so for video editing and creation, that much space may actually be necessary for some users.

If I were you, I wouldn't talk as though your RAID10 solution is one-size-fits-all for everyone.

Oh, and nice insult to those who use RAID0, real mature. :rolleyes:

Only an inexperienced user would put critical data on RAID0 without a proper backup.

This is getting off topic though.


Being able to utilize two or more SSDs in RAID0 with TRIM support is huge though, as many people need the speed and storage space, along with the consistent performance without decay.
Without TRIM, even with multiple SSDs, performance will decrease, and in some situations, that's not an option.
 
Not sure why you're all arguing over something that you'd all ultimately agree is a positive development, and a long overdue one... This thread took the off-ramp after about a dozen posts and then hitchhiked its way to nonsenseville.
 
Obviously you don't actually do any work in the industry beyond maybe low-end tech support.

Nobody runs a database, of any size, on RAID0. Further, if the database is large enough that it would actually require the space differential between a RAID0 and a RAID10, then said database is going to be broken down into multiple parts anyway and run clustered across multiple servers and multiple disk arrays.

On another point, re: size and RAID10 vs. RAID0: if you're into that type of size requirement using modern spinners, then you're into areas where backups take a really long time. Likely you're replicating in real time to a dedicated replication server, dropping a snapshot, and backing up off that, which means your delta between live changes and the most recent backup is often in the 12-24 hour range or more. That is significant, so there goes your daily backup idea for your 10-disk RAID0.

You have no idea what you're talking about is what I'm getting at.

RAID0 is for fools.
 
Not sure why you're all arguing over something that you'd all ultimately agree is a positive development, and a long overdue one... This thread took the off-ramp after about a dozen posts and then hitchhiked its way to nonsenseville.

The only good thing is that this likely means RAID10 can support TRIM too.
 
Nobody runs a database, of any size, on RAID0. Further, if the database is large enough that it would actually require the space differential between a RAID0 and a RAID10, then said database is going to be broken down into multiple parts anyway and run clustered across multiple servers and multiple disk arrays.

On another point, re: size and RAID10 vs. RAID0: if you're into that type of size requirement using modern spinners, then you're into areas where backups take a really long time. Likely you're replicating in real time to a dedicated replication server, dropping a snapshot, and backing up off that, which means your delta between live changes and the most recent backup is often in the 12-24 hour range or more. That is significant, so there goes your daily backup idea for your 10-disk RAID0.

It depends on the disks being used and the situation at hand.

Quit acting so arrogant, like you know everything, which you obviously do not.

There are many scenarios which you are leaving out.
Many times I have needed to install several smaller 80GB HDDs in RAID0 for the additional speed while still having a decent amount of overall storage space; with two larger HDDs in RAID0 there would be more space, but at a speed sacrifice.

With four 80GB HDDs in software RAID0 under mdadm, I was able to achieve write speeds over 180MB/s and read speeds over 250MB/s.
Backups were always in place and live, so any data which was finished would be stored without concern about data corruption or HDD failure.
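For reference, a minimal sketch of how a four-disk mdadm RAID0 like that can be assembled from a script; the device names, chunk size, and array name below are hypothetical placeholders, not the exact setup I used:

# Minimal sketch: build a 4-disk software RAID0 with mdadm and inspect it.
# Device names, chunk size, and array name are hypothetical placeholders.
import subprocess

disks = ["/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde"]

# Create the striped (RAID0) array across the four disks.
subprocess.run(
    ["mdadm", "--create", "/dev/md0", "--level=0",
     f"--raid-devices={len(disks)}", "--chunk=128"] + disks,
    check=True,
)

# Show the array details and the kernel's view of it.
subprocess.run(["mdadm", "--detail", "/dev/md0"], check=True)
print(open("/proc/mdstat").read())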

As for the database reconstruction, it happened in VMs on a RAID0 array, so that they could communicate with one another on the same system before backing them up to ensure full functionality.
I never once said the databases in question were run on a server while being tested and reconstructed on the RAID0 array.

Damn, what the hell is your problem?
RAID10 fanboi much? :p
Seriously, I never said RAID10 wasn't useful; it obviously has its uses and has some great advantages over RAID0, as does every other RAID level.

Obviously you don't actually do any work in the industry beyond maybe low-end tech support.
Where did I ever say that RAID0 was OK in the "industry"? By which I believe you mean an enterprise-grade situation.

You have no idea what you're talking about is what I'm getting at.
Speak for yourself. You keep talking about things I never said.

RAID0 is for fools.
Only those who can't handle it are afraid to use it. -hint-
 

You're seriously linking articles from May? You realize that the rumoured specs of this PCH have changed like a billion times since then, right?

The official specs from Intel list the X79 PCH as having 2 (two, not ten) SATA3, aka 6G, aka 6Gbps ports. Seeing as the SATA ports are part of the PCH, not the CPU, this should definitely not change along with a CPU upgrade.
 
You're seriously linking articles from May? You realize that the rumoured specs of this PCH have changed like a billion times since then, right?
Yep.

mwroobel is a little behind the times. :D
 
It's the same bus, though. The SAN heads from EMC/NetApp etc., what do you think they're using for their motherboard interconnects? Here's a hint: it's PCI-E. Also, the low-end EMC Clariions run Windows, or at least they did as of the CX3; 480 disks was the max supported.

I think the EMC Clariions were just using a Windows boot loader.
 
Yep.

mwroobel is a little behind the times. :D

Hey, I started the thread :) 4 of the SATA3 connections are coming from the PCH; many of the X79 boards also have an additional controller (or more than one, ASMedia or some other) which allows for up to 10.
 
Hey, I started the thread :) 4 of the SATA3 connections are coming from the PCH; many of the X79 boards also have an additional controller (or more than one, ASMedia or some other) which allows for up to 10.

You're still either wrong or confused by the terms. SATA3 is 6Gbps and there are only 2 of these in the X79 PCH; extra controllers do not add SATA3 ports to the PCH, and you cannot RAID drives running off the PCH with drives running off an add-in controller... Well, with software ofc, but not with the Intel Matrix RAID Storage or whatever they've decided to call it this time...
 
Been waiting for TRIM on RAID0 for a while now. It'll be interesting to see how it affects performance.

AFAIK it will only be made available for fools using RAID0. :p Other RAID levels will have to wait.
 
...extra controllers do not add SATA3 ports to the PCH, and you cannot RAID drives running off the PCH with drives running off an add-in controller... Well, with software ofc, but not with the Intel Matrix RAID Storage or whatever they've decided to call it this time...

I didn't say the extra controllers added drives to the PCH, just that they could add available SATA3 ports. I also didn't say you could RAID them with the on-chip SATA3 ports. Also, I looked back at the beginning of the thread and I mistyped. I wrote: ...Hippie - You will have to wait for the Ivy Bridge X79 which will give you 10, count em 10 6Gb/s SATA ports. ... and I meant to write: ...Hippie - You will have to wait for the Ivy Bridge X89 which will give you 10, count em 10 6Gb/s SATA ports.
 
Also, I looked back at the beginning of the thread and I mistyped. I wrote: ...Hippie - You will have to wait for the Ivy Bridge X79 which will give you 10, count em 10 6Gb/s SATA ports. ... and I meant to write: ...Hippie - You will have to wait for the Ivy Bridge X89 which will give you 10, count em 10 6Gb/s SATA ports.
My mind-reading hat must've been disabled that day. LOL!

I'm pretty sure the X79 boards were supposed to have more SATA6 ports but Intel ran into a bug?

Can someone shed a little more light on this?
 
My mind-reading hat must've been disabled that day. LOL!

I'm pretty sure the X79 boards were supposed to have more SATA6 ports but Intel ran into a bug?

Can someone shed a little more light on this?

I thought it was the on-chip PCIe 3.0 that they dropped because of the bugs.
 