Next-generation SATA is obsolete

The OCZ Colossus will put four Indilinx controllers in a single 3.5" enclosure. If the Colossus delivers more than 4x the performance of a Vertex, then at a queue depth of 4 its 4K random read is 64*4 = 256 MB/s, or in other words it is SATA II bottlenecked!!!

That's insane! There's little point in even looking at sequential read/write if the random read is already bottlenecked.
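A back-of-the-envelope sketch of that arithmetic (assumed figures, not measurements: ~64 MB/s 4K random read per Vertex-class controller, and SATA II's ~300 MB/s payload ceiling after 8b/10b line encoding):

```python
# Rough bottleneck check for the numbers above. All figures are
# assumptions from the post, not benchmarks.
VERTEX_4K_RANDOM_READ_MBPS = 64   # assumed per-controller 4K random read
CONTROLLERS = 4                   # four Indilinx controllers per Colossus
SATA2_CEILING_MBPS = 3000 / 10    # 3 Gbps line rate / 10 bits per byte (8b/10b)

aggregate = VERTEX_4K_RANDOM_READ_MBPS * CONTROLLERS
print(f"aggregate 4K random read: {aggregate} MB/s")
print(f"SATA II payload ceiling:  {SATA2_CEILING_MBPS:.0f} MB/s")
print(f"headroom remaining:       {SATA2_CEILING_MBPS - aggregate:.0f} MB/s")
```

At 256 MB/s of random reads alone, there is almost no headroom left on the link for anything else.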

Also, SATA 6 Gbps will be obsolete by the time it comes out. Considering a single Vertex already maxes out the sequential read, a Colossus would need more than 24 Gbps.

And this is extrapolating from existing technology; I think there is plenty of room for improvement. A single Barefoot controller only uses 4 channels, and a Vertex has 16 flash chips! This means that if a future-gen Barefoot has 16 channels, that's a 4x improvement. (Intel's controller is already at 10 channels. Note also that Intel fits 20 chips in its 2.5" enclosures, so a 20-channel controller would make sense in the future.)

So next-gen SATA would have to be at least 4*24 Gbps = 96 Gbps if it wants to exceed the capabilities of SSDs. (That assumes you toss four of these controllers into a 3.5" enclosure. You could certainly make custom chips with more than 64 channels, but it makes more sense to use the same controller for 2.5" and 3.5" drives.)

This far exceeds the 10 Gbps limit of the DMI link between the north and south bridge on X58, which means the next-next-gen SATA controller needs to move to the north bridge, or even directly onto the CPU like the memory controller did. Sweet, sweet latency reduction!
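Running that extrapolation end to end (a sketch using the post's own assumed figures: ~24 Gbps for today's four-controller enclosure, a 4x gain from going 4-channel to 16-channel, SATA 6 Gbps, and a 10 Gbps DMI link):

```python
# Extrapolation from the post, using its assumed numbers throughout.
CURRENT_ENCLOSURE_GBPS = 24   # assumed: four 4-channel controllers today
CHANNEL_SCALING = 16 / 4      # 4-channel Barefoot -> hypothetical 16-channel
NEXT_GEN_SATA_GBPS = 6        # SATA 6 Gbps
DMI_GBPS = 10                 # X58 DMI link between north and south bridge

future_enclosure = CURRENT_ENCLOSURE_GBPS * CHANNEL_SCALING
print(f'future 3.5" enclosure needs ~{future_enclosure:.0f} Gbps')
print(f"SATA 6 Gbps shortfall: {future_enclosure / NEXT_GEN_SATA_GBPS:.0f}x")
print(f"DMI shortfall:         {future_enclosure / DMI_GBPS:.1f}x")
```

By this estimate the interface would need roughly 16x what SATA 6 Gbps offers, and nearly 10x the DMI link, which is why the controller would have to move closer to the CPU.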

And I am only talking about ONFI 1.0. Next-gen ONFI NAND flash is even faster per channel!
 
Pretty amazing how quickly this all happened. Storage has been the bottleneck for decades. Now, within a year's time, on the consumer side, we've seen real, solid, affordable solutions completely blow mechanical disks away.
But yeah, SATA 3 ain't gonna cut it. Obviously as SSDs mature, the OSs, controllers, buses, processors and even memory subsystems will have to be completely re-architected.
Posted via [H] Mobile Device
 
Well, this has all come about because the whole concept of "good enough" computing is catching on.

Even a 32 or 64 GB OS partition on an SSD is good enough; then we just use 1 and 2 TB hard drives for the bulk storage.
 

Yeah, for decades the hard drive was slow... like you said, within a year we're so far ahead with storage that the rest of the hardware has to play catch-up.
 
SATA II is the "good enough" crap. Random I/O was going to be bottlenecked from day one. I don't know why people were foolish enough to believe otherwise, much less convince themselves of it. We've known this since dual-port SATA drives hit the market; we knew it even before then.

As far as I'm concerned, you're all still playing catch-up to ME. I'll take this opportunity to remind you that I've been involved with SSDs for over a decade. The serious performance SSDs have never been offered in IDE, only FC and SCSI, because the inherent limits are recognized. Industrial SBCs and the like got fed IDE SSDs (Disk-On-Chip) because reliability and temperature range took priority over performance. It wasn't until the introduction of the HGST EnduraStar line that you could get a mechanical HDD capable of anything approaching industrial temperature ranges.

The fact is that the computer industry as a whole is engaged in a madcap race to the absolute bottom. If you work in IT, and know your stuff, you recognized this about 5 years ago, possibly 3. Everyone is heading for the bottom as fast as they can. They want to sell you trash at premium prices. NetBackup and Solaris are both stellar examples.
NetBackup support in 2006 was front-lined in America by people who had experience with the product and were training to move up to Level II support. However, patches started to frequently include major feature and function changes, at the insistence of Symantec. By 2008, NetBackup support was run entirely out of remote call centers by people who had never touched the product and often had accents so thick you couldn't possibly understand them anyway. The software itself had become so unpredictable and unreliable from patch to patch that most administrators refused to perform upgrades for fear of losing their backups: no fewer than three patches in the past two years have had exactly that problem, where installing the patch could, and sometimes did, corrupt or destroy backups made before it was applied.
Solaris, I just don't know where to begin. How do you begin when, since late 2007, support has devolved to the point of some guy in a foreign country responding to your initial problem description with "reboot, restore from tape, reinstall"? I wish I were joking, but that's the kind of support you get for Solaris 10 on UltraSPARC or SPARC64 hardware. There was a time not so long ago when Solaris was developed by Sun in house, bugs were thoroughly documented, workarounds were found, and patches went in the order of Interim Relief, T-Patch, Release Patch.

Now? You might get Interim, but T-Patch gets skipped completely, so the final release is untested, and it frequently requires a total OS upgrade. More often you get told to do an OS upgrade, and when it doesn't work, to reinstall. And if you have Solaris 10 6/06 installed? Gods help you, 'cause Sun won't: that release is so buggy and so bad, they refuse to support it unless you upgrade it. And they won't support you if the upgrade breaks.

Don't even get me started on the hardware. sun4v (Niagara) is literally a bunch of partial UltraSPARC-IIi cores bolted together, nothing more. The IIi was so awful it ended up being one of the shortest-lived processors Sun ever shipped. And the list just goes on for miles.

Welcome to "modern computing." You can do it right all you like, but the manufacturers will still find some way to sabotage your efforts in an attempt to drive it downmarket or increase their profits.

And Ockie, why the hell am I not on the list there? I'm pretty sure I'm still the only one around here with 4Gbit FC. :p
 
I was just thinking the same thing... that SATA is on its way to being obsolete, and so is the super-slow AHCI interface that will never get beyond 600 MB/s. But it's cheap, just like VGA, and it took like 10 years to get rid of that.

We may need SATA for now, but only until its replacement becomes more widely available at reasonable prices. I'm guessing it should start being considered obsolete by 2020 and begin to disappear from motherboards to make room for more NVMe interface and SSD connections. I just hope SATA doesn't turn into the next VGA and linger for 10 years taking up real estate on the mobo. There's no point clinging to obsolete, outdated technology capped at speeds an AHCI interface will never push beyond 600 MB/s. SATA has been a huge bottleneck keeping HDDs and storage slow for years, so I look forward to the new NVMe interface.
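It's not just raw bandwidth, either; the command queue model is where NVMe pulls away from AHCI. Per the respective specifications, AHCI exposes a single command queue of 32 commands, while NVMe allows up to 65,535 I/O queues with up to 65,536 commands each. A quick sketch of the difference:

```python
# Queue model comparison, AHCI vs NVMe. These are spec maximums,
# not what any particular drive actually implements.
AHCI_QUEUES, AHCI_DEPTH = 1, 32
NVME_QUEUES, NVME_DEPTH = 65_535, 65_536

ahci_outstanding = AHCI_QUEUES * AHCI_DEPTH
nvme_outstanding = NVME_QUEUES * NVME_DEPTH
print(f"AHCI max outstanding commands: {ahci_outstanding}")
print(f"NVMe max outstanding commands: {nvme_outstanding:,}")
```

That enormous parallelism, plus per-core queues that avoid lock contention, is what lets NVMe drives actually exploit the random I/O that flash is capable of.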

[Image: SSD interface comparison chart]

[Images: hands-on NVMe testing charts with the 1.6TB Intel P3700 SSD]
 