Latency difference between SATA and 10GbE

Just a simple question that I hope one of our super tech guys with 10gbe nics can answer.

Which scenario has less latency and results in faster performance:

A) Requesting data from a local SSD through an Intel 6Gbps SATA port

B) Requesting data from a remote ramdrive over 10GbE

I know that for max throughput the 10GbE is going to win, since it can do about 1,250MB/s theoretical, but what about latency? What is the overhead of SATA vs 10GbE? Will the ramdisk over 10GbE be slower or faster for small files than requesting from a local SSD?
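
To frame what I mean by "faster for small files", here is the back-of-envelope I have been doing. Both latency numbers below are pure guesses on my part, just to show the math:

Code:
# assumed per-request latencies for a single 4KiB read -- guesses, not measurements
# local SSD over SATA: flash read + controller + AHCI/SATA command overhead
# remote ramdisk over 10GbE: NIC + network stack round trip on both ends + DRAM access
assumed_latency_us = {
    "local SSD via SATA": 80,     # assumed
    "remote RAM via 10GbE": 40,   # assumed
}

IO_SIZE = 4096  # bytes, the 4K case

for name, lat_us in assumed_latency_us.items():
    iops = 1_000_000 / lat_us              # queue depth 1: one request at a time
    mb_s = iops * IO_SIZE / 1_000_000
    print(f"{name:22s} ~{lat_us} us/IO -> ~{iops:,.0f} IOPS -> ~{mb_s:.1f} MB/s at 4K QD1")

If those guesses are anywhere near reality, the remote ramdisk wins at QD1 even though SATA is the "native" storage interface, which is exactly what I am hoping someone with real hardware can confirm or shoot down.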
 
A lot depends on what kind of data you are moving. Is it a single sequential file moving over the wire (in either scenario), or multiple parallel, interleaved transfers that can saturate the pipe? Is it a single SSD or an array you are pulling from locally? In raw throughput, using absolute theoretical maximums, 10GbE (and there are a lot of options here; I will use 10GBASE-T as my transport) will give you 1.25GB/s, while a single SATA 6Gbps port peaks at 600MB/s before protocol overhead. As to latency, it depends on your device choices, the distance between the network devices, and how they are interconnected. In general, you will find that if you RAID or stripe your local devices, you will get far higher real-world speeds and lower latency than 10GbE. It is not really an apples-to-apples question because there are so many variables: what kind of files, what filesystems, what OSes, etc.?
 

let's say 4k files, and the devices are connected directly with a crossover cable, right next to each other.
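
one way to put an actual number on the network leg would be a quick round-trip test over the crossover link. this is only a sketch: the port is arbitrary, and it times plain TCP echoes of a 4KiB payload rather than a real storage protocol, so it measures the transport overhead, not a finished ramdisk:

Code:
import socket, sys, time

PORT = 5201       # arbitrary port for this sketch
BLOCK = 4096      # 4KiB payload, matching the 4k qd1 case
ROUNDS = 10000

def recv_exact(sock, n):
    # read exactly n bytes or raise if the peer goes away
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed")
        buf += chunk
    return buf

if sys.argv[1] == "server":                      # run on the box whose ram backs the "disk"
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("", PORT))
    srv.listen(1)
    conn, _ = srv.accept()
    conn.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    try:
        while True:
            conn.sendall(recv_exact(conn, BLOCK))   # echo each 4KiB request straight back
    except ConnectionError:
        pass
else:                                            # client: script.py client <server ip>
    cli = socket.create_connection((sys.argv[2], PORT))
    cli.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    payload = b"\0" * BLOCK
    start = time.perf_counter()
    for _ in range(ROUNDS):
        cli.sendall(payload)
        recv_exact(cli, BLOCK)
    elapsed = time.perf_counter() - start
    print(f"avg round trip: {elapsed / ROUNDS * 1e6:.1f} us, "
          f"~{ROUNDS * BLOCK / elapsed / 1e6:.1f} MB/s at 4K QD1")

run the server half on the box whose ram would back the disk and the client half on the other box; the average round trip is roughly the floor you could expect per 4k request before any filesystem or block protocol is layered on top.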

here is why I am asking. if you look at http://www.acard.com/english/fb01-product.jsp?idno_no=270&prod_no=ANS-9010&type1_idno=5&ino=28, that is a hardware ramdisk that uses sata 1.5gb/s to connect to a pc.

it does pretty well too. you can see some numbers here: http://www.xtremesystems.org/forums/showthread.php?229759-Acard-9010-single-drive-on-ich10r-benches

what I'm really interested in is the 4k qd1 reads, where it seems to reach about 70MB/s, more than double what basically any ssd on the market can do.
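
just to translate that figure into latency (the 70MB/s number is from the benchmark thread above; the arithmetic is the only thing I'm adding):

Code:
# what 70MB/s at 4k qd1 implies about per-request service time
throughput = 70_000_000   # bytes/s, from the xtremesystems benchmarks linked above
io_size = 4096
iops = throughput / io_size
print(f"~{iops:,.0f} IOPS -> ~{1e6 / iops:.0f} us per 4k read")   # about 17,000 IOPS, ~59 us each

so for a remote ramdisk to match it, the whole network round trip plus the remote memory read has to come in around 60us or less per request.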

so my question is whether I could make that exact same device, but using 10gbe instead of sata. why? because I have no clue how they made that box export itself as a sata drive, and I can't find any information on it at all. with 10gbe I'd just put a card in each machine, cross them over, and potentially run my OS off of the remote machine's ram, but I don't know what kind of performance that actually gets me.

alternatively, if someone knows how to make a pc export a drive over sata, so I could connect that pc to mine through a sata connection, well, I'd really be set. I'm basically trying to figure out how to replicate what the acard does, but I haven't a clue what protocol to try.
 
Again, you are talking about completely different things. This is volatile memory (even though it seemingly has a battery) vs the non-volatile storage of a normal HDD or SSD. It seems a Rube Goldberg way of doing things; you will be much happier just putting the RAM you would stick in that thing into your main box and getting an SSD or two. Just because something performs well in synthetic benchmarks doesn't mean it will make much of a real-world difference, especially as an OS drive. As to making your own: this is an embedded device, so if you are an EE I am sure you could develop something. Otherwise, just max out your RAM and get an SSD or two.
 

my main box is already maxed out ramwise. ssds galore as well.

I can boot into pure ram (things are faster, take that, $900 (well, $900 at the time) raid array), but it leaves me without enough memory for my daily tasks. with ddr3 prices as they are, I figure I could get something small whose sole purpose is delivering me ram in the form of a mountable drive, and use it to boot and run daily tasks. I just can't figure out the best way to get that ram over to my main box, and that's how I came up with the 10gbe idea.

it's an embedded device, sure, but I don't see how that's really relevant to the method they used to present a drive over sata. if something embedded can do it, I can do it too on a normal pc. I would need some piece of hardware to act as the controller, and then software to handle it, but it would still be doable, and it would let me create a 64gb hardware ram device for booting purposes.

I can dream, but it doesn't look like acard is making any new devices soon, and I still cannot find anything relevant about creating my own sata drive that uses ram as its backing store. I guess one day I'll do a full system upgrade and just get a board that can handle a ridiculous amount of ram.
 
Acard makes pretty crappy stuff. If you want RAM-backed disks, take a gander over at storagesearch and you can find links to just about everybody involved in this particular product category. Keep in mind that quality in this category doesn't come cheap, with high-grade hardware such as the RamSan doing 600,000 IOPS, 4.5GB/s of random access, etc.
 
If you are considering 10Gbps, you might as well look at InfiniBand:

http://en.wikipedia.org/wiki/InfiniBand

SDR: 10Gbps
DDR: 20Gbps
QDR: 40Gbps

The other major benefit of IB is that it provides ultra-low latency: a few microseconds vs. tens of microseconds for 10Gbps Ethernet.

See Page 100 here:
www.crc.nd.edu/~rich/SC09/docs/tut156/S07-dk-basic.pdf

It shows DDR IB in the 2-9 microsecond range for various small message sizes, whereas 10Gbps Ethernet is 25 to 35 microseconds.
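
Very loosely, if you treat those latencies as the per-request floor for a 4K read (real storage protocols add stack overhead on top, so this is only an upper-bound sketch):

Code:
# upper-bound 4K QD1 throughput if each request cost only the link latency
# latency ranges taken from the tutorial slides cited above; protocol overhead ignored
IO_SIZE = 4096
link_latency_us = {
    "DDR InfiniBand": (2, 9),
    "10GbE": (25, 35),
}

for link, (best_us, worst_us) in link_latency_us.items():
    best = IO_SIZE / (best_us * 1e-6) / 1e6    # MB/s at the optimistic end
    worst = IO_SIZE / (worst_us * 1e-6) / 1e6  # MB/s at the pessimistic end
    print(f"{link:15s} {best_us}-{worst_us} us/request -> roughly {worst:.0f}-{best:.0f} MB/s ceiling at 4K QD1")

Even the pessimistic 10GbE case clears the ~70MB/s the Acard manages at 4K QD1, and IB has far more headroom. In practice the filesystem and block/NFS protocol eat into both, which is why the low-latency transport matters.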

With QDR IB it's completely feasible to achieve upwards of 30Gbps of throughput between an IB client and an IB server. However, you're only going to realize that if you have some kind of storage subsystem (or cache) that also has that kind of throughput capability.

SDR and DDR IB adapters are available on eBay, often at prices that are significantly *less* than equivalent 10Gbps adapters. QDR is about the same cost. With a $40-50 IB cable, you can connect two machines together without having to deal with the cost of an IB switch...if you put a dual-port IB adapter in your server, you could connect two clients to it without needing a switch.

The biggest challenge I'm seeing with IB is that the driver and software support for it is nowhere near as ubiquitous as Ethernet's.

For example, I've gotten my OI151a4 server serving up ZFS over NFSoIB to both FreeBSD and CentOS clients... however, as soon as I issue an NFSoRDMA mount command to OI151a4, the OI kernel panics. Ouch!
 

I end up on storagesearch a lot, and honestly it's the most disorganized site I have to deal with.

ramsan is about $4,000, but if I copied their methods, I could do it for less.
 
InfiniBand

as a transport method, this looks good. the receiving end of this would be linux, and for the host I am familiar enough with freebsd or linux to handle it.

however, can infiniband share out a ramdisk, or can it only export physical drives? like, can I just make a /mnt/tmpfs that is nothing but 16gb of ram and then say, ok, now export that over infiniband? if so, that would cover my needs here.

according to http://www.c0t0d0s0.org/archives/4906-Separated-ZIL-on-ramdisk..html it seems like that is what he's done, so things are looking up for infiniband.

http://davidhunt.ie/wp/?p=2324 seems to export a ramdisk over IB. awesome.

http://www.servethehome.com/set-infiniband-srp-target-ubuntu-1204/ confirmed here as well. looks like they get some pretty decent speeds too.

so that about sums it up I think. IB is the answer for remote pc ramdisk solutions, without having to get too involved in creating your own sata controller and other nonsense.
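
once the srp export is mounted on the client, here's the kind of quick check I'd run against both the local ssd and the remote ramdisk to settle the original latency question. the path is just a placeholder, O_DIRECT keeps the page cache from faking the local numbers, and it's linux-only:

Code:
import os, mmap, random, sys, time

DEV = sys.argv[1]        # placeholder: a file or block device on whichever storage is being tested
IO_SIZE = 4096
ROUNDS = 5000

fd = os.open(DEV, os.O_RDONLY | os.O_DIRECT)   # bypass the page cache (Linux only)
size = os.lseek(fd, 0, os.SEEK_END)
buf = mmap.mmap(-1, IO_SIZE)                   # page-aligned buffer, required by O_DIRECT

start = time.perf_counter()
for _ in range(ROUNDS):
    offset = random.randrange(0, size // IO_SIZE) * IO_SIZE   # 4k-aligned random offset
    os.preadv(fd, [buf], offset)               # one synchronous read at a time = QD1
elapsed = time.perf_counter() - start
os.close(fd)

lat_us = elapsed / ROUNDS * 1e6
print(f"avg 4K QD1 read latency: {lat_us:.1f} us, "
      f"~{ROUNDS * IO_SIZE / elapsed / 1e6:.1f} MB/s")

whichever device reports the lower average latency wins the 4k qd1 case; everything else in this thread is really just about how big that gap turns out to be.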
 
I'm not sure what your initial question has to do with anything; it seems no easier to me to connect RAM to Ethernet than to connect it to SATA, and at least SATA is a storage interface.

As for the Acard or RamSan implementations, those are totally custom: you have to start from scratch designing the processor/controller, the motherboard, and the firmware. I doubt one person could do that alone, even if he were very good at all the disciplines involved.
 

well, it actually is easier, because you can use nfs over ethernet very easily. there's not really any good way of using sata to link two computers together.
 