Cerulean
[H]F Junkie
- Joined
- Jul 27, 2006
- Messages
- 9,476
Why is it called 'iSCSI' if it uses RJ45 cables? Shouldn't it be using SCSI connectors? Please help me to understand.
Thanks, I appreciate it and have skimmed it, but I am one of those individuals who learn better via questions and specialized responses. Wikipedia isn't very useful for me most of the time.
Hmm, interesting... *does some Googling* SCSI isn't a connector. It's a protocol, and with iSCSI the SCSI protocol is being sent over IP... therefore iSCSI.
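To see the layering rather than just read about it, here's a rough Python sketch. The length-prefix framing is made up purely for illustration (RFC 3720 defines the real iSCSI header, which is a 48-byte Basic Header Segment); only the CDB layout is the actual SCSI READ(10) format.

```python
# Illustrative only -- NOT the real iSCSI wire format. It just shows the
# layering: a plain SCSI command (CDB) gets wrapped and sent over TCP/IP.
import struct

def scsi_read10_cdb(lba: int, num_blocks: int) -> bytes:
    """Build a standard 10-byte SCSI READ(10) command descriptor block."""
    return struct.pack(">BBIBHB",
                       0x28,        # READ(10) opcode
                       0,           # flags
                       lba,         # 32-bit logical block address
                       0,           # group number
                       num_blocks,  # transfer length in blocks
                       0)           # control byte

def wrap_for_tcp(cdb: bytes) -> bytes:
    """Toy framing: length-prefixed CDB, standing in for an iSCSI PDU."""
    return struct.pack(">I", len(cdb)) + cdb

pdu = wrap_for_tcp(scsi_read10_cdb(lba=2048, num_blocks=8))
print(pdu.hex())  # these bytes are what would ride inside a TCP stream
```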
What I haven't quite figured out is the usefulness of iSCSI. (I get that iSCSI is a cheaper way of doing centralized storage than Fibre Channel, but...) I get that having all your storage on one box centralizes it, but doesn't it also add some susceptibility to failure?
I guess normally you'd have redundant iSCSI targets... and how exactly does it make backups easier? Don't most iSCSI setups go screwy when you have two initiators pointed at one target? A lot of products don't even support that!
Are you supposed to run your backups on your actual iSCSI box, then?
EDIT: I also get that technologies like vMotion would require centralized storage... other than that? What?
One thing I like about iSCSI over, say, NFS or SMB is that it makes the OS treat it like a raw hard drive. No need to screw around with remote file system permissions, joining server and client to a domain, and stuff like that. What really kills me, though, is the 2TB limit. Not sure whose bright idea that was, considering it's designed for SANs, which are more than likely to have tens of TB of space.
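To make "treats it like a raw hard drive" concrete, a minimal sketch, assuming the LUN has shown up as /dev/sdb on a Linux initiator (your device name will differ, and reading it needs root):

```python
# Once the initiator logs in, the LUN is just an ordinary block device.
# No file shares, exports, or domain permissions involved.
with open("/dev/sdb", "rb") as disk:
    disk.seek(512 * 100)        # jump straight to sector 100
    sector = disk.read(512)     # read one raw 512-byte sector
    print(sector[:16].hex())
```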
One side note about iSCSI is that it has a little more CPU overhead. With today's processors, this really isn't an issue.
We've been running iSCSI for a few years now. We have a P4000 SAN from HP (previously LeftHand Networks) and we just added a DataCore SAN Melody 3 setup with the new 6G SAS drives. Both are used exclusively for storage repositories in Citrix XenServer 5.6 FP1. Performance is great.
Highly recommended!
Do any of you use ZFS on a SAN?
Usefulness of iSCSI:
- Performance. Kicks a$$ over both NFS and Samba over Ethernet.
- Block-mode operation. Looks like a native SCSI disk to the host OS, so it is file-system agnostic.
- Block-mode operation: since it is block-mode, you can do non-filesystem disk operations on it (e.g. a database on a raw LUN; see the sketch after this list).
- Simple protocol. Simple enough to be implemented in BIOS. Supports boot-mode operations.
- Reliability. Includes hooks to build duplication and redundancy for fault-tolerant operation. These things are not a native part of iSCSI, but iSCSI makes doing them simple.
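To illustrate the block-mode point above, a hedged sketch of a toy record store that addresses raw offsets directly, the way a database can use a raw LUN with no filesystem on it. It writes to an ordinary file named toy.lun (a made-up stand-in) so it's safe to run:

```python
# Toy "non-filesystem disk operation": fixed-size records at raw offsets.
import struct

RECORD_FMT = ">Q64s"                       # 8-byte key + 64-byte payload
RECORD_SIZE = struct.calcsize(RECORD_FMT)  # 72 bytes per record

def write_record(dev, slot: int, key: int, payload: bytes) -> None:
    dev.seek(slot * RECORD_SIZE)
    dev.write(struct.pack(RECORD_FMT, key, payload.ljust(64, b"\x00")))

def read_record(dev, slot: int):
    dev.seek(slot * RECORD_SIZE)
    key, payload = struct.unpack(RECORD_FMT, dev.read(RECORD_SIZE))
    return key, payload.rstrip(b"\x00")

with open("toy.lun", "w+b") as dev:       # stand-in for a raw iSCSI LUN
    write_record(dev, slot=3, key=42, payload=b"block-mode, no filesystem")
    print(read_record(dev, 3))            # -> (42, b'block-mode, no filesystem')
```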
Having a simple, reliable pool of disks that are can be supported easily across a wide variety of OSs is invaluable In a data center environment. Not suggesting that file-system level sharing (NFS, Samba, etc) are not useful too - but iSCSI has a lot of advantages for large disk arrays. It is especially useful in a virtualized data center where tools like VMWare vMotion are used to migrate "servers" across your hardware to balance load & reliability.
It is probably of limited use outside of the data center.
Offloading + Jumbo Frames + Multipathing = A solution just about as good as 4Gb fiber without all the extra cost.
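For what the jumbo-frames part of that equation buys you, a rough back-of-the-envelope (idealized: TCP/IP and Ethernet framing only, ignoring iSCSI PDU headers):

```python
# Fewer headers per byte of payload at larger MTUs.
ETH_FRAMING = 38   # preamble + Ethernet header + FCS + inter-frame gap
IP_TCP = 40        # 20-byte IP header + 20-byte TCP header

for mtu in (1500, 9000):
    payload = mtu - IP_TCP
    efficiency = payload / (mtu + ETH_FRAMING)
    print(f"MTU {mtu}: ~{efficiency:.1%} of link bandwidth is payload")
# -> ~94.9% at 1500 vs ~99.1% at 9000, plus far fewer frames to process
```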
And more complexity...and you still don't get the speed of 4Gb FC. FC switches and HBAs have gotten a LOT cheaper over the last 18 months. Many of our iSCSI customers like the idea of physical iSCSI switches. Compare that to the cost of FC switches and HBAs and it's close enough that many just do FC for the additional performance. Assuming their array does FC, of course.
Not true at all, I have multiple SANs with iSCSI targets of 2+ TB. All you have to do is initialize the LUN as GPT and you are good to go. The only time I have had issues with 2TB drives is in Openfiler, but that was a different problem altogether.
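A quick back-of-the-envelope on where that 2TB ceiling comes from (it's the MBR partition table's 32-bit sector addressing, not the iSCSI protocol):

```python
# MBR vs GPT addressable capacity at the classic 512-byte sector size.
sector = 512
mbr_max = (2**32) * sector        # 32-bit LBA -> 2 TiB ceiling
gpt_max = (2**64) * sector        # GPT uses 64-bit LBAs
print(f"MBR max: {mbr_max / 2**40:.0f} TiB")   # -> 2 TiB
print(f"GPT max: {gpt_max / 2**70:.0f} ZiB")   # -> 8 ZiB, effectively unlimited
```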
Oh really? So this is an Openfiler limitation? That's what I happen to be using. It will let me create a LUN bigger than 2TB, but the client only sees 2TB. I will have to do some research on that. I just assumed it was an iSCSI limit, as that's what I read everywhere.
Couple things.
1. Performance over NFS? Eh... Which NFS stack?
2. It is block mode, but block mode isn't always best. I speak mainly to VMware environments, which is what I do.
One good thing about doing VMware w/ NFS is that the storage array understands the underlying filesystem and can do things outside the client server. For example, with VMware and some arrays that export via NFS, I can say "Snapshot that VM!" and the vSphere host will tell the array to do it, and the array does it at a file level. I can say "Compress that VM!" or "Compress that datastore!" and the NFS array will do that, if it supports it. Takes the load off the vSphere hosts.
Also, since the array understands the underlying filesystem and data you can easily do NDMP backups of files stored on the array. Things you can't do with iSCSI.
I have no major issues with iSCSI but we rarely EVER do iSCSI implementations of VMware anymore...very rarely.
This is, of course, a rather parochial response. You start with the presumption that a Linux-like filesystem - which is all NFS can present - is best for your application.
Multipathing with iSCSI can be done with two identical links to your SAN. The standard at the moment is 1Gb copper. Two of these links will give you similar performance to a single 4Gb fiber link, though multipathed 10Gb links would be preferred. 10GbE is currently the fastest thing we have, and the stories of maxing out 10Gb links on storage servers holding database applications are true. We have SANs with 2x 4Gb fiber connections that get maxed out due to the I/O needs of the guests.
EDIT: Unless you're talking IOPS. I'm talking throughput. For IOPS, that's a different discussion, and yes, you can push a lot of IOPS over a couple of 1Gb links, and normally that's the question, as we rarely see someone maxing out FC throughput from a host.
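Rough numbers behind that throughput-vs-IOPS distinction (idealized: ignores protocol overhead, latency, and queue depth effects):

```python
# How many small random I/Os fit in one 1Gb link, vs raw throughput.
link_gbps = 1.0
link_bytes_per_s = link_gbps * 1e9 / 8           # ~125 MB/s per 1Gb link

io_size = 8 * 1024                               # a typical small random I/O
iops_per_link = link_bytes_per_s / io_size
print(f"~{iops_per_link:,.0f} 8KB IOPS fit in one 1Gb link")   # ~15,000

# Throughput is the different story: two 1Gb paths top out around 250 MB/s,
# while 4Gb FC is roughly 400 MB/s -- the gap being argued about above.
print(f"2 x 1Gb ~ {2 * link_bytes_per_s / 1e6:.0f} MB/s vs 4Gb FC ~ 400 MB/s")
```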
Quick question about multipathing: what's the difference between dedicating two active/active NICs to the iSCSI VMkernel and setting up multipathing?
I'm assuming that when you set up two NICs for iSCSI traffic, the traffic gets hashed and is pushed down one link to the SAN. So essentially there isn't a load-balancing mechanism? Is that accurate?
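Roughly, yes, for a single session. A simplified Python model of the two behaviors (the link names and the hash are made up for illustration; real teaming uses a deterministic hash of the addresses):

```python
# NIC teaming: a hash of the flow pins ALL of its traffic to one link, so a
# single initiator->target session never exceeds one NIC's bandwidth.
# MPIO round-robin: successive I/Os rotate across paths, so one session can
# use both links.
from itertools import cycle

links = ["vmnic0", "vmnic1"]

def teaming_pick(src_ip: str, dst_ip: str) -> str:
    """Hash-based teaming: same flow -> same link (within one run)."""
    return links[hash((src_ip, dst_ip)) % len(links)]

rr = cycle(links)
def mpio_pick() -> str:
    """Round-robin multipathing: alternate paths per I/O."""
    return next(rr)

flow = ("10.0.0.5", "10.0.0.50")
print([teaming_pick(*flow) for _ in range(4)])  # same link, four times
print([mpio_pick() for _ in range(4)])          # alternates between links
```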