iSCSI questions

Why is it called 'iSCSI' if it uses RJ45 cables? Shouldn't it be using SCSI connectors? Please help me to understand. :(
 
SCSI isn't a connector, it's a protocol. With iSCSI, the SCSI protocol is sent over IP...therefore iSCSI (Internet SCSI).
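
To make the "it's just IP" point concrete, here is a minimal sketch (the portal address is a made-up example, not from this thread) that simply checks whether something is answering on TCP 3260, the well-known iSCSI port, before you bother configuring an initiator:

import socket

# Hypothetical target portal for illustration; replace with your SAN's address.
PORTAL = ("192.168.1.50", 3260)  # 3260 is the IANA-registered iSCSI port

def portal_reachable(addr, timeout=3.0):
    """Return True if a plain TCP connection to the iSCSI portal succeeds."""
    try:
        with socket.create_connection(addr, timeout=timeout):
            return True
    except OSError:
        return False

print("iSCSI portal reachable:", portal_reachable(PORTAL))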
 
What I haven't quite figured out is the usefulness of iSCSI. (I get that iSCSI is a cheaper way of doing centralized storage, like Fibre Channel, but...) I get that having all your storage on one box centralizes it, but doesn't it also add some susceptibility to failure?

I guess normally you'd have redundant iSCSI targets... and how exactly does it make backups easier? Don't most iSCSI setups go screwy when you have two initiators pointed at one target? A lot of products don't even support that!

Are you supposed to run your backups on the iSCSI box itself then?

EDIT: I also get that technologies like vMotion require centralized storage... other than that, what?
 
What I haven't quite figured out is the usefulness of iSCSI. (I get that iSCSI is a cheaper way of doing centralized storage, like Fibre Channel, but...) I get that having all your storage on one box centralizes it, but doesn't it also add some susceptibility to failure?

I guess normally you'd have redundant iSCSI targets... and how exactly does it make backups easier? Don't most iSCSI setups go screwy when you have two initiators pointed at one target? A lot of products don't even support that!

Are you supposed to run your backups on the iSCSI box itself then?

EDIT: I also get that technologies like vMotion require centralized storage... other than that, what?

iSCSI is just one type of network storage used for centralized storage. This is useful for any clustered scenario, whether it is a VMware vMotion setup, a Hyper-V cluster, a SQL database using Microsoft Clustering Services, etc.

As far as having multiple hosts connecting to the same iSCSI target, that really isn't an issue as long as your device supports persistent reservations. FreeNAS and Openfiler, though they support this, do sometimes have problems with multiple hosts per target; enterprise-class iSCSI appliances usually have no issues. I have multiple HP P2000, HP MSA21xx, and Dell MD3x00i appliances set up like this with no problems.

For redundancy, if you build your storage fabric correctly, your only single point of failure is the backplane of the storage appliance itself. You have dual controllers, a RAID level with some sort of redundancy, and, if you can afford it, redundant SAN switches. Usually when I build SANs I run RAID 5 with one hot spare, a dual-controller array, and two SAN switches.
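
As a rough back-of-the-envelope illustration of what that layout costs you in capacity (the shelf size and drive size below are hypothetical examples, not numbers from this thread):

# Usable capacity of a RAID 5 set with dedicated hot spares (rough sketch).
def raid5_usable_tb(total_drives, drive_tb, hot_spares=1):
    """RAID 5 loses one drive to parity; hot spares sit idle until a failure."""
    data_drives = total_drives - hot_spares - 1  # minus spares, minus parity
    if data_drives < 2:
        raise ValueError("RAID 5 needs at least 3 active drives")
    return data_drives * drive_tb

# e.g. a 12-bay shelf of 2 TB drives with one hot spare:
print(raid5_usable_tb(12, 2.0))  # -> 20.0 TB usable out of 24 TB raw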

As for backups, you can snapshot directly on the array, but that is really only good for recovering corrupt VMs or single-file recovery, and personally I don't recommend relying on it. Normally backups are handled through the OS or backup software (Windows backup, Veeam, etc.) to copy the data on the SAN to another disk storage device, OR you can do direct replication to another identical SAN, depending on what hardware you are using.

Overall it is a good technology that allows users on a small budget to have lots of centralized storage with a good amount of reliability.
 
One thing I like about iSCSI over, say, NFS or SMB is that the OS treats it like a raw hard drive. No need to screw around with remote file system permissions, adding server and client to a domain, and stuff like that. What really kills me, though, is the 2TB limit. Not sure whose bright idea that was, considering it's designed for SANs, which are more than likely to have tens of TB of space.
 
Usefulness of iSCSI:

- Performance. Kicks a$$ over both NFS and Samba over Ethernet.
- Block-mode operation. It looks like a native SCSI disk to the host OS, so it is file-system agnostic.
- Raw block access. Since it is block-mode, you can run non-filesystem disk workloads on it directly (databases, for example).
- Simple protocol. Simple enough to be implemented in a BIOS, so it supports booting from the SAN.
- Reliability. It includes hooks for building duplication and redundancy for fault-tolerant operation. These things are not a native part of iSCSI, but iSCSI makes doing them simple.

Having a simple, reliable pool of disks that can be supported easily across a wide variety of OSes is invaluable in a data center environment. I'm not suggesting that file-system-level sharing (NFS, Samba, etc.) isn't useful too, but iSCSI has a lot of advantages for large disk arrays. It is especially useful in a virtualized data center where tools like VMware vMotion are used to migrate "servers" across your hardware to balance load and improve reliability.

It is probably of limited use outside of the data center.
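
To illustrate the block-mode point above: once the initiator logs in, the LUN shows up as an ordinary local disk, and anything that can read a raw block device can use it. A minimal sketch, assuming a Linux initiator where the LUN happens to appear as /dev/sdb (the device path is a hypothetical example and varies by OS):

import os

# Hypothetical device node for an iSCSI LUN on a Linux initiator.
DEVICE = "/dev/sdb"

# Open the LUN like any other raw disk and read its first 512-byte sector.
fd = os.open(DEVICE, os.O_RDONLY)
try:
    first_sector = os.read(fd, 512)
finally:
    os.close(fd)

# An MBR-partitioned disk ends its first sector with the 0x55AA signature.
print("boot signature present:", first_sector[510:512] == b"\x55\xaa")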
 
One thing I like about iSCSI over, say, NFS or SMB is that the OS treats it like a raw hard drive. No need to screw around with remote file system permissions, adding server and client to a domain, and stuff like that. What really kills me, though, is the 2TB limit. Not sure whose bright idea that was, considering it's designed for SANs, which are more than likely to have tens of TB of space.

Not true at all. I have multiple SANs with iSCSI targets of 2+ TB. All you have to do is initialize the LUN as GPT and you are good to go. The only time I have had issues with 2TB volumes was in Openfiler, but that was a different problem altogether.
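
For what it's worth, the 2TB ceiling people usually hit is the MBR partition table, not iSCSI itself: MBR stores sector addresses as 32-bit values, so with 512-byte sectors it tops out just above 2TB, which is exactly why initializing the LUN as GPT fixes it. Rough arithmetic:

# Why MBR-partitioned LUNs top out around 2 TB (GPT does not have this limit).
SECTOR_BYTES = 512          # classic logical sector size
MAX_LBA_MBR = 2**32         # MBR stores sector addresses/counts in 32 bits

max_bytes = MAX_LBA_MBR * SECTOR_BYTES
print(max_bytes / 1024**4, "TiB")   # -> 2.0 TiB
print(max_bytes / 1000**4, "TB")    # -> ~2.2 TB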
 
One side note about iSCSI is that it has a little more CPU overhead. With today's processors, this really isn't an issue.

We've been running iSCSI for a few years now. We have a P4000 SAN from HP (previously LeftHand Networks) and we just added a DataCore SAN Melody 3 setup with the new 6G SAS drives. Both are used exclusively for storage repositories in Citrix XenServer 5.6 FP1. Performance is great.

Highly recommended!
 
One side note about iSCSI is that it has a little more CPU overhead. With today's processors, this really isn't an issue.

We've been running iSCSI for a few years now. We have a P4000 SAN from HP (previously LeftHand Networks) and we just added a DataCore SAN Melody 3 setup with the new 6G SAS drives. Both are used exclusively for storage repositories in Citrix XenServer 5.6 FP1. Performance is great.

Highly recommended!

Intel (among others) has implemented the iSCSI stack on their enterprise NICs. Most of the more recent 1GbE cards and all of their 10GbE cards offload the iSCSI processing completely. Overhead problem solved.
 
Offloading + jumbo frames + multipathing = a solution just about as good as 4Gb fiber without all the extra cost.
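
A rough sketch of what jumbo frames buy you on the wire. This ignores iSCSI PDU headers, TCP options, and interframe gaps, so treat the numbers as ballpark only:

# Approximate payload efficiency of iSCSI-over-TCP at different Ethernet MTUs.
ETH_OVERHEAD = 14 + 4        # Ethernet header + FCS
IP_TCP_OVERHEAD = 20 + 20    # IPv4 header + TCP header

def payload_efficiency(mtu):
    payload = mtu - IP_TCP_OVERHEAD
    frame = mtu + ETH_OVERHEAD
    return payload / frame

for mtu in (1500, 9000):
    print(mtu, f"{payload_efficiency(mtu):.1%}")
# 1500 -> ~96.2%, 9000 -> ~99.4%, plus far fewer frames (and interrupts) per MB moved.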
 
Do any of you use ZFS on your SANs?

From my understanding, ZFS (I'm assuming you're actually talking about RAID-Z) is more or less Sun's answer to standardized software RAID 5... so performance is probably on par with that...

I imagine it would be useful if you were slapping a cheap FreeNAS box together with an Atom board or something that has six SATA ports but no real RAID, or as an alternative to a Windows-only software RAID solution, or something...

ZFS in itself is just a next-gen file system for Solaris and now FreeBSD (FreeNAS is based on FreeBSD).

The newest beta of FreeNAS is completely different and shows a LOT of potential... it would be awesome if they had it running off of FreeBSD 8 with an actual production-ready implementation of ZFS.
 
Usefulness of iSCSI:

- Performance. Kicks a$$ over both NFS and Samba over Ethernet.
- Block-mode operation. It looks like a native SCSI disk to the host OS, so it is file-system agnostic.
- Raw block access. Since it is block-mode, you can run non-filesystem disk workloads on it directly (databases, for example).
- Simple protocol. Simple enough to be implemented in a BIOS, so it supports booting from the SAN.
- Reliability. It includes hooks for building duplication and redundancy for fault-tolerant operation. These things are not a native part of iSCSI, but iSCSI makes doing them simple.

Having a simple, reliable pool of disks that can be supported easily across a wide variety of OSes is invaluable in a data center environment. I'm not suggesting that file-system-level sharing (NFS, Samba, etc.) isn't useful too, but iSCSI has a lot of advantages for large disk arrays. It is especially useful in a virtualized data center where tools like VMware vMotion are used to migrate "servers" across your hardware to balance load and improve reliability.

It is probably of limited use outside of the data center.

Couple things.

1. Performance over NFS? Eh... which NFS stack?
2. It is block mode, but block mode isn't always best. I speak mainly to VMware environments, which is what I do.

One good thing about doing VMware with NFS is that the storage array understands the underlying filesystem and can do things on its own, outside the client. For example, with VMware and some arrays that export via NFS, I can say "Snapshot that VM!" and the vSphere host will tell the array to do it, and the array does it at the file level. I can say "Compress that VM!" or "Compress that datastore!" and the NFS array will do that, if it supports it. That takes the load off the vSphere hosts.

Also, since the array understands the underlying filesystem and data, you can easily do NDMP backups of files stored on the array. That's something you can't do with iSCSI.

I have no major issues with iSCSI but we rarely EVER do iSCSI implementations of VMware anymore...very rarely.
 
Offloading + jumbo frames + multipathing = a solution just about as good as 4Gb fiber without all the extra cost.

And more complexity... and you still don't get the speed of 4Gb FC. FC switches and HBAs have gotten a LOT cheaper over the last 18 months. Many of our iSCSI customers like the idea of dedicated physical switches for their iSCSI traffic; compare that to the cost of FC switches and HBAs and it's close enough that many just do FC for the additional performance. Assuming their array does FC, of course.
 
And more complexity... and you still don't get the speed of 4Gb FC. FC switches and HBAs have gotten a LOT cheaper over the last 18 months. Many of our iSCSI customers like the idea of dedicated physical switches for their iSCSI traffic; compare that to the cost of FC switches and HBAs and it's close enough that many just do FC for the additional performance. Assuming their array does FC, of course.

In VMware, more complexity; with Windows, not really.

Then again, I personally wouldn't use iSCSI with VMware because you are still limited by VMFS. For VMware I use NFS in almost 100% of situations.

As far as cost goes, if I only have two servers I can technically connect the SAN directly to the HBAs (not recommended, as it makes adding that third server a PITA to reconfigure), but I have only found a few instances in which fiber is worth the extra expense. Usually it is only database apps with high transaction volumes, or extremely high-density hypervisors. Most clients I support have two hypervisors with between 4 and 10 guests running 2008 R2 and Hyper-V, and thus iSCSI ends up working out just fine.

To each his own though.
 
Not true at all. I have multiple SANs with iSCSI targets of 2+ TB. All you have to do is initialize the LUN as GPT and you are good to go. The only time I have had issues with 2TB volumes was in Openfiler, but that was a different problem altogether.

Oh really? So this is an Openfiler limitation? That's what I happen to be using. It will let me create a LUN bigger than 2TB, but the client only sees 2TB. I will have to do some research on that. I just assumed it was an iSCSI limit, as that's what I read everywhere.
 
Oh really? So this is an Openfiler limitation? That's what I happen to be using. It will let me create a LUN bigger than 2TB, but the client only sees 2TB. I will have to do some research on that. I just assumed it was an iSCSI limit, as that's what I read everywhere.

It was at one time. No longer.
 
Oh really? So this is an Openfiler limitation? That's what I happen to be using. It will let me create a LUN bigger than 2TB, but the client only sees 2TB. I will have to do some research on that. I just assumed it was an iSCSI limit, as that's what I read everywhere.

I'm not the storage guy, but I have several 3TB or bigger LUNs running on our EMC hardware.
 
Couple things.

1. Performance over NFS? Eh... which NFS stack?
2. It is block mode, but block mode isn't always best. I speak mainly to VMware environments, which is what I do.

One good thing about doing VMware with NFS is that the storage array understands the underlying filesystem and can do things on its own, outside the client. For example, with VMware and some arrays that export via NFS, I can say "Snapshot that VM!" and the vSphere host will tell the array to do it, and the array does it at the file level. I can say "Compress that VM!" or "Compress that datastore!" and the NFS array will do that, if it supports it. That takes the load off the vSphere hosts.

Also, since the array understands the underlying filesystem and data, you can easily do NDMP backups of files stored on the array. That's something you can't do with iSCSI.

I have no major issues with iSCSI but we rarely EVER do iSCSI implementations of VMware anymore...very rarely.

This is, of course, a rather parochial response. You start with the presumption that a Linux-like filesystem - which is all NFS can present - is best for your application. If you are running web servers, it probably is. If you are running a serious database, it probably isn't. Really, doing block-mode simulation over NFS to a ZFS-based SAN backed by real block-mode devices will never outperform a pure, well-optimized block-mode protocol to a block-mode SAN. Even Oracle doesn't recommend this for large database servers... and they "own" NFS now and have the most to gain by promoting the data storage appliances they acquired from Sun.

Every application has a 'best' solution, and not every 'best' solution looks like Linux with NFS. Unless, of course, Linux/NFS is all you really understand.
 
This is, of course, a rather parochial response. You start with the presumption that a Linux-like filesystem - which is all NFS can present - is best for your application.

Well, you started by stating that performance was superior to NFS. I asked a simple question: compared to WHICH implementation of NFS? I then stated my experience was primarily with VMware, since that's what I do 99% of the time. The rest followed that train of thought.
 
Either of you care to elaborate on 4Gb FC outperforming iSCSI? (BTW, I don't know jack about storage; I just read that people were saturating 10Gb with iSCSI and I have trouble understanding why it'd be worse :)
 
I assume they are talking about iSCSI with multipathing on gigabit links. I don't have much experience with FC, but I don't think it could keep up with 10GigE. I trust someone with more knowledge of FC will correct me if I'm wrong.
 
Multipathing with iSCSI can be done with two identical links to your SAN. The standard at the moment is 1Gb copper. Two of these links will give you similar performance to a single 4Gb fiber link, though multipathed 10Gb links would be preferred. 10GbE is currently the fastest thing we have, and the stories of maxing out 10Gb links on storage servers hosting database applications are true. We have SANs with 2x 4Gb fiber connections that get maxed out due to the I/O needs of the guests.
 
Multipathing with iSCSI can be done with two identical links to your SAN. The standard at the moment is 1Gb copper. Two of these links will give you similar performance to a single 4Gb fiber link, though multipathed 10Gb links would be preferred. 10GbE is currently the fastest thing we have, and the stories of maxing out 10Gb links on storage servers hosting database applications are true. We have SANs with 2x 4Gb fiber connections that get maxed out due to the I/O needs of the guests.

Are you saying 2x 1Gb multipathed links on iSCSI will compete with a single 4Gb FC link? Because that's not true. I can absolutely get 4Gb on a 4Gb FC link. You won't even really get 2Gb on those multipathed links.

And yes, 10Gb can greatly change things. We're starting to see a lot of requests for 10Gb NFS on VMware implementations.

EDIT: Unless you're talking IOPS. I'm talking throughput. IOPS is a different discussion, and yes, you can push a lot of IOPS over a couple of 1Gb links; normally that's the real question, since we rarely see someone maxing out FC throughput from a host.
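
Rough numbers behind the throughput side of that, as a sketch (the usable figures are approximate; FC's 8b/10b encoding and Ethernet/TCP overhead are what make the raw line rates misleading):

# Ballpark usable throughput: 2x 1GbE iSCSI multipath vs a single 4Gb FC link.
# Approximate figures; real numbers depend on workload, offload, and tuning.

GBE_LINE_MB_S = 125                 # 1 Gb/s = 125 MB/s raw
GBE_USABLE = GBE_LINE_MB_S * 0.90   # minus Ethernet/IP/TCP/iSCSI overhead (rough)

FC4_LINE_GBAUD = 4.25               # 4GFC signalling rate
FC4_USABLE_MB_S = FC4_LINE_GBAUD * 1000 / 10  # 8b/10b: 10 line bits per data byte

print("2x 1GbE multipath:", round(2 * GBE_USABLE), "MB/s")   # ~225 MB/s
print("1x 4Gb FC:        ", round(FC4_USABLE_MB_S), "MB/s")  # ~425 MB/s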
 
Ah, yes, of course. I thought that there might be some major bottleneck regarding the iSCSI protocol itself.
 
EDIT: Unless you're talking IOPS. I'm talking throughput. IOPS is a different discussion, and yes, you can push a lot of IOPS over a couple of 1Gb links; normally that's the real question, since we rarely see someone maxing out FC throughput from a host.

Yes, I was referring to IOPS. You are absolutely correct: 2x 1Gb links that can only deliver about 90% of their total bandwidth are not going to match the bandwidth of a 4Gb FC link.
 
Quick question about multipathing: what's the difference between dedicating two active/active NICs to the iSCSI VMkernel and setting up multipathing?

I'm assuming that when you set up two NICs for iSCSI traffic, the traffic gets hashed and pushed down one link to the SAN. So essentially there isn't a real load-balancing mechanism? Is that accurate?
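
That's roughly the distinction, shown here as a sketch of the two behaviors in general (hash-based teaming vs. per-I/O round robin), not of any particular vendor's implementation; the addresses and path names are made up:

# Illustration only: hash-based NIC teaming pins a given initiator/target pair
# to one link, while multipath I/O (MPIO) round-robins individual I/Os across
# independent paths/sessions. Addresses and path names below are hypothetical.

from itertools import cycle

LINKS = ["vmnic2", "vmnic3"]

def teamed_link(src_ip, dst_ip):
    """Hash-based teaming: one flow always lands on the same physical link."""
    return LINKS[hash((src_ip, dst_ip)) % len(LINKS)]

# Every I/O for this one iSCSI session uses the same link:
print([teamed_link("10.0.0.11", "10.0.0.50") for _ in range(4)])

# MPIO: two iSCSI sessions (one per link) and the initiator alternates I/Os.
mpio_paths = cycle(["path-A (vmnic2)", "path-B (vmnic3)"])
print([next(mpio_paths) for _ in range(4)])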
 