Most notable difference between SAN v. NAS

imzjustplayin

So the largest difference is performance, and the difference between block-level access and file-level access for SAN and NAS respectively? And that means you can update a file a few blocks at a time with block-level access, while file-level access means you can only update the file as a whole, which is an issue if you only need to make a small change and you've got a huge file, right? A SAN and a DAS are similar, except a DAS isn't networked but instead directly attached to a host machine (hence the "direct attached"), while a SAN is networked and shared? Also, a SAN has the ability to scale much better than a NAS and can be backed up much more easily than a DAS? So how would you go about backing up a DAS? Would the host machine have to be on, if what I said before is true?

SANs appear to be locally attached, much like a DAS, except to all machines on the network?

Also, you can have a NAS on a SAN, right? How does that work? Is it just the scalability of a SAN without the block-level access, but with the file-level access of a NAS instead?
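
As a rough Python sketch of the block-level vs. file-level distinction being asked about (the file paths are made up, and in practice file protocols like NFS/SMB can also write at an offset within a file, so this only illustrates raw block access versus the worst case of going through a whole-file copy):

Code:
# "Block-level" style: seek to an offset inside a large file/device and
# rewrite only the bytes that changed.
def update_in_place(path, offset, new_bytes):
    with open(path, "r+b") as f:   # open read/write without truncating
        f.seek(offset)
        f.write(new_bytes)

# Worst-case "whole file" style: read everything, change a few bytes,
# write everything back.
def update_whole_file(path, offset, new_bytes):
    with open(path, "rb") as f:
        data = bytearray(f.read())          # pull the entire file
    data[offset:offset + len(new_bytes)] = new_bytes
    with open(path, "wb") as f:
        f.write(data)                       # push the entire file back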
 
Alright from all your posts here you are trying to figure out things way over your head. Start with the easy stuff instead of trying to become a rocket scientist overnight.
 

Naw, I know this makes no sense, but I'd rather build a building from the top and the bottom and work my way toward the center...


Instead of burning the candle at both ends, I'm making a candle from both ends, until it eventually meets in the middle.

The only reason it's over my head is that no one has given me the information I was looking for. It's not that it's too complicated for me; it's just that I don't know enough to be able to ask the right questions for myself and then figure it out from there.
 
You guys are thinking wayyy too deep! The answer to this riddle is: SAN is NAS backwards!
 
... and you haven't bothered to go looking for it yourself.
I've tried, but it's difficult to find, and it seems like one of those nuances you can't learn without someone telling you, like it isn't written down anywhere. Clearly you don't know, because if you did, you'd help me.
 
...it's just that I don't know enough to be able to ask the right questions for myself and then figure it out from there...
Exactly. Which is why you have been repeatedly given some very good books to go read and yet you dismiss book or resource recommendations as being below you. Frankly, every post you've ever made here has started and ended the same way - you ask for help/advice/knowledge about a given topic then scoff at anyone who offers what it is you seek. There have been people who have offered books, taken the various topics down to the very basics, drawn diagrams, etc... and yet you still complain that no one helps you and that you don't understand... yet you're unwilling to take the advice given to you.

Once again you are confusing a myriad of topics. A DAS is nothing but an external hard drive array. A NAS is a network-attached storage device, a self-contained storage device that shares its storage via configurable shares. A SAN is a device (made up of several components) which provides high-performance, centralized and diversified storage to a multitude of servers.

A DAS is attached to an internal RAID controller inside the server it is being added to.

A NAS is accessed by shares over the network. The same way shares on an average server are accessed.

A SAN is accessed via HBAs (host bus adapters) installed in the servers that need access to the SAN; these provide a conduit to a dedicated storage network, either Fibre Channel or iSCSI (Ethernet).

You cannot have a NAS as part of a SAN. They are two different things.

A DAS is backed up just as any other drive or drives are on a physical system - usually through the use of an installed agent.

A NAS is backed up either by an agent installed on the NAS or by configuring the backup software to backup via the shares.

A SAN is an enterprise solution and there are many ways of backing it up - including letting the attached servers back up the storage they are using from the SAN the same way they would any other drive on the system (via an agent), site-to-site SAN replication for DR, internal SAN mirroring and snapshots, and/or direct-attached tape for LUN-to-tape backup.
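
The common thread in the agent-based approaches above is that the backup runs on a host that sees the storage as an ordinary path, which is also why a DAS can only be backed up while its host is up. A minimal sketch with made-up mount points (a locally mounted DAS/SAN LUN and a mapped/mounted NAS share look the same to the copy):

Code:
import shutil
from datetime import datetime

# Hypothetical paths - adjust to whatever the backup host actually mounts/maps.
SOURCES = {
    "das_or_san_lun": "/mnt/lun0",          # looks like any local disk to this host
    "nas_share":      "/mnt/nas/projects",  # reached over the LAN via the share
}
DEST = "/backup"

stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
for name, src in SOURCES.items():
    shutil.copytree(src, f"{DEST}/{name}-{stamp}")   # full copy of each source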

Also, aside from accessing the various SAN components to configure the back-end parameters such as fiber switch pathing or storage processor options, a SAN is NOT "networked" -- it is a dedicated and isolated network specific to the SAN and the servers which use it for storage.

DAS = limited fault tolerance, typically same-as-internal performance. If the server that hosts it is down, the data on the storage array is down.

NAS = limited fault tolerance, performance limited by many factors including network usage and device configuration. Allows data to be accessed by multiple machines at once but only, typically, by mapped network shares. An example of a NAS implementation would be a NAS installed in a small office used to store users' home directories.

iSCSI SAN = mid-business fault tolerance, typically offering middle-of-the-road performance and is relatively inexpensive to implement.

FC SAN = enterprise level fault tolerance, typically offering full path redundancy and best-in-class performance but is relatively expensive to implement.

iSCSI vs FC = SANs based on iSCSI are rather inexpensive but will hit their performance ceiling much faster than a properly implemented FC solution. It used to be that the price difference was in the hundreds of thousands between these two solutions, but over the last few years that price gap has been closed dramatically.
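
A back-of-the-envelope comparison of the per-link ceilings (rounded, and ignoring protocol overhead, which tends to cost iSCSI more than FC):

Code:
# Rough per-link ceilings; real-world numbers land below these.
links_mb_per_s = {
    "1 Gb Ethernet (iSCSI)": 1_000 / 8,         # ~125 MB/s before TCP/IP + iSCSI overhead
    "4 Gb Fibre Channel":    4_250 * 0.8 / 8,   # 4.25 Gbaud line rate, 8b/10b coding -> ~425 MB/s
}
for link, ceiling in links_mb_per_s.items():
    print(f"{link}: ~{ceiling:.0f} MB/s per link")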
 
You cannot have a NAS as part of a SAN. They are two different things.

While I do appreciate the time you took to write out your lengthy post (no sarcasm), I have to say about this point that you're either wrong, mistaken or not 'up to date'.

Now, I don't know how much you know or what your experience is, and I don't know whether this article is correct or whether I'm misinterpreting it, but...

http://searchstorage.techtarget.com/tip/1,289483,sid5_gci800728,00.html

"SAN (storage area networks) and NAS (network attached storage) are frequently touted as competitive technologies, when in fact they are quite complementary. Before you can make informed decisions about the technologies available you have to understand your data and the applications that need to access it."

"When properly architected, SAN and NAS technologies can be combined to help you spend your storage dollar to its maximum value. The next time a storage vendor comes knocking on your door - ask him/her "how does your product fit the way I use my data?"

With that said, I'm going to assume that I've misinterpreted the article...


Here is another article (albeit from the same site) with more of what I was talking about...

http://searchstorage.techtarget.com/tip/0,289483,sid5_gci929484,00.html

"Companies that are looking to combine SAN and NAS operations face a host of choices, including standalone NAS gateways, SAN solutions with integrated NAS functionality, NAS devices offering block I/O and even filer capability running within a switch."
 
The first two quotes are referring to how SAN and NAS devices fit into an enterprise, not how they interconnect. In other words, SANs are often reserved for databases, Exchange (mail) and other I/O-intensive areas that require a lot of disk space, mainly due to cost. Meanwhile NAS devices may be "good enough", in some businesses, to serve their less intensive needs such as user shares, printers, etc. So a single enterprise may have SAN devices on the high end and NAS devices on the low end.

The last quote refers to more advanced NAS and crossover devices and how they can work together with a SAN -- you however never "put a NAS into a SAN." Over the last few years the cost of SAN technology has been coming down to the point where there are many options in the lower budget arena.
 
I managed to find this page, and I think it accurately describes a DAS, NAS and SAN..

http://www.boston.co.uk/stuff/articles/tech/310505-1/part1.aspx

"The third storage option was to create a SAN – Storage Area Network – which would consist of multiple storage units with essentially no processing capability being networked together using a high-speed interconnect (typically fibre channel) to provide a potentially massive volume of high speed storage accessible at a block level."

The storage processor in a SAN, what is its purpose? Does the storage processor act like an HBA in a traditional DAS system or your typical home computer, interfacing the drives with the mainboard? Except with a SAN, the storage processor interfaces with the Fibre Channel and/or NIC interface?
 
That page described iSCSI and how it compares to a FC SAN solution. It is important to understand that both solutions are -not- made up of "storage units with essentially no processing capability", they are not made up of multiple low-grade servers, they are made up of devices specifically designed for use in either an iSCSI or FC SAN.

In a typical SAN environment, either FC or iSCSI, you have multiple storage enclosures which are linked together over either copper Ethernet, copper Fibre Channel or optical Fibre Channel to form what is referred to as a back-end loop. These storage enclosures are nothing more than "dumb" disk shelves which provide housing, power and a backplane for the drives to plug into.

These "dumb" disk enclosures are ultimately terminated at one or more "storage processor(s)", typically two. This is where iSCSI and FC SANs differ from each other slightly; FC SANs typically terminate the disk storage enclosures at the SP while iSCSI typically terminates all of its connections at the same dedicated switch-level.

Storage processors in both scenarios are "smart" devices custom designed for the duties they perform - it is important to realize that they are -not- "servers." Storage processors provide management and cache to the SAN.

In an FC SAN, where the disk enclosures typically terminate at the SP, the SPs then connect to one or more Fibre Channel switches (usually at least two) to which the servers that will take advantage of the SAN storage have also been connected. This is referred to as the "front-end loop."

Servers are connected to these fiber channel switches by means of HBAs, typically two or a dual-channel single card solution, which provide the end-point for the SAN and logical access to storage on a SAN that has been carved out for those specific servers.

In an iSCSI SAN, storage processors, disk arrays and the servers taking advantage of the storage on the SAN are all connected to a standard (but dedicated) ethernet switch. The storage processor(s) take care of routing traffic between the various disk enclosures and servers as needed.

With iSCSI, an HBA is not necessarily needed, but the use of one will offer a much more robust solution than without. iSCSI technology must have an initiator, which can either be on-board an HBA and accelerated by the on-board processor or done through software on the server side.
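
To make the software-initiator path concrete: on a Linux host with open-iscsi installed it comes down to a discovery and a login, which the sketch below just wraps (the portal address and target IQN are invented; the Windows servers discussed in this thread would use the Microsoft iSCSI Initiator instead):

Code:
import subprocess

PORTAL = "192.168.50.10:3260"                     # hypothetical address on the dedicated storage LAN
TARGET = "iqn.2000-01.com.example:storage.lun0"   # hypothetical target IQN

# Ask the portal what targets it exports (sendtargets discovery).
subprocess.run(["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL], check=True)

# Log in to the target; the LUN then appears as an ordinary block device (e.g. /dev/sdX).
subprocess.run(["iscsiadm", "-m", "node", "-T", TARGET, "-p", PORTAL, "--login"], check=True)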

Now that you have the physical side, let's examine the logical side briefly: You should think of either SAN solution as being a large pie. Storage is not necessarily just there for the taking - a pie that has not been cut is useless, after all - in order for attached servers to access storage on a SAN, a LUN must be carved out for them and assigned. A LUN is a Logical Unit Number, which is made up of one or more SGs, or Storage/Drive Groups. The Storage/Drive Groups are made up of multiple physical disks in the SAN, which could all be from one enclosure or spread across multiple enclosures to provide the best redundancy, and which are bound together to form a RAID array.

Once a LUN has been "carved out" for a given server or a group of servers, they may access it however they wish - typically by giving it a mount point or drive letter and using it as they would any other storage. One important feature of SANs is the ability to support concurrent traffic from multiple hosts to the same LUN, something DAS is often incapable of, making SANs ideal for clustered server solutions.

Attached servers do not see all the available LUNs and choose which they wish to use; rather, they are assigned specific rights to one or more LUNs through the management interface provided by the storage processors. The switching components in either an FC or iSCSI SAN also provide an even greater degree of management by offering the ability to segregate and control traffic. In an FC solution this would be done by isolating specific fiber connections into a group, or "zone."
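
A toy model of that logical layer (disk names, sizes and host names are all made up; real arrays do this through the SP's management interface, not a script):

Code:
# Disks -> storage group (RAID set) -> LUNs -> masked to specific hosts.
storage_groups = {
    "SG1": {"disks": ["encl0_d0", "encl0_d1", "encl1_d0", "encl1_d1", "encl2_d0"],
            "raid": "RAID 5"},
}

luns = {
    0: {"storage_group": "SG1", "size_gb": 500},
    1: {"storage_group": "SG1", "size_gb": 200},
}

# LUN masking: which hosts are allowed to see which LUNs.
lun_masking = {
    "cluster-node1": [0],
    "cluster-node2": [0],   # same LUN presented to both nodes for clustering
    "mail-server":   [1],
}

def visible_luns(host):
    """Only the LUNs assigned to this host are visible to it."""
    return {lun_id: luns[lun_id] for lun_id in lun_masking.get(host, [])}

print(visible_luns("mail-server"))   # {1: {'storage_group': 'SG1', 'size_gb': 200}}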
 
Much better, that explains a lot. Now that you've said that, I've got some smaller questions, ones that haven't been explained by Dell...

1. Do the storage processors have some sort of OS on them? Are the storage processors the equivalent of web-managed firewalls like m0n0wall, where you access them from a workstation and have options to add permissions, remove permissions, form arrays and delete arrays? Or do they have a full-blown OS like Windows 2000 Advanced Server?
2. When you create a LUN, where does the RAID (the linking of the drives) occur? Are those functions done by the storage processor, or does the storage processor simply isolate a specific set of drives assigned to a LUN, forward the data over to the workstation/server with the HBA, and then the RAID is performed on the workstation itself?
3. Drives that are mounted from the SAN, do they appear as a network share or do they appear like a physical disk? Does Windows treat the SAN disk (Assigned LUN) like any other physical disk in the system?
4. (Forgot this one) Can you have an unlimited number of drives in a LUN? Is there a limit to how many drives can be attached to a storage processor? If there is a limit, I assume you'd have an aggregate of multiple storage processors with X amount of drives, but then how would you combine the drives amongst all the storage processors? This is with the idea that the LUN is per storage processor.
 
1. Do the storage processors have some sort of OS on them?
Yes.
Are the storage processors the equivalent of web-managed firewalls like m0n0wall, where you access them from a workstation and have options to add permissions, remove permissions, form arrays and delete arrays?
Yes.
Or do they have a full-blown OS like Windows 2000 Advanced Server?
Depending on the manufacturer and model, it will either be a stripped-down proprietary version of Windows or Linux. It only allows you to perform certain tasks related to SAN management and little else.
2. When you create a LUN, where does the RAID (the linking of the drives) occur? Are those functions done by the storage processor, or does the storage processor simply isolate a specific set of drives assigned to a LUN, forward the data over to the workstation/server with the HBA, and then the RAID is performed on the workstation itself?
The drive and RAID configuration is handled at the SP (storage processor) level, which is generally accessed via a Java/web interface.
3. Drives that are mounted from the SAN, do they appear as a network share or do they appear like a physical disk? Does Windows treat the SAN disk (Assigned LUN) like any other physical disk in the system?
They appear as if they were another physical disk in the server, and Windows treats them as such.
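
To make that concrete, an application can't tell the difference; a tiny sketch, assuming E: is the drive letter Windows has given a SAN LUN:

Code:
# C: is a local disk, E: is (hypothetically) a LUN presented from the SAN.
# From the application's point of view the code is identical either way.
for path in (r"C:\temp\test.txt", r"E:\temp\test.txt"):
    with open(path, "w") as f:
        f.write("same file API, local disk or SAN LUN\n")
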
4. (Forgot this one) Can you have an unlimited number of drives in a LUN? Is there a limit to how many drives can be attached to a storage processor? If there is a limit, I assume you'd have an aggregate of multiple storage processors with X amount of drives, but then how would you combine the drives amongst all the storage processors? This is with the idea that the LUN is per storage processor.
There are limits on hard drives per storage group, limits on how many storage groups can be members of a LUN and limits on how many LUNs can be formed into a Meta-LUN, but these vary by manufacturer and model. A Meta-LUN is a combination of LUNs that the attached server will see as one physical disk and is one way of getting around the aforementioned limitations.
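
A toy illustration of how a Meta-LUN gets around a per-LUN limit (the limit here is a placeholder, not a real product number):

Code:
MAX_LUNS_PER_META_LUN = 8   # made-up limit; the real figure varies by array

def build_meta_lun(component_lun_sizes_gb):
    """Combine several LUNs into one Meta-LUN that the host sees as a single disk."""
    if len(component_lun_sizes_gb) > MAX_LUNS_PER_META_LUN:
        raise ValueError("too many component LUNs for one Meta-LUN on this array")
    return {"components": component_lun_sizes_gb,
            "presented_size_gb": sum(component_lun_sizes_gb)}

print(build_meta_lun([500, 500, 500]))   # the host would see one ~1.5 TB disk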
 
this thread has some great info.

Orinthical, thanks for typing all this in. I've not had much experience with SAN/NAS stuff, and this was a great read through! :cool:
 
"You cannot have a NAS as part of a SAN. They are two different things."

They are two different things, but you most certainly can. Most large storage vendors sell NAS heads that can front end most any kind of back end storage.
 
"You cannot have a NAS as part of a SAN. They are two different things."

They are two different things, but you most certainly can. Most large storage vendors sell NAS heads that can front end most any kind of back end storage.
Absolutely. But that would be a NAS sharing SAN storage, not a NAS being a component of a SAN.
this thread has some great info.

Orinthical, thanks for typing all this in. I've not had much experience with SAN/NAS stuff, and this was a great read through! :cool:
No problem, glad to help. :)
 
Someone said on another forum that the Dell/EMC CX3-80 supports 480 drives; I find this hard to believe. While Dell lists it in its specifications, I can't imagine one storage processor handling RAID 5 for multiple LUNs across 480 drives, not to mention having 250 servers simultaneously accessing this single storage processor...

http://www.dell.com/content/products/compare.aspx/sanet_fibre?c=us&l=en&s=biz&cs=555

Is this true?
 
The CX3-80 has 4 SPs with 2 4Gbit FC connections. They'd probably be wired to redundant switches, which would then connect to the rest of the hosts. I can see it "supporting" 256 hosts, but if you were pushing those numbers you'd jump to something bigger anyway.

I don't know EMC systems very well, but RAID is probably handled at the enclosure level and the SPs group those into storage pools that you can assign LUNs from.
 
Someone said on another forum that the Dell/EMC CX3-80 supports 480 drives; I find this hard to believe. While Dell lists it in its specifications, I can't imagine one storage processor handling RAID 5 for multiple LUNs across 480 drives, not to mention having 250 servers simultaneously accessing this single storage processor...

http://www.dell.com/content/products/compare.aspx/sanet_fibre?c=us&l=en&s=biz&cs=555

Is this true?

Sorry for the bump.

Yes it's true.

That's nothing though - the high end EMC Symmetrix array can take 2000+ drives!
 
The CX3-80 has 4 SPs with 2 4Gbit FC connections. They'd probably be wired to redundant switches, which would then connect to the rest of the hosts. I can see it "supporting" 256 hosts, but if you were pushing those numbers you'd jump to something bigger anyway.

I don't know EMC systems very well, but RAID is probably handled at the enclosure level and the SPs group those into storage pools that you can assign LUNs from.

It has only 2 SPs. They are active/active, but if one dies the other will take on the load until it is replaced.
 
So how exactly can two processors handle multiple RAID 5 LUNs across 480 drives? I find it extremely hard to believe it could handle those drives without some serious performance penalty of some sort.

Sorta like having two drives on a parallel ATA channel or something like that...
 
I see no reason why two specialized processors can't handle a load like that. Think of an enterprise switch: there is one processor handling millions of packets per second, and it seems to handle that just fine.

It's not like the processors have to worry about running Windows or getting user input; they run specific routines to handle the data.

Think of processing speed: a 200 MHz processor executes 200 million instructions per second, and most, if not all, of those instructions are dealing with just telling the data where to go.
It doesn't seem that hard to imagine.
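
A quick back-of-the-envelope version of that switch analogy (the frame sizes and rates are just round illustrative numbers):

Code:
clock_hz = 200e6    # the 200 MHz processor from the example above

# Instructions available per packet at two packet rates:
# ~83k pps is roughly gigabit line rate with 1500-byte frames,
# 1M pps is a small-packet flood.
for pps in (83_000, 1_000_000):
    print(f"{pps:>9,} packets/s -> ~{clock_hz / pps:,.0f} instructions per packet")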
 
Well, I'm surprised, because in order to fully utilize gigabit Ethernet you need something like a 3 GHz P4 machine; wouldn't the requirements for two gigabit Ethernet cards be doubled? I mean, that's only 125 MB/s plus overhead. I can only imagine having four gigabit Ethernet cards in one machine...

Can you turn a computer into a switch by installing multiple quad-port Ethernet cards?
 
So how exactly can two processors handle multiple RAID 5 LUNs across 480 drives? I find it extremely hard to believe it could handle those drives without some serious performance penalty of some sort.

Sorta like having two drives on a parallel ATA channel or something like that...

Remember, each SP has two 3.6 GHz Xeon CPUs and 8 GB of cache, so the CX3-80 has, in total, four 3.6 GHz Xeon CPUs and 16 GB of cache. That's quite powerful, and although someone may have 480 drives, they might only have 30-40 LUNs spread across those drives.

On top of that you have a specialised OS ('FLARE') which is designed to handle all the RAID for those LUNs.
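
Some rough ratios from the numbers quoted above (35 LUNs is just an assumed middle-of-the-range figure for illustration):

Code:
drives, luns, cache_gb, sps = 480, 35, 16, 2   # figures from the posts above; 35 LUNs is an assumption

print(f"~{drives / luns:.0f} drives behind each LUN on average")
print(f"{drives // sps} drives managed per storage processor")
print(f"~{cache_gb * 1024 / luns:.0f} MB of cache per LUN if spread evenly")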
 
So would you say the RAID done by those storage processors is faster than any hardware card you can get for a desktop computer?

How come software RAID is so shitty on the desktop if this is essentially what it is? Or is it just designed differently, with the drives being directly connected to the chipset? What bus do the drives go through on the storage processor? What chipset is inside the storage processor?
 
You're looking too deeply. It's a specialized piece of hardware designed for managing large disk subsystems. It's not a PC running end-user apps at the same time it's calculating RAID.

Most enterprise SANs also have a large cache, so a lot of data can be read extremely fast without touching the disks for every I/O transaction.
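
A minimal sketch of why that cache matters: repeated reads of hot blocks are served from memory instead of going back to the disks (the dict standing in for the disk back end is obviously just a stand-in, not how an SP is built):

Code:
class ReadCache:
    def __init__(self, backing_store):
        self.backing_store = backing_store   # stands in for the disk back end
        self.cache = {}
        self.hits = self.misses = 0

    def read(self, block_id):
        if block_id in self.cache:
            self.hits += 1                   # served from cache, no disk I/O
            return self.cache[block_id]
        self.misses += 1
        data = self.backing_store[block_id]  # the "slow" path out to the disks
        self.cache[block_id] = data
        return data

disks = {n: f"block-{n}" for n in range(1000)}
sp_cache = ReadCache(disks)
for _ in range(3):
    sp_cache.read(42)                        # hot block: one miss, then two hits
print(sp_cache.hits, sp_cache.misses)        # 2 1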
 