I don't understand the network setups

sovs said:
Yeah, but don't pay too much $$ for it.

SCSI would be better for learning.


Huh? What do you mean? Are you saying I should create a SCSI array (with SCSI HDDs) so that I can learn about them? If so, there's no need, as I've had the chance to work with a SCSI-II array. Boy, was it fun trying to hook up drives to the controller card (2 HDDs, 1 optical) without a schematic of what each pin on the hard drives does (you know, so you can assign the drives an ID).

:p Yeah, that took about a day to figure out. I ended up drafting a chart of what each of the pins means, since the diagram on the drive was extremely vague; a great learning experience, but a nightmare for anyone without patience.

I can say with confidence that I know how to set up a SCSI array, because I doubt they've changed much since SCSI-II, no?
 
imzjustplayin said:
Now, with a SAN, is it possible to combine the disk space into literally ONE drive by essentially "RAIDing up the systems" into one volume? While DFS can be and probably is useful, don't they have SAN setups where they combine all of the drive space into one contiguous volume? I ask because, what if you had a database that was literally 10TB, but it's 2002 and you don't have the 500GB and 750GB drives you do today, so what do you do? That's my general impression of what a SAN should be, and basically what I've been asking about this whole time. I guess I'm asking whether you can RAID or JBOD a bunch of machines in a SAN or not.


I went back to this because it seems like you are really confused about RAID, SAN, DFS, etc.

For starters, you can't "RAID up systems". RAID arrays are groups of physical hard drives, period. The hard drives reside in one system. RAID combines PHYSICAL hard drives into LOGICAL drives. You can't have two servers with two hard drives in each and expect to make one big RAID. Nope. You can do a RAID array in each server on its own, though. Then, if you wanted to, you could use DFS to create one big "volume" that spans the two RAID arrays. DFS doesn't care about RAID, though; it's just used to associate directory structures distributed across different drives, servers, etc.

SAN is a high-speed networking technology; it has nothing to do with RAID or DFS. A SAN is not a group of hard drives. You can, however, have one or more RAID arrays included in your SAN (connected to each other over the SAN). The SAN connects different systems, and systems to centralized storage, but it does not combine all of the drives from all of the servers into one big volume. SAN is a networking technology used to transport data, not a method of organizing directories, folders, and such.

DFS handles the logical organization of the data at the directory and folder level. It has nothing to do with hard drives, RAID controllers, data transport, etc.
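To make the layering concrete, here's a rough Python sketch of what a DFS-style namespace does (the server names and shares below are made up, not a real config): it maps a logical folder path to a share on some server. Notice there is nothing about blocks, drives, or RAID anywhere in it.

Code:
# Rough sketch of DFS-style namespace resolution (illustrative only --
# the namespace, server names, and shares below are invented).
# DFS maps logical folder paths to shares on different servers; it knows
# nothing about the RAID arrays or physical disks behind those shares.

DFS_NAMESPACE = {
    # logical folder in the namespace -> actual share backing it
    r"\\company\files\engineering": r"\\server1\eng_share",   # server1 has its own RAID array
    r"\\company\files\accounting":  r"\\server2\acct_share",  # server2 has its own RAID array
}

def resolve(logical_path: str) -> str:
    """Return the real share path that backs a logical DFS path."""
    for folder, target in DFS_NAMESPACE.items():
        if logical_path.startswith(folder):
            return target + logical_path[len(folder):]
    raise FileNotFoundError(f"{logical_path} is not in the namespace")

print(resolve(r"\\company\files\engineering\specs\pump.dwg"))
# -> \\server1\eng_share\specs\pump.dwg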

I hope that helps.
 
http://en.wikipedia.org/wiki/ISCSI

http://en.wikipedia.org/wiki/Fibre_Channel

A Fibre Channel SAN is typically made up of fibre links between the servers that use the SAN and the fibre switches, fibre between the switches and the storage processor, and finally fibre or FC copper between the storage processors and the DAEs. Fibre Channel is an interface technology that utilizes Layer 2 switching, whereas iSCSI is more of a network protocol that runs on top of IP.

iSCSI can utilize SCSI, SATA and IDE drives.
FC SANs can utilize a variety of technology, including FC, SATA and IDE drives.

Here's a comparison between LVD and FC drives/interfaces:
http://www.seagate.com/support/kb/disc/fibre_channel.html

There are a lot of FC vs iSCSI articles out there worth a read. Google it.

Either way, eBay some of both tech and learn them both.

Edit: And yes, SCSI technology in general has changed quite a bit since SCSI-II; you might want to reexamine it. But, to be as nice as possible: if it took you a day to figure out a couple of jumpers... please don't question or berate people when they tell you to go read books. You really need to get a bunch of books on these different technologies and read through them. End of story. Once you have a good overview of each technology (from books, not from here), you can then ask informed questions about how each of them fits in.

It's like upgrading your workstations to gigabit cards when you're still running token ring. It doesn't work that way. :)

Spartacus said:
I hope that helps.
Good job summing it up but I don't know if it will help or not. :)
 
"But, to be as nice as possible, if it took you a day to figure out a couple of jumpers... please don't question or berate people when they tell you to go read books. You really need to get a bunch of books on these different technologies and read through them. End of story. Once you have a good overview of each technology (from books, not from here) you can then asked informed questions about how each of them fits in."

That is this thread in a nutshell: a guy who doesn't know or understand how to ask the right questions, then refuses to accept what is the best advice and tries to force his will to learn something that is very difficult to explain in a few paragraph-length forum posts. Or even multiple posts. Hell, it took me some time to really understand SAN technology, and I was utilizing a huge SAN at work routinely. I just never took the time to get with the storage guys to understand the fundamentals of what our blades, clusters, and whatnot were using.
There has been some very well laid out and precise information in this three-page forum post. It appears very little has been absorbed by the OP. And what do you want to bet it's not over? :) Whoosh (picture your hand going over your head in a quick sweeping motion).
 
ktwebb said:
That is this thread in a nutshell: a guy who doesn't know or understand how to ask the right questions, then refuses to accept what is the best advice and tries to force his will to learn something that is very difficult to explain in a few paragraph-length forum posts. Or even multiple posts. Hell, it took me some time to really understand SAN technology, and I was utilizing a huge SAN at work routinely. I just never took the time to get with the storage guys to understand the fundamentals of what our blades, clusters, and whatnot were using.
There has been some very well laid out and precise information in this three-page forum post. It appears very little has been absorbed by the OP. And what do you want to bet it's not over? :) Whoosh (picture your hand going over your head in a quick sweeping motion).

You forgot to add that the OP also claims in one of the first couple of posts that he can learn this on his own.... while this thread seems to serve as a testament against that very idea.

And while dated, I'm surprised this wasn't thrown out there with the book suggestions:
http://www.onlamp.com/pub/a/onlamp/2002/03/14/sansnas.html
 
Spartacus said:
I went back to this because it seems like you are really confused about RAID, SAN, DFS, etc.

For starters, you can't "RAID up systems". RAID arrays are groups of physical hard drives, period. The hard drives reside in one system. RAID combines PHYSICAL hard drives into LOGICAL drives. You can't have two servers with two hard drives in each and expect to make one big RAID. Nope. You can do a RAID array in each server on its own, though. Then, if you wanted to, you could use DFS to create one big "volume" that spans the two RAID arrays. DFS doesn't care about RAID, though; it's just used to associate directory structures distributed across different drives, servers, etc.

SAN is a high-speed networking technology; it has nothing to do with RAID or DFS. A SAN is not a group of hard drives. You can, however, have one or more RAID arrays included in your SAN (connected to each other over the SAN). The SAN connects different systems, and systems to centralized storage, but it does not combine all of the drives from all of the servers into one big volume. SAN is a networking technology used to transport data, not a method of organizing directories, folders, and such.

DFS handles the logical organization of the data at the directory and folder level. It has nothing to do with hard drives, RAID controllers, data transport, etc.

I hope that helps.
Gotcha, good clarification there... So basically there would be no physical way of having RAID 5 across two systems, eh? (5 drives in each system)
LOL
I guess most network administrators would use a combination of a SAN with DFS right?
 
Orinthical said:
http://en.wikipedia.org/wiki/ISCSI

http://en.wikipedia.org/wiki/Fibre_Channel

A Fibre Channel SAN is typically made up of fibre links between the servers that use the SAN and the fibre switches, fibre between the switches and the storage processor, and finally fibre or FC copper between the storage processors and the DAEs. Fibre Channel is an interface technology that utilizes Layer 2 switching, whereas iSCSI is more of a network protocol that runs on top of IP.

iSCSI can utilize SCSI, SATA and IDE drives.
FC SANs can utilize a variety of technology, including FC, SATA and IDE drives.

Here's a comparison between LVD and FC drives/interfaces:
http://www.seagate.com/support/kb/disc/fibre_channel.html

There are a lot of FC vs iSCSI articles out there worth a read. Google it.

Either way, eBay some of both tech and learn them both.

Edit: And yes, SCSI technology in general has changed quite a bit since SCSI-II; you might want to reexamine it. But, to be as nice as possible: if it took you a day to figure out a couple of jumpers... please don't question or berate people when they tell you to go read books. You really need to get a bunch of books on these different technologies and read through them. End of story. Once you have a good overview of each technology (from books, not from here), you can then ask informed questions about how each of them fits in.

It's like upgrading your workstations to gigabit cards when you're still running token ring. It doesn't work that way. :)


Good job summing it up but I don't know if it will help or not. :)
Well, I'd say it's pretty damn impressive to map out what each jumper does when you have little to no reference guide. When I say it took me a day to figure out what each jumper does, that doesn't mean I didn't know what functions to LOOK FOR; it's the mere fact that they weren't listed, were extremely vague, or were incorrect. The diagram on the drive didn't say what orientation it was in, and when you make a change to a SCSI jumper, the result of that change isn't always evident, making it exceedingly difficult to diagnose the issue.
 
imzjustplayin said:
Gotcha, good clarification there... So basically there would be no physical way of having RAID 5 across two systems, eh? (5 drives in each system)
LOL
I guess most network administrators would use a combination of a SAN with DFS right?

You can have 5 drives in each of two servers and then create a RAID-5 array in each server.

But that leaves you with two separate servers with two separate RAID arrays. If both servers were on the same ethernet network, you could run DFS and create a VIRTUAL volume that spans the two servers.

No reason to involve SAN.

If you later decided that it's taking too long to transfer data between the servers over the Ethernet connection, you could buy HBAs (host bus adapters) and a compatible switch and create a SAN. Then the servers could trade data at a much faster rate over the SAN connection. The Ethernet connection would still be used to connect to the client desktops.

The creation of a SAN would also involve moving (or replacing) the hard drives to a centralized disk array unit.

EDIT: Before anybody says anything.... yes I know the move to SAN is more complex than what I outlined above, but I'm trying to keep the concept simple.
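Here's a crude Python sketch of the kind of placement a DFS-style setup gives you (the UNC paths are made up): a new file lands, whole, on whichever server share still has room. Nothing ever splits one file across the two arrays, which is the key limitation to keep in mind.

Code:
# Crude sketch of DFS-style file placement across two servers (the UNC paths
# are invented).  Each new file goes, in its entirety, to whichever share has
# the most free space.  No single file is ever split across the two RAID arrays.
import shutil

SHARES = [r"\\server1\data", r"\\server2\data"]   # hypothetical shares, one per server

def pick_share(file_size_bytes: int) -> str:
    """Pick the share with the most free space that can still hold the whole file."""
    best = max(SHARES, key=lambda s: shutil.disk_usage(s).free)
    if shutil.disk_usage(best).free < file_size_bytes:
        raise OSError("no single server can hold this file -- DFS can't split it up")
    return best

print(pick_share(50 * 1024**3))   # where would a 50GB file land?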
 
Spartacus said:
You can have 5 drives in each of two servers and then create a RAID-5 array in each server.

But that leaves you with two separate servers with two separate RAID arrays. If both servers were on the same ethernet network, you could run DFS and create a VIRTUAL volume that spans the two servers.

No reason to involve SAN.

If you later decided that it's taking too long to transfer data between the servers over the Ethernet connection, you could buy HBAs (host bus adapters) and a compatible switch and create a SAN. Then the servers could trade data at a much faster rate over the SAN connection. The Ethernet connection would still be used to connect to the client desktops.

The creation of a SAN would also involve moving (or replacing) the hard drives to a centralized disk array unit.

EDIT: Before anybody says anything.... yes I know the move to SAN is more complex than what I outlined above, but I'm trying to keep the concept simple.


But DFS doesn't allow one to split up one very LARGE file among multiple servers, regardless of whether they're in a SAN or not, correct? What does one do if it's the year 2000, you want to use only SCSI drives, and you need a place to put a 1TB file? You can't use RAID in any configuration on the systems because there just isn't enough space on each drive, so what would one do? That was basically my question from the very beginning.
 
Not to be a dink, but it sure sounds like you're trying to learn a little bit of a lot of things that require a lot of reading and understanding. I would suggest buying:

"MCSE guide to managing Microsoft windows server 2003 network ISBN 0-619-2173-7"

and

"MCSE guide to managing Microsoft windows server 2003 environment ISBN-10: 0-619-21752-9"
windows 2000 in your case.

Two great books that cover your questions in depth and with examples.

Edit
***This is assuming you're going to have a Windows environment, of course.
A good starting Unix book would be O'Reilly's Essential System Administration, ISBN 0-596-00343-9.

Man, I just realized I have a crapload of books... ugh :rolleyes:
 
Tim Wardlaw said:
Not to be a dink, but it sure sounds like you're trying to learn a little bit of a lot of things that require a lot of reading and understanding. I would suggest buying:

"MCSE guide to managing Microsoft windows server 2003 network ISBN 0-619-2173-7"

and

"MCSE guide to managing Microsoft windows server 2003 environment ISBN-10: 0-619-21752-9"
windows 2000 in your case.

Two great books that cover your questions in depth and with examples.

Edit
***This is assuming you're going to have a Windows environment, of course.
A good starting Unix book would be O'Reilly's Essential System Administration, ISBN 0-596-00343-9.

Man, I just realized I have a crapload of books... ugh :rolleyes:

Eh, don't worry about it; at least it's useful. I enjoy reading something like a technical manual, schematic, educational material, etc. a lot more than some stupid story (unless of course it's a true story).
 
imzjustplayin said:
But DFS doesn't allow one to split up one very LARGE file among multiple servers, regardless of whether they're in a SAN or not, correct? What does one do if it's the year 2000, you want to use only SCSI drives, and you need a place to put a 1TB file? You can't use RAID in any configuration on the systems because there just isn't enough space on each drive, so what would one do? That was basically my question from the very beginning.
And it's been answered several times over. They'd use a SAN.
 
Orinthical said:
And it's been answered several times over. They'd use a SAN.
Well then, you guys are clearly not doing a very good job of explaining how this would be possible.

If a SAN is basically a network between storage devices with a fast interconnect, how would you store a large file across multiple member servers? I don't believe DFS is capable of doing this..
 
imzjustplayin said:
Well then, you guys are clearly not doing a very good job of explaining how this would be possible.

If a SAN is basically a network between storage devices with a fast interconnect, how would you store a large file across multiple member servers? I don't believe DFS is capable of doing this..


Because you STILL DON'T have a BASIC understanding of networking :rolleyes:
 
I agree as well. Everything has been pretty well explained if you have a basic understanding of Networking.
 
imzjustplayin said:
Well then, you guys are clearly not doing a very good job of explaining how this would be possible.

If a SAN is basically a network between storage devices with a fast interconnect, how would you store a large file across multiple member servers? I don't believe DFS is capable of doing this..
1. You are confusing yourself. You're confusing technical and practical terminology and mixing up the layers of an overall design.
2. A SAN is a network, yes, but NOT of INDIVIDUAL SERVERS. It is independent.
3. It is helpful to think of a SAN as an independent device in your design. A SAN is made up of devices (NOT SERVERS) that contain storage. These devices are DAEs: storage enclosures made up of fifteen hard drives each. They are connected WITHIN the SAN to the SAN's storage processor. NOT SERVERS.
4. The physical storage contained WITHIN the SAN can then be configured into multiple logical volumes. SERVERS are then connected to the SAN, allowing you to /assign/ some or all of the SAN's available storage to a server. Servers can also share the same storage space, or LUN, on a SAN.

A SAN is a centralized storage system. Servers are connected to it.

Hard drives in the servers are NOT a part of the SAN, even if the server is connected to the SAN.

A SAN is independent of your network and system design.

A SAN is something you purchase and add to your design, not "build." If you buy an IBM SAN, you need to buy IBM DAEs to expand upon it. You don't just "add another server." You generally cannot just go buy an HP or Dell DAE and expect it to work. It's standard technology in a proprietary package.

With that said however, any manufacturer's servers can connect to it. An IBM SAN can work with Dell servers or vice versa.

If you want or need high volume, high availability, high bandwidth storage in an enterprise you are pretty much stuck with getting a SAN. Period. You don't "build" it when you need it or when you think it would be cool to do so.

SAN technology is not new. It's been around for a long time. To answer your scenario: If they had need to store a terabyte file or database in 2000, they'd use a SAN just like they do today.
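If it helps, here's a back-of-the-envelope Python sketch of point 4 above (every name and number in it is invented, and RAID overhead is ignored): the drives in the DAEs form a pool, the pool gets carved into LUNs, and LUNs get assigned to servers. A server sees its LUN as one big volume; it never sees the individual drives.

Code:
# Back-of-the-envelope model of SAN storage (all names and sizes invented).
# The storage processor pools the drives in the DAEs, carves the pool into
# LUNs, and presents those LUNs to servers.  Servers see LUNs, never drives.
DRIVES_PER_DAE = 15
DRIVE_SIZE_GB = 146            # a period-typical FC drive size
DAE_COUNT = 4                  # four enclosures behind the storage processor

pool_gb = DAE_COUNT * DRIVES_PER_DAE * DRIVE_SIZE_GB   # raw pooled capacity
luns = {}                      # LUN name -> (size in GB, server it is assigned to)

def carve_lun(name, size_gb, server):
    """Carve a logical volume out of the pool and assign it to a server."""
    global pool_gb
    if size_gb > pool_gb:
        raise ValueError("not enough capacity left in the pool")
    pool_gb -= size_gb
    luns[name] = (size_gb, server)

carve_lun("LUN0", 1024, "dbserver1")    # a 1TB volume for the big database
carve_lun("LUN1", 500, "fileserver1")
print(f"assigned: {luns}, remaining pool: {pool_gb} GB")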

------------------------------------------------------------------------------------------------------------

This has been explained repeatedly, even as recently as a couple of posts ago. If you cannot grasp what has been said, you need to go read books on the subject. These words are not being said out of arrogance or with the dismissive tone you are trying to attach to them; they are said because they are fact. There are a lot of well-educated people here, from both the enthusiast and consulting camps. What you have been asking about has been explained, in detail, repeatedly. Stop calling books "stupid," as you have repeatedly done, and go read some of the ones we've suggested to you.

And to respond without getting too personal here, we're not here to do a "job" of any kind. A forum such as this is a place where people can offer advice or best practices... not perform an educational or consulting service. If you want to berate someone for explaining something to you, you should hire someone who can come out and white-board this for you.

All of that being said, I'm done with this thread. Many contributors here have poured a great deal of technical information within these pages. You sir, need to listen to them and go read.

It is becoming obvious that by your inflammatory ignorance and name that you're probably not interested in learning at all. "I'm just playing" is your name after all, no?

Maybe you're not "just playing" and actually do want to learn... that's cool... but forums are not a place to learn on the level you are expecting. You need personal tutelage, a couple networking courses, some white-boarding sessions and, at best, some good book and lab time. Get that then come back if you're serious about learning.
 
Oh, just get on and do it. :D

Most of the vendor SAN switches (you could get away with a hub) will be rebadged Silkworms. Similarly, your HBAs may have a GBIC socket or an SC connector.

You can pick up a Silkworm switch for $100 right this minute on eBay, and it's pro-grade equipment. SC-SC cables can be had for just $2-$3.

Plug that into a Dell PowerVault and you are away... (and if you can't get it to work, just dump it back onto eBay :( ).
 
LOL, no book can be of any use unless it has pictures, which in my opinion are especially important when discussing network topology.

Maybe you guys could draw a picture of what a typical SAN should look like and then a brief description of each of the components in a SAN?



This is the only picture I could find that resembles a SAN, but to me it doesn't quite make any sense. It seems like something is missing... Too vague?

[broken image link: SAN diagram]
 
hokatichenci said:
thread crap thread crap thread crap thread crap
this will be the last thing I leave in this thread. Good luck to you though.

Hmm I see.

Oh, and BTW, if you guys had actually been more useful, instead of getting caught up in the fact that I didn't know the details of a SAN and trying to explain them in a very odd and rude way, this thread wouldn't have degenerated into what it did. You guys got too focused; you essentially took a magnifying glass and looked at the face of a person in a painting, when I had never seen the painting from afar (the big picture) in terms of SAN networking and how it's relevant, set up, etc.

http://www.answers.com/topic/nas-gateway

This whole thread was basically asking about the setups depicted below.

http://content.answers.com/main/content/img/CDE/LEGCYNAS.GIF
http://content.answers.com/main/content/img/CDE/NASGTWAY.GIF
http://content.answers.com/main/content/img/CDE/SAN2.GIF

Then any description past that would have made more sense.

Had you at least attempted to decipher what I was saying, and not gotten all caught up in recommending some fucking books I could have found on my own (as opposed to this link, which was found only because I was looking up a similar but obscure and new technology which, BTW, was never mentioned here), maybe you guys wouldn't have abandoned this thread and left with a bad taste in your mouths, as if you'd been talking to some bitch for the past 5 hours.
 
It's boggling, the circles this thread is running in.


OP, how many clients are going to be on the network you want to set up, and how much data are we talking about here?

I've never heard of any company that bought a SAN without a significant support staff infrastructure already hired, long in operation, and knowledgeable enough to support it. We have a SAN. We have over 20,000 employees just in America, and 245 TB of storage. We also have a dedicated team whose whole job is managing SANs, LUNs, etc., and data backups. I've never seen a situation where a SAN is not "grown into". For example, the office I run has 40 HP servers, with about 6-7TB of storage if I totaled them all up. Most file servers or email servers have 360GB of space; one has 1.1TB, and one 3.5TB. So that's about 15 servers... the rest of the 35 have whatever their job requires (usually 18-60GB on average).

1) They are all RAID 5, except a few with only a small number of drives (e.g. domain controllers), which are RAID 1.

2) None of it is ever down. RAID keeps everything running until I can get there in the rare instance that a drive fails. The odds of a server just spontaneously combusting are beyond my brain to calculate (i.e., it won't happen).

3) I don't have any failover-this and failover-that, save for my two main email servers. Clustering file servers is pretty much a waste of disk space. Microsoft DFS doesn't work worth a damn; I've taken a look at it many times. In reality, people map drives (seen in My Computer) rather than click Start, Run, type \\server, and browse to the folders they want. The DFS root that shows the shares on all your servers is really a moot point. Performance was always piss-poor as well.
 
Realistically, here are the things that need multiple redundant/failover/clustered servers:

www.hp.com

www.victoriassecret.com

www.microsoft.com

slashdot.org

See where I'm going with this? At my site, I have 175 users, but my datacenter services about 1,000 people around the country (via WAN). Aside from my aforementioned two main email servers, we have no need for the level of clustering/redundancy you're dreaming up, and I am still able to provide higher than 99% uptime to my users.

(It's just not practical.)
 
Sharaz Jek said:
Realistically, here are the things that need multiple redundant/failover/clustered servers:

www.hp.com

www.victoriassecret.com

www.microsoft.com

slashdot.org

See where I'm going with this? At my site, I have 175 users, but my datacenter services about 1,000 people around the country (via WAN). Aside from my aforementioned two main email servers, we have no need for the level of clustering/redundancy you're dreaming up, and I am still able to provide higher than 99% uptime to my users.

(It's just not practical.)
Well, while uptime isn't too important to me, I would like to know how to implement a solution in the event it *does* come to be important for me, so I figured I could create a small-scale failover setup of some sort, but I'm not sure how to configure it at all. I don't know how the software just reverts to the other server or whatnot... It's supposed to be fairly seamless and transparent for the user, right?
 
Why don't you do a quick recap of what you have learned in your multiple threads on networking so people in the thread know where you stand?

It might help to know what you know and what you think you know.
 
imzjustplayin said:
LOL, no book can be of any use unless it has pictures, which in my opinion are especially important when discussing network topology.

It's really hard not to view this as a trolling attempt... seriously.


Computers <---- *MAGIC*----> SAN

There it is in a netshell.
 
Malk-a-mite said:
It's really hard not to view this as a trolling attempt... seriously.


Computers <---- *MAGIC*----> SAN

There it is in a netshell.


QFT
 
Again, not to sound like a dick, but I think this sums up the thread in a nutshell...

Most of us here have plenty of experience with network environments and have studied these topics, either on our own from "books" or through college diplomas and degrees in related subjects.
Now, the response you are getting isn't rudeness but plain ol' reality. Basically, how this thread looks to me is this: you know how to drive a car, and you're asking those of us who know what's under the hood for the details of how to machine your own Porsche. It just doesn't work that way. You have to make the effort to read up on and study what you are going to do. I know it took months to develop the architecture to support our SAN.
So, in your best interest, it would be good to make the effort to get an understanding from a credible source and apply that knowledge to form a solution for your environment. We could tell you all about SANs and the like, but if you were to implement one based on what you read here, how would you ever fix a problem without a full understanding of what is going on?

Sorry to say, in some cases you have to learn to crawl before you can walk, and you're trying to run a marathon...

Take the time, read a book. This should be the end of this thread...

PS: the books I suggested have plenty of pictures.
 
imzjustplayin said:
Well, while uptime isn't too important to me, I would like to know how to implement a solution in the event it *does* come to be important for me, so I figured I could create a small-scale failover setup of some sort, but I'm not sure how to configure it at all. I don't know how the software just reverts to the other server or whatnot... It's supposed to be fairly seamless and transparent for the user, right?

Nope, you're following the wrong set of skills.

Realistically, I would say you need to follow the paths of disaster recovery and data retention. Have you even thought about that yet? These concepts are 1000% more important and come years before a SAN should even be necessary, IMO.

Like I said above, until you are seeing the traffic that websites like those are seeing, clustering really doesn't need to be on your radar (you're wasting valuable research time that could be going toward more practical skills, such as backups and recovery).

In the end... file server redundancy is "just not done", due to lack of practicality.
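If "backups and recovery" sounds abstract, here's a bare-bones Python sketch of the idea (the paths are placeholders, and this is no substitute for a real backup product): copy everything once, then on each later run copy only what's new or changed, and keep the copies somewhere other than the server you're protecting.

Code:
# Bare-bones incremental backup sketch (paths are placeholders; a real setup
# would go to tape or another server, with retention and restore testing).
# Only files that are missing at the destination or newer at the source get copied.
import os, shutil

SOURCE = r"D:\shares"            # the data you want to protect
DEST   = r"E:\backups\shares"    # where the copies go

def incremental_backup(src, dst):
    copied = 0
    for root, _dirs, files in os.walk(src):
        for name in files:
            s = os.path.join(root, name)
            d = os.path.join(dst, os.path.relpath(s, src))
            os.makedirs(os.path.dirname(d), exist_ok=True)
            if not os.path.exists(d) or os.path.getmtime(s) > os.path.getmtime(d):
                shutil.copy2(s, d)   # copy data plus timestamps
                copied += 1
    return copied

print(f"{incremental_backup(SOURCE, DEST)} files copied this run")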
 
Sharaz Jek said:
In the end... file server redundancy is "just not done", due to lack of practicality.

Actually, I am going to disagree with you on this one. It's practical if your business depends on it.
We have a cold-swap file server. It's running and instantly synced to production, but it would need to be renamed, joined to the domain, and have some folders re-shared. Total time to a full file server back in action: 1-2 hours. Now, the software we use, Double-Take, can be configured for fully automatic failover, but the consequences of an accidental full failover in the case of a lockup or something would be greater than the 2-hour manual swap-over.

This failure protection is in addition to our SAN supporting the production file server, nightly incremental backups to tape, and a weekly full backup of the data.
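For what it's worth, here's a toy Python check along the lines of what you'd want before trusting a standby like that (the share paths are invented, and a real replication product does this continuously at a much lower level): confirm the standby copy isn't stale relative to production.

Code:
# Toy consistency check between a production share and its cold-standby copy
# (paths invented; replication software does this continuously -- this just
# flags files that are missing, a different size, or older on the standby).
import os

PROD    = r"\\fileserver1\data"
STANDBY = r"\\standby1\data"

def stale_files(prod, standby):
    stale = []
    for root, _dirs, files in os.walk(prod):
        for name in files:
            p = os.path.join(root, name)
            s = os.path.join(standby, os.path.relpath(p, prod))
            if (not os.path.exists(s)
                    or os.path.getsize(s) != os.path.getsize(p)
                    or os.path.getmtime(s) < os.path.getmtime(p)):
                stale.append(p)
    return stale

print(f"{len(stale_files(PROD, STANDBY))} files not yet replicated to the standby")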
 
Tim Wardlaw said:
Again, not to sound like a dick, but I think this sums up the thread in a nutshell...

Most of us here have plenty of experience with network environments and have studied these topics, either on our own from "books" or through college diplomas and degrees in related subjects.
Now, the response you are getting isn't rudeness but plain ol' reality. Basically, how this thread looks to me is this: you know how to drive a car, and you're asking those of us who know what's under the hood for the details of how to machine your own Porsche. It just doesn't work that way. You have to make the effort to read up on and study what you are going to do. I know it took months to develop the architecture to support our SAN.
So, in your best interest, it would be good to make the effort to get an understanding from a credible source and apply that knowledge to form a solution for your environment. We could tell you all about SANs and the like, but if you were to implement one based on what you read here, how would you ever fix a problem without a full understanding of what is going on?

Sorry to say, in some cases you have to learn to crawl before you can walk, and you're trying to run a marathon...

Take the time, read a book. This should be the end of this thread...

PS: the books I suggested have plenty of pictures.


I agree completely, and this is pretty much what I wrote in his other "I don't get it" thread.
 
Sharaz Jek said:
Nope, you're following the wrong set of skills.

Realistically, I would say you need to follow the paths of disaster recovery and data retention. Have you even thought about that yet? These concepts are 1000% more important and come years before a SAN should even be necessary, IMO.

Like I said above, until you are seeing the traffic that websites like those are seeing, clustering really doesn't need to be on your radar (you're wasting valuable research time that could be going toward more practical skills, such as backups and recovery).

In the end... file server redundancy is "just not done", due to lack of practicality.
You say I need to focus more on backups and recovery; how do you mean? I chose this particular path because it interested me, and it really would suit my needs, despite it seemingly not suiting them.

Maybe I'm just ignorant, but what is there to know about backups and recovery? The issue here isn't about practicality or saving money, but about diving into the higher-end IT stuff, not the stupid small business/SOHO shit.

Please, enlighten me about backups and recovery; I may actually learn something, OR this could all be yesterday's news...

(In simplified terms, I didn't learn computers in a linear fashion, such as that taught in a classroom or book, per se...)
 
mobiux said:
I agree completely, and this is pretty much what I wrote in his other "I don't get it" thread.

It's what many of us have written.

"Originally Posted by imzjustplayin:
Maybe I'm just ignorant ...."
 