I thought the letters would loop back to the beginning once you reached Z, e.g. AA, BB, etc.?
Unix-like mounting can be done in Windows. Microsoft calls it a Volume Mount Point.
http://technet.microsoft.com/en-us/library/cc938934.aspx
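For anyone who wants to try this, the built-in mountvol tool is the quickest route from an elevated Command Prompt. This is only a sketch: the folder path and the volume GUID below are placeholders, and you'd take the real GUID from mountvol's own listing:

```shell
:: List all volume GUIDs and their current drive letters / mount points
mountvol

:: Create the empty folder that will become the mount point (must be on an NTFS volume)
mkdir D:\others\DriveE

:: Mount the volume at the folder (GUID is a placeholder from the listing above)
mountvol D:\others\DriveE \\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\

:: Remove the mount point again without touching the data on the volume
mountvol D:\others\DriveE /D
```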
This is great. Can I mount more than one drive in an empty folder? Say my empty folder is D:\others. Can I mount DriveE, DriveF, DriveG, etc., such that I could have the following:
D:\others\DriveE
D:\others\DriveF
D:\others\DriveG
etc.
Just wondering.
The first question is where and why are you hitting that limit? But the answer is to use volume mounting. You can find a lot of articles on it. The 26 drive letter limit hasn't really been an issue in a long time.
Yes, but I think you misunderstand how it works. The empty folder is the mount point. In your example, D:\others\DriveE would be a mount point; D:\others is just a directory, and it can contain other things if you want. Until you actually mount a filesystem at D:\others\DriveE, it would simply appear as an empty directory.
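To make that concrete, here is a sketch of the layout described above done with diskpart from an elevated prompt. The volume numbers are placeholders; you'd take the real ones from `list volume` on the actual machine:

```shell
:: Run "diskpart", then inside it:
list volume

:: Mount volume 3 at D:\others\DriveE (the folder must already exist and be empty)
select volume 3
assign mount=D:\others\DriveE
:: Optionally drop its old drive letter so it's reachable only via the folder
remove letter=E

:: Mount volume 4 at a sibling folder under the same parent directory
select volume 4
assign mount=D:\others\DriveF
```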
If the guy's job is anything like mine, it's easy to hit that limit. I monitor and report on disk usage for >1400 sites located from Europe to America to Japan, and I am not allowed the level of access required to install and use a quota setting/monitoring/reporting system like Northern Storage Suite.
I do understand this point. However, from the few articles I read, it wasn't clear if the mount point could support multiple other drives. I seem to recall that 4-5 years ago, when I was still running Win XP, I tried to use mount points, but got an error when I tried using this approach. I don't remember any of the details.
Just so it's clear to everyone on this thread, I am NOT an IT professional, just a guy who has been around computers since the days of System/360, MVT, and punch cards. Hey, I remember when VM and TSO were introduced in the System/370 days.
Okay, since the issue has come up, my situation is in the domain of home usage. I'm thinking of ditching my current Norco 4220 for a 4224 and, while I'm at it, replacing the motherboard/CPU/memory and fully populating the box with drives (ideally 4 TB drives, but that's going to be pretty expensive). The 4224, on its own, can accept 24 hot-swappable drives. I think a 25th drive can also be installed via a hidden bay. That leads to the following problem:
o From what I understand, A: and B: are "hard reserved" and cannot be reassigned to another drive, so you're effectively down to 24 letters.
o Windows has to be installed somewhere, so another letter down (23).
o If you want to attach an external drive in the future, whether hard disk, USB, or optical, that leaves 22. Virtual drive applications, like Daemon Tools or VirtualCloneDrive, will consume their own letters.
Which leaves, at the very least, a 4-letter shortfall - an entire row on the Norco.
That's why I was asking if the 26-letter limit could be overcome or sidestepped in some manner.
Thanks, everyone, for the feedback. I'll definitely investigate mount points.
Since you're obviously using Windows, I'm going to go ahead and say Storage Spaces should be a serious consideration at least. Then you could have 70-80 TB of total space, a couple of spare drives for when one fails (because with that many drives it's not if but when), and one drive letter.
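For what it's worth, pooling the data drives behind a single letter with Storage Spaces comes down to a few PowerShell cmdlets. This is only a sketch: the pool and space names are made up, and S: is an arbitrary letter choice:

```shell
# PowerShell (elevated). Show disks that are eligible for pooling.
Get-PhysicalDisk -CanPool $true

# Create a pool from every poolable disk ("MediaPool" is a placeholder name)
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "MediaPool" `
    -StorageSubSystemFriendlyName "Windows Storage*" `
    -PhysicalDisks $disks

# Carve a parity space out of the whole pool, then initialize, partition, and format it
New-VirtualDisk -StoragePoolFriendlyName "MediaPool" -FriendlyName "MediaSpace" `
    -ResiliencySettingName Parity -UseMaximumSize
Get-VirtualDisk -FriendlyName "MediaSpace" | Get-Disk |
    Initialize-Disk -PassThru |
    New-Partition -DriveLetter S -UseMaximumSize |
    Format-Volume -FileSystem NTFS
```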
I used FlexRAID and then Liquesce for drive pooling in Windows, but never wrote to the pool directly, precisely because it would spread files randomly (and write performance was terrible, but that might have changed), so I preferred to keep things organized manually, and had a 1 by 1 copy of each drive for backup (more than 40 drives and 40 backups). So having one letter showing me all my files together was nice, but I still mounted all the drives with mount points to copy data to them.
You'd also have 25MB/s (or less) write speeds. Storage Spaces is junk.
I heard it got better in R2.
A mount point is for one drive only. You can make as many mount points as needed, anywhere.
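In PowerShell terms (Windows 8 / Server 2012 and later), each mount point is simply one extra access path on one partition, and a volume can have several at once. The disk and partition numbers below are placeholders:

```shell
# Add a folder access path to a partition (one volume per mount point)
Add-PartitionAccessPath -DiskNumber 2 -PartitionNumber 1 -AccessPath "D:\others\DriveE"

# The same volume can keep or drop its drive letter independently
Remove-PartitionAccessPath -DiskNumber 2 -PartitionNumber 1 -AccessPath "E:\"
```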
Aah, I haven't tried it in R2.
The idea is that writes are too important to wait for: stuff like Hyper-V VMs expects write-through caching, while bulk writes need write-back caching. RAID cards get around this with battery backups and by being independent of the OS (i.e. a loss of power or an OS crash wouldn't cause data that the RAID card said had been written to not actually get written, which is a big reason I'm still a bigger fan of hardware RAID than HBAs and OS-based RAID).
"The caveat of a parity space is low write performance compared to that of a simple or mirrored storage space, since existing data and parity information must be read and processed before a new write can occur. Parity spaces are an excellent choice for workloads that are almost exclusively read-based, highly sequential, and require resiliency, or workloads that write data in large sequential append blocks (such as bulk backups)."
So with ZFS, is there never a time or a configuration you could set up where it would use RAM to cache writes for improved performance, such that if the ZFS system crashed or lost power, data could be lost? Because with a flash- or battery-backed RAID card, once you write to it and it says "ok, I got it", it actually does have it regardless of OS crash or power loss, since it's backed up and separate from the OS.
Don't bother responding for my sake since I've gone ahead and added you to my ZFS zealot list (aka ignore) but others might care what you have to say.
What would you do?