The dreaded 26 drive letter limit in Windows.

fleggett

So, what do y'all do to overcome this limit? Yeah, I know, don't use Windows, but for those of us not ready to jump ship, how is this bypassed? I don't think WSS works since the drive letters still get reserved.
 
The first question is where and why are you hitting that limit? But the answer is to use volume mounting. You can find a lot of articles on it. The 26 drive letter limit hasn't really been an issue in a long time.
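For anyone who hasn't done it before, the gist is: make an empty folder on an NTFS volume, then point a volume at that folder instead of (or in addition to) a letter. You can do it in Disk Management ("Change Drive Letter and Paths..." on the volume), or from an admin PowerShell prompt with mountvol. A rough sketch, with made-up folder names and a placeholder volume GUID:

Code:
  # list volume GUIDs and where each one is currently mounted
  mountvol
  # create an empty folder to act as the mount point
  mkdir C:\Mounts\Disk01
  # mount the volume at that folder instead of giving it a letter
  mountvol C:\Mounts\Disk01 '\\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\'
  # undo it later if you want
  mountvol C:\Mounts\Disk01 /D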
 
I thought the letters would loop back around once you reached Z? e.g. AA, BB, etc.
 
I thought the letters would loop back around once you reached Z? e.g. AA, BB, etc.


That's what I "figured" it would do myself, although I never had enough drives to test it out. I've always been curious about the answer, just never felt inclined to ask since it never really pertained to me.

At least there is a solution that's been available for at least a decade though lol.
 
This is great. Can I mount more than one drive in an empty folder? Say my empty folder is D:\others. Can I mount DriveE, DriveF, DriveG, etc., such that I could have the following:

D:\others\DriveE
D:\others\DriveF
D:\others\DriveG

etc.

Just wondering.

Yes, absolutely.
 
This is great. Can I mount more than one drive in an empty folder? Say my empty folder is D:\others. Can I mount DriveE, DriveF, DriveG, etc., such that I could have the following:

D:\others\DriveE
D:\others\DriveF
D:\others\DriveG

etc.

Just wondering.

Yes, but I think you misunderstand how it works. The empty folder is the mount point. In your example, D:\others\DriveE would be a mount point; D:\others is just a directory, and it can have other things in it if you want. Until you actually mount a filesystem to it, D:\others\DriveE would simply be seen as an empty directory.
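If you'd rather script it than click through Disk Management, the Storage cmdlets (Win8 / Server 2012 and later) can do the same thing. A minimal sketch, with the disk/partition numbers and paths made up for illustration:

Code:
  # create the empty folders that will become mount points
  New-Item -ItemType Directory -Path D:\others\DriveE, D:\others\DriveF, D:\others\DriveG
  # mount one volume per folder (numbers are examples - check Get-Disk / Get-Partition first)
  Get-Partition -DiskNumber 2 -PartitionNumber 1 | Add-PartitionAccessPath -AccessPath 'D:\others\DriveE'
  Get-Partition -DiskNumber 3 -PartitionNumber 1 | Add-PartitionAccessPath -AccessPath 'D:\others\DriveF'
  Get-Partition -DiskNumber 4 -PartitionNumber 1 | Add-PartitionAccessPath -AccessPath 'D:\others\DriveG'
  # optionally drop the drive letter a volume already has
  Remove-PartitionAccessPath -DiskNumber 2 -PartitionNumber 1 -AccessPath 'E:\'

Each folder gets exactly one volume, but you can have as many of those folders under D:\others as you like.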
 
I've got a directory on my C drive that contains 68 empty directories, all mount points. I use them not only for hard drives permanently connected to my computer, but also for any drive that will be connected with some regularity. That way I can run backup scripts without worrying about what letter each drive will get, since letter assignment is basically random (a rough example of what I mean is below).

This has worked well for me for 3+ years, but it's just too time consuming. Now that I've got a job, I've got less time but more money, so I'm moving to a ZFS server (already running) and a ZFS backup: only two "drives" of many TB to worry about. Much simpler!
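To make the backup-script bit concrete, here's a minimal sketch of the kind of thing I mean (the paths and the marker-file convention are made up for illustration, not my actual script). The marker file is just an empty file created once on each backup drive, so the script can tell whether a volume is really mounted at the folder before mirroring into it:

Code:
  $source = 'D:\Photos'
  $target = 'C:\Mounts\BackupDrive07'   # always the same folder, no matter what letter Windows would have picked
  if (Test-Path (Join-Path $target 'this-is-backupdrive07.marker')) {
      # /MIR mirrors the tree; /R:1 /W:5 keeps retries short for flaky USB drives
      robocopy $source (Join-Path $target 'Photos') /MIR /R:1 /W:5 /LOG:C:\Logs\backup07.log
  } else {
      Write-Warning "Nothing mounted at $target, skipping so C: doesn't fill up."
  }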
 
The first question is where and why are you hitting that limit? But the answer is to use volume mounting. You can find a lot of articles on it. The 26 drive letter limit hasn't really been an issue in a long time.

If the guy's job is anything like mine, it's easy to hit that limit. I monitor and report on disk usage for >1400 sites located from Europe to America to Japan, and I am not allowed the level of access required to install and use a quota setting/monitoring/reporting system like Northern Storage Suite. :(
 
Yes, but I think you misunderstand how it works. The empty folder is the mount point. In your example D:\others\DriveE would be a mount point, D:\others is just a directory, and it can have other things in it if you want. Until you actually mount a filesystem it D:\others\DriveE would simply be seen as an empty directory.

I do understand this point. However, from the few articles I read, it wasn't clear if the mount point could support multiple other drives. I seem to recall that 4-5 years ago, when I was still running Win XP, I tried to use mount points, but got an error when I tried using this approach. I don't remember any of the details.

Just so it's clear to everyone on this thread, I am NOT an IT professional, just a guy who has been around computers since the days of System/360, MVT, and punch cards. :eek: Hey, I remember when VM and TSO were introduced in the System/370 days.
 
If this is just a home computer, volume mount points.

If this is a server, or a business system based around Windows, Windows DFS-N is something you should look into. You would still likely use VMPs (volume mount points), but you could use DFS-N to clean things up and make it more presentable, or spread your drives over various systems but still have them accessible from one share.
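A rough idea of what that could look like with the DFSN PowerShell cmdlets (needs the DFS Namespaces role / RSAT tools installed, and every server, domain, and share name below is a made-up example). Each disk is still a volume mount point on the file server and shared out; DFS-N just stitches the shares into one tree:

Code:
  New-DfsnRoot   -Path '\\example.local\Storage' -TargetPath '\\FILESRV1\StorageRoot' -Type DomainV2
  New-DfsnFolder -Path '\\example.local\Storage\Movies'  -TargetPath '\\FILESRV1\Movies'
  New-DfsnFolder -Path '\\example.local\Storage\Backups' -TargetPath '\\FILESRV2\Backups'
  # clients just browse \\example.local\Storage and never care which box or disk holds what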
 
If the guy's job is anything like mine, it's easy to hit that limit. I monitor and report on disk usage for >1400 sites located from Europe to America to Japan, and I am not allowed the level of access required to install and use a quota setting/monitoring/reporting system like Northern Storage Suite. :(

If it's for work, then they are using a system that has been outdated for over a decade. If it's for personal use, then I am wondering why he has a setup like that. The one area I can understand is if, in a work setting, he has a workstation with a ton of network drives mounted, but that can also be worked around. I haven't run into this issue in the last decade for any of my sysadmin duties, so I guess it just seems odd to me.
 
Maybe I'm doing it wrong, but I mount all my drives as folders (48 drives) and then use FlexRAID to create protection and storage pooling (and the storage pooling gives me one drive with the contents of all the drives added to it).
 
I do understand this point. However, from the few articles I read, it wasn't clear if the mount point could support multiple other drives. I seem to recall that 4-5 years ago, when I was still running Win XP, I tried to use mount points, but got an error when I tried using this approach. I don't remember any of the details.

Just so it's clear to everyone on this thread, I am NOT an IT professional, just a guy who has been around computers since the days of System/360, MVT, and punch cards. :eek: Hey, I remember when VM and TSO were introduced in the System/370 days.

A mount point is for one drive only. You can make as many mount points as needed, anywhere.
 
Okay, since the issue has come up, my situation is in the domain of home usage. I'm thinking of ditching my current Norco 4220 for a 4224 and, while I'm at it, replacing the motherboard/cpu/memory and fully populating the box with drives (ideally 4 TB, but that's going to be pretty expensive). The 4224, on its own, can accept 24 hot-swappable drives. I think a 25th drive can also be installed via a hidden bay. That leads to the following problem:

o From what I understand, A: and B: are "hard reserved" and cannot be reassigned to another drive, so you're effectively down to 24 letters.
o Windows has to be installed somewhere, so another letter down (23).
o If you want to attach an external drive in the future, whether it be hard, USB, or optical, that leaves 22. Virtual drive applications, like Daemon Tools or VirtualCloneDrive, will consume their own letters.

Which leaves, at the very least, a 4 letter shortfall - an entire row on the Norco.

That's why I was asking if the 26-letter limit could be overcome or sidestepped in some manner.

Thanks, everyone, for the feedback. I'll definitely investigate mount points.
 
Okay, since the issue has come up, my situation is in the domain of home usage. I'm thinking of ditching my current Norco 4220 for a 4224 and, while I'm at it, replacing the motherboard/cpu/memory and fully populating the box with drives (ideally 4 TB, but that's going to be pretty expensive). The 4224, on its own, can accept 24 hot-swappable drives. I think a 25th drive can also be installed via a hidden bay. That leads to the following problem:

o From what I understand, A: and B: are "hard reserved" and cannot be reassigned to another drive, so you're effectively down to 24 letters.
o Windows has to be installed somewhere, so another letter down (23).
o If you want to attach an external drive in the future, whether it be hard, USB, or optical, that leaves 22. Virtual drive applications, like Daemon Tools or VirtualCloneDrive, will consume their own letters.

Which leaves, at the very least, a 4 letter shortfall - an entire row on the Norco.

That's why I was asking if the 26-letter limit could be overcome or sidestepped in some manner.

Thanks, everyone, for the feedback. I'll definitely investigate mount points.

So you're building a 96TB+ server for regular home use? :eek::confused: I've always used A: and B:. They don't auto-populate because they're looking for floppy drives or whatever, but you should be able to manually assign them to any drive.
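For anyone wanting to try it, re-lettering an existing data volume to A: is a one-liner with the Storage cmdlets (the letters here are just examples; Disk Management's "Change Drive Letter and Paths..." works too):

Code:
  # give the volume currently at Z: the letter A: instead (example letters only)
  Set-Partition -DriveLetter Z -NewDriveLetter A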
 
Okay, since the issue has come up, my situation is in the domain of home usage. I'm thinking of ditching my current Norco 4220 for a 4224 and, while I'm at it, replacing the motherboard/cpu/memory and fully populating the box with drives (ideally 4 TB, but that's going to be pretty expensive). The 4224, on its own, can accept 24 hot-swappable drives. I think a 25th drive can also be installed via a hidden bay. That leads to the following problem:

o From what I understand, A: and B: are "hard reserved" and cannot be reassigned to another drive, so you're effectively down to 24 letters.
o Windows has to be installed somewhere, so another letter down (23).
o If you want to attach an external drive in the future, whether it be hard, USB, or optical, that leaves 22. Virtual drive applications, like Daemon Tools or VirtualCloneDrive, will consume their own letters.

Which leaves, at the very least, a 4 letter shortfall - an entire row on the Norco.

That's why I was asking if the 26-letter limit could be overcome or sidestepped in some manner.

Thanks, everyone, for the feedback. I'll definitely investigate mount points.

I'm confused: you aren't going to build an array with all those drives? Why would you use 24 individual drives instead of putting at least some of them into some sort of RAID array? That wipes out a whole mess of letters right there...
 
I bet he is using SnapRAID, which requires each drive to be mounted to a location (be it a letter or a directory). It supports up to 6 parity disks per array, so large single arrays are possible.

FlexRAID is more automatic, and in its CruiseControl setup mode it will actually remove the letters from your disks automatically and mount them inside a hidden and protected directory in C:\.
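If it is SnapRAID, the config is perfectly happy with mount-point folders instead of letters. A rough sketch of what the relevant bits of snapraid.conf could look like with folder mounts (all paths are made up, and check the manual for your version; older releases used "disk" where newer ones use "data"):

Code:
  # snapraid.conf (illustrative paths only)
  parity   C:\Mounts\Parity1\snapraid.parity
  2-parity C:\Mounts\Parity2\snapraid.2-parity
  content  C:\SnapRAID\snapraid.content
  content  C:\Mounts\Disk01\snapraid.content
  data d1  C:\Mounts\Disk01\
  data d2  C:\Mounts\Disk02\
  data d3  C:\Mounts\Disk03\

After that it's just "snapraid sync" (and the occasional "snapraid scrub") on whatever schedule you like.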
 
It seems like instead of figuring out how to get by with "only" 26 drive letters, you might want to make a post detailing exactly what you're trying to accomplish and ask for input on how best to accomplish that. Frankly, having 24+ drives exposed directly to the OS doesn't exactly seem like a great idea to me, and it will probably seem like less of a great idea to you in a few months when you're trying to remember if you put that file on O or Q. Or when one of the drives dies and you're either restoring from backup, redownloading, or recreating from source material, thinking "you know, if I had implemented some sort of mirroring or parity system I'd just be sticking a new drive in and it would rebuild itself instead of having to do all this shit".

Since you're obviously using windows I'm going to go ahead and say storage spaces should be a serious consideration at least. Then you could have 70-80TB total space, a couple spare drives for when one fails (because with that many drives it's not if but when), and one drive letter.
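For reference, the pooling side is only a few cmdlets if you go that route. A hedged sketch (the pool and space names are made up, and exact behaviour differs a bit between Win8/2012 and 8.1/2012 R2):

Code:
  $disks = Get-PhysicalDisk -CanPool $true
  New-StoragePool -FriendlyName 'MediaPool' -PhysicalDisks $disks `
      -StorageSubSystemFriendlyName (Get-StorageSubSystem | Select-Object -First 1).FriendlyName
  New-VirtualDisk -StoragePoolFriendlyName 'MediaPool' -FriendlyName 'MediaSpace' `
      -ResiliencySettingName Parity -UseMaximumSize
  # then initialize, partition, and format it as one big E:
  Get-VirtualDisk -FriendlyName 'MediaSpace' | Get-Disk | Initialize-Disk -PassThru |
      New-Partition -DriveLetter E -UseMaximumSize | Format-Volume -FileSystem NTFS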
 
I used FlexRAID and then Liquesce for drive pooling in Windows, but never wrote to the pool directly, precisely because it would spread files randomly (and write performance was terrible, but that might have changed). I preferred to keep things organized manually, and had a 1:1 copy of each drive for backup (more than 40 drives and 40 backups). So having one letter showing me all my files together was nice, but I still mounted all the drives with mount points to copy data to them.
 
Since you're obviously using windows I'm going to go ahead and say storage spaces should be a serious consideration at least. Then you could have 70-80TB total space, a couple spare drives for when one fails (because with that many drives it's not if but when), and one drive letter.
You'd also have 25MB/s (or less) write speeds. Storage Spaces is junk.
 
I used FlexRAID and then Liquesce for drive pooling in Windows, but never wrote to the pool directly, precisely because it would spread files randomly (and write performance was terrible, but that might have changed). I preferred to keep things organized manually, and had a 1:1 copy of each drive for backup (more than 40 drives and 40 backups). So having one letter showing me all my files together was nice, but I still mounted all the drives with mount points to copy data to them.

FlexRAID is decent but Liquesce is garbage. Best of breed these days is SnapRAID for parity and then StableBit DrivePool for pooling. DrivePool supports filling up one disk at a time to avoid scattering files everywhere across multiple drives, and the latest beta goes a step further and introduces file placement rules: you predetermine specific files, wildcards, or folders and can then limit them to specific drive(s).

About a month ago I asked the developer to create an SSD caching plugin for tiered storage: I wanted files written to the pool to hit the SSD first, then migrate to the spinning disks later at a configurable interval (I've got it set for 5am daily). He banged it out within a week (it's now called "SSD Optimizer"). Great software, great support, and it's been working flawlessly on Windows Server 2012 R2. Best of all, I'm able to mix and match different-sized disks in a pool without the space penalty that comes with striping-based RAID systems.
 
You'd also have 25MB/s (or less) write speeds. Storage Spaces is junk.

I heard it got better in R2, but I still prefer hardware RAID to software solutions personally, so I haven't actually tested it in 2012 R2. Still, between Storage Spaces' dubious performance and 24+ individual drive letters, Storage Spaces is still worth looking into.
 
How do you get the "SSD Optimizer" for tiering? I don't see a reference for it on the SnapRAID page or forum.
 
A mount point is for one drive only. You can make as many mount points as needed, anywhere.

OK, so I thought about this for a while. My situation is that I have a home LAN and no central server, because my desktop system is effectively the "server" for my photos and music. (No movies or TV yet.) Also, I wanted to have a uniform drive letter assignment and network sharing scheme. A long time ago, this was easy.

Now I have eight digital cameras, counting 3 real cameras, 4 cameras in phones and tablets, and a scanner. I need to ingest from any of these directly. Plus I just got a 4-in-1 internal card reader; I need to ingest from 2 of these readers. Plus a whole bunch of external HDDs, which can be eSATA- or USB-mounted, plus the usual gaggle of thumb drives, plus a "Photo Data Tank" so I can back up and unload camera cards away from home. Plus 2 GPS units now. Way too many devices for each to get its own Windows drive letter, and also a lot of confusion.

Enter mount points. Now I've set up a separate drive-letter mount-point structure for:

  • Cameras
  • Card Readers
  • Phones (except for camera)
  • GPS
  • USB and eSATA drives.
And each drive letter has a name to indicate its usage. Much better. Thanks guys, this idea is why I love [H].
 
Aah, I haven't tried it in R2.

Looks like most of the performance increases I had read about had more to do with tiered storage than actual improvements in parity performance:

http://homeservershow.com/creating-...indows-server-2012-essentials-r2-preview.html

It seems like Storage Spaces doesn't recognize bulk writes and treats each write individually, requiring the full worst-case 4 (or 6 for double parity) IOs per write, vs. 1 IO per write if you recognize that you don't care what the existing parity is, because you're writing such large blocks that the existing parity is going to be completely changed anyway. It seems MS themselves stated as much:

“The caveat of a parity space is low write performance compared to that of a simple or mirrored storage space, since existing data and parity information must be read and processed before a new write can occur. Parity spaces are an excellent choice for workloads that are almost exclusively read-based, highly sequential, and require resiliency, or workloads that write data in large sequential append blocks (such as bulk backups).”
The idea being that those writes are too important to wait: stuff like Hyper-V VMs expects write-through caching, while the bulk writes require write-back caching. RAID cards get around this with battery backups and by being independent of the OS (i.e. a loss of power or OS crash wouldn't cause the data that the RAID card said had been written to not actually get written), which is a big reason I'm still a bigger fan of hardware RAID than HBAs and OS-based RAID.

But since RAID IOPS scales with drive count, using 24 drives in a dual-parity storage space would probably be fast enough that things like source/network performance would bottleneck before the drives do (rough numbers below).

I'd definitely be interested to read some benchmarks by the OP using storage spaces in simple, mirror, double mirror, parity, and double parity setups with that many drives.
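Rough back-of-envelope on that, assuming something like 75-100 random write IOPS per 7,200rpm disk (a made-up but typical figure): 24 drives at ~100 IOPS is ~2,400 raw IOPS, and at the worst-case 6 IOs per small random write for double parity that's only about 400 effective write IOPS. For big sequential full-stripe writes (the bulk-backup case MS describes) the read-modify-write overhead mostly disappears, so you'd expect something closer to the aggregate sequential throughput of 22 of the 24 drives.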
 
Or it would crawl to a stop.

When you mention software RAID I'm sure you're excluding ZFS, because it doesn't have the flaws you mention.
 
So with ZFS there's never a time or a configuration you could set up where it would use RAM to cache writes for improved performance, and where data could be lost if the ZFS system crashed or lost power? Because with a flash- or battery-backed RAID card, once you write to it and it says "ok, I got it", it actually does have it regardless of OS crash or power loss, since it's backed up and separate from the OS.

Don't bother responding for my sake since I've gone ahead and added you to my ZFS zealot list (aka ignore) but others might care what you have to say.
 
So with ZFS there's never a time or a configuration you could set up where it would use RAM to cache writes for improved performance, and where data could be lost if the ZFS system crashed or lost power? Because with a flash- or battery-backed RAID card, once you write to it and it says "ok, I got it", it actually does have it regardless of OS crash or power loss, since it's backed up and separate from the OS.

Don't bother responding for my sake since I've gone ahead and added you to my ZFS zealot list (aka ignore) but others might care what you have to say.

ZFS handles sync writes properly. Sync writes will be persisted to the ZIL before being written out to disk. If the system loses power (or otherwise crashes) the writes are replayed from the ZIL next time the pool is brought online. This process can be made faster by using a fast device such as an SSD for the ZIL.
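For completeness, on most ZFS platforms that's just a dedicated log device added to the pool; the pool name and device path below are made-up examples:

Code:
  # add a fast SSD as a separate log (SLOG) device so sync writes land on flash first
  zpool add tank log /dev/disk/by-id/ata-SOME_FAST_SSD
  zpool status tank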
 
Hi all. Sorry for bugging out, but I've been pretty sick for the past few days. I'm still feeling extremely sluggish and blah.

Okay, let me reframe the issue. Say you've got a stuffed 4224 (24 4TB HDs + 1 SSD). It's been modified with the 120mm fan cage and hooked directly to the playback monitor/TV. As such, you've got boatloads of space to manage. It'll have the latest motherboard, probably 16 GB of RAM, a fairly beefy CPU, and a couple of 1015s or similar for SAS connectivity.

The only mandate is that this system will solely be used as an HTPC. A monster HTPC, but an HTPC at the core. I'll be ripping my entire BR library to this machine, menus and all, plus some DVDs.

What would you do? My only request is to not make this system so exotic that my addled brain can't keep track.

Oh, one other caveat - I won't have all the drives at the start. Right now, I've got enough to populate two rows. The other three will be purchased in the next 3-4 months.

Thanks in advance!
 
Seems like a good use case for SnapRAID.

What would you do?

Well, my HTPC software runs under Linux and has done so for the 10 years + 4 days that I have used it. These days I am running SnapRAID on top of ZFS and ext4, with the parity disks being external 4TB drives. I sync once per week, and this is usually 100GB to 300GB of updates.
 
Storage Spaces with two-drive parity would probably suit this use case just fine. Even if the write speeds do suck and are <100MB/s, your source (the BD drive) isn't going to saturate that anyway. SS would let you expand as needed (one drive at a time if you just want to buy drives as you need them) and is built into Win8+. Instead of running out of drive letters you'd just have a single 80TB E: drive.
 