Partition File Servers?

bigdogchris

I like the idea of having a separate partition for the OS/applications and a second for user data. I can totally buy into the idea that too many partitions can cause a long-stroking effect.

What do you guys think about partitions and their effect on file server performance? And what types of data should be separated by partitions (user data/SQL/OS/etc.)? Should you defragment a file server?

Keep in mind the server in question is 2008 R2 with eight 10k drives in hardware RAID 10, serving 125 users.

Thanks!
 
What are you hoping to accomplish with the partitions? In my mind, partitions impose artificial limits. Logs should be on separate partitions for recovery, but heavy disk-use programs should be on separate drive volumes, not just separate partitions.
Look at your goals and see if partitions help meet or hinder meeting your goals.
 
It's going to be a file server for user data: network folders, redirected user documents, etc. The question is more general, though.

As for separate drive volumes, do you mean physically separate disks? I like putting data across as many spindles as possible.
 
It's going to be a file server for user data: network folders, redirected user documents, etc. The question is more general, though.

As for separate drive volumes, do you mean physically separate disks? I like putting data across as many spindles as possible.

We do a 50 GB or so partition for the OS/software, and the rest of the drive space for user data storage.

And yeah, he means a different RAID volume if the app (e.g., databases) is very disk-heavy. You would get better performance if the data is on a completely separate array from the OS.
 
Put 2 of those HDDs in a RAID 1 (or "10") array for the OS, and use the other 6 in another RAID 10 array for everything else.

OS and data on a single array can lead to performance issues.
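To put rough numbers on the split suggested above, here's a minimal sketch of the usable capacity you'd get from eight drives divided into a 2-drive RAID 1 mirror for the OS and a 6-drive RAID 10 for data. The 300 GB per-drive size is an assumption for illustration only; plug in whatever the actual drives are.

```python
# Sketch: usable capacity of mirrored arrays (RAID 1 / RAID 10).
# DRIVE_GB is an assumed per-drive size, not from the thread.

def mirrored_usable_gb(n_drives: int, drive_gb: int) -> int:
    """RAID 1/10 mirror drives in pairs, so usable space is half the raw total."""
    if n_drives % 2 != 0:
        raise ValueError("mirrored arrays need an even drive count")
    return n_drives * drive_gb // 2

DRIVE_GB = 300  # assumption for illustration

os_gb = mirrored_usable_gb(2, DRIVE_GB)    # RAID 1: one mirrored pair -> 300
data_gb = mirrored_usable_gb(6, DRIVE_GB)  # RAID 10: three mirrored pairs, striped -> 900
```

The point of the split isn't capacity, though; it's that OS I/O and user-data I/O land on different spindles so they don't contend with each other.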
 
In Windows, I view multiple partitions as a negative: you should only use them if you have a solid need to do so.

Anytime you allow users to add data to a server, that section needs to be isolated from the OS itself, i.e., on its own partition. If for no other reason than to prevent a DoS condition where users manage to fill up the remaining space. As for specific types of servers:

SQL - Partitions here tend to be more of a necessity, given that your data and log files should be on separate hardware devices (where performance-appropriate), and both should be kept away from your OS device.

Fileserver - Even if you put the data on the same device as the OS, partitions prevent users from DoSing the OS by filling up the remaining space.

* I'll add this here since most people ignore it: align your partitions! Windows 2003 and older didn't align their partitions, and I don't think many Linux distros align either. This involves some extra setup prior to loading the OS, but the performance boost can be impressive in high-utilization conditions.
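The alignment point above comes down to simple arithmetic: legacy MBR layouts started the first partition at sector 63 (63 × 512 = 32,256 bytes), which is not a multiple of a typical RAID stripe, so every stripe-sized I/O can straddle two stripes and hit two disks instead of one. A sketch, assuming a 64 KB stripe (use your controller's actual stripe size):

```python
# Sketch: checking whether a partition's starting offset is aligned.
# The 64 KB default stripe size is an assumption for illustration.

def is_aligned(offset_bytes: int, stripe_bytes: int = 64 * 1024) -> bool:
    """A partition start is aligned when its offset is a multiple of the stripe size."""
    return offset_bytes % stripe_bytes == 0

legacy_offset = 63 * 512      # 32,256 bytes: classic pre-Vista MBR start, misaligned
modern_offset = 1024 * 1024   # 1 MiB: the Vista/2008+ default, aligned for common stripes
```

This is why Server 2008 and later default to a 1 MiB starting offset: it's a multiple of every common stripe and physical-sector size, so no extra setup is needed.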
 
In Windows, I view multiple partitions as a negative: you should only use them if you have a solid need to do so.

Anytime you allow users to add data to a server, that section needs to be isolated from the OS itself, i.e., on its own partition. If for no other reason than to prevent a DoS condition where users manage to fill up the remaining space. As for specific types of servers:

SQL - Partitions here tend to be more of a necessity, given that your data and log files should be on separate hardware devices (where performance-appropriate), and both should be kept away from your OS device.

Fileserver - Even if you put the data on the same device as the OS, partitions prevent users from DoSing the OS by filling up the remaining space.
Good tips, thanks.
* I'll add this here since most people ignore it: align your partitions! Windows 2003 and older didn't align their partitions, and I don't think many Linux distros align either. This involves some extra setup prior to loading the OS, but the performance boost can be impressive in high-utilization conditions.
Have you always had to do that? I've only heard of aligning partitions on the new Advanced Format drives.
 
Good tips, thanks. Have you always had to do that? I've only heard of aligning partitions on the new Advanced Format drives.
It's always been a performance consideration, but it's only been recently that folks have noticed it. Not sure why it's been under the radar as long as it has been.
 
Alignment is also very beneficial in RAID arrays.

It's always been a performance consideration, but it's only been recently that folks have noticed it. Not sure why it's been under the radar as long as it has been.
Creating the RAID array, then creating partitions through Server 2008 R2, will automatically and properly align the partitions in all cases, correct?

We are close to buying the servers. I've decided to change up the disk layout and give the OS/apps its own RAID 1 set, then use the remaining 6 disks in a RAID 10 set for user/application data storage.

Thanks.
 
Different partitions on the same spindle/disk...not as good a performance increase as having different partitions on different spindles/disks.

Example....
Single 600 gig RAID array, 80 gig C partition, 200 gig D partition, 200 gig E partition...similar performance, as it's all the same disk/volume.

However...take that same 600 gigs...
Say you have a pair of 76 gig drives in RAID 1 for the OS C partition,
and 2, 3, 4, or more 146 gig drives in RAID 1, 5, or 10 for the D partition...
and the same as above for the E partition.

Different RAID arrays...basically different spindles...far, FAAAAR better disk performance when dealing with concurrent hits.
And span the pagefile.sys, system-managed, across all 3.
 
Forgot to mention, these two servers will also be hosting AD and DNS as well as acting as file servers. The original intention was to use separate servers for AD, but I can only buy 2 servers, so I have no choice, and I want to have two DCs and DNS servers for redundancy.

Keeping the OS and the AD database on a RAID 1 set, with the 6 remaining disks in RAID 10 for file storage in 1 partition, should perform well?
 
Defragging is not as beneficial on a file server. Don't forget, workstations see the drives/files in their own minds. They don't care what the server thinks about fragmentation; NTFS deals with the files over the network.

So 2x servers...both DCs...and both file servers?
I'd honestly want the first DC to just be a DC...nothing else...for 100+ users.

Make the file server a second DC...a secondary DC (yeah, we aren't supposed to call them primary and secondary anymore...but they still sorta are).
 
So 2x servers...both DCs...and both file servers?
I'd honestly want the first DC to just be a DC...nothing else...for 100+ users.

Make the file server a second DC...a secondary DC (yeah, we aren't supposed to call them primary and secondary anymore...but they still sorta are).
Yes, around 120 users, but not all of them will be accessing the server at the same time. Most of them use their network storage sparingly throughout the day, if at all. This environment is not heavy usage by any means. The file servers are going to be quad-core Xeons with 12 GB of RAM and six 10k 6Gb/s SAS drives in RAID 10 for data; it should be plenty.

I have another question. We actually might be able to grab a 3rd server (2008 R2 Ent) on which I want to bring up 4 Hyper-V VMs. Would the same concept of the OS on a RAID 1 set, then the 4 .vhds in a single partition on a RAID 10, still work? Host specs are going to be 2x quad-core Xeons with HT, 24 GB of RAM, and eight 15k 6Gb/s SAS drives. I will buy more RAM later if necessary.
 
Defragging is not as beneficial on a file server. Don't forget, workstations see the drives/files in their own minds.

?

The server smb/nfs/whatever process reads in the file and sends the bits to the client. If the server process has to read bits of the file all over the platter, then fragmentation is still hindering read performance.
 
?

The server smb/nfs/whatever process reads in the file and sends the bits to the client. If the server process has to read bits of the file all over the platter, then fragmentation is still hindering read performance.
Yeah. My question really is about how often you should defrag it, I guess. I suppose I am overthinking it. If it shows there is fragmentation, then it should be defragmented.
 
In 2008 R2 there's a defrag task that's disabled but already configured in scheduled tasks. Once a week in the middle of the night should be fine, but it depends on how big the filesystem is and if it's in use at that time.

And yes, you're probably overthinking it. The performance gain is generally minimal, but it's probably worth doing once a week if you can fit it into a quiet window.
 
I like to have a partition for the OS, usually very small depending on what types of programs need to be installed and whether more programs will be installed over time. Usually I go with 20-30GB, then one other partition for data. If you need to reinstall the OS, or it becomes unbootable beyond repair and has to be reinstalled, at least you can safely format that partition and reinstall, and the data will be safe on the other partition.
 
I like to have a partition for the OS, usually very small depending on what types of programs need to be installed and whether more programs will be installed over time. Usually I go with 20-30GB, then one other partition for data. If you need to reinstall the OS, or it becomes unbootable beyond repair and has to be reinstalled, at least you can safely format that partition and reinstall, and the data will be safe on the other partition.

20-30 is WAY too small. Sure, it's fine at first, but with updates and temp files, over time you will fill it up. I had to fix this many times at my old job for a bunch of SBS boxes because my boss made them only 30 GB. You want at least 20% free disk space. I'd recommend a bare minimum of 60 GB.
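The 20%-free rule of thumb is easy to sanity-check with a couple of lines. The usage figures below are made up for illustration; the point is that a 30 GB OS partition that has crept up to ~26 GB used is already in trouble, while the same footprint on a 60 GB partition leaves plenty of headroom.

```python
# Sketch of the rule of thumb above: keep at least 20% of the OS
# partition free. All the GB figures here are illustrative, not measured.

def os_partition_ok(size_gb: float, used_gb: float,
                    min_free_fraction: float = 0.20) -> bool:
    """True when the partition keeps at least min_free_fraction of its space free."""
    return (size_gb - used_gb) / size_gb >= min_free_fraction

small = os_partition_ok(30, 26)  # ~13% free: fails the rule
large = os_partition_ok(60, 26)  # ~57% free: fine
```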
 
20-30 is WAY too small. Sure, it's fine at first, but with updates and temp files, over time you will fill it up. I had to fix this many times at my old job for a bunch of SBS boxes because my boss made them only 30 GB. You want at least 20% free disk space. I'd recommend a bare minimum of 60 GB.

Good point, I was thinking Linux; with Windows, yeah, 40-60GB is better. With VMs I also tend to be more conservative, as I know I can fairly easily expand. Rather than making partitions, I just make separate virtual disks so I can expand them easily. You can even do it live with certain virtualization products. I know in ESX I could grow data drives live, but not the OS. Well, I could do the "physical" part live but not the OS part.
 
Defragging is not as beneficial on a file server. Don't forget, workstations see the drives/files in their own minds. They don't care what the server thinks about fragmentation; NTFS deals with the files over the network.

So 2x servers...both DCs...and both file servers?
I'd honestly want the first DC to just be a DC...nothing else...for 100+ users.

Make the file server a second DC...a secondary DC (yeah, we aren't supposed to call them primary and secondary anymore...but they still sorta are).
You could have them both be DCs and use DFS for replication between the two so you have a completely redundant setup.
 
I have another question. We actually might be able to grab a 3rd server (2008 R2 Ent) on which I want to bring up 4 Hyper-V VMs. Would the same concept of the OS on a RAID 1 set, then the 4 .vhds in a single partition on a RAID 10, still work? Host specs are going to be 2x quad-core Xeons with HT, 24 GB of RAM, and eight 15k 6Gb/s SAS drives. I will buy more RAM later if necessary.

I do the same...a pair of small drives in RAID 1 to install the hypervisor host on...and then create your RAID arrays and make LUNs for Hyper-V (or ESXi) to present to the guest OSes.

If you have a 3rd Winders license...esp. Ent, now is the time I would make 1x dedicated DC...and have your 2x additional file servers (you could make one of them a DC). For just DCs and file servers, making one large RAID 10 out of 15k disks would be fine...present it as one large disk and store the VHDs on it. There's just something in me that...once you have a larger network, say over 75 users...I like to have a dedicated top DC doing nothing at all but DC stuff...the main infrastructure server.

If you had database servers...heavier hitters on the disks, wanting separate spindles for the OS and data...now is when you get creative with the RAID controller and make different volumes to present. The last ESXi setup I did had an HP MSA1000 fibre SAN, stuffed with 12x 15k disks covering both banks. Carved that up into a bunch of LUNs...little RAID 1 pairs for the OSes of each guest, and larger LUNs for the data volumes. It has 9 servers on it, via 2x different DL360 hosts. It gets trickier to manage all the LUNs and which one goes to which server. So you have a trade-off: better performance, but complication.
 
I'm not sure who said it, but 100 users does not in any way require a dedicated domain controller, at least not in terms of utilization.

Even a 5 year old server can support thousands of users.

http://technet.microsoft.com/en-us/library/cc728303(v=ws.10).aspx

It's not a load point of view (yes, most of us know the load on a DC is quite small); it's about having a separate box that does nothing else. The primary reason is stability. File servers end up also being print servers...end up installing print drivers...sometimes requiring a reboot. File servers sometimes end up hosting databases that require a server component (Quickbooks Database Server Manager, for example). You may end up having to reorganize/move large shares on a file server. What if you have to reorganize partitions for some reason, requiring reboots and extended time offline as partitions resize? Over time, while the primary purpose of a file server may have originally been just to host/share files, it often starts taking on a bit more. I'd rather keep the lines drawn clean. He already has the licensing...and the space...so why not? At this point it is not an added expense.
 