Replace server HDD RAID 5 with SSD RAID 1?

Kryogen
Gawd — Joined Feb 3, 2002 — Messages: 937
I have a dental office IBM server (2012) with 15 or so computers that access all data on the server: patient files, the schedule, x-rays and photos.
Right now it runs on 3 HDDs in a RAID 5. Lately I find that it's slower to get data.
I wonder if I should just replace the HDD RAID 5 with a simple SSD RAID 1 (for quick recovery if one fails, as we cannot afford downtime). I still have daily backups on an external HDD and online.

I only have 300 GB of data or so, so IMO a RAID 1 with two SSDs of approx 1 TB would perfectly fit the bill.
And because I am doing a RAID 1, which is easy to replace, do I really need enterprise-grade SSDs, or should I just buy two Samsung Pro SSDs and that's all?

I am unsure about what kind of HDDs are in there though. I think I have hot-swap drives or something, so maybe I need to make sure which drives would fit? Anyone with server experience know?
 
IBM System x3400 M3 7379 - Xeon E5620 2.4 GHz
Processor
1 x Intel Xeon E5620 / 2.4 GHz (2.66 GHz Turbo) (Quad-Core)
RAM
12 GB (installed) / 128 GB (max) - DDR3 SDRAM - ECC - 1066 MHz - PC3-10600

Storage Controller
RAID 5 controller with battery (M5015)
Server Storage Bays
Hot-swap 3.5", 8-disk capacity
Hard Drive
4 x 300 GB - 15K rpm, 4 bays remaining

Sooooooo, I don't really know what kind of drives I can/should use for faster access... Technically, 3x 15K HDDs in a RAID shouldn't be so slow. Is it going to be faster going the SSD RAID 1 route?
 
Hot swap is nothing special on the drive end; it just requires a controller that can handle it. You should be able to stick any SATA drive in there.

I agree with just RAID 1-ing the SSDs. I don't think you need to spring for enterprise level with only 15 clients, but it won't hurt.

As far as speedup: SSDs won't necessarily be any faster than 15Ks at sequential reads (since you're still on SATA, after all), although pretty much ~anything~ is going to be faster than RAID 5 at writes (it's notoriously slow at that, no matter what drives you use). It's probable that your bottleneck is something else entirely (network, poorly written queries, bad caching strategy, etc.), or that you have a degraded RAID 5 and that's what's really holding you up.
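To put rough numbers on the RAID 5 write penalty, here's a back-of-the-envelope sketch. The per-drive IOPS figures are just ballpark assumptions (roughly 180 random IOPS for a 15K SAS disk, tens of thousands for a SATA SSD), not measurements from this server:

    # Rough random-I/O estimate; per-drive IOPS numbers are assumptions, not measurements.
    def array_iops(drives, per_drive_iops, write_penalty):
        reads = drives * per_drive_iops                 # all members can service reads
        writes = drives * per_drive_iops / write_penalty
        return reads, writes

    # RAID 5 small write = read data + read parity + write data + write parity -> penalty 4
    print(array_iops(drives=4, per_drive_iops=180, write_penalty=4))    # ~(720, 180) for 4x 15K SAS
    # RAID 1 write = one write per mirror -> penalty 2
    print(array_iops(drives=2, per_drive_iops=50000, write_penalty=2))  # ~(100000, 50000) for 2x SATA SSD

The point isn't the exact numbers; it's that random writes on a small RAID 5 are capped well below what even modest SSDs in RAID 1 can do.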
 
Check RAM usage too. 15 users with 12GB might be pushing it.
 
Take the opportunity to get a new server with WSE 2016. Your server is 2009 vintage and I would be concerned about hardware failure. Plus you will have the old server as a spare.
 
Run a performance counter log and digest it with PAL, focusing on physical disk usage, and confirm first that there actually is a disk bottleneck. I was able to show conclusively, for a similar optometrist office with 20 users, that the 7.2K drives in RAID 5 were creating a bottleneck. After moving to RAID 10 SSD, the performance counters were impressive and the slowness complaints stopped. Also see if the battery feeding the RAID controller is dead, because by default controllers will disable write-back caching when a dead or discharged battery is connected, and this hurts performance significantly. You want to be absolutely sure the drives are the problem if the client is paying for parts, because they will be raging mad if it doesn't improve noticeably.
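If you want a quick sanity check before setting up a full perfmon/PAL run, something like this rough Python sketch (using the psutil library; the sample interval is just a placeholder, and the busy estimate is an approximation, not a perfmon counter) shows how hard the physical disks are working:

    import time
    import psutil  # pip install psutil

    INTERVAL = 5  # seconds between samples; adjust as needed

    before = psutil.disk_io_counters(perdisk=True)
    time.sleep(INTERVAL)
    after = psutil.disk_io_counters(perdisk=True)

    for disk, b in before.items():
        a = after[disk]
        busy_ms = (a.read_time - b.read_time) + (a.write_time - b.write_time)
        mb_read = (a.read_bytes - b.read_bytes) / 1e6
        mb_written = (a.write_bytes - b.write_bytes) / 1e6
        # busy_ms over the sample window is a rough utilization estimate
        print(f"{disk}: ~{busy_ms / (INTERVAL * 10):.0f}% busy, "
              f"{mb_read:.1f} MB read, {mb_written:.1f} MB written")

Run it during business hours; if the array sits near 100% busy while users complain, the disks really are the bottleneck.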
 
Also see if the battery feeding the RAID controller is dead, because by default controllers will disable write-back caching when a dead or discharged battery is connected, and this hurts performance significantly.

Given the age of the system, this is a very good suggestion.
 
How do you check RAM usage and drive performance?
What kind of battery is in there?
 
I checked the battery using MegaRAID Storage Manager and it appears to be OK.
I have a UPS connected to the server also.

I checked and, in fact, I have a RAID 5 with 4x Seagate Cheetah 15K rpm 300 GB SAS 6 Gb/s drives.
All 4 drives report fine.

I will test tomorrow with all users online to see how much RAM is used.
What should my available RAM be? Right now, with no one on, it says 8 GB used, 4 GB free.
 
Have all of the users logged in doing regular work and check total usage including the page file. Whatever that number is, a good place to start would be to add at least 50% more to it, though double is better. You would also have to check the maximum amount of memory the server can accept.
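A rough way to capture that number and do the math (a sketch using the psutil library; the 1.5x/2x multipliers are just the rule of thumb above, and counting swap as "in use" is an approximation):

    import psutil  # pip install psutil

    vm = psutil.virtual_memory()
    swap = psutil.swap_memory()

    # "Total usage including page file" while everyone is working (approximation)
    in_use_gb = (vm.total - vm.available + swap.used) / 2**30

    print(f"In use now: {in_use_gb:.1f} GB of {vm.total / 2**30:.0f} GB installed")
    print(f"Rule-of-thumb target: {in_use_gb * 1.5:.0f}-{in_use_gb * 2:.0f} GB")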
 
Have all of the users logged in doing regular work and check total usage including the page file. Whatever that number is, a good place to start would be to add at least 50% more to it, though double is better. You would also have to check the maximum amount of memory the server can accept.

12 GB installed, 128 GB max.
Double to 24 GB to make sure? Probably not that expensive to get another 12 GB kit?

I have no clue what kind of RAM I can use for a server. Do I need to use the same thing?

Right now I have: Kingston KTM-SX313SK3/12G, a 12 GB (3x 4 GB) kit of DDR3-1333 Reg ECC for selected IBM systems.
 
We guaranty this kit will work in your machine. 4 pcs. 4 GB 1333 MHz ECC Registered modules, 2Rx4, genuine CMS brand. CMS is one of the most trusted names in the computer memory industry and this product carries a lifetime warranty from CMS.

Max memory: 48 GB ECC UDIMM / 128 GB Reg ECC RDIMM. Sockets: 16. Compatible with: IBM System x3400 M3 7378, IBM System x3400 M3 7379.

Our DDR3-1333 modules automatically clock down to 1066 MHz and 800 MHz depending on which Intel Xeon model is installed and how many module banks are populated. Due to chipset limitations, DDR3 quad-rank memory is limited to operate at a maximum of 1066 MHz. When two quad-rank modules are installed per bank, the processor will automatically clock them down to 800 MHz. The system only supports (6) quad-rank DIMMs per CPU, or (12) quad-rank DIMMs with two CPUs.

The server supports both 1.5 V and 1.35 V DIMMs. Mixing 1.5 V and 1.35 V DIMMs in the same server is supported for Intel Xeon 5600 series processor-based systems. In such a case, all DIMMs operate at 1.5 V. Intel Xeon 5500 series processor-based systems do not support 1.35 V DIMMs.

Jesus, what does that mean?

System only supports (6) Quad Rank DIMMs per CPU

So what must I do?
 
OK, specs from the manufacturer. Can anyone help? I'm lost. What memory must I use, and where do I place it, to run at optimal speed?

Memory options
IBM DDR3 memory is compatibility tested and tuned for optimal System x performance and throughput. IBM memory specifications are integrated into the light path diagnostics for immediate system performance feedback and optimum system uptime. From a service and support standpoint, IBM memory automatically assumes the IBM system warranty, and IBM provides service and support worldwide.

The IBM System x3400 M3 supports DDR3 memory. The server supports up to eight DIMMs when one processor is installed and up to 16 DIMMs when two processors are installed. However, the maximum number of DIMMs is limited by the number of ranks in the DIMMs:
  • RDIMMs
    • Up to 16 single-rank RDIMMs for a maximum of 64 GB (16x 4 GB)
    • Up to 16 dual-rank RDIMMs for a maximum of 128 GB (16x 8 GB)
  • UDIMMs
    • Up to 16 single-rank UDIMMs for a maximum of 16 GB (16x 1 GB)
    • Up to 16 dual-rank UDIMMs for a maximum of 64 GB (16x 4 GB)

Each CPU has three memory channels, two of which contain three DIMMs per channel and the third contains two DIMMs. RDIMMs can be populated up to three per channel. However, UDIMMs can only be populated two DIMMs per channel. That is, you can have up to 16 RDIMMs installed in the server, but only up to 12 UDIMMs. Mixing UDIMMs and RDIMMs is not supported.

Maximum memory speed is limited by memory speed supported by the specific CPU (that is, if the CPU only supports 1066 MHz, then the memory speed cannot exceed 1066 MHz in any case) and by the number and type of DIMMs installed (whatever is lower), as follows:
  • Intel Xeon 5600 series processors:
    • 1333 MHz when one or two single-rank or dual-rank RDIMMs per channel are installed or one UDIMM per channel is installed
    • 1066 MHz when two UDIMMs per channel are installed
    • 800 MHz when three single-rank or dual-rank RDIMMs per channel are installed
  • Quad-core Intel Xeon 5500 series processors:
    • 1333 MHz when one single-rank or dual-rank RDIMM per channel is installed or one UDIMM per channel is installed
    • 1066 MHz when two single-rank or dual-rank RDIMMs per channel are installed, or two UDIMMs per channel are installed
    • 800 MHz when three single-rank or dual-rank RDIMMs per channel are installed
  • Dual-core Intel Xeon 5500 series processors only support memory speed at 800 MHz.


The server supports both 1.5 V and 1.35 V DIMMs. Mixing 1.5 V and 1.35 V DIMMs in the same server is supported for Intel Xeon 5600 series processor-based systems. In such a case, all DIMMs operate at 1.5 V. Intel Xeon 5500 series processor-based systems do not support 1.35 V DIMMs.

The following memory protection technologies are supported:
  • ECC
  • ChipKill (for x4-based RDIMMs)
  • Memory mirroring
  • Memory sparing

If memory mirroring is used, then DIMMs must be installed in pairs (a minimum of one pair per CPU), and both DIMMs in a pair must be identical in type and size. If memory sparing is used, then DIMMs must be installed in sets of three, and all DIMMs in the same set must be identical in type and size. Memory sparing is only supported for Intel Xeon 5600 series processor-based systems.

The following table lists memory options available for the x3400 M3 server.

Table 5. Memory options

Part number | Feature code | Description | Maximum quantity supported | Standard models where used

RDIMMs:
49Y1405 | 8940 | 2 GB (1x 2 GB, 1Rx8, 1.35 V) PC3L-10600 CL9 ECC DDR3 1333 MHz LP RDIMM | 16 (8 per CPU) | A2x, A4x, B2x, B4x, C2x
49Y1433* | 8934 | 2 GB (1x 2 GB, 2Rx8, 1.5 V) PC3-10600 CL9 ECC DDR3 1333 MHz LP RDIMM | 16 (8 per CPU) | 22x, 24x, 32x, 34x, 42x
49Y1406 | 8941 | 4 GB (1x 4 GB, 1Rx4, 1.35 V) PC3L-10600 CL9 ECC DDR3 1333 MHz LP RDIMM | 16 (8 per CPU) | -
49Y1435* | 8936 | 4 GB (1x 4 GB, 2Rx4, 1.5 V) PC3-10600 CL9 ECC DDR3 1333 MHz LP RDIMM | 16 (8 per CPU) | 52x, 54x, 62x, 72x, 74x
49Y1407 | 8942 | 4 GB (1x 4 GB, 2Rx8, 1.35 V) PC3L-10600 CL9 ECC DDR3 1333 MHz LP RDIMM | 16 (8 per CPU) | 56x, 58x, D2x, F2x
49Y1397 | 8923 | 8 GB (1x 8 GB, 2Rx4, 1.35 V) PC3L-10600 CL9 ECC DDR3 1333 MHz LP RDIMM | 16 (8 per CPU) | -

UDIMMs:
49Y1403 | A0QS | 2 GB (1x 2 GB, 1Rx8, 1.35 V) PC3L-10600 CL9 ECC DDR3 1333 MHz LP UDIMM | 12 (6 per CPU) | -
49Y1404 | 8648 | 4 GB (1x 4 GB, 2Rx8, 1.35 V) PC3L-10600 CL9 ECC DDR3 1333 MHz LP UDIMM | 12 (6 per CPU) | -

* Withdrawn from marketing.
 
OK, I can use 8 slots.
It's ECC Registered.
RDIMMs:
Up to 16 single-rank RDIMMs for a maximum of 64 GB (16x 4 GB)

So right now I have 3x 4 GB; I could add 4x 4 GB.
Each CPU has three memory channels, two of which contain three DIMMs per channel and the third contains two DIMMs. RDIMMs can be populated up to three per channel.
My memory is 1.5 V.
Intel Xeon 5600 series processors:
1333 MHz when one or two single-rank or dual-rank RDIMMs per channel are installed, or one UDIMM per channel is installed

OK, sooo, I have 3 channels, but to keep max speed I can run only 2 DIMMs per channel (3 channels).

So that's 6 sticks max, basically?
So that would be 6x 4 GB = 24 GB max, which would double what I have.

Right?

So I need to find a 1333 MHz, 1.5 V, 3x 4 GB kit and it's going to max out the system (speed-wise)?

Any brand or type recommended?

The stock kit is:
Kingston DDR3 12GB (4GBx3) 1333MHz IBM DIMM RAM Memory KTM-SX313SK3/12G

OEM specs are single rank, registered.

Sooo, I could just add another kit exactly like that and be done with it? But that RAM looks discontinued. What would you buy?
 
For some reason, after a server reset I am using 4 GB of RAM and have 8 GB free...
Soooo I guess it's OK RAM-side.

Might just replace the server when it's too outdated...
 
I can't figure out what RAM to buy.

Just to clarify, are you a dentist or an IT guy?

Here's the RAM you need: http://www.crucial.com/usa/en/compatible-upgrade-for/IBM/system-x3400-m3

But you really need to determine that lack of RAM is the problem before making a purchase. You should also make sure that the installed OS can make use of the extra RAM. WSE 2012 has a limit of 64 GB (as does WSE 2016), so you should be okay. I would also suggest you look at this issue from a business, not technical, side. You've mentioned the critical importance of business continuity. This box is nearly a decade old and long since fully depreciated. It's time for a new server, and you keep the old server as a spare. Note that WSE 2016 does not impose CPU core licensing, unlike full Server 2016.
 
Just to clarify, are you a dentist or an IT guy?

Here's the RAM you need: http://www.crucial.com/usa/en/compatible-upgrade-for/IBM/system-x3400-m3

But you really need to determine that lack of RAM is the problem before making a purchase. You should also make sure that the installed OS can make use of the extra RAM. WSE 2012 has a limit of 64 GB (as does WSE 2016), so you should be okay. I would also suggest you look at this issue from a business, not technical, side. You've mentioned the critical importance of business continuity. This box is nearly a decade old and long since fully depreciated. It's time for a new server, and you keep the old server as a spare. Note that WSE 2016 does not impose CPU core licensing, unlike full Server 2016.

Hi, I am a dentist (business owner); this is why I am kind of clueless with servers. I am OK with normal PCs, but I have never had to work with a server before.

I have Windows Server 2008.
The server was bought new in Sept 2012. It's 4.5 years old.

We have Windows 7 on the workstations.
 
There is single rank, dual rank, and x4- or x8-based. What do I need to buy?
Also, I think my processor allows RAM at 1.5 V and 1.35 V, but it's going to run all the RAM at 1.5 V, right?
 
Likely you need this one. Given that Crucial only specifies 1.35 V RAM, I'm somewhat dubious that yours is 1.5 V. The best thing to do is physically open the server, take out one of the DIMMs, write down the details, and buy that. Take a photo too.

Now, do you have full Windows Server 2008 or Windows Server 2008 SBS? Note that the RAM limit for both is 32 GB. Even though you purchased the server in 2012, it would have been old stock then, so you should be considering replacement.

Let me suggest a strategy for you:

1. Buy and fit extra RAM
2. Buy and fit two SSDs in RAID 1 in addition to the RAID 5. Note that you may also need to purchase hot-swap caddies.
3. Copy your data to the SSDs and switch the share to point to the new data. Don't delete the old data just yet, just in case.
4. Test.

If performance has improved, great. If not, you can use the SSDs in a new server.

Something you should also check is the workstations: are they low on RAM? Is the local storage full?
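For step 3 of the strategy above, something along these lines would do the copy and a basic sanity check before repointing the share. This is only a sketch: the source and destination paths are placeholders, and robocopy or any proper sync tool would do the same job.

    import os
    import shutil

    SRC = r"D:\Data"   # current share location on the RAID 5 (placeholder path)
    DST = r"E:\Data"   # new location on the SSD RAID 1 (placeholder path)

    shutil.copytree(SRC, DST, dirs_exist_ok=True)  # requires Python 3.8+

    def count_and_size(root):
        files, total = 0, 0
        for dirpath, _, names in os.walk(root):
            for name in names:
                files += 1
                total += os.path.getsize(os.path.join(dirpath, name))
        return files, total

    # Crude verification: same number of files and total bytes on both sides
    print("source:", count_and_size(SRC))
    print("copy:  ", count_and_size(DST))

Only repoint the share once the counts match, and keep the old copy until the new array has proven itself.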
 
I have Windows Server 2008 Standard. 32 GB max, I guess.
I would double the RAM from 12 to 24 GB, just add another 3x 4 GB I guess.

The stock RAM is Kingston 3x 4 GB at 1.5 V, 1333 MHz. I physically checked it and got the numbers posted above.
My system only runs it at 1066 MHz anyway.

Their minimum memory specs probably just got better with time; now it's all 1600 MHz and 1.35 V. The stock RAM is discontinued everywhere.

Enterprise-grade SSDs are ridiculously expensive. Should I just go with the Samsung 960 Pros in a RAID 1?
My hot-swap cage is SAS. Can you add SATA drives to a SAS hot-swap bay?

Jesus, that's complicated.
Or just get another server with better components? But it's going to cost a few thousand and it's probably not needed so much right now?
 
You should talk to your accountant about asset depreciation.

I'm most surprised you've got full Server 2008. Do you run special software which demands it? Certainly your upgrade path - absent aforementioned special software - should be Windows Server Essentials 2016 so you can keep things simple, like dispensing with CALs.

You're wise to get the 1.5v RAM; don't mix 1.35v RAM with 1.5v RAM.

I'll let those more knowledgeable than myself advise you on the SSDs, but with regard to the server as a whole, I urge you to think as a businessman and not a technology enthusiast.
 
No special software that I know about. We just have a few dental programs that keep their database on the server, that's all.
(Yes, Windows Server 2008 R2 Standard.)

In fact, we probably don't need a very powerful server; it just needs to handle file sharing (pictures, x-rays) and database access for patient data. We just need fast access to files, basically.

I guess the next server will be simpler: a regular quad core, mirrored SSDs, and that's it basically...
 
So you're running SQL Server as well? Interestingly, SQL Server 2016 does not appear to support installation on WSE 2016, but it does on WSE 2012.
 
SQL? I don't think so. Those programs don't need any special server with SQL; they could even be installed on a normal computer that acts as a server...
 
Kryogen, you can build a new server yourself or buy one for the hardware-support side. If you want to go the build-it-yourself route, you COULD go with one of the new Z270 motherboards and a 7700 Intel CPU. The reason is the new storage topology they are introducing.

Put in a small SATA or NVMe drive to act as your OS drive (128 GB or something to that effect).

Then add another pair of larger NVMe drives in RAID 1 and have them act as part of your Optane group. Then add a set of five 1 or 2 TB SAS drives to your setup (basic spindle drives in a RAID 5 config, or RAID 10 if you want to throw more space away for little speed gain).

The Optane technology brings an enterprise-class storage solution into the consumer space: tiered, auto-managed storage. What this will do for you is let the total storage of your RAID 1 plus your platter drives appear as one logical group. Data is written to all of them as one storage pool as far as your OS is concerned. On the back end, written data goes to the NVMe drives first; as data ages and is not used, it moves to lower-speed storage on the fly. (I presume this is based on an integrated rule set, but you may be able to define it yourself. Hopefully you can segment your storage into smaller logical allocations and define what is stored fast all the time and what is not. But then we are getting into specifics that you need a real IT background for.)

This will give you a bit of the best of both worlds: high-speed storage, plus capacity you don't have to pay through the nose for. So the shared file of Bob's new kid laughing while watching SpongeBob won't clog your fast storage forever, and the stuff you use all the time will stay in the fast storage area.
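To make the tiering idea concrete, here's a rough file-level illustration of the same "hot stays fast, cold moves down" policy. This is only a sketch of the concept: Optane/RST-style tiering actually happens automatically at the block level, the paths and 90-day threshold here are made up, and it assumes last-access times are being tracked (Windows often disables them by default).

    import os
    import shutil
    import time

    FAST_TIER = r"E:\fast"   # e.g. the NVMe RAID 1 (placeholder path)
    SLOW_TIER = r"F:\slow"   # e.g. the platter array (placeholder path)
    MAX_AGE_DAYS = 90        # arbitrary "cold data" threshold

    cutoff = time.time() - MAX_AGE_DAYS * 86400

    for dirpath, _, names in os.walk(FAST_TIER):
        for name in names:
            src = os.path.join(dirpath, name)
            if os.path.getatime(src) < cutoff:       # not read recently -> demote
                rel = os.path.relpath(src, FAST_TIER)
                dst = os.path.join(SLOW_TIER, rel)
                os.makedirs(os.path.dirname(dst), exist_ok=True)
                shutil.move(src, dst)                # real tiering is block-level and transparent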

Also, I would look at maxing out your RAM on that platform (that would be 64 GB for most consumer-grade motherboards today).


It is looking to the future a bit, but it will get you a good, fast storage solution that should see your small business through and give you a nice tech boost for a few years to come.


You can get the same out of an enterprise-class setup, but you will pay through the nose for it.
 
If you build your own, you can get last-gen Xeons with 8-12 cores on eBay for about the same price as a current-gen quad. Not sure how many of your 15 terminals access the server at the same time, but it's just something to keep in mind. And agreed with all of the above: lots of RAM, with SSDs.
 
You should talk to your accountant about asset depreciation.

I'm most surprised you've got full Server 2008. Do you run special software which demands it? Certainly your upgrade path - absent aforementioned special software - should be Windows Server Essentials 2016 so you can keep things simple, like dispensing with CALs.

You're wise to get the 1.5v RAM; don't mix 1.35v RAM with 1.5v RAM.

I'll let those more knowledgeable than myself advise you on the SSDs, but with regard to the server as a whole, I urge you to think as a businessman and not a technology enthusiast.

I was told that the Essentials version (SMB) was slow and full of crap like Exchange, which made it slow and hard to work with.
Um.
 
How mission-critical is the data on this server?

Normally I see best practice for SQL Server being dual RAID 10 nowadays (one array for the database, one for the log).

PROBABLY overkill in a small office.
 
How mission-critical is the data on this server?

Normally I see best practice for SQL Server being dual RAID 10 nowadays (one array for the database, one for the log).

PROBABLY overkill in a small office.


That depends on your size. For instance, in our environment we have a 12 TB storage pool of SSD and SAS drives carved out. The pool itself is actually multiple 5-drive groups in RAID 5.

Within the pool we configured several drives: a drive for tempdb, a drive for indexes, a drive for data, a drive for storage... you get the idea.

Due to how the VNX does storage tiering, we have our storage drive (holding just non-active DB files) set to low, our tempdb and index drives set to high (SSD), and the others set to tier between the two as space/use dictates.

This gives us SSD-class performance for everything. (Wait, what, how is that?) Due to the 1 TB of flash cache in the storage array.

But our use case is a somewhat large, very high transaction count DB that is mission-critical to our business.

Same reason we have multiple instances of this setup running at multiple locations.

Damn, it's expensive though!!!

(Near a million just on storage.)
 
Well, I have 300 GB of data that is mostly x-rays and pictures, on 900 GB total available (RAID 5, 4 disks).
It will probably double in 2 years or so.
The database probably isn't too large.
15 clients on the local network, that's it.
 
What kind of budget do you want to allocate to this? I would gladly offer you input, but really I'd be shooting in the dark without knowing what you are looking for dollar-wise.
 
Well, not too much, as in the minimum possible for something decent.

Oh well, according to the tech, the read performance isn't too bad, and he blames the software, which could make sense.
How can I test the drives' speed?
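For a very rough sequential test, something like this sketch works on any drive letter. It's a sanity check, not a benchmark: the test path is a placeholder, and the OS file cache will flatter the read number unless the file is much larger than RAM.

    import os
    import time

    TEST_FILE = r"D:\speedtest.bin"   # put this on the array you want to test (placeholder)
    SIZE_MB = 4096                    # use a file larger than RAM to limit cache effects
    CHUNK = 1024 * 1024
    buf = os.urandom(CHUNK)           # reuse one random chunk so the CPU isn't the bottleneck

    # Sequential write
    start = time.time()
    with open(TEST_FILE, "wb") as f:
        for _ in range(SIZE_MB):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())
    print(f"write: {SIZE_MB / (time.time() - start):.0f} MB/s")

    # Sequential read
    start = time.time()
    with open(TEST_FILE, "rb") as f:
        while f.read(CHUNK):
            pass
    print(f"read:  {SIZE_MB / (time.time() - start):.0f} MB/s")

    os.remove(TEST_FILE)

Keep in mind that a file server's real pain point is usually random I/O under load, which is what the performance counter approach earlier in the thread measures.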
 
Well, I can do reasonable... for say 5 TB of storage, relatively high speed.

I would go with a new CPU, RAM and a Z270 chipset motherboard. When the drives come out for Intel's new storage pools, I would buy four 4 TB regular drives, two 1 TB SSDs, and a 1 TB high-speed PCIe drive in Intel's new format. I would build a storage pool using that, and ALSO get a simple 250 GB SSD to act as your OS drive, maybe a pair. Yes, you will need an expansion controller.

With Intel's new tech you will have the benefit of tons of capacity (on the order of 18 TB if not more), but also the stuff that needs speed gets speed, while the stuff you put on the drive just to keep gets moved off to the slower platter-based disks. It's a very enterprise-LIKE setup without dropping 100k for an entry-level enterprise storage array, another 5k for a fiber switch, and another 2k for Fibre Channel cards. Not to mention an actual server...

You might spend, what, 5k on something like that, outside of whatever software licenses you will need. But it will meet your user demands nicely and give you room to grow.

Though I would ALSO spend another 1200 on a nice 12 TB NAS to back everything up to, with a little RAID 5 setup of its own. (Hell, that mapped to your server may meet your needs if you don't need speed.)
 
Wow, that's wayyyy too much. It feels like a 4-disk RAID 5 is already as complicated as I am willing to go.
I don't really need file prioritization, as I am constantly using different files of different ages.

Why not just 2x 2 TB PCIe 960 Pros in RAID 1?
How can I add M.2 drives like that to my old server? Do I need an expansion card?

My RAID controller with battery (M5015) won't run those disks I guess, so I would have to stick with 850 Pro SATA drives, and that kind of defeats the high-speed purpose.

Or, when I replace the server, just get a server that runs dual M.2 PCIe, put those 2 disks in a RAID, and that's it?
 
A RAID 1 like that would be fine for your purpose. Again, your idea of a business budget build and mine are different. :)

What I was suggesting would have been a full system rebuild, not just storage and adapters.

I'm sure if you get a good controller that can accept M.2 SSDs and has an NVMe-capable port, you can do some magic. :)
 