ESXi - do I need a hardware RAID card?

The LSI 9260-8i is what the Dell PERC H700 is modeled after.
It's a good card, but it's one of the "older" cards, as I mentioned previously.

I don't remember any issues with SATA drives not running at 6Gb/s speeds ...but it's been a while.

How about the:

LSI MegaRAID SAS 9271-8i


This has 1GB cache memory and x8 lane PCI Express 3.0.

When you say "older card" I assume this isn't a bad thing...or is it when using it with modern SSDs?

Would the cache on these RAID cards allow me to get good performance? By good I mean at least 200-300MB/sec?
 
It's older in terms of generation ... in the Dell world the H700 is at least two generations back.
It can support SSDs ...but I have never done it, and I'm not sure whether enterprise vs. consumer-grade SSDs make a difference.

I had a long, drawn-out post ...but thought twice about posting it.

Bottom line is you may want to do some testing on your own with the SSDs you have on the LSI 2308 and see how they perform for you before buying anything.

Your 8-10 VM load is half of my 20 VM lab ..but my lab is not IO intensive and my storage is a NAS.

Also ... I have seen my PERC H700 do 400-600MB/s reads with 8 7200RPM SATA drives in RAID10 ...the drives sit in enclosures/adapters, and I never tested a direct connection from the H700 to the drives to know if it would be faster. I don't remember the write numbers ...but of course they were slower.

So yes...the controller will do the 200-300MB/sec you asked about ..but it depends on your config (RAID level + number of spindles used).
Forgot to mention...my testing was on Windows 2008 R2 and bare-metal FreeNAS 8 or early 9.
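If it helps to put rough numbers on that "RAID level + number of spindles" point, here's a back-of-the-envelope sketch (the per-drive MB/s figures are assumed typical values for illustration, not anything I measured):

# Rough sequential-read ceiling for a few RAID levels. Purely illustrative;
# real results depend on the controller, cache, stripe size, and workload.
def seq_read_estimate(per_drive_mb_s, drives, level):
    if level == "RAID0":
        return per_drive_mb_s * drives
    if level == "RAID10":
        # conservative: stripes across the mirror pairs
        # (many controllers also read from both halves, so this can be higher)
        return per_drive_mb_s * (drives // 2)
    if level == "RAID1":
        return per_drive_mb_s
    raise ValueError(f"unhandled RAID level: {level}")

# Assuming ~120 MB/s per 7200RPM SATA drive and ~500 MB/s per SATA SSD:
print(seq_read_estimate(120, 8, "RAID10"))   # ~480 MB/s, in the 400-600 MB/s ballpark above
print(seq_read_estimate(500, 2, "RAID1"))    # ~500 MB/s for a mirrored pair of SATA SSDs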
 

Yeah, my next post was going to outline what I plan to do before purchasing anything!

The plan is to format the server and install ESXi on a USB key. Then I was going to test the following with my SSD drives (the Samsung Pro 840 128GB drives) and HDD (1TB and 2TB drives) on the LSI 2308:

1) Single SSD
2) Mirrored SSD
3) Single HDD

I was just going to set up a couple of test VMs and copy files between them to see what sort of read/write speeds I get from the LSI 2308.
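If it's useful, this is roughly the kind of quick sequential test I'd also run inside each test VM so the network isn't part of the measurement (just a sketch - the file name and size are made up, and the read figure can be inflated by the guest's page cache if the file fits in RAM):

# Crude sequential write/read timing against whichever datastore the VM's disk
# lives on. Not a substitute for HDTune/iometer, just a quick sanity check.
import os, time

TEST_FILE = "testfile.bin"        # hypothetical path on the disk under test
SIZE_MB = 4096                    # ideally larger than the VM's RAM
CHUNK = os.urandom(1024 * 1024)   # 1 MiB of random data (avoids compression tricks)

start = time.time()
with open(TEST_FILE, "wb", buffering=0) as f:
    for _ in range(SIZE_MB):
        f.write(CHUNK)
    os.fsync(f.fileno())          # make sure the data actually reaches the disk
print(f"write: {SIZE_MB / (time.time() - start):.0f} MB/s")

start = time.time()
with open(TEST_FILE, "rb", buffering=0) as f:
    while f.read(1024 * 1024):
        pass
print(f"read:  {SIZE_MB / (time.time() - start):.0f} MB/s")

os.remove(TEST_FILE)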

I was also going to flash the firmware back to IR mode to test the mirroring. I'm really hoping ESXi sees the mirrored volume and not the individual drives.

If the speeds are decent then GREAT! I don't need a RAID card. If they aren't great (which is what I am expecting since ESXi doesn't do caching) then I am keen on the LSI MegaRAID SAS 9260-8i RAID card.

When setting up the system for production use, the drives will probably be setup as follows:

1) Mirrored 128GB Samsung Pro

2) Mirrored 512GB Samsung Pro

3) Single 1TB HDD

4) Single 2TB HDD

So will I get the 200-300MB/sec speed with the 9260-8i RAID card with the above config for the SSD drives?
 

Well ... I would think so, but you know what ... let's test!!!! :)
Just so happens I cancelled my trip to the Atlanta VMUG event and have time on my hands.

I don't have any available spare SSDs to test..but I am putting my "old" storage box back on the bench now. It has the PERC H700 + 8 SATA 7200 RPM consumer drives in RAID10. Good enough for basement work, right?

I will download and install the latest ESXi 5.5 update and see ...
 
Awesome! :D

I am going to try and do some serious testing this weekend. If I do I will report back with my findings.

To buy a hardware RAID controller or not, that is the question...
 
Done on a single VM running:
- Windows 7 Ultimate x64 SP1 - default load w/ no updates or system changes
- Installed VMware tools (defaults)
- HDTune 2.55 using block sizes 64K, 512K, and 1MB, 3 runs each
- Created 4GB drive from RAID10
- Created 4GB drive from single SATA drive

System:
=======
Dell PowerEdge T110
- CPU Xeon X3440 2.53GHz
- RAM 8GB ECC UDIMM

Controllers:
===========
Dell PERC H700 w/ 512MB cache and BBU
- Reset to default settings, then created RAID10
- The entire vdisk (RAID10) seen as one datastore in ESXi

Intel Ibex Peak (Motherboard SATA)
- No changes - left in AHCI mode (not soft RAID)
- Each of four drives seen as a single datastore in ESXi

ESXi 5.5 Update 2 data stores:
==============================
RAID10 = 8 x WD5000BPKT (2.5" 500GB 7200RPM 16MB Cache SATA II 3Gb/s)
SATA0 = 1 x ST31000528AS (3.5" 1TB 7200RPM 32MB Cache SATA II 3Gb/s)

==================================
HDTune screenshots (attachments): left = SATA, right = RAID10, three runs each
at 64K, 512K, and 1MB block sizes
(SATA_64K_1-3.png / RAID10_64K_1-3.png, SATA_512K_1-3.png / RAID10_512K_1-3.png,
SATA_1MB_1-3.png / RAID10_1MB_1-3.png)

HDTune Pro ...first SATA then RAID10 (HDTPRO_SATA.png, HDTPRO_RAID10.png).
Way too many options for me - like to keep things simple :)
==================================


So ... it looks typical, right? The 4 disks operating together perform about 4x better than
the single disk. Who knew!?
 
I am really REALLY interested to see what kind of read/write speeds you get with ESXi if you plug your Samsung SSD directly into one of the SATA ports on your motherboard (i.e. not connected to the RAID card)!

Great benchmarks!
 
Does anyone know if the LSI MegaRAID SAS 9271-8i RAID Controller has been replaced by a newer model card?
 
Did you check out the 12Gb/s
MegaRAID SAS 9361-8i?

Yes, but this is going to be too much :) On eBay they start around £350-400.

I'm still quite keen on the 9271-8i (feel free to recommend another card if I am wrong!). It has PCI Express 3.0 and 1GB cache memory. I have found one online for £250 and it includes the LSICVM01 CacheVault & Battery.

If I look at the accessories on the LSI website for the 9271-8i:

MegaRAID SAS 9271-8i

The one accessory that caught my eye and that interests me is:

MegaRAID FastPath Software

I contacted the seller of the 9271-8i card and asked him if I needed any license keys to unlock the full potential of the card, and he said: "All of the features are available without further license" (I emailed him again asking about FastPath).

Does anyone know what kind of performance I can expect with SSD drives connected to this card in RAID 1 (mirroring) when used in an ESXi host? Is MegaRAID FastPath needed to fully unlock the potential of this card?

I'm also interested to hear anyone's experience or thoughts on the CacheVault & Battery that this card comes with!
 
Just to expand on my previous post, I started looking at Adaptec cards and found this amazing card:

Adaptec 81605ZQ

It's more than my budget allowed, but oh well! :) About £380 on eBay.

It has great specs and has VMware certified drivers for ESXi 5.5. It also comes with the backup battery built in...nice!

The one thing that caught my eye was this:

Operating Temperature

0°C to 50°C* (with 200 LFM airflow)

200 LFM...first time I have come across that term, so I looked into it (rough conversion sketch below the fan list). I use the following fans in my case:

SILENT WINGS 2 PWM 120mm 1500rpm - I have two of these in the front of the case and one at the rear. LFM rating is about 570 each.

SILENT WINGS 2 | 140mm - I have one of these at the top of my case. LFM rating is about 687.

(The CPU fans do about 650 LFM)
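For what it's worth, LFM is just airflow divided by the cross-sectional area the air moves through, so a rough conversion looks like this (the CFM figure and the area around the card are guesses for illustration, not measured values):

# LFM (linear feet per minute) = CFM / cross-sectional area in square feet.
def lfm(cfm, area_sq_in):
    return cfm / (area_sq_in / 144.0)   # 144 square inches per square foot

# e.g. a ~60 CFM 120mm fan pushing air through a ~4" x 10" region around the PCIe slots
print(f"{lfm(60, 4 * 10):.0f} LFM")     # ~216 LFM - above the 200 LFM the card asks for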

The reason I mention this is that I have read about some people's experiences with RAID cards overheating (the RAID chip gets hot!).

Do you think the airflow in my case will be sufficient for this Adaptec RAID card?

Anyone have any experience with this RAID card in an ESXi 5.5 host?
 
How about the:

LSI MegaRAID SAS 9271-8i

<snipped>

Those are very good cards. We use the 4i version on our ESXi hosts. They are based off of the LSI 2208 ROC.
 

Both LSI and Adaptec cards seem really good. I am leaning towards the Adaptec as it seems to run cooler? The Series 8 card looks awesome.
 
Today I finally managed to install ESXi 5.5 Update 2 on my server. I basically had the following in the server to test the disk performance:

1) The LSI 2308 is in IT mode with firmware version 16

2) I set up two datastores in ESXi...one on each of the Samsung Pro 840 SSDs

3) I installed one Windows Server 2012 R2 virtual machine on disk 1 and another on disk 2. I then did some big file copies between the two machines and copied the files between different folders in the same VM.

The performance surprised me! It was better than I thought. I was seeing speeds of 100MB/s and more. In some cases I was getting 200-300MB/s.

Sorry I don't have more info yet but I really rushed this test before having to pop out. I am hoping to do some more tests tomorrow.

I still want to test flashing the firmware to IR mode again and upgrading to firmware version 19.
 
I am guessing the 100MB/s is from server to server (i.e. Gig-E speed)
and anything above that would be local SSD.
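(Quick math on why ~100MB/s looks like a gigabit path rather than the SSDs - the overhead factor is just a rough allowance:)

# Gigabit Ethernet ceiling vs. the ~100 MB/s observed in the file copies.
line_rate_bits = 1_000_000_000                 # 1 Gb/s
theoretical_mb_s = line_rate_bits / 8 / 1e6    # 125 MB/s before any overhead
practical_mb_s = theoretical_mb_s * 0.93       # rough allowance for TCP/IP + Ethernet framing
print(theoretical_mb_s, round(practical_mb_s)) # 125.0 116 -> real copies usually land ~100-115 MB/s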

Seems low...but try HDTune and see what you get.

I'm still doing trial/error/learning with FreeNAS and heat issues on mine.
I flashed mine to IT mode with the version 19 firmware.

Were you sticking to 16 for a reason???
 

Ok, ran HD Tune on each server and here are the results.

Server 1:

Minimum: 227MB/s
Maximum: 2399MB/s
Average: 466MB/s
Burst rate: 257MB/s

Server 2:
Minimum: 237MB/s
Maximum: 3181MB/s
Average: 743MB/s
Burst rate: 238MB/s

How does this look? The maximum speeds were a bit surprising.

I'm only using firmware version 16 since that is what was available when I flashed the LSI 2308 to IT mode in January. It does need updating. I can't remember - do you have to unplug your SATA drives before upgrading the firmware?

I have been thinking about the heat issue too. Will a RAID card run OK in my case? I have lots of high-quality fans in the case (3 x 120mm and 1 x 140mm), and the CPU has dual 120mm fans. I was thinking of maybe getting a "PCI card" fan to put next to the RAID card?
 

Those numbers look better for local storage via HDTune.

I flashed mine to version 19 of IT firmware from Supermicro's site.
I do not see a reason or know of a requirement to detach drives before updating firmware.
I do perform a complete power down (not just a reset) after a successful flash.

As far as heat goes, it will depend on the equipment in use: case, fans, devices generating heat, etc.
In my situation, my storage box is an NZXT Source 210 case with 5 fans.
I only run 8 3.5" 7200RPM drives and nothing else. I was seeing my CPU temp stay around 55C while ambient temps were 80F.

I found out I had made a rookie mistake: my top fan (the exhaust sitting just above the CPU) was backwards and pulling air into the case.
Once I flipped that around I now see CPU temps around 40C with ambient temps currently around 75F.
A 15C difference - pretty amazed myself.

Heat is a concern for any performance add-in cards (RAID or 10Gb NICs) so you do want to make sure there is airflow around them.

I am not using any add-in cards in mine at this time.
My old storage box (PowerEdge T110 with PERC H700) had a shroud that helped direct fan flow over add-in cards, memory and CPU ...
so I never saw any issues.
 
<snipped>
I flashed mine to version 19 of IT firmware from Supermicro's site.
I do not see a reason or know of a requirement to detach drives before updating firmware.
I do perform a complete power down (not just a reset) after a successful flash.
<snipped>


Just a note/update I came across as I play more with FreeNAS:

I'm reading more over on the FreeNAS forums and found that the driver version should match the firmware version used.

It looks like FreeNAS 9.2 uses the version 16 driver ...so I need to reflash from firmware 19 down to 16.

^^ That would be the reason not to use the latest firmware from Supermicro if you use FreeBSD/FreeNAS 9.2.
 
I'm still doing trial/error/learning with FreeNAS and heat issues on mine.
I flashed mine to IT mode with the version 19 firmware.

Were you sticking to 16 for a reason???

With LSI it is very important that your drivers and firmware are on the same "phase", as they call it (we would just call it a version).

LSI won't even support you unless they are on the same phase.

Phase mismatch can result in everything from performance and stability issues, to outright data loss.

The version of FreeBSD that the current release of FreeNAS is based on has Phase 16 drivers, so it is best to stick with Phase 16 firmware.

Next FreeNAS release will presumably be 9.3.0, and will be on Phase 17 firmware, requiring a re-flash.

For future revisions they have written a script to warn you about these firmware-to-driver mismatches. For right now, just read the release notes - they will say which phase the included drivers are at.
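(Just to illustrate the kind of check that script does - the numbers here are hand-entered examples, a real script would pull them from the driver and the HBA:)

# Toy firmware/driver "phase" comparison. Values are hand-entered examples;
# a real check would read them from the mps driver and the flashed HBA firmware.
DRIVER_PHASE = 16    # phase of the driver shipped with the FreeNAS/FreeBSD release
FIRMWARE_PHASE = 19  # phase reported by the HBA after flashing the latest IT firmware

if DRIVER_PHASE != FIRMWARE_PHASE:
    print(f"WARNING: firmware P{FIRMWARE_PHASE} does not match driver P{DRIVER_PHASE} - "
          f"reflash the HBA to P{DRIVER_PHASE} before trusting it with data")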
 

Yeah ... thanks ... figured it out about 30 minutes ago ^^^^
So far I'm not impressed with the performance of my newest setup vs. the QNAP box or the hardware RAID box.
 
Those are very good cards. We use the 4i version on our ESXi hosts. They are based off of the LSI 2208 ROC.

Would you be able to expand on that comment please? :p

I'm looking at the LSI MegaRAID 9271-8i card, but I am concerned about the heat issue. I don't have my server at home in an air-conditioned room, but I do have plenty of fans in the case to keep the air moving around.
 
I have just come across this RAID card:

IBM ServeRAID M5015

It is supported by ESXi 5.5. It is an older card but has PCI Express 2.0 and 512MB cache memory. Needs a key to unlock RAID 6 and 60 but I won't be using these.

Does anyone have any experience with these cards with ESXi? They are very reasonably priced on eBay (between £90 and £140).
 
Yes, I have a couple. I flashed LSI firmware on the one in my home server; for a client machine, I left the IBM firmware. They are just rebadged LSI 9260-8i cards. They work great. The IBM firmware does take a while to POST, but if you are never rebooting, then it's no big deal.
 

Thanks for the reply!

I will be using this card in a 24/7 ESXi server and will be rebooting the host VERY rarely (maybe twice a year).

What are the pros/cons for using the IBM firmware compared to the LSI firmware? I don't mind the long boot time since it will be used in a server.

What sort of performance can I expect with Samsung Pro 840 and 850 SSDs? Will the performance be ok without FastPath?
 
I don't use SSDs on the RAID controller in my lab machine; those are hooked up directly to the motherboard SATA ports, so I can't help you there. I just have slow SATA drives on the RAID.

Also not sure of the pros/cons of the firmware. I've used both long term with no issues either way. I've seen some people say you shouldn't use the M5015 with the IBM firmware in non-IBM systems, but at the client site it's running in an HP server with retail WD Red and WD RE4 drives in different arrays, and it's been rock solid for 18 months or so. Since my home server is a lab machine, it gets rebooted a bit more often (5-6 times a year), so I went with the LSI firmware. Also rock solid for probably 2.5 years now.
 
I had a read through:

LSI Interoperability List

And my Samsung 840 Pro 128GB SSD drives are listed there :)

I was planning on getting two Samsung 850 Pro 512GB SSDs, but I am concerned that they will not work with this RAID card since they are not on the list. Would it be better to get the Samsung 840 Pro 512GB drives instead, since they are on the interoperability list? I know they are "older", but they are still excellent SSDs. The specs of the two drives are almost identical.

I just don't want to have compatibility issues!
 
Also, are these the correct cables to use from the IBM RAID card (mini-SAS to 4 SATA drives):

Molex Mini SAS to 4 SAS/SATA HDD Driver RAID Cable 79576-3003

There are so many different types of these cables, so I just want to make sure!

I ended up ordering the IBM ServeRAID M5015 SAS/SATA Controller today. I was having a look through my motherboard manual (Supermicro X10SL7-F) and it has two slots on it:

PCI Express 2.0 x4 in x8 slot (= 2000MB/sec)
PCI Express 3.0 x8 in x16 slot (=8000MB/sec)

I know the IBM ServeRAID M5015 has an x8 PCI Express 2.0 host interface (= 4000MB/sec), so which slot should I use on the motherboard? Is the first one (PCI Express 2.0 x4 in x8) too slow for the IBM card?

Will I get a boost in speed if I use the PCI Express 3.0 x8 in x16 slot?
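Trying to work the numbers myself (approximate usable per-lane rates, so treat these as ballpark figures):

# Approximate usable PCIe bandwidth per direction.
PER_LANE_MB_S = {"2.0": 500, "3.0": 985}   # 2.0: 5 GT/s with 8b/10b; 3.0: 8 GT/s with 128b/130b

def slot_bandwidth(gen, lanes):
    return PER_LANE_MB_S[gen] * lanes

print(slot_bandwidth("2.0", 4))   # ~2000 MB/s - the x4-in-x8 slot
print(slot_bandwidth("2.0", 8))   # ~4000 MB/s - what the M5015's PCIe 2.0 x8 interface can use
print(slot_bandwidth("3.0", 8))   # ~7880 MB/s - the x8-in-x16 slot (the card will still link at 2.0 x8)

Either way a couple of SATA SSDs (~550 MB/s each at best) shouldn't get near those numbers, but I'd still like to know which slot makes more sense.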
 
For drive compatibility, I would guess that the 850s would work if the 840s do. But, getting drives you know are compatible will potentially save you time and energy. If you won't see any practical difference in performance between the drives, might as well get the cheaper drives that are compatible (assuming the 840s are cheaper).

You'll need SFF-8087 FORWARD breakout cables for your card, since you're connecting the single plug on your RAID card to four individual drives. I can't tell if the cable on ebay is that exact cable type. There are REVERSE cables, for when you want to connect four SATA ports to a SFF-8087 connector. These are less commonly used internally.

Monoprice SFF-8087 Forward Breakout Cable

I believe that putting your RAID card in the x8 PCI-e 3.0 slot will get you the card's full interface speed, as the PCI-e 3.0 slot will negotiate down to PCI-e 2.0. That will eliminate that interface as a bottleneck to the largest extent possible. Unless you have another device that can make better use of the bandwidth, such as a GPU, you might as well use the best interface you have.
 


There's only about a £10 price difference between the 840 Pro and the 850 Pro 512GB drives. If I could, I would go with the 850 since it is more current, but I don't want any compatibility issues with the RAID card!

Thanks for the info on the cables. The eBay seller won't ship to the UK, so is this the correct cable I need:

3Ware 60cm CBL-SFF8087OCF-06M Cable

I will plug the RAID card into the PCI-e 3.0 slot. Better performance, and it gives me a bit of room to put the fan card next to it to help keep the RAID chip cool(er).
 
Very likely the 850 will work. You could always buy a single drive, test it, then buy the others when you know it works. If it doesn't, then you have an 850 to use in another machine, or you could sell it for a bit of a loss.
 

Good idea!

Is the cable the correct one I need for that card?
 
That looks like the correct cable. You have a SFF-8087 (mini-SAS) going to your individual SATA drives. As long as the length is good, you should be up and running easily.
 

Thanks for the help and to everyone for their input. I have the RAID card and cooler on order. Will order the cables tomorrow and then test the card with my current drives. If all is ok I will order the other drives.

I'm still concerned about the RAID card overheating but time will tell.

Will report back when I have it up and running! :D
 
Does anyone know if the IBM ServeRAID M5015 card supports 6Gb/sec for SATA drives?

I have read the specs on the IBM site:

Specifications

The ServeRAID M5015 and ServeRAID M5014 adapter cards have the following specifications:

  • Eight internal 6 Gbps SAS/SATA ports
  • 6 Gbps throughput per port

But then further up the page it says:

6 Gbps SAS 2.0 technology has been introduced to address data off-load bottlenecks in the direct-access storage environment. This new throughput doubles the transfer rate of the previous generation. SAS 2.0 is designed for backward compatibility with 3 Gbps SAS as well as with 3 Gbps SATA hard drives.

So will a 6 Gbps SSD connected to this card run at 6 Gbps or 3 Gbps?
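The reason I ask - with 8b/10b encoding the link rates work out roughly like this, and at 3 Gbps the SSDs would be capped well below what they can do:

# Usable SATA throughput per direction after 8b/10b encoding (10 bits on the wire per data byte).
for gbps in (3, 6):
    print(f"SATA {gbps} Gb/s -> ~{gbps * 1000 // 10} MB/s usable")
# 3 Gb/s -> ~300 MB/s, 6 Gb/s -> ~600 MB/s; an 840/850 Pro reads around 530-550 MB/s,
# so it really needs the 6 Gb/s link.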
 
Your drives should run at 6Gbps. The card will just support the 3Gbps interface as well if you have drives that use it. Hard drives I think still come largely with 3Gbps interfaces, since they really can't take advantage of a 6Gbps interconnect.
 

Thanks, I just thought it was a bit confusing. All the SSD drives I will be using will be 6 Gbps so I just wanted to make sure.

The RAID card arrived today ;) But the mini-SAS to SATA cables will only arrive next week so I can't do much with it yet :mad:

I plugged the RAID card into my server and had it on for about 10 minutes - damn, that heatsink on the card gets HOT! I'm glad I have the fan card on order now!
 
Yeah, they do get hot, although I think the design tolerance is pretty high. The client's HP server had enough cooling, and in my T110-II I just zip-tied an 80mm fan pointing towards the RAID card and used a couple of vented slot covers to let the air move past. It doesn't take much to keep it cool; it just can't be in a stagnant airflow area.
 

I have one of these on order:

Gelid Solutions PCI Slot Fan Holder with two slim 120mm UV Blue Fans

So hopefully that keeps the card cool as it'll be on 24x7!
 
I think that would be plenty. Mine is on 24/7, and the smaller 80mm fan is probably 4-5 inches away from the heatsink and it's fine. And if I remember correctly, I think I wired that fan down to 7v to make it quieter, so it's pretty slow also.
 