Hoping for a little help here. Hopefully it's not just the nature of the beast.
I'm running a ServeRAID M5015 (9260-8i) card, updated to the latest firmware (v12.15.0-0248). I currently have a 2x1TB Samsung 870 array in RAID0 (expandable to 6 drives) and will be adding a 2x8TB Barracuda Pro RAID1 array next week. The driver/card is not resuming properly from standby or hibernate. For S1/S3, the system enters standby fine but crashes on resume. I first used the out-of-box Windows driver, which identified the card as an Avago and was released in 2015. With that driver, the system would resume, but the card showed up in Device Manager with an error, and after about 60 seconds the system would bluescreen with "DRIVER_POWER_STATE_FAILURE". I updated to the IBM ServeRAID M5015 driver released in 2018, which operates the card properly, but now the system does not resume from S1/S3 at all: you get a blank screen for a few moments on wake, and then the whole machine resets and begins the POST cycle.
When attempting an S4 hibernate, the system powers down and resumes properly; however, on wake the ServeRAID BIOS is not invoked during POST. This results in a longer-than-normal resume at the Windows logo, and when the system finishes resuming from S4, the card shows up in Device Manager with an error and any arrays/drives connected to it are not detected until the next full reboot. I verified the card is the cause by removing it from the system; everything functions normally in its absence. Fully shutting down or restarting with the card installed causes no issues.
System specs:
Asus Crosshair VIII Hero
Ryzen 5800X
32GB Corsair Vengeance LPX
Asus RTX 3070 Dual OC
Cudy WiFi 6 AX200 Bluetooth/WiFi card
Windows 10 Pro 64 bit
500GB NVMe Samsung 970 for OS drive
Is there a fix for this? Or is it the nature of the beast with this type of card?