Nov 7, 2019
Hi everyone, I'm new to this forum, and I have a huge problem:

I bought a HighPoint SSD7101A NVMe RAID controller (PCIe) for my hackintosh (a PC running Mojave).
I first built a RAID-0 with 4x Sabrent Rocket NVMe 2TB drives. After creating the array via the WebGUI and formatting it as HFS+ in Disk Utility, it tested and worked fine for a while; then the alarm sounded and the WebGUI Event log reported that one of the disks had failed. I rebuilt the RAID configuration many times, but always with the same result: after an unpredictable interval, the alarm sounds and the WebGUI Event log reports a failed disk. It shouldn't be a disk problem, because the reported failed disk is a different one each time, and because when the WebGUI alarm goes off, simply restarting the PC is enough to keep working (if a drive had actually failed, I should hear the alarm again as soon as the PC restarts).

Then I tried replacing the whole set of disks: I switched to 4x Samsung 970 EVO NVMe 2TB drives. I built the HFS+ RAID-0 the same way (WebGUI + Disk Utility) and everything seemed to work flawlessly... but after a few hours the RAID failed again. (The Event log reported that one of the disks had failed, and it happened at a moment when I wasn't even using the array! Simply restarting the OS made everything look fine again... but for how long?)
The point is: with the EVOs, the time between one crash and the next is much longer than with the Sabrents, but not even the EVOs actually solved the problem!
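One way to check whether a drive has really failed (rather than the controller just flagging it) is to read each drive's NVMe SMART health log after the alarm fires. A minimal sketch, assuming smartmontools is installed (e.g. via Homebrew) and that the JSON output of `smartctl -a -j` is available; the device path `/dev/rdisk2` is only a placeholder:

```python
import json
import subprocess

def nvme_health(smartctl_json: str) -> dict:
    """Summarize the NVMe SMART health log from smartctl's JSON output."""
    data = json.loads(smartctl_json)
    log = data.get("nvme_smart_health_information_log", {})
    # critical_warning != 0 or media_errors > 0 would indicate a genuine drive fault
    return {
        "critical_warning": log.get("critical_warning", 0),
        "media_errors": log.get("media_errors", 0),
        "percentage_used": log.get("percentage_used", 0),
    }

def check_device(dev: str) -> dict:
    # Placeholder device path; on macOS this typically needs sudo.
    out = subprocess.run(["smartctl", "-a", "-j", dev],
                         capture_output=True, text=True).stdout
    return nvme_health(out)

if __name__ == "__main__":
    # Parse a sample JSON fragment instead of touching real hardware.
    sample = ('{"nvme_smart_health_information_log": '
              '{"critical_warning": 0, "media_errors": 0, "percentage_used": 1}}')
    print(nvme_health(sample))
```

If every drive reports `critical_warning: 0` and no media errors right after an alarm, that would support the idea that the controller, not the drives, is dropping them.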

[EDIT:] Is it possible that the problem comes from a loose connection in the controller's drive slots?

Support from HighPoint is slow and ineffective. I'm seriously downhearted.

Please, can anybody kindly help me?
Below is my hardware (if it helps...). Thanks a lot.

macOS Mojave 10.14.6 on single EVO 970 1TB;
Gigabyte GA-X99P-SLI;
Ram: 64GB;
2x Radeon VII;
HighPoint SSD7101A PCIe data-RAID-0 (4x EVO 970 2TB);
DeckLink Mini Monitor 4K.