Search results

  1. Fan noise reduction on a Supermicro SC826

    1. It's a 2U JBOD, so 80x80 is the only option. 2. There really isn't anywhere convenient to screw things to. The PWM controller seems to work fine, so I think I'm gonna declare victory :)
  2. Fan noise reduction on a Supermicro SC826

    So, my new JBOD has 3 80x80x38 fans mid-chassis. They run at either 7000 RPM (full) or 3600 RPM (optimum). The former is jet-plane annoying. The latter is just loud enough to be irritating (my lab is in a corner of the family room downstairs.) I got a couple of Noctua fans, but they don't fit in...
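A PWM fan controller like the one that eventually tamed these fans is essentially a temperature-to-duty-cycle mapping. A minimal sketch of a linear fan curve (the temperature thresholds and PWM range here are made-up illustrations, not values from the thread):

```python
# Hypothetical linear fan curve: maps a temperature reading to a PWM
# duty value (0-255), interpolating between a quiet floor and full speed.
# The thresholds and duty range below are illustrative assumptions.

def fan_pwm(temp_c, t_low=30.0, t_high=55.0, pwm_min=80, pwm_max=255):
    """Return a PWM duty cycle (0-255) for the given temperature."""
    if temp_c <= t_low:
        return pwm_min          # below the curve: hold the quiet floor
    if temp_c >= t_high:
        return pwm_max          # above the curve: full speed
    frac = (temp_c - t_low) / (t_high - t_low)
    return int(pwm_min + frac * (pwm_max - pwm_min))

print(fan_pwm(25), fan_pwm(42.5), fan_pwm(60))  # 80 167 255
```

On Linux, the resulting value would typically be written to a `pwmN` file under `/sys/class/hwmon/`, which accepts 0-255.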
  3. Windows XP to a scientific instrument in a VM?

    I knew a guy at a relatively small company years ago. He had an old DOS program that communicated with some custom hardware over a serial line. The original (386) PC was having issues. IIRC, the memory was getting flaky, and finding replacement memory even on ebay was problematic (and/or...
  4. Error when copying files over 2gb

    Can you plug it into a sata port on a desktop and try your repro? That would at least tell if the drive is bad?
  5. need help badly with raid5 recovery

    I remember reading about a medium/small company that religiously backed up their business stuff to tape (this was some years ago). No IT person (of course), so some random bro was doing it. Whoever set up the tape HW/SW hadn't tested properly, and the tape backups were garbage. Then, they had...
  6. Linux SCSI messages - cause for concern?

    What I was afraid of. Time to see if the buyer will take a return :) Thanks!
  7. Linux SCSI messages - cause for concern?

    I got a ZeusRAM off ebay to use as SLOG in my dual-head ZFS server. It seems to be working fine, except every 5 minutes, I see messages like these: Jan 17 11:53:00 centos-vsa2 kernel: sd 33:0:1:0: [sdf] tag#0 Sense Key : Recovered Error [current] Jan 17 11:53:00 centos-vsa2 kernel: sd...
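The quoted syslog lines follow the kernel's `Sense Key : <name> [current]` pattern, so they are easy to tally from a log file. A sketch of pulling the device and sense key out of such lines (the regex is an assumption about the format, based only on the sample above):

```python
import re

# Parse kernel SCSI sense-key messages of the form quoted above:
#   ... [sdf] tag#0 Sense Key : Recovered Error [current]
# The regex below is inferred from that one sample line.
SENSE_RE = re.compile(
    r"\[(?P<dev>sd[a-z]+)\].*Sense Key\s*:\s*(?P<key>[A-Za-z ]+?)\s*\["
)

def sense_keys(lines):
    """Yield (device, sense_key) for each matching syslog line."""
    for line in lines:
        m = SENSE_RE.search(line)
        if m:
            yield m.group("dev"), m.group("key")

log = [
    "Jan 17 11:53:00 centos-vsa2 kernel: sd 33:0:1:0: [sdf] tag#0 "
    "Sense Key : Recovered Error [current]"
]
print(list(sense_keys(log)))  # [('sdf', 'Recovered Error')]
```

A "Recovered Error" sense key means the drive corrected the problem itself, which is why the device keeps working despite the noise in the log.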
  8. Proxmox VE vs VMWARE ESXi for Windows 10 VM

    Runs fine with 8GB. The OP hasn't described his requirements, really...
  9. SCSI-3 persistent reservations and SAS NL disks?

    There isn't a vendor, per se. This is for a home lab. It's a ZFS cluster serving NFS to vSphere. I was following ewwhite's blog post on this. Seems to work fine, except for the spinners not cooperating...
  10. SCSI-3 persistent reservations and SAS NL disks?

    Thanks for the info! Very strange, as the ZeusRAM is on the same controller (literally in the same jbod), and it *seems* to work. At the moment, I am using fence_vmware_soap, since these are virtualized storage appliances. I wanted to use fence_vmware_rest, but the CentOS 7 implementation...
  11. Proxmox VE vs VMWARE ESXi for Windows 10 VM

    ZFS on Linux is perfectly usable at this point for an all-in-one.
  12. SCSI-3 persistent reservations and SAS NL disks?

    So I'm trying to stand up a pacemaker cluster. Back-end is a SM 826 JBOD. Storage is existing 7200 RPM 1TB Constellation spinners. I wanted to use SCSI fencing, but when I enter the command, giving the 8 drives, I get this: [ 2020-01-12 15:34:56,313 ERROR: Cannot get registration keys ]...
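A common way to cross-check a "Cannot get registration keys" failure is `sg_persist` from sg3_utils, e.g. `sg_persist --in --read-keys -d /dev/sdX`, to see whether the drive reports any SCSI-3 PR registrations at all. Below is an illustrative parser for that command's output; the sample text and key values are hypothetical, not captured from the cluster in the thread:

```python
import re

def registered_keys(output):
    """Return the PR registration keys (as hex strings) found in
    sg_persist --in --read-keys output."""
    keys = []
    in_keys = False
    for line in output.splitlines():
        # Header line announcing the key list, e.g.
        # "PR generation=0x4, 2 registered reservation keys follow:"
        if "registered reservation key" in line:
            in_keys = True
            continue
        if in_keys:
            m = re.match(r"\s*0x([0-9a-fA-F]+)\s*$", line)
            if m:
                keys.append("0x" + m.group(1))
            else:
                in_keys = False  # key list ended
    return keys

# Hypothetical sample output; real keys depend on the fence agent config.
sample = """  PR generation=0x4, 2 registered reservation keys follow:
    0x3abe0001
    0x3abe0002"""
print(registered_keys(sample))  # ['0x3abe0001', '0x3abe0002']
```

If `sg_persist` itself errors out on the spinners, the drives (or the expander path to them) likely don't support persistent reservations, and SCSI fencing can't work on them.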
  13. Virtualization and ZFS

    I think I'm stuck. Got the physical CC node set up, as well as the two virtual storage appliances. It looks as if the CC unit needs to access the storage subnet? This is not possible for me, as it is a 50Gb Mellanox link directly between the two vSphere hosts. I had thought the CC would send...
  14. Virtualization and ZFS

    Ah, okay, that makes sense. Thanks! I was wondering how the control VM would avoid crashing/hanging if the host it was running on got borked along with the storage appliance on that same host. Fortunately, I have a low-power sandy bridge host in a micro-atx case I can deploy as the control...
  15. Virtualization and ZFS

    Waiting for my SC826 JBOD to arrive. I'm looking at your docs, and am not sure where the cluster controller VM is supposed to go? Storage wise, I mean. Is it required for the cluster to function under normal circumstances?
  16. Virtualization and ZFS

    Got it, thanks! Looking now...
  17. Virtualization and ZFS

    Always an adventure :) I did the key swap, but your ISP's SMTP server rejected the self-signed certificate on my SMTP proxy. I've added that host to the exempt list. Hoping it retries soon...
  18. Virtualization and ZFS

    Trying to test your cluster in a box setup, but apparently the PRO license I already have isn't adequate? Need a PRO complete or something? Is there a way to evaluate this?
  19. Virtualization and ZFS

    I downloaded the zip file from your site with the 2 OVA files. I am running ESXi 6.7u3. Neither one deploys successfully (both complain about checksums not matching the manifest?)
  20. Virtualization and ZFS

    thanks!
  21. Virtualization and ZFS

    That makes sense, thanks. My JBOD at the moment is a 4x2 ZFS RAID10 of NL-SAS spinners, so performance isn't critical...
  22. Virtualization and ZFS

    I think I was not clear, sorry. That JBOD has two versions - single expander and dual expander. What is confusing me is that each expander has *two* inputs, not just one. So the SM manual shows (for one particular configuration) two SAS cables going from a single HBA to the two inputs on the...
  23. Virtualization and ZFS

    Not thinking I need SAS3 either at this point...
  24. Virtualization and ZFS

    Looking at the Supermicro 12-drive JBOD you listed. Looks very interesting. I'm puzzled as to why their docs show connecting *two* 8643 or 8644 cables to the same HBA. Is this to get 8 lanes instead of 4? It can't be the number of devices supported, since the JBOD has an expander that the...
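The two cables are most plausibly about lanes, not device count: SFF-8643/8644 cables each carry 4 SAS lanes, so two of them into the same expander form an 8-lane wide port. A quick back-of-envelope check, assuming SAS3's 12 Gb/s per lane:

```python
# Aggregate line rate of a SAS wide port built from multi-lane cables.
# Assumes SAS3 (12 Gb/s per lane); halve the figure for SAS2 (6 Gb/s).

GBPS_PER_LANE = 12     # SAS3 line rate per lane
LANES_PER_CABLE = 4    # SFF-8643/8644 carry 4 lanes each

def wide_port_gbps(cables):
    """Aggregate line rate (Gb/s) for a wide port of `cables` cables."""
    return cables * LANES_PER_CABLE * GBPS_PER_LANE

print(wide_port_gbps(1), wide_port_gbps(2))  # 48 96
```

So the second cable doubles the uplink bandwidth into the expander; the expander itself already handles fan-out to all 12 drive slots either way.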
  25. Virtualization and ZFS

    Okay, this makes sense, thanks. So is it reasonable to have 2 ESXi hosts (and head VMs) but only one dual expander JBOD?
  26. Virtualization and ZFS

    Thanks, I downloaded this and am reading it. I'm a little confused. You can have two storage VMs running on the same host, which seems like the only usable way to share SATA disks? My current setup is 2 ESXi hosts using hyperconverged Starwind (Windows 2016 appliances). They have recently come out...
  27. Virtualization and ZFS

    Not sure if this is the right thread, but here goes. I have a 2 host ESXi 6.7 cluster. I'd like to do some flavor of your 'cluster in a box', but I don't care about SAS/SATA shared storage. Each host has 2 1TB nvme cards - I'd like to do something using that (shared nothing). Does this work...
  28. VirtualBox - Enabling encryption allocates all dynamic storage

    Yeah. I remember being annoyed trying to back up my wife's work laptop, which had mandatory block-level encryption. She was only using 20% or so of the hard drive, but nope, 128GB of backup file. And no way to do incrementals either :(
  29. Is it possible for force link up on an esxi nic?

    Did you see my comment about iSER?
  30. Is it possible for force link up on an esxi nic?

    If the above is what you mean, I think I'm SOL. The reason I am passing through an SR-IOV instance is to take advantage of iSER. And it works really well. If I'm going to forego that, I may as well set up a vSwitch with vmxnet3 vnics on each storage appliance...
  31. Is it possible for force link up on an esxi nic?

    Not sure what you mean? Team the 50gb and 1gb connections? And use link loss detection to switch to the 1gb link?
  32. Is it possible for force link up on an esxi nic?

    Nope :( For now, I've hacked around it by setting up a 1gb link through my switch, and setting the iSCSI pathing to prefer the high-speed link (
  33. Is it possible for force link up on an esxi nic?

    If I understand you, yes. Nothing special in the config, just 2 high speed links back to back. When host B goes down (power-wise), host A loses link, and esxi apparently will not let any traffic pass through the physical nic, so esxi sending a packet into the physical nic, and back out a...
  34. Is it possible for force link up on an esxi nic?

    My lab has two 6.7 hosts connected back to back by a high-speed mellanox interface. Each host has a VSA running, using iSCSI multipathing, so one host going down won't hose the other host. For max performance, each host has a virtual function of the mellanox NIC passed in via SR-IOV. Sadly, I...
  35. Starwind VSA HA

    This was the only plug&play HA solution I could find (especially for two nodes). Everything else either works best with many more nodes than 2, or requires some kind of home-brew setup...
  36. Starwind VSA HA

    So for a couple of weeks now, my home lab has been running a completely HA iSCSI datastore on two ESXi 6.7 hosts. Each one has a Starwind VSA on local SSD. The actual files where the datastore resides are a pair of 1TB NVMe drives (one on each host.) Each ESXi host sees two paths, one to each...
  37. "TRIM" Over CIFS/NFS/iSCSI?

    Necro champion of hardforum!
  38. Proxmox is not free?!

    World class necro, dude!