These aren't Cisco switches; the 10G switches (one of which I'm having problems with) are Quanta LB6M switches. The OS is quasi-Cisco, which is quite annoying to use.
Well, I still don't know what the problem was. I was working from home last week and the beginning of this week.
I think there is something weird with that switch, a weird routing database or something that wasn't cleared and isn't easily found.
My workaround: move the cable to the other switch and...
Alright, I reset the switches and I'm still having the same problem... 10.10.1.81-84 cannot ping 10.10.1.99 (the backup server).
Here is the base config (just admin pass and ip set):
(FASTPATH Routing) #show run
!Current Configuration:
!
!System Description "Quanta LB6M, 1.2.0.18, Linux 2.6.21.7"
!System...
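To narrow down where the pings die, this is roughly what I'm running from one of the affected hosts (substitute your 10G interface name for <10g-iface>; depending on which arping package is installed, the interface flag is -I or -i):

ping -c 3 -I <10g-iface> 10.10.1.99
arping -c 3 -I <10g-iface> 10.10.1.99

If ping fails and arping also gets no replies, it's a layer-2 problem (ARP never completing across the switch) rather than anything IP-level.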
Well, it looks like there must be some sort of port separation, VLANning or something.
The switch was laid out like this:
Ports 1-4: 10.10.1.81-84
Port 23: backup1
backup1 can ping 10.10.1.81
backup1 cannot ping 10.10.1.82-84
Moved backup1 to port 5.
Now it's different:
backup1 cannot ping...
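That pattern smells like stale MAC-table or VLAN state on the switch. Next step is to compare the MAC table and VLAN membership for those ports on the FASTPATH CLI, something along these lines (exact command names can vary by firmware; typing ? at the prompt lists what's available):

(FASTPATH Routing) #show mac-addr-table
(FASTPATH Routing) #show vlan

If the port backup1 sits on learned its MAC into a different VLAN than the ports for .82-.84, that would explain why only .81 answers.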
Well, arp -a is interesting. Going to have to investigate this some more:
root@pve1-backup1:~# arp -a
? (10.10.1.81) at 00:8c:fa:5a:b3:a0 [ether] on enp8s0
? (10.10.1.82) at <incomplete> on enp8s0
? (10.10.1.83) at <incomplete> on enp8s0
? (10.2.0.2) at 00:50:56:8f:2b:ff [ether] on vmbr0
...
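Those <incomplete> entries mean backup1's ARP requests for .82 and .83 are going out but never being answered. A quick way to confirm is to watch ARP on both ends at once (plain tcpdump, nothing exotic):

root@pve1-backup1:~# tcpdump -ni enp8s0 arp

If the who-has requests show up in backup1's capture but never arrive at the .82 host, the switch is dropping them somewhere in between.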
Sorry for the late reply.
Three subnets are what the software asked for...sort of.
10.2.x.x/16 - Management/network traffic, dual GigE
10.10.1.x/24 and 10.10.2.x/24 are for high availability and cluster management, 10G
There is one more not listed because it works fine and doesn't need to go to...
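For reference, the 10G stanza on each node looks roughly like this in /etc/network/interfaces (Debian/Proxmox style; the interface name and address differ per box, so treat this as a sketch rather than a paste of a real config):

auto enp8s0
iface enp8s0 inet static
    address 10.10.1.81/24

No gateway line on purpose; that network is supposed to stay flat layer 2.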
Please check out this setup.
The 10.2.1.x network works no problem.
However, the 10.10.1.x network is causing me issues.
This is not a routed network, just a 10g switch with the 10g cards plugged in and no gateway.
All the networks are /24
10.10.1.81-84 can all ping each other no problem...
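Since this is supposed to be one flat /24, the first things I double-check on each host are the netmask and which interface the address actually landed on (enp8s0 is the name on backup1; it may differ on the other nodes):

root@pve1-backup1:~# ip -4 addr show enp8s0
root@pve1-backup1:~# ip route show 10.10.1.0/24

A mistyped mask or an address bound to the wrong NIC on a single host produces exactly this kind of one-way ping weirdness.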
Hey,
So I solved this with Veeam. Since we have a valid VMware license, Veeam was able to do a Quick Migration from one datastore to another. Even with the I/O errors every 30 minutes, it was still able to migrate.
Thanks everyone for the suggestions!
We do have replacement drives. My fear is that the array will start rebuilding onto the newly inserted drive, the other failing drive will die mid-rebuild, and the whole array will be lost, so I'm trying to get all the VMs off those failing drives and onto shared storage first.
We use Dell branded drives in these servers.
A cluster...
I'm trying this: https://kb.vmware.com/s/article/2141355?lang=en_US
vmotion.maxSwitchoverSeconds = 900
fsr.maxSwitchoverSeconds = 900
See if that helps at all.
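As I understand that KB, these are per-VM advanced settings, so they end up in the VM's .vmx as quoted strings (or get added through Edit Settings > Options > Advanced > General > Configuration Parameters in the vSphere client); a sketch of what I expect the lines to look like:

vmotion.maxSwitchoverSeconds = "900"
fsr.maxSwitchoverSeconds = "900"

The VM typically has to be powered off for edits like this to take effect.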
Hey,
So at my work we have a 4-node ESX 5.5 cluster on R710s. Each node has local drives (this one has 4x 1TB SATA in RAID 10) and dual GigE to a 3220i with 24x 500GB drives.
Yes, I know it's all old, but it's still running well, and the plan is to hopefully get onto a 4-node C6220 later this year.
Anyways, the...
I may have found it. Just by measuring the battery slot, it looks to be a 1632 type. The BR1632 is the one Cisco recommends for longer drain life, but the CR1632 is what I'm going to use since it's readily available.
I'll update again if this does the trick or not.