Zarathustra[H]
Zarathustra[H];1041824129 said:
Interesting. That makes absolutely no sense to me. Fiber SHOULD be inherently higher latency due to needing a transducer on each side... The act of converting the signal from one form of energy to another is always going to add some delay.
They must have REALLY botched the 10GBaseT implementation in that case.
The higher power use I understand, and expected, but I didn't think it would be enough to be significant compared to the rest of the server (especially considering how many spinning HDDs we are talking about).
Ahh. In that analysis it looks like it would be less of an issue in my application.
I'd use the Xeon-D board for bare metal ZFS storage right next to my switch and my ESXi server. It would have a 2 ft cable in each of the 10GBase-T ports, one to the switch and one to the ESXi server, minimizing both latency and wattage due to the short cable length.
I mean, we'd be talking about 0.7 microseconds vs 2 microseconds, which just doesn't seem like a significant difference, unless you have many links to traverse and they add up, which wouldn't be the case for me.
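To put numbers on that, here's a rough back-of-the-envelope sketch (the 0.7 µs and 2 µs figures are the ones from this thread; real per-hop latency varies by PHY, cable length, and switch):

```python
# Illustrative per-hop latency figures from the discussion above (microseconds).
FIBER_US = 0.7   # approx. per-hop latency for fiber/SFP+
BASET_US = 2.0   # approx. per-hop latency for 10GBase-T

def path_overhead_us(hops: int, per_hop_us: float) -> float:
    """Total added latency in microseconds for a path crossing `hops` links."""
    return hops * per_hop_us

# Two hops (storage box -> switch -> ESXi host), as in the setup described:
extra = path_overhead_us(2, BASET_US) - path_overhead_us(2, FIBER_US)
print(f"10GBase-T adds about {extra:.1f} us over fiber for this path")
```

With only two short hops the penalty is about 2.6 µs total, which is noise next to spinning-disk seek times measured in milliseconds; it only starts to matter when many hops stack up.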
Either way, the scenario above is partially fictional, as I have neither a Xeon-D nor a 10Gig-capable switch yet.