Hi submesa, I'm hoping you can help me with my first FreeBSD install.
I just downloaded 8.1 amd64 (though I see the same problem with 8.0). After booting the install disk, I get stuck at:
Trying to mount root from ufs:/dev/md0
/stand/sysinstall running as init on vty0.
I am trying to install this on a Dell T110 with an Intel Xeon 3400-series CPU and the Intel 3420 chipset. I have tried both AHCI and RAID modes. As it stands, I have 4 SATA hard drives and 1 SATA CD-ROM plugged into the motherboard.
Then I could just use 10GBaseT on my two main workstations; the rest could use gigabit, as they do not require high-bandwidth access. So I would need three NICs (2 workstation + 1 server) and a switch. As an alternative, I could drop the switch and only connect one workstation directly to the server (crossover link; it's auto-sensing anyway, since gigabit made that mandatory).
I have 5 diskless workstations running Linux, a legacy Windows workstation with a local disk, and a FreeBSD server running ZFS: iSCSI for the workstation drives, NFS for central storage, and Samba for legacy access from the Windows machine.
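For reference, a minimal sketch of how that sharing layout might look on the FreeBSD server. Pool, dataset, and share names here are made up for illustration; the actual setup will differ:

```shell
# Hypothetical pool "tank". Central storage dataset, exported over NFS
# via the ZFS sharenfs property (FreeBSD passes options to mountd):
zfs create tank/storage
zfs set sharenfs="-network 192.168.1.0 -mask 255.255.255.0" tank/storage

# One zvol per diskless workstation, to be exported over iSCSI:
zfs create -V 20G tank/iscsi/ws1

# Samba share for the legacy Windows box, in smb.conf:
# [storage]
#     path = /tank/storage
#     read only = no
```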
Currently I'm using a gigabit switch; the workstations are about 1.5 meters away from it, and the server is about 10 meters away. Pretty close.
InfiniBand would be nice, but I'm kind of betting on 10GBaseT. I would be able to reuse my existing CAT6 cabling and integrate well with my secondary network, which is also copper.
I actually think 10GBaseT will get quite popular as soon as good PHY/MAC chips become available and can be integrated even on cheap desktop-class motherboards. That might still take several years, though: the Intel 10GBaseT product is still about 500 euro. Once that drops to 100, and affordable switches become available, I think I'll jump on the bandwagon.
"For the zvol-exported iSCSI filesystems, you can apply snapshots just like on normal ZFS filesystems. I think it's great and very flexible. Though I would like an upgrade to 10GBaseT once it becomes available, giving me 10 gigabit throughput, or about 600MB/s in practice. Then I would really reap the rewards, as both my system drive and central storage would have SSD-like performance, even though most data is stored on the HDDs."
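To illustrate the point about zvol snapshots: a zvol is snapshotted with exactly the same commands as a filesystem dataset. Pool and zvol names below are hypothetical:

```shell
# Hypothetical names: pool "tank", zvol "tank/iscsi/ws1" exported over iSCSI.
zfs snapshot tank/iscsi/ws1@pre-upgrade   # point-in-time snapshot of the zvol
zfs list -t snapshot                      # verify it exists
zfs rollback tank/iscsi/ws1@pre-upgrade   # roll the zvol back if needed
```

Since the snapshot is taken on the server side, the iSCSI initiator on the workstation is unaware of it; for a consistent image you'd want the workstation's filesystem quiesced or unmounted at snapshot time.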
How many machines do you want to connect, what OSes do they run, and how far apart are they? With two old InfiniBand NICs and a cable you can give two machines a point-to-point 10Gbps link for under $300. AND you'll have, like, 4 microsecond latency.