Network pics thread

The 4.x version ALWAYS craps out on any system I try it on. The only solution I found was to not use it.

I don't see anything about SMP in the BIOS.

Tried the newer version, similar deal:



Just locks up.
 
The 4.x version ALWAYS craps out on any system I try it on. The only solution I found was to not use it.

I don't see anything about SMP in the BIOS.

4 is old too. Latest is 5.something. If it fails with SMP then test it without again to make sure. Sometimes SMP has issues with certain motherboards.
 
Where would I find the SMP option? Can't seem to find it in BIOS, unless it's called something else.

Memtest 5.01 seems to not crash... so far so good. So what makes older versions start to crap out? I figured something like memtest was simple enough that it either works or it doesn't. I have seen the older version work fine before.



Also found this thread: http://hardforum.com/showthread.php?goto=newpost&t=1700793 I hope that's not the issue I'm having... I really don't want to go through the trouble of updating the BIOS or doing any oddball stuff like that. I don't have a floppy drive on that machine.
 
You hit F2 when first starting MemTest, if I remember correctly. It forces the test to use all available CPU cores, which speeds up the test and makes it more "accurate". Even if it fails with SMP, the RAM may still be good; that's why you double-check without it. (Your latest screenshot shows it running without.)
 
Oh, it's a memtest option; I thought it was a BIOS option. Since running without SMP is the default, should I also try it with SMP to make sure?

The test with the latest memtest86 passed, so the issues I had may have been a false alarm. It still kinda bothers me that it crapped out like that, though; the older version worked fine before on other systems, so I'm not sure why it would suddenly stop working. But it does make me feel better, so I guess at this point I just have to install an OS and hope that this pass is really a pass and that I don't have any other issues, as it looks like this motherboard has a lot of them: http://www.lucidpixels.com/blog/supermicrox9scm-fissues

 
I'd let it run for a good ten passes with and without SMP if you want to be certain. Usually errors show up within the first one or two passes, but I've seen some take 7 or 8 to produce an error.
 
One thing I can't get over is just how quiet this case is. I keep forgetting it's behind me. I will probably rack it in a few days after I let it go through more passes. It's not on a UPS right now, so it's probably kinda risky to leave it like that too long. We get a lot of power blips now that it's close to summer.

IPMI works nicely, but it's too bad I need a local Windows VM to use it. It would be nice if it worked in Linux, though with VirtualBox seamless mode it's practically like using it in Linux, I guess.
 
Racked it today. It's now on UPS power and fully connected and labeled.







Starting to run out of ports on my switch! I have a 10/100 switch I scored cheap off eBay a while back; I might actually put it into service. A lot of stuff can be moved to 10/100, such as my HTPC, IPMI, WiFi APs, etc.

Now that it's out of the way I will just leave memtest running till I decide to start working on it. The nice thing is from this point I can do everything remotely including installing an OS. I love the fact that it has a dedicated IPMI port, so when I decide to mess around with link aggregation I don't have to worry about losing network.
 
@Red Squirrel.

We use Supermicros at the NOC all the time, including the IPMIs. I love them. I can reload servers in the colo without ever leaving my desk.

I ran Ubuntu on my NOC workstation for quite some time, and Supermicro's standalone IPMIView (or whatever they call it) application worked fine. You can download it directly from them. I rarely use the browser-based one anymore.
 
Good to know about that app, I'll have to check it out. I did manage to get it working in Linux, but I pretty much had to do everything manually and google each step. I don't know why the Java installer can't do that for me like it does in Windows.
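For anyone else who wants to poke at these boards from Linux without the Java viewer at all, the stock ipmitool CLI handles the basics (power control, sensor readings); the remote KVM console itself still needs the Java client or IPMIView. Here's a rough sketch of wrapping it from Python (the BMC address and credentials below are placeholders, not anything from this thread):

```python
# Hypothetical helper around ipmitool for out-of-band basics from Linux.
# Assumes ipmitool is installed; host/user/password below are placeholders.
# The iKVM console itself still needs the Java viewer or IPMIView.
import subprocess

BMC_HOST = "192.168.1.50"   # replace with your IPMI address
BMC_USER = "ADMIN"
BMC_PASS = "ADMIN"

def ipmi(*args: str) -> str:
    """Run an ipmitool command against the BMC over the network (lanplus)."""
    cmd = ["ipmitool", "-I", "lanplus",
           "-H", BMC_HOST, "-U", BMC_USER, "-P", BMC_PASS, *args]
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

if __name__ == "__main__":
    print(ipmi("chassis", "power", "status"))  # e.g. "Chassis Power is on"
    print(ipmi("sensor", "list"))              # temps, fan speeds, voltages
```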
 
Some Foundry porn:

BZN8Us0l.jpg


Yes, those are 10GBase-ER optics :eek:
 
Not [H]ard really, but here's my home setup:

3Te0s07.jpg


Top to Bottom:
Furman power conditioner, connected to an APC 1500VA UPS (not pictured)
NTI Unimux USB VGA KVM
Adonis 250 running pfSense
Cisco Catalyst 3750G PoE-48
2x Cisco Catalyst 2970G-24T, not in use
Firebox X series, used to be my pfSense box; I have another two, all upgraded to 512MB.
Dell 2950 III with 2x Xeon E5440, 8GB RAM, and 6x 750GB SATA drives, running FreeNAS
IBM x366 with 4x Xeon 7040, 16GB RAM, and 3x 146GB 10k SAS; too loud and too power-hungry, so I don't use it much anymore.
 
Would you be willing to part with a 2970G-24T there, NSimone621? I have a 2975 whose PSU (AFAIK) crapped out on me, and I really liked it.
 
Just placed a massive order on Friday. Hopefully have a few pics later this week.

QFX 5100's
SRX 550's
EX4300, and some other goodies.
 
Proof of concept & initial setup for new home ESXi farm.

newrack.jpg


4x Rackable 1U servers
2x Xeon L5420
16GB Memory
QLE2460 FC card
ESXi 5.0u3​
HP DL385G2
2x Opteron 2218
16GB Memory
SAS3801e SAS controller for the external enclosure
P400 for the boot drive (4x 72GB 10k SAS RAID 0+1)
QLE2464 in FC Target Mode​
HP MSA70
25x 72GB 10k SAS drives
1TB usable space
1 Pool consisting of 8x 3 drive RAIDz1 VDEVs + 1 Hot spare​

Software on the DL385 is OI + Napp-it, using COMSTAR to serve FC targets to the ESXi hosts. The ESXi hosts boot from SAN.
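Just to sanity-check the ~1TB usable figure, here's the back-of-the-envelope math for that vdev layout (a rough sketch; it ignores ZFS metadata overhead and GB/GiB rounding):

```python
# Rough usable-capacity check for the MSA70 layout above:
# 25x 72GB drives, 8 vdevs of 3-drive RAIDZ1, plus 1 hot spare.
DRIVE_GB = 72
VDEVS = 8
DRIVES_PER_VDEV = 3
PARITY_PER_VDEV = 1  # RAIDZ1 gives up one drive per vdev to parity

usable_per_vdev_gb = (DRIVES_PER_VDEV - PARITY_PER_VDEV) * DRIVE_GB
pool_usable_gb = VDEVS * usable_per_vdev_gb
drives_in_use = VDEVS * DRIVES_PER_VDEV + 1  # +1 hot spare

print(f"Drives in use: {drives_in_use} of 25")          # 25
print(f"Usable before overhead: ~{pool_usable_gb} GB")  # ~1152 GB, i.e. about 1TB
```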

new_benchmark.jpg


Quick benchmark from a VM. The ZFS volume is thin-provisioned, and so are the disks in ESXi. Only lost 20MB/sec on sequential speeds; random I/O was not affected. Worth it in my book.

To do:
2x SSD ZILs (Plextor M6e 256GB)
Quanta LB4M for top-of-rack switch
Rackable SE3016 3U enclosure with 8x 2TB drives for media storage and the TV recording VM.
Cable management setup for front to back and vertical.

$1360 into the whole setup so far, everything you see including the rack.
 
Pics from the Vectorama 2014 LAN party.

Virtualization host:
devid21164851902078375475_win32_1401923349660_rama2k14-7.JPG


1/10GbE Core and 1GbE collection switch:
devid21164851902078375475_win32_1401923348060_rama2k14-9.JPG


1G collection switch:
devid693126808153817850_macintel_1401979481598_IMG_7873e.jpg


The internet connection usage meter; yes, the bike does a wheelie as the usage goes higher.
devid2138350008436227208_win32_1402186007816_20140607-IMG_6735-2.jpg


How the table switches were connected to the collection switches: two fibers to each switch.
devid381837495943319771_win32_1401960853824_Vectorama-2014-36.jpg


Fun stuff. 500 computer seats for lanpartiers, 100 organizers/staff, and a 10Gbps connection to the internet.
More pictures from vectorama 2014: https://app.younited.com/events/?wsb-xzz-jxl
 
Start of a network upgrade at work, though it may not look like much yet. There's still quite a bit left, so hopefully I can get a few more pictures, though most of it is just orange pipe and dirt, so it's not too exciting yet.

Looks like a shipment of vaults, handholes and innerduct has shown up - what's going on here?
6NqPTggl.jpg


And a boring machine (or 5)?!
8qMjdtMl.jpg


Pulling some pipe
APy1GLbl.jpg


Closer view of the pipe - 8 in total, 4x 2" and 4x 1.5"
kmAvpEal.jpg


Rack and chassis are installed and powered
2x Calix B6-12 chassis; loaded out, they will have 12 blades @ 48 subs each (active Ethernet, no splitters)
Patch panels are 3x 288 per side, though the bottom ones are only half loaded
K4KfwUnl.jpg


...and the chassis has power
Yy9loayl.jpg


Started about a month ago, but I've been too busy with various parts of the project (read: fixing cut cables and hanging ONT enclosures, along with my various other duties) to take many pictures.
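For scale, the subscriber math on that build works out roughly like this (assuming "12 blades @ 48 subs" is per chassis when fully loaded, which isn't spelled out above):

```python
# Rough capacity math for the Calix B6-12 build above.
# Assumption: 12 blades per chassis when fully loaded, 48 subs per blade.
blades_per_chassis = 12
subs_per_blade = 48
chassis_count = 2

per_chassis = blades_per_chassis * subs_per_blade
total_subs = chassis_count * per_chassis
print(f"Per chassis, fully loaded: {per_chassis} subs")  # 576
print(f"Both chassis: {total_subs} subs")                # 1152

# Patch panel side: 3x 288-port panels per side
print(f"Fiber terminations per side: {3 * 288}")         # 864
```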
 
^ What did you all run on the VM host?

Game servers, DNS/DHCP servers, graphers. That beast was mostly idling; it was hard to put 96 threads and 384 GB of RAM to use. :D

As the virtualization platform we had Proxmox VE with the newest 3.15 Linux kernel, plus ZFS running on one of the nodes.

Also, the VRTX is a surprisingly quiet machine.
 
Depends on what you wanna do with it. For now our VDI runs on HP DL580 G5s (freaking old stuff with Xeon 7400s) with 64-384GB RAM, depending on the host, and a few RAID 0s (whoever designed that should be killed). Only one host is a DL580 with Xeon E5s. Users usually don't complain about the speed, even with 40-60 VDIs on one host.
 
Networking gear showed up today. Some stuff not pictured, this is just the core pieces.

2 SRX 550's
2 QFX5100 24QSFP
4 QFX 5100 48SFP+

2014-06-16%2016.15.52.jpg
 
Networking gear showed up today. Some stuff not pictured, this is just the core pieces.

2 SRX 550's
2 QFX5100 24QSFP
4 QFX 5100 48SFP+


Might I ask what those QFX's are replacing or being used for? I love them in our data center. So quick!
 
It's fast... really, really fast. Commits are done within 1-2 seconds.

They sounded like an airplane when I powered that stack up. Quite hilarious.

Nice! I know what you mean about the commit time; it's annoying waiting for my SRX240 cluster to commit. Our config is getting rather large, and commits do take a while.
 
Might I ask what those QFX's are replacing or being used for? I love them in our data center. So quick!

They are being deployed as the core on the 24Qs; the 48S units are the switching for iSCSI and server connectivity. They are replacing a homegrown Netgear system that is currently in use.

There was talk of needing to deploy internet to people's homes in the future, so we went pretty large to be able to cope with that in a few years.

Nice! I know what you mean about the commit time; it's annoying waiting for my SRX240 cluster to commit. Our config is getting rather large, and commits do take a while.

I am serious; my commit times are sub-second most of the time.

The SRX cluster is a tad slow, but still way faster than my 240 cluster.
 