Network pics thread

Nice, we are an all Dell shop here. Looks like you have a few old hard drives :p

Same here. The SFF dells are awesome. Having them standardized everywhere makes it very easy to swap a computer. Group Policy takes care of the rest.

I have a box of about 75-100 80GB SATA drives here pulled from old 745s and 620s with bulging caps.
 
I have quite a few 620s and 745s here too. I want to smash the 620s with a hammer.


I always hated the USFF Opti 745/755s....

The damn fan that sat under the HDD on those things failed... CONSTANTLY. I lost count of how many of those I replaced back in the day. And it's not like the computers were tortured or in a bad environment: brand new, clean office building, and these things would just torch themselves endlessly. And I couldn't ignore them, since it was obvious Dell's design relied on that fan to cool the underside of the HDD. The one I did ignore for about a month (pulled the fan, actually) cooked its drive. :mad: Thankfully I'm not in desktop support anymore :D:cool:
 

Between the CMOS batteries dying because of shitty caps and the stupid hard drive fans, they can be a PITA. We have about 800 620/745/755. We get them off lease so they are like 200 bucks a pop so no big deal if they die in 3 or 4 years.

The 620/745 are HORRIBLE for bulging caps. I'm currently upgrading everything to Windows 7 and that means replacing the hardware, checking it over, and redeploying it if it's good or recycling it if it's bad. Probably 2/3s of the 620/745s I'm pulling out have exploded caps. Not just bulging, but fully leaking and corroding.
 

Not just me then :D Unfortunately the majority of our ~1000 PCs are still Pentium4 520/620s or similar with start-up battery warnings and blown caps :( Most still with <2GB RAM too... Kinda jealous of the budgets some people seem to have :p
 
I'm digging it so far, currently it's just sitting idle running some folding for burn in. Turns out it has a Samsung 830 in there, benchmarks quite nicely.

That's about a summer's worth of pulls from dinosaur machines in the field. I wait a while to load up before we recycle most of it.
 
I want to smash anything Dell and replace it with HP. I still have a handful of GX520 SFFs in service and 1 spare, as well as a stack of GX520 towers nobody wants to use. 1 GX280 in service and, as of last month, 0 spares. Plenty of HP DC6700s still in use; I can't recall ever seeing failed caps on any of those.

Honestly, the P4s with Aero-capable onboard graphics, 2GB RAM and a decently fast hard drive run Windows 7 just fine for our purposes, which are mainly Outlook and web-based stuff.
 
After dealing with popping PSUs from the HP D530s, I have an eternal fear of plugging in any HP from now on.
http://www.youtube.com/watch?v=nHPf6jHdEFg

I use a power strip and face the PC away from me, cross my fingers and hit the switch.

But hey, we're both complaining about OLD computers that could be replaced by low end smartphones at this point..
 
Ohh! How about the PWS 390s I have that sound like they are brewing coffee all day?
The Precision T3500s are nice, those are our engineering workstations.
 
Call me a biased Dell tech, but I like all the Dell stuff.. :) I've seen tons of failed HP machines too, so none of them is perfect; each has its issues. The main concern is: is it fixable, and is it free to fix, etc.
 
Hey, at least none of you guys have a crap ton of MPC computers, lol.
Slowly we are replacing them with Dell computers.
Due to how weird our University is, each department pays for its own computers. It saves our IT budget from buying end-user computers; however, we can't force departments to replace aging computers (yet). Some of them like to hang on to their 5-year-old MPCs that have gone through 3 PSUs already. :(
 

I also don't mind Dell stuff. Their hardware hasn't really ever done me wrong. The only Dell hardware I've had a major issue with (and it wasn't really hardware related) was the PowerConnect 6248s and their unadvertised limit on the number of dynamic routes they can handle. Other than that, they are great.
 
Just a few things I've been working on this week:


img0015ho.jpg
img0016sq.jpg
img0017pd.jpg
img0019qg.jpg
 
Such a funny pic. Our networking rack looks awfully similar. Blue Coat, Nexus, etc. Except my laptop is a W530. Silly Dell junk :D
 
Jzegers24,

how do you like that Blue Coat appliance? The last BC web proxy/filter I had to deal with was a PITA to troubleshoot and deal with. The never-ending '"blahblah.com" is blocked for no reason' tickets... I can feel my blood pressure rising just talking about it. :mad:
 
I just built probably the most retarded, inefficient, overdesigned yet underpowered file server ever:

IMG_2093s.jpg


IMG_2094s.jpg



It's an IBM xSeries 336 file server with dual P4-vintage Xeons @ 3GHz and 6GB of 533MHz RAM. It has two 36GB U320 10k SCSI drives in RAID-1 running Windows Server 2008 R2 SP1.

The MSA1000 has 28 74GB U320 10k drives in RAID-6 with 1 hot spare, with two FC controllers running at 2Gb/s with 256MB battery-backed cache. A dual-port PCI-X Emulex card is coming in the mail from eBay, which will allow me to turn on the second controller for an Active/Passive (or maybe Active/Active, if the MSA firmware supports it) configuration.

There are two 450W power supplies on the x336 and two 750W power supplies each on the MSA1000 and the MSA30 expansion tray, for a whopping 3900 watts of nameplate draw.

All this to serve up 1.7TB of space at a sustained disk throughput slower than modern SATA drives. :D

I need to thank my employer for allowing me to turn electricity into heat in his cage at the end of his 100mbit pipe.
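In case anyone wants to see how the "~1.7TB" and "3900 watt" figures fall out of those parts, here's a rough back-of-envelope sketch in Python (purely illustrative). It assumes RAID-6 costs two drives' worth of parity, the hot spare is excluded, and the OS reports capacity in binary TiB; the wattage is the nameplate rating, not measured draw.

```python
# Back-of-envelope check of the MSA1000 numbers quoted above.
# Assumes RAID-6 costs two drives' worth of parity and one drive is a hot spare.

DRIVE_GB = 74            # per-drive capacity as marketed (decimal GB)
TOTAL_DRIVES = 28
HOT_SPARES = 1
RAID6_PARITY = 2         # RAID-6 overhead, expressed in whole drives

data_drives = TOTAL_DRIVES - HOT_SPARES - RAID6_PARITY    # 25
usable_gb = data_drives * DRIVE_GB                         # 1850 decimal GB
usable_tib = usable_gb * 1e9 / 2**40                       # ~1.68 TiB, as the OS shows it

# Nameplate PSU wattage: two 450W supplies on the x336 plus two 750W
# supplies each on the MSA1000 and the MSA30 expansion tray.
nameplate_watts = 2 * 450 + 2 * 750 + 2 * 750              # 3900 W

print(f"usable: {usable_gb} GB ~= {usable_tib:.2f} TiB")
print(f"nameplate draw: {nameplate_watts} W")
```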



But for some better, more modern goodness, say hello to the new database pod:

61 blades. 488 cores. ~8TB RAM, ~67TB storage. (It will be 64 blades, but three were DOA)

IMG_2092s.JPG
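For a rough sense of what each blade in that pod carries, here's a quick back-of-envelope using the totals above. The RAM and storage totals are approximate, so the per-blade numbers are estimates rather than quoted specs.

```python
# Rough per-blade breakdown of the database pod figures above.
# The ~8TB RAM and ~67TB storage totals are approximate, so these
# per-blade values are estimates, not quoted specs.

blades = 61                      # working blades (64 ordered, 3 DOA)
cores_total = 488
ram_tb_total = 8                 # approximate
storage_tb_total = 67            # approximate

print(cores_total / blades)                # 8.0 cores per blade
print(ram_tb_total * 1024 / blades)        # ~134 GB RAM per blade
print(storage_tb_total * 1024 / blades)    # ~1125 GB local storage per blade
```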
 
Agrikk, as much as I hate to say it, that top setup would go great in my basement.

Until the power bill comes in. Running this SAN in my house 24/7 cost $300 a month.

For one month, until I said to hell with that! :)
 
quite a few of the higher density SANs have drives behind drives, like my new ones:

DSC_0019.jpg

This...is...stunning...:D It's like...the Rolls-Royce of Backblaze pods! :D Apart from Dell, does anyone else do chassis like these?
 
My company is getting a setup with 1.2TB of RAM and 144 physical cores + HT (= 288 HT'd cores), about 360GHz accumulated. I will attempt to snag a screenshot, and see if I can get a virtual machine using 100% of the resources built for a Task Manager shot. :D
806.4 GHz without HT (1.6128 THz with HT), 2304 GB of RAM (2.25 TB), 22.573046875 TB of storage (2.473046875 TB of which is SSD), and 288 physical cores (or 576 vcores with HT), to be specific, for a company in the 500-1000 employee bracket.

In addition, *all* printers, *all* desktops, *all* Thin Clients, and *all* laptops are getting replaced with new printers and AIO 24" LG Zero Clients. All switches and routers are getting replaced with PoE Cisco gear (needed in preparation for VoIP phones too), with a from-scratch, ground-up, brand new network design and infrastructure.

We will be getting Exchange 2013, Office 2013, Microsoft Windows Server 2012 Enterprise/Datacenter, Microsoft Windows 8 for VDIs (a terminal-server-free environment), Microsoft Lync, Barracudas for firewall + anti-spam + web filtering, and Trend Micro cloud-based anti-virus.

All of this will be spread over 4 DCs around the world with Riverbed WAN acceleration hardware at each DC and cloud-based backup. Each host has redundant/dual 6Gb SAS connections to redundant DAS, plus eight 1Gb connections: two 1Gb teamed pairs for Production, dual gigabit connections to two separate management connections, a dedicated 1Gb backup network, and a dedicated 1Gb vMotion network (yes, all ESXi/vSphere-based). We're running Teradici cards in our full VDI environment, and all VDIs are SSD-based.

This past year 5 of our 8 production locations received full-blown Cisco wireless systems; the rest are to come later this year.

And no, my supervisor won't create a single virtual machine allocating 100% resources just to take a screenshot of Task Manager. :D
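Since there won't be a Task Manager screenshot, here's a quick sketch of how the aggregate figures above hang together. Note the 2.8GHz per-core figure is inferred from the totals, not something stated in the post:

```python
# Sanity check of the aggregate figures quoted for the new setup.
# The per-core clock is inferred from the totals, not stated in the post.

physical_cores = 288
aggregate_ghz_no_ht = 806.4

per_core_ghz = aggregate_ghz_no_ht / physical_cores   # 2.8 GHz per core (inferred)
aggregate_ghz_ht = aggregate_ghz_no_ht * 2            # 1612.8 GHz = 1.6128 THz with HT

ram_gb = 2304
ram_tb = ram_gb / 1024                                 # 2.25 TB

print(per_core_ghz, aggregate_ghz_ht, ram_tb)
```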

Old system: servers that are over 7 years old (HP DL380 G4s and older), 2-4GB of RAM per server, not more than a couple or a few hundred GB of disk space per server (yeah, we have low-disk-space issues), mostly SMP dual-core Xeons at around 2GHz (if I remember correctly), Windows Server 2003 Standard (expensive retail licenses straight from Microsoft). All the terminal servers are way overloaded, with around 30+ concurrent users; using 7-Zip to archive a file as a ZIP with Ultra compression runs at about 200-250 KB/s.

Shady/untrustworthy configuration of the DNS and DHCP servers; Check Point firewalls running on very old HP servers (an old/outdated version of Check Point as well); HP wireless access is limited (quality, reliability, and range very poor; overlapping zones on the same frequency too). Most of our locations only have Office 2003; corporate was upgraded to Office 2007 at the beginning of this year. Late last year corporate upgraded from 1.5Mbit DSL to 100Mbit fiber.

There is heavy scripting involved at logon; all terminal servers would have thermal label printers installed locally (and be identical), and there would be a single Windows 2003 server as an x86 print server for laser, color, and multi-function printers (no thermal printers allowed by old philosophies and standards). There was also a Windows Server 2008 Standard x64 print server (a brand new DL380 G6) to serve Windows 7 users/x64 drivers.

Individual user home drives must be manually created and shared; there are probably around 100-150 GPOs just for restricting access to folders/software/shares; AutoArchive is a disabled function of Outlook via GPO, and instead a custom-written program copies and pastes PST files to a hidden $ share on a server; the mail server has no proper backups (only specific people are backed up via scripting, for legal reasons) and is horrendously slow (about the same horsepower as our average DL380 G4).

We have an EqualLogic SAN with like 4TB of disk space that was set up to have a backup of something copied and pasted over to it, to be a backup of a backup (worst use of a SAN ever? -- also, the only real SAN they even have), backed with an autoloader tape system...

The old philosophy/standard went nuts on as-granular-as-possible restrictions over the network, and had a strange way of setting up servers that I think was done by local group policies -- you HAVE to use a very specific AD account to administer the server; if you use any other account, regardless of what memberships it has, the server will treat you like a regular user and won't let you access Control Panel, right-click, etc.

That's only the icing, too. There are so many other things I haven't written down that I can't even think of them all.
 
EMC has high-density DAEs to achieve that drives-behind-drives layout, but yes, pretty much everyone has this in their line-up nowadays. Haven't seen a high-density DAE in the field around here, though.
 
LOL I was looking at the picture and thinking "those look just like the SCSI drives from the Proliant G3 series"
 
For those EMC high-density DAEs: 25 disks per DAE, 16 DAEs per drive bay.
 
I would love to build a system with that many drives. The problem is finding a controller with that many ports without needing to remortgage my house. :D

I think that's the future: top-loading drives. It's more space-efficient, and as long as there's good airflow and vibration dampening it's fine for the drives. The rails would need to be VERY smooth though, so you can pull the tray out without any bumps when replacing a drive.
 
Yes, this design for disk arrays has been in the works for years. A friend of mine did some of the early testing (I can't say for which company), but it was a little different: the idea was originally to have so much disk per unit that if a physical disk died, you didn't have to replace it. A number of spares existed towards the back to pick up the slack, and typically by the time you ran out of spares the device was EOL anyway and replaced by its next unit. Kinda cool, but this idea and its design have become well known and widespread now.
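The sizing logic behind that "never swap a failed drive" approach is easy to sketch. All of the numbers below are hypothetical (drive count, failure rate, spare pool, and service life are made up for illustration); the point is just that expected failures over the unit's life stay below the built-in spare count:

```python
# Illustrative sketch of the "never replace a failed disk" design described
# above: ship enough internal spares that expected failures over the unit's
# service life never exhaust them. All numbers here are hypothetical.

def expected_failures(drive_count, annual_failure_rate, service_years):
    """Expected drive failures over the service life (simple linear model)."""
    return drive_count * annual_failure_rate * service_years

drives = 400      # hypothetical drives per unit
afr = 0.03        # hypothetical 3% annual failure rate
years = 5         # hypothetical service life before the unit is EOL'd
spares = 64       # hypothetical built-in spare pool

failures = expected_failures(drives, afr, years)             # 60.0
print(f"expected failures over {years} years: {failures:.0f}")
print("spares outlast service life:", spares >= failures)    # True
```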
 
Yes, practically everyone.


Thanks for the replies. Allow me to clarify my question: does anyone else do chassis like these that I can buy and populate myself? :D Someone like Supermicro, perhaps?
 
Replacing an 1811 today, and walked into this :( The 1811 just kept rebooting every 20 seconds; it got hit by a power surge today in one of our storms.

IMG_0139.JPG
 
That looks like a crappy dollar store outlet strip...
That setup needs at least one of these:
http://www.amazon.com/Tripp-Lite-IS...2?s=electronics&ie=UTF8&qid=1354683671&sr=1-2

.....Not to mention a proper going over of that mess of cables :p
 