Dashcat2 Build

Just got back from buying a center punch, a scratch awl, a titanium drill bit, and a new battery for my digital caliper.

The plan is to pre-mark the spots for the new holes on my rails so I can drill all of them in one session. I should be good to go this weekend.
 
The correct screws arrived. I'm still not out of the woods, though. The heads are just slightly too tall, so every screw will need its head ground down by a millimeter to clear the rails. That's 88 screws.

Additionally, to get the rack bolts to fit perfectly in the standoffs, I need to grind the self-alignment stub off each of those, too. This is going to mean a lot of grinder and Dremel time. Fortunately, I can drive the screws into standoffs for the grinding process instead of holding onto them with pliers or my fingers. I've got about a hundred Dremel cutoff wheels (I kept losing track of my kits, and they're $4 for a 30-pack here), so I may as well put them to use. I just hope my Dremel holds up.
 
Yeah, but I've seen racks with shitty screws that snapped and had to be drilled out. :(

So quality within budget is always the goal.
 
I found that the screws spin out of the standoff when I Dremel them. I switched to holding them with a pair of wire cutters (huh huh... dykes) at the very top, where damage to the threads means nothing. It worked, but it takes a long time. I got to thinking there had to be a better way, and I found it: Lowe's had some flat-head countersunk machine screws that will only require me to use a larger bit to bevel the edge of each hole, which had been the plan all along since I want the new holes deburred.

The best part is this will make the rails self-aligning.

I tried a test fit with the modified panhead screws and found that my rack isn't proper spec and has to be modified itself. I have to grind away at least 1/16" of steel from both sides at the front to get the rack to take my rails and the associated servers.

I tried to upload photos of my recent progress, but my FTP server is being a prick about bulk uploads so I can't for now.

I have some bar nuts I can use as guides while I Dremel the hell out of the rack. That will have to wait for the weekend, however. It means the machines are going in this weekend, though. Adjustment will probably take another week, which will give me time to wait for my next delivery of cooling fans. The fans on my CPU heatsinks are overpowered, I found. They'll be replaced with low-speed units, while the high-speed fan in each machine will be replaced with an original mid-speed unit.
 
Modified screw
DSC01825s.jpg


Just enough clearance for screw heads
DSC01829s.jpg


The standoffs in place at the rear
DSC01837s.jpg


Mounted rail
DSC01839s.jpg


As I mentioned before, I need to modify the rack itself because the space between the rails is too narrow for the servers to mount. It's time for some serious work now.

I'll be using my Dremel to cut a notch just wide enough to give me the needed space. After that, I'll smooth the cut with emery cloth, then prime and paint for rust protection.

I'm debating painting the fronts of the rails so that the entire rack face matches.
 
Test fit of first node after grinding for extra width.
DSC01841s.jpg


Cutting the steel was a real pain. I have to do this another ten times and I'm not going to sleep until it's done.
 
All nodes and the server are mounted in the rack, and it's a heavy beast. The master node/server is at the bottom of the stack. I'm debating moving it up to the top. The ninth and tenth nodes still need fans and Windows license stickers.

The empty space near the top is where I'll be putting a console and a tool drawer. As for the space at the bottom, that's reserved for a pair of 2U APC UPSes that will add an extra 300 pounds to the weight of the rack.

DSC01846s.jpg
 
My shipment of cooling fans arrived today. It's time for the next step when I'm off work.

So, people of the [H]... should I wire LEDs to light up the insides of the cases when the nodes are powered on? I was considering purple, since red and green are played out and blue is getting to the same point. The machines have green and amber rectangular LEDs for the power and HDD activity indicators. I could replace these with almost any color, and I thought white would make a unique power indicator, with blue assigned to HDD activity (it's not _that_ played out) and high-intensity red as a fault indicator.
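If I go ahead with it, sizing the current-limiting resistors is just Ohm's law. A quick sketch (the forward voltages are typical datasheet ballparks, not measurements from any parts I've actually picked out):

```python
# Ballpark current-limiting resistors for panel LEDs off a 5V rail.
# Forward voltages are typical datasheet values, NOT measured from
# the LEDs I'd actually be buying.

SUPPLY_V = 5.0    # volts, off a spare 5V line
TARGET_A = 0.015  # 15mA target forward current

FORWARD_V = [
    ("red",    2.0),
    ("amber",  2.1),
    ("green",  2.2),
    ("blue",   3.2),
    ("white",  3.2),
    ("purple", 3.4),
]

for color, vf in FORWARD_V:
    r = (SUPPLY_V - vf) / TARGET_A  # R = (Vs - Vf) / I
    print("%-6s  Vf=%.1f V  R=%3.0f ohm" % (color, vf, r))
```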
 
Fans are installed and tested. All ten nodes work fine. The server still has high-speed fans, but they're going to be run at 5VDC, with a PWM circuit to come later.

I just realized this rig has a total of 90 cooling fans in it: eight fans in each node and the master (the PSU has two), which makes 88, plus two cooling fans in the Dell switch. In fact, there will be more, since the UPSes each have a fan or two.
 
I'd love to see an HQ YouTube clip of this setup... I feel like such a geek to say this, but I love the sound of server fans whirring away :D

What kind of cooling do you have in the workshop, anyway?
 
I'll make videos once it can be powered up. I have quite a few cables to buy, build, and route before that time comes. I'll probably be completely done by the end of July.

I'm cooling the workshop with the portable AC unit shown in some of the photos. It's a 9000BTU/hr unit, so I've already outstripped the roughly 2600W of heat load it can handle. I'll need to upgrade. I was considering evaporative cooling, but mounting those coolers isn't easy, and controlling humidity is sketchy in such a small space.
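For anyone checking my math on that, the conversion is simple:

```python
# Sanity check: 1 BTU/hr ~= 0.293 W.
BTU_HR_TO_W = 0.29307

print("9000 BTU/hr ~= %.0f W of heat removal" % (9000 * BTU_HR_TO_W))
# ~2638 W -- and the rack alone can pull more than that at full tilt,
# before the room's own heat gain is even counted.
```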
 
Okay, I lied. I just powered up the rack by running extension cords across the floor. I hooked my Kill-A-Watt to the B-rail of my main feed to see what kind of load five nodes and the main server draw at the wall. The result was 1150W max during boot and 1000W once settled in. Mind you, this rail has the two freshly built nodes on it, so they were sitting in the BIOS as opposed to loading Windows XP Pro and sitting at the desktop. I've got a lot of work to do still.

I found that the controller (where the nodes are on relays) only consumes 11W. When I'm not running the compute nodes, the farm waits like a ninja for less than a dollar a month, and starting the whole cluster costs me only a penny and two minutes of waste each time, for render jobs that may run hours at a stretch.
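The math behind that claim, for anyone who cares (the electric rate and the full-cluster startup draw are round-number assumptions):

```python
# Rough idle-cost math for the relay controller. The electric rate
# and the startup draw are assumptions, not measurements.

IDLE_W = 11.0    # measured at the wall with the Kill-A-Watt
RATE = 0.10      # assumed $/kWh
HOURS = 24 * 30  # one month

idle_kwh = IDLE_W / 1000.0 * HOURS
print("Idle: %.1f kWh/month -> $%.2f/month" % (idle_kwh, idle_kwh * RATE))

# Startup "waste": call it ~2kW for the whole cluster over two minutes.
start_kwh = 2.0 * (2.0 / 60.0)
print("One startup: ~$%.3f" % (start_kwh * RATE))
```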
 
Wowza, that's a lot of watts :p
Though not too bad in the grand scheme of things.

I'm really looking forward to the final build pix and vidz (not to mention what kind of render times you get) :D
 
Looks like I've got another project to work on simultaneously.

I got this from my brother because it let out a bunch of magic smoke. He had it built for him by some fly-by-night posers. This is going to suck, but I hope it's worth it in the end with the i7-940.

http://hardforum.com/showthread.php?t=1530104

Look where they put the f**king radiator!!

DSC01847s.jpg
 
Just curious... What do you need all this for? Folding?

Contrary to posts 9, 11 and 14 of the thread, it's not a performance-on-demand render farm for 3D graphics animations in Blender, it's actually an attempt to compensate for a tiny dick after my wife threatened to leave me for a pygmy fieldmouse.

And I just finished installing the network link today.
 
I'm spray-painting a couple of damaged rails to see what finish, if any, I want for the galvanized steel rails. I'm also changing the layout of the cluster a bit, since I won't be going with a rackmount console now that the rack has moved next to my work desk, where I can use one of the monitors I already have and just keep a keyboard nearby.

I'll be going with an IBM UltraNav PS/2 keyboard/touchpad combo that will be kept in a holder mounted to the side of the rack closest to where I work.

This effort just saved me 1U of space and about $400 I can use elsewhere.

I'll post photos later.

I've got enough KVM cables in my collection to hook up all of the machines. I don't have a daisy-chain cable for the KVMs, however. It's supposedly a special kind, but it looks like a standard DB-25 link cable combined with two VGA cables, and nobody carries that exact part. I wonder if I can use a standard DB-25 cable and a pair of generic VGA cables? I'll have to look into this.
 
Contrary to posts 9, 11 and 14 of the thread, it's not a performance-on-demand render farm for 3D graphics animations in Blender, it's actually an attempt to compensate for a tiny dick after my wife threatened to leave me for a pygmy fieldmouse.

Wow, l m f a o. ahahahahah NICE. Btw post moar pix! ;)
 
This is looking really interesting. :)

However ... what would it take to convince you to let go of one of those O2+? :) What spec do you have there? R7K, R10K or R12K?
 
This is looking really interesting. :)

However ... what would it take to convince you to let go of one of those O2+? :) What spec do you have there? R7K, R10K or R12K?

I don't have O2+ machines. The ones with the newer case style are medical (one just sold last week, actually) while the other two are R5K-200. I also have two empty systems.

A year ago, I had about a dozen of the uber rare RM7000-350 systems. Those were bought by Nekochan.net members and eBay customers. One of the R5K rigs is actually a Discreet system.
 
Photo time!

Worthy piece of angle iron as a stand. See the paint can reflected in the paint?
DSC01895s.jpg


The color is more of a silvery dark grey than any kind of black
DSC01896s.jpg


I think it'll work.
DSC01897s.jpg
 
I don't have O2+ machines. The ones with the newer case style are medical (one just sold last week, actually) while the other two are R5K-200. I also have two empty systems.

Interested in letting one of the empty systems go? I'm really after an O2 chassis. I have an O2 sitting in Colorado with my sister, which I hope to see in September sometime. It's just an R5K-150. Hoping to revive it when it gets over here.

MisterDNA said:
A year ago, I had about a dozen of the uber rare RM7000-350 systems. Those were bought by Nekochan.net members and eBay customers. One of the R5K rigs is actually a Discreet system.

Ah, a fellow Nekochan member. :-D I've made several attempts over the past three years or so to get my hands on an Onyx2, but every time I get close, I wind up hitting some other stupid expense that kills the idea.

Wouldn't mind getting an R7K, but for now I'm pretty interested in just an O2 shell for a project. Time to hit eBay and go peek in at Nekochan again.
 
I decided to play with the networking side of things for a bit. Even though it's just to break the tension (like having a wank, but with your mind), I'm happy with the progress:

iceboxtelnettest.jpg


This was done from my indoor workstation. I'm talking to the ICEBox 3000 via Telnet. The LAN port on the front of the ICEBox is connected by a short piece of Cat5 to a port on my badass switch. It runs at 10Mbps, but that's far more than enough when all it's doing is collecting data from ten nodes at RS-232 speeds.

I think I'll have a look-see at what's stored on the box, if anything.

I'm so very glad I didn't have to get ClusterWorX, which became SGI's ISLE package after SGI acquired all of LNXI's IP. I can write some Python kruft to build a front-end for this and make it numbnuts-resistant for my uses.
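Something along these lines is what I have in mind for the front-end. It's just a skeleton for now; the host address, prompt, and command string are placeholders until I've mapped out what the box really speaks:

```python
#!/usr/bin/env python
# Skeleton for the numbnuts-resistant ICEBox front-end. The address,
# prompt, and "temp all" command are placeholders -- I still have to
# figure out the box's real command set.

import telnetlib

ICEBOX_HOST = "192.168.1.50"  # placeholder address
PROMPT = b">"                 # assumed CLI prompt

def ask(command, timeout=5):
    """Open a Telnet session, send one command, return the raw reply."""
    tn = telnetlib.Telnet(ICEBOX_HOST, 23, timeout)
    try:
        tn.read_until(PROMPT, timeout)
        tn.write(command.encode("ascii") + b"\r\n")
        return tn.read_until(PROMPT, timeout)
    finally:
        tn.close()

if __name__ == "__main__":
    print(ask("temp all"))  # placeholder command
```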
 
I'm reverse-engineering the LNXI ICECard temperature/reset module right now. I've learned a lot. I have one working in the main server and it's giving me temp readings, but I need more. I was only ever able to get one ICE cable, but they're straightforward to build; all I really need are the right tools for the job. The connectors are weird. I'll have to post photos at some point.

Temperature monitoring is handled by a Maxim MAX1668 remote/local temperature sensor IC. I found the datasheet on Maxim's website, and it was there that I learned why the damned sensors didn't act like thermistors. It's because they aren't!

http://datasheets.maxim-ic.com/en/ds/MAX1668-MAX1989.pdf

This is pretty clever. The sensors are actually SOT-23-package 2N3904 transistors (in my case, since they're more accurate within the temperature range these machines will see) connected as diodes, each wired in parallel with a 2200pF ceramic capacitor for EMI suppression, then on to the IC. I remember CPUs having thermal-sense diodes, but this is the first time I've seen them outside of that application. Right on.

I only counted four sensor sockets, but there are five readings. One is internal: I get a reading off the IC package itself. The IC communicates over SMBus.
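For poking at the chip directly from a Linux box, a read loop would look about like this, using the i2c-dev module and the python-smbus bindings. The bus number and slave address are guesses on my part, since the address depends on how the card straps the ADD0/ADD1 pins:

```python
# Reading the ICECard's MAX1668 over SMBus. Bus number and slave
# address are assumptions; check with i2cdetect first.

from smbus import SMBus

BUS_NO = 0   # assumed i2c bus
ADDR = 0x18  # assumed MAX1668 slave address (set by ADD0/ADD1)

# Command bytes per the datasheet: 0x00 reads the local (die) temp,
# 0x01-0x04 read the four remote diode channels. Readings are signed
# 8-bit degrees C.
CHANNELS = [
    (0x00, "IC die"),
    (0x01, "remote 1"),
    (0x02, "remote 2"),
    (0x03, "remote 3"),
    (0x04, "remote 4"),
]

def signed8(raw):
    """Convert an 8-bit two's-complement reading to a signed int."""
    return raw - 256 if raw > 127 else raw

bus = SMBus(BUS_NO)
for reg, name in CHANNELS:
    print("%-8s %4d C" % (name, signed8(bus.read_byte_data(ADDR, reg))))
```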

The other IC is an NXP PCA9554APW. I've yet to figure out how they are connected.

http://www.nxp.com/documents/data_sheet/PCA9554_9554A.pdf

That's it for tonight. I'm exhausted.
 
After a worst-case, server-only test with the machine running with no ventilation or cooling of the ambient air, I caught CPU temp readings of 50C (122F) over the ICE system with 107F ambient air. I checked the machine's system monitor in the BIOS and found readings of 48C and 45C directly from the CPUs. That's a difference of 15F (8C) between CPU and ambient. I'm quite happy with that. I can certainly cool the machines with the hottest ambient air my climate has to offer.

What this means is that my cooling gear works FAR better than I ever could have hoped. I was expecting to come home to a machine hovering close to 80C, but that didn't happen. I'm sure a lot of it has to do with the fact that I have fans attached to heatsinks designed for passive cooling. It makes plenty of sense that I'd see this, considering all of the active 1U coolers I could find were copper midgets with impeller-type blowers on top.

This kind of has me wishing I could overclock, but I'd be getting greedy there.
 
Looks like my AC unit is dead. The compressor is stalled and trips a 15A breaker. I should have known when the Kill-A-Watt was reading 1500W consumption while the UL plate reads 8A (960W).

I thought I would just get warranty service, but Amcor shut down operations last year.

As for DIY repair, the compressor is a Rechi unit from China that can't be bought anymore.

In addition to that lovely nugget, it turns out the compressor is a 7000BTU part, while the AC unit is supposed to be rated for 8900BTU/hr. With the load I was putting on it, that missing headroom was crucial, and it's probably why the unit died.

Stay away from Amcor.
 
I'm doing a bit of software stuff while I'm left unable to do much hardware.

While the Broadcom Ethernet chips on the HDAMA board are great for a lot of things, PXE being limited to 100Mbps sucks. I can do better.

Etherboot, now called gPXE, supports those chips, and at least one motherboard already has it loaded on both. That enables speeds up to a gigabit.

The idea here is to boot Linux from the network while keeping Windows on the hard disk inside each node. Each machine will first try to boot from the LAN and will do so if its MAC address matches a whitelist kept on the server. If it doesn't match, it's denied and fails over to the hard disk, loading Windows.
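Server-side, the whitelist gate can be dead simple: each node's gPXE chainloads a URL with its own MAC filled in, and the server either hands back boot commands or tells gPXE to exit so the BIOS falls through to the hard disk. A rough sketch with placeholder MACs, paths, and kernel lines (Python 2 stdlib):

```python
#!/usr/bin/env python
# Whitelist gate sketch. Each node's gPXE would chainload roughly:
#     http://server:8000/boot?mac=${net0/mac}
# MACs, paths, and the kernel/initrd lines are placeholders.

from BaseHTTPServer import HTTPServer, BaseHTTPRequestHandler
from urlparse import urlparse, parse_qs

WHITELIST = set([
    "00:e0:81:aa:bb:01",  # node01 (placeholder)
    "00:e0:81:aa:bb:02",  # node02 (placeholder)
])

BOOT = ("#!gpxe\n"
        "kernel http://server/vmlinuz\n"
        "initrd http://server/initrd.img\n"
        "boot\n")
DENY = "#!gpxe\nexit\n"  # exit -> BIOS moves on to the hard disk

class Gate(BaseHTTPRequestHandler):
    def do_GET(self):
        mac = parse_qs(urlparse(self.path).query).get("mac", [""])[0]
        body = BOOT if mac.lower() in WHITELIST else DENY
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(body)

HTTPServer(("", 8000), Gate).serve_forever()
```

The exit behavior is what makes the failover work: when gPXE bails out, the BIOS just moves on to the next boot device, which is the local disk with XP on it.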

If there were a way to boot Windows XP over the network and have the licensing and all that work right, I'd probably do it. I wonder if it's possible to keep one activated Windows XP image on the server's disk and have it "just work" when multicast out to the other nodes, since all the image would notice is, "Okay, the MAC addresses on the NICs are different, but that's okay." I'd still have separate dedicated licenses for each node to keep it all legit, just not activated, purely for simplicity. I'll be switching modes so often that I'd rather just cut power to a machine without a shutdown, since the image gets reloaded into RAM at the next startup anyway.
 
I found that the Belkin KVMs will take a cable with DB-25 connectors on both ends (like LPT scanners and Zip drives used) as a daisy-chain cable.

I've hit another snag, though. When the nodes enter XP, the keyboard flashes its lock lights as it does during POST, then shuts off. No keyboard. I'm using a crappy Dell keyboard, and I have a few others I can try, including an IBM Model M, FOCUS models, and an SGI granite variety.

I just really hope the special IBM keyboard will work with this rig under both XP and Ubuntu. If I have to dig out my oscilloscope for this at any point, it's for all the cookies.
 
I tried the KVMs again this morning with the Dell shitkeyboard and they worked. I figured as much, since a Google search turned up nothing at all about keyboard compatibility. The UltraNav keyboard will be bought soon.

I'm working on mounting a server for Project Housecat as well. It turns out the rack I have that one in wasn't universally meant for rails either, so I had to bust out the Dremel again. I have the front taken care of; the odd thing is I have to carve up the back on this one, too.

Omnimount's RSF series rack gear is too expensive for what you get (when you're paying full price, unlike me, that is). It's all about looks, unfortunately. It's as if they expect you to use only the shelves that come with the racks.

There should be more racks available in a 24" depth. 30" is just too deep for home and workshop applications like this (where you don't want the rack jutting out from the wall much further than a desk), and Omnimount doesn't make a serious one that can hold servers in any way you'd call convenient without voiding the warranty.
 
The KVM I bought recently turns out to be bad. No H or V sync at either console.

Luckily, Belkin made other KVMs with dual consoles; I just hadn't known about the Matrix2 series. The best part is that the 16-port one I bought on eBay fits in 2U, so I save the 2U that the second 8-port Matrix 2x8 took up. I'll be putting the working Matrix 2x8 in the Housecat machine to replace the TrippLite KVM that's in there now.
 
Glad the forum is back up. I've got news. Pictures=words:

The old KVMs were really 1.5U. I was able to move the ICEBox down by 0.5U, and that rocks.
DSC01903s.jpg


I named each port for the node attached to it as I tested each and every one.
DSC01905s.jpg

DSC01906s.jpg


The cables for this KVM are quite thin. They don't make for a pretty picture at SXGA 60Hz, but they make for readable text, and that's what I wanted. I'll probably drop the res to XGA anyway, since its bandwidth is only 60% that of SXGA (1024x768 vs. 1280x1024 pixels). It all depends on how badly the signal degrades with the next step: remote console over Cat5(e)/6.
 
I like what you're doing. I've always wanted to build my own servers, but I'm not there yet. I'm starting with something small, which will be with VMware for now.
 