Dashcat2 Build

What a journey... I've fought the war of my life over the past year. I'll be updating rapidly the next few weeks. So much has happened and even more has changed. I've grown as a person and my level of focus is probably an order of magnitude better.

The Dashcat2 / DashcatAKN (that's the permanent name now; this machine, in 16-2 hotspare form, is DashcatAKN Mark 1) supercomputer hardware is five years old (in my possession, anyway; it's actually from, like, 2006) as of the end of this month. That's fine. Nothing lasts forever, and that's why I built it to be capable of upgrades without changing the cases and racking.

I still have a near 500GFLOP 64-bit cluster with 16GB RAM per node that will teach me how to use the eventual upgrade.

The biggest mistake I made, post-divorce, was in thinking that any of the prior rules and methods of living space and practical application still applied. I wasted a lot of time trying to force a square peg into a round hole, as the saying goes.

The biggest thing was my Ebay operation, from which I fund my projects. It's November and I have only done 1/4 of the business I'm limited to for the year. Inventory and staying on top of moving stock turned out to be an epic fail and it screwed up my financial situation. I'm still "behind" in the sense that I need the second (of two) paychecks of the month to cover my bills, but I've got that much covered.

Well, sometimes we really have to start from a clean slate and I did two weeks ago. I've gotten rid of what's useless, but I still haven't reconfigured for the new method.

That's changing this month. The biggest problem I faced is finally gone.

I won the lawsuit. Me. No lawyer. Love is engaging and neutralizing your enemy yourself, not hucking money at someone to do it for you.
If that fucking cockroach lawyer pops up out of the woodwork again, he knows he'll get stomped and disbarred. It's up to him now.
I really hope he has nightmares about me.​

I've got some CAT5e to run. Later, friends.
I remember reading this thread eons and ages ago and it's still going. Good on you for keeping the faith with [H] and your projects.

Thanks. This would have been complete eons ago, too, if it hadn't been for basically everything changing following the divorce.

When I die, I want to be described with words like persistent and tenacious.

Hmmm... Tenacious DNA.
Hard to get time for Dashcat work other than weekends. I'm taking a step back to take a bunch of steps forward. I'm moving the half-rack out of the basement and right next to the frontend in my living room on the ground floor of my house.

I need to be able to see how the cluster reacts without running downstairs each time I change something. I have the same six nodes in the rack. I'm changing some configuration stuff around.

I learned that a single top-end server with a pair of 16-core Opteron CPUs will bench at 240GFLOPS, or roughly half the speed of my whole cluster. I have an upgrade path. That's not a problem.

What is a problem is the power situation. I don't know that I have enough capacity on a 240V 30A circuit to run the Mark 2 version of Dashcat. Fortunately, I face no such limitations with the remote operation. I learned, during an upgrade project where I replaced every outlet with child-safe versions, that every outlet on the East wall of my living room shares the circuit that runs the lights (only) in my basement lab. Those lights consume <100W. I have over 1700W of unused capacity on that circuit now that my home theater isn't plugged into it anymore.
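For what it's worth, that headroom figure checks out. A quick sketch of the arithmetic, assuming a standard US 15 A / 120 V branch circuit (the post doesn't actually state the circuit rating, so that part is an assumption):

```python
# Back-of-envelope power budget for the shared living-room/lab-lights circuit.
# ASSUMPTION: a standard US 15 A / 120 V branch circuit (not stated in the post).
circuit_w = 15 * 120    # 1800 W theoretical ceiling for the branch
lights_w = 100          # basement lab lights, stated as "<100W"

headroom = circuit_w - lights_w
print(headroom)  # 1700
```

Since the lights actually draw a bit under 100W, that lines up with the "over 1700W of unused capacity" figure.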

Now to get the half-rack up the stairs.

Welcome to my Hell (in-process)...


- 4 AKN -
Your submission could not be processed because the token has expired.

Please push the back button and reload the previous window.

Well never mind, then.

My Mom died yesterday, but [H] couldn't hold my login long enough for my post and I couldn't recover it. Gone forever. I'm doing this on my own with no updates now, not that anyone is watching anyway. Fuck this shit. I'm out.

Sorry for your loss. That is completely rough, chief. Please don't go over to YouTube just because [H] timed out on a post!
I suspect there are still a fair number of us watching and praying for you...
I'm always watching and waiting for updates, DNA, but if you have more important things to do than sit and write a long forum post for us every few days, then I think we all completely understand and can deal with that. I'm so sorry for your loss, bud. Keep your chin up.

I'm not stopping. Anger is either fleeting or all-consuming. Mine is and always has been the former. That is the Hamson (Mom's side) way. We get the job done no matter what tries to stop us. That will never change.

I'll start copypasta-ing my posts to notepad or something to avoid the failure point of, I guess, cookies.

I need to tell a story. I'll start a new post for that.

And, from the bottom of my heart and soul: Thank you.
I am truly sorry about your loss.
I have been a lurker on this forum for a very long time (years before I joined and lurked some more) and I have followed this thread and your story since the beginning.

I basically never post, but I wanted to let you know there is still interest and we're rooting for you.

Sorry for your loss.
If it means anything, I've subscribed to this thread since your first post and every time I see an update, it makes my day.

As a few people have already said, I really do think that there are more people than you know here who have always been rooting for you and eagerly awaiting any sort of update, whether it be about your servers, projects, cars or general life, myself included.

All the very best in whatever you do for your future, but I can assure you that if you do continue with this thread, we will all be here supporting you :)
Intake - Compression - Power - Exhaust

"I'll be the guard-cat of all your fever dreams." #Immortals


This is Mom's cat, Sassy. As usual, she's close by. That's Mom's leg you see in the background. I took this photo in 2013.
I'm recovering. This feels so weird. It's a combo of loss and regret intermixed with relief and redemption. I won't pretend to understand it. I want to curse because that's just me, but it would be a distraction.

I'm shattering a driveshaft this drag season...
The wide-band email I sent to my fellow warriors at work. Anything in [] or () is a sidenote specific to [H] only:


A lot of you already know the story about my Mom's battle, but her battle is done. At 2PM, yesterday (2/25/15), my Mom passed away. She had undergone procedures where Varian's X-Ray imaging gear and Merit's angioplasty gear were involved. We(!) did that. [Literally! We build for Varian (I test and T-shoot the WiFi boxes, but I also test many other Varian Medical boards) and Merit Medical Maverick and Blue Diamond boards.] We are the Vanguard.

It is entirely to the credit of Huntsman Cancer Institute for giving my family and me three years and more to say goodbye. Her doctor says she holds the survival record for POEMS syndrome patients.

However, it is to her credit for being a living guinea-pig in order to advance treatment methods for blood cancer and, overwhelmingly important, POEMS syndrome, which strikes so suddenly as to give a survival rate of a weekend, which we expected long ago.

Basically, beyond Mom's 2001 Hodgkin's Lymphoma battle, which stemmed from 1999, everything really shifted gears in mid-September 2011 (coincidentally, mere days before the only woman I've ever loved broke up with me) [I hope nobody else has to deal with that shock-load]. At that point, we knew we were eventually going to lose her (that was written in stone; "How long?" was the sole question). In February, and then May of 2012, everything went to crap. We thought she only had the weekend, but the ninjas at Huntsman brought her back. And we would get almost three years beyond that for Mom to get to know her only granddaughter, my kid, Tesla.

A Stem Cell transplant bought her time by rebooting her immune system. It wasn't a cure, though, and we of the Hamson/Young/Peavey clans never expected it to be.

During her later life, she was protected by her very-astute cat, Sassy, a freak-accident Siamese who would nudge her when she stopped breathing. Sassy passed away last year. That cat was amazing.

Time grows short, but Mom's will stated that she didn't want a preachy service and her ashes will be spread at Wade Lake in Montana. For my part, everyone on every shift is being treated to donuts because Mom was always about the little things that boosted happiness.

Rest In Peace, Julie Hamson Young -- November 9, 1961 - February 25, 2015.

I'll be back Wednesday.

Sorry for your loss, Dude.
Even when the conclusion is expected, it's not easy to accept the finality. It gets easier as time goes on and you recount the happiest memories.

I read a few older posts about your decision to move to YouTube. You wrote that you didn't think anyone followed your thread anyway. I've been a follower since you started building the shed next to your manufactured home. I don't have a clue how it all works, but you had my interest since the early days.

But whatever makes you happiest, follow that path.
I'm also very sorry. My father took several years to finally pass so it wasn't a surprise but it still sucks. I've also been following you since the beginning and I'm barely holding on to most of it. Thank you for continuing to share your saga here.
Thursday, I received my inheritance from my Mom...

Basically, I could take years off from work and be fine. That, however, is not how I roll. Being a lazy shit is no way to honor her, nor be a role model for my child. Yesterday, the first day of Spring, which came early here in Utah for some reason, I began a quest.

I haven't posted a relevant photo in a long time. If you look at the code in this post, you'll notice the domain name is different. Today, I dropped USD$93 on a claim at fryode.com for my pursuits over the next decade. Lovenote Digital is multimedia. Fryode Electronics is hardware.

What's a "fryode"? Three-part story.

1. A fair portion of people know a diode is something Electronics-related. In modern parlance, as in "not-vacuum-tubes", a diode is a Silicon P-N junction (while a transistor, the most important component, period, is a P-N-P or N-P-N junction sandwich).

2. Everyone knows the dreadfully-distinctive smell of burnt electronics.

3. Frying is a cooking method.

A Fryode can also be referred to as an SED (Smoke Emitting Device). The term isn't limited to Silicon devices. While your standard, and obsolete (f-ck you, Thomas Edison) light bulb is an intentional example, nobody wants to see an unintended LER (Light Emitting Resistor).

Basically, if you smell a fryode and your gear stops working, I fix it. I also collect things where only one part has gone fryode while the rest is usable. I part that stuff out and sell the good bits. And, if I can fix the fryodes on whatever board went rogue, I do it and sell that, too, as long as I can ensure it will keep working when fixed.

I'm starting a new YouTube channel lineup as soon as I can figure out how to do it. It's all based on my new Fryode brand. Believe it or not, I'll even delve into more cooking stuff since proper fuel for my overclocked (controlled ADD/ADHD with the Hyperfocus trait) brain is so crucial.

I don't like half-assing anything. If my only choices are to half-ass or stay silent, I stay silent. And that brings me to another point:

In the motorhead circle, if all you do is look the part, you're called a ricer. In my industry, if you show up with a Harbor Freight or R(adioS)hack multimeter, you're basically the same thing. Same with soldering irons of the same brands.

Show up with a Meterman/Agilent/Fluke multimeter and a Weller/Hakko/Metcal soldering iron, and nobody will f-ck with you. (Lesson: Show up with Fluke and Metcal.)

So, yeah... I took my first steps toward the Major League.

In 1994, just before I turned 14, my Grandpa, Floyd J. Hamson, bought me my first digital multimeter. Yesterday, 3/20/2015, my Mom bought me my second one that's also an oscilloscope. I just wish she were around to see it. :(


From the practical/frugal standpoint: This kit in the standard form goes for $1325 at this very moment. The add-on kit that came with this (Needed!), with the storage hardcase, isolated USB interface cable (so you don't nuke your computer if something goes wrong), and data acquisition software is $400. To find them together is almost impossible. All-in, this new equipment cost me $650. Essentially, I got the Fluke 123/003 20MHz Scopemeter, probe kit, and charger for $250. Calculating for inflation, my $95 meter from 1994 would be $152 today. Less than $100 on top of that bought me this. Thank you, Mom. I miss you so much.
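As a sanity check on the deal math above, with all figures taken straight from the post:

```python
# Fluke ScopeMeter bundle: list price vs. what was actually paid.
list_main = 1325     # standard kit, list price "at this very moment"
list_addon = 400     # hardcase + isolated USB cable + software add-on kit
paid_total = 650     # all-in cost of the combined bundle

list_total = list_main + list_addon    # combined list price
scope_only = paid_total - list_addon   # effective price of the scope portion
savings = list_total - paid_total      # total saved vs. list

print(list_total, scope_only, savings)  # 1725 250 1075
```

So the scope kit effectively cost $250 against a $1325 list, which is why "Essentially, I got the Fluke 123/003 ... for $250" holds up.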
I will be posting my first video to my new YouTube channel today. In this first video, I show how to repair a Shimano Deore LX front shifter that shifts between the two smallest chainrings just fine, but won't upshift to the largest chainring. As a bonus, for those who use their large chainring solely as a bash guard and prefer to disable it (to avoid throwing the chain in the event of an accidental upshift onto broken teeth): doing the reverse of what I show is easy, and entirely reversible.

Last night I bought an upgrade from my Sony Vegas Movie Studio HD package (Imagination Studio Suite 2). I need Vegas Pro in order to work with footage larger than 1080p, of which I will have much. I was ready to drop $600 on Vegas Pro 13, but the $800 Vegas Pro 13 Suite package was a far better deal in terms of time savings from being able to automate a lot of repetitive work (and get Soundforge 11 Pro included). Ultimately, I found a discount that let me upgrade using my current software and end up with Vegas Pro 13 Suite for $410. Half price is pretty good.
No surprise I didn't post the video on time. I went to my Dad's house to grab my Mom's computers. I'll freely admit I broke down. I break down to build up. That's why I'm up just before 2AM Utah time.

The upgrade planned for Mom's computer is nothing short of glorious. I'm glad it has a PCIe 16x slot.
We got your back bro, if you need anything from me, even in terms of YouTube help let me know....
I bought an upgrade to my desktop rig as part of the gear I need for the video work. Newegg was running a deal on Crucial M500 960GB SSDs for $289 each with a coupon code. I scored two and a 4-bay Enermax 2.5" RAID backplane that fits in a single 5.25" slot.

With the pair of drives installed, connected to my P6T Deluxe V1 SAS controller, set up with Windows RAID 0, and formatted, I turned them loose in Sandra. Result: Pegged at 200MB/s. Why? Because the SAS controller on that 2008 motherboard is connected to a single PCIe 1.0 lane. Inconceivable!

Connecting same to the onboard six-port SATA controller, which appeared to have 4 PCIe lanes, I ended up with 537MB/s. Not bad, but short of the spec sheet. Why? Turns out the X58 chipset has SATA-II ports and I saturated the hell out of them. I need a SATA-III controller to get full speed. But that requires a card... A card of about $200 in price for a decent one with legit RAID.
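Both plateaus make sense against the theoretical bus ceilings. A rough sketch; the 8b/10b encoding figures are the standard PCIe 1.0 / SATA-II numbers, not something from the post:

```python
# Theoretical ceilings for the two controllers benchmarked above.
# Both PCIe 1.0 and SATA-II use 8b/10b encoding (8 data bits per 10 line bits).
pcie1_lane_mbps = 2.5e9 * 8 / 10 / 8 / 1e6   # one PCIe 1.0 lane  -> 250.0 MB/s
sata2_port_mbps = 3.0e9 * 8 / 10 / 8 / 1e6   # one SATA-II port   -> 300.0 MB/s

# Two SSDs striped in RAID 0 across two SATA-II ports:
raid0_ceiling = 2 * sata2_port_mbps          # 600.0 MB/s theoretical

print(pcie1_lane_mbps, raid0_ceiling)  # 250.0 600.0
```

The measured 200MB/s (single PCIe 1.0 lane) and 537MB/s (two SATA-II ports) both sit plausibly below those ceilings once protocol overhead is counted.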

What to do? I don't want to choke Vegas Pro. I have tuned the hell out of my machine, but it's old now. My i7-940 weighs in at 102GIPS and 55GFLOPS (why so slow? It should be 50% faster on FPU than that) with 24GB/s RAM bandwidth (I tuned the shit out of my five asymmetrical 1600MHz sticks, running them at 1664MHz).

I did some homework and overclocking an i7-5960X to 4.6GHz would supposedly grab me 364GIPS, 222GFLOPS, with 70GB/s RAM bandwidth (quad-channel... holy hell, dude).

I'd dock the proc in an Asus X99-E WS mobo and stock it with a decent $400-$450 32GB DDR4 set rated at 2800MHz.

But that CPU/Mobo/RAM combo would be $2000.

That got me thinking about the i7-5820K. It clocks as high as or higher than the $1050 CPU, but costs about $400. It only has six cores and is missing 12 of the 40 PCIe lanes, but that's still 75% of the raw power for $650 less right off the bat. Sure, I'd be castrated if I tried to run SLI or Crossfire, but I never have, and Vegas will only use one GPU for acceleration anyway. I could even step down to an X99 Deluxe motherboard without giving up much while saving $130 more.

That combo would cost me $1140.
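The 5820K value argument boils down to a couple of lines, using the prices quoted above (and assuming, as the post does, that per-core performance is comparable between the two chips):

```python
# i7-5820K vs i7-5960X: raw-power-per-dollar comparison from the post.
cores_5960x, price_5960x = 8, 1050
cores_5820k, price_5820k = 6, 400

raw_ratio = cores_5820k / cores_5960x   # same arch, similar clocks assumed
cpu_savings = price_5960x - price_5820k

print(raw_ratio, cpu_savings)  # 0.75 650
```

Six of eight cores is where the "75% of the raw power for $650 less" figure comes from.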

My Radeon 5870 is old and might be having problems, so I looked into what will accelerate Sony Vegas and learned of the $300 Radeon R9 290, which cuts render/encode times in half, bare minimum. As project complexity increases, there's more for the GPU to accelerate.

At any rate, if I could stop my machine from BSODing with the atikmpag.sys thing, I'd probably wait a few months. It was really bad when I still had Flash installed, but it still happens once in a while. I don't want to lose my work. One crash even corrupted one of my SD cards full of photos. I've yet to run recovery on it.
I need to quote something. This will take a while. Since this is the 10th Anniversary year of when the OSP275FAA6CB Opteron-DC 2.2GHz CPUs in DashcatAKN / Dashcat were intro'd, I thought it only appropriate to update my May 30, 2010 comparison post. Five years has become ten. Good thing it was meant to be proof of concept from the start!
I've cut a piece of an article and am giving it an update comparing it to my own render farm.

URL: http://findarticles.com/p/articles/m...4/ai_17812444/
Never mind. The link is dead. You can get the same text with a quoted Google search of any obvious phrase from it, though.

Pixar's RenderFarm

"Sun worked closely with a team from Pixar to create its RenderFarm, which serves as Pixar's central resource of computer processing power. The RenderFarm uses a network computing architecture in which a powerful SPARCserver(TM) 1000 acting as a "texture server" supplies the necessary data to the many rendering client workstations needed to complete the rendering process."

Why does it shake like a Top Fuel Dragster?

Nobody worked closely with me. I'm using outdated 2005 hardware that was destined for a recycler. Luckily, I got some assistance from [H] member kogepathic, by way of AMD forum member AndersN, which netted me a BIOS that gave my motherboards a shot in the arm for double the performance capability. I don't need an amazing server to handle the load of ten compute nodes, or even twenty if I decide to go that far (note: I wrote this when I was planning ten nodes and ended up with eight).

In 1995, they were using the 10BaseT Ethernet their SPARCstation 20s came with. Okay, maybe 100BaseT if they added expansion cards, but they didn't. A texture server... probably because the nodes were limited to 512MB RAM. Likely so, since the SPARCserver 1000 could take 2GB RAM. Each of my nodes will take 2GB almost for free and 8GB without costing me too much, while the server will take 16GB.

Hell with that. I have sixteen. I'm not an Enterprise operation at this point in time. I can afford to have a node go down. That's the idea. If the heartbeat doesn't trigger for a long time or other input is received, a la "This is MAC ID: DE:AD:BE:EF:42:69 and I had a BSOD in Linux. Yeah I know that's odd, but it happened... man." It gets a reset. If that doesn't work and I don't get it back, I get a flag to pull and disable (farm figures out disables by it not being there to talk to, like a shitty boyfriend.) If one node out of sixteen goes down, the farm is still at 94%. They are not snowflakes, but the system and I try to get them back online like they are.

The RenderFarm was assembled by Sun and Pixar engineers in less than a month and drew upon Sun's own experience in setting up "farms" of many systems linked together. Some facts about Pixar's RenderFarm and the computing aspects of "Toy Story":

Theirs required engineers. Mine required... me, power tools, good timing. That, and a lot of determination. I wish I could do it in a month. Hell, it's taking me half a year the way I've planned it out. (This was written when my target completion date was April 29 to coincide with Ubuntu 10.04 LTS being released).

2010... Almost five years ago as of this point. 2011 was supposed to be the completion, but my divorce brought that to a halt, right-quick. The tunnels I dug by hand, er, shovel, with the PVC cord pipes in place... that was a lot of work. I need a girl who understands that for what it is...

-- The RenderFarm is one of the most powerful rendering engines ever assembled, comprising 87 dual-processor and 30 four-processor SPARCstation 20s and an 8-processor SPARCserver 1000. The RenderFarm has the aggregate performance of 16 billion instructions per second -- its total of 300 processors represents the equivalent of approximately 300 Cray 1 supercomputers.

That's right! One of their processors equals a Cray 1. Yes. In 1979, Popular Science said the Cray 1 "will cruise along at 80 MFLOPS." That's an aggregate speed of 24GFLOPS. Mine, at 281GFLOPS (492GFLOPS in 14-node form), wipes the floor with theirs. Granted, I'm doing this fifteen years after the Toy Story farm was built, with technology from 2005. One AMD 275 Dual core CPU (17.6 GFLOPS) almost matches their entire 1995 farm--and I have sixteen of them. (28 now)
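The aggregate numbers in that paragraph can be reproduced directly; the 80 MFLOPS Cray 1 figure and 17.6 GFLOPS per Opteron 275 are the ones used above:

```python
# 1995 Pixar RenderFarm vs. the Opteron cluster, in GFLOPS.
cray1_gflops = 0.080                  # Cray 1 "cruises along at 80 MFLOPS"
pixar_aggregate = 300 * cray1_gflops  # 300 CPUs, each ~ one Cray 1

opteron_275_gflops = 17.6             # one dual-core 2.2GHz Opteron 275
farm_16cpu = 16 * opteron_275_gflops  # 16-CPU form
farm_28cpu = 28 * opteron_275_gflops  # 14-node / 28-CPU form

print(round(pixar_aggregate, 1), round(farm_16cpu, 1), round(farm_28cpu, 1))
# 24.0 281.6 492.8
```

One Opteron 275 really does come within spitting distance of the whole 1995 farm's 24 GFLOPS.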

No... 32 CPUs and 64 cores now. But it's being compared to technology that is now also 20 years old.

-- Each system is the size of a pizza box, and all 117 systems work in a footprint measuring just 19 inches deep by 14 feet long by 8 feet high.

My farm fits within four square feet and it's counter-top height. (Okay, that was with the short rack. It now fits within eight square feet and that includes the cooling system.)

Still the same size, really.

-- Sun is the price/performance leader, in Pixar's own rankings. The SPARCstation 20 HS14MP earned a rating of $80 per Rendermark (a Pixar measurement for rendering performance), while the comparable SGI Indigo Extreme came in at approximately $150 per Rendermark.

(Note: Sun? Oracle, now.) They're comparing it to an Indigo2 Extreme, not an Indigo "1". The article has a typo. They compared an I2 with a 200MHz R4400 CPU, the slow end of the I2 range. I know this because I own an I2 R10K-195 Impact. I don't know a thing about their metric, but since my entire rig is intended to fall within a budget of $3000 (note: still within that range)... I think I'm getting some good value here. I saw that the 30 quad-processor machines cost $47,395 each at the time. The 87 duals were $43,895 each. Okay, their CPUs ran at 100MHz in both dual and quad machines. I just did the math and found that their cluster, minus any discounts they may have gotten, cost $5.24 million. That's almost 1,750 times my own cluster budget.
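The $5.24 million figure is easy to verify from the quoted Sun prices:

```python
# Sun price list math for the 1995 RenderFarm, minus any discounts.
quads, quad_price = 30, 47_395   # quad-processor SPARCstation 20s
duals, dual_price = 87, 43_895   # dual-processor SPARCstation 20s

pixar_total = quads * quad_price + duals * dual_price
my_budget = 3_000                # the whole-cluster budget above

print(pixar_total)                     # 5240715
print(round(pixar_total / my_budget))  # 1747
```

5,240,715 / 3,000 comes to about 1,747, hence "almost 1,750 times".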

I've wasted more money than that on worse things over the years.

-- Using one single-processor computer to render "Toy Story" would have taken 43 years of nonstop performance.

So 80MFLOPS means 43 years? Let's do a little math here. I'm going to surmise their farm did it in 1/300th as long with 300CPUs. That's 52 days and eight hours. To render Toy Story on one of my AMD 275s would take 71 days and 8 hours. One dual-CPU node would take 35 days and 16 hours. My whole farm in 16 CPU form would do it in 4.5 days, using 22 dollars worth of electricity. (In 28-CPU form, 2 days 13 hours)

Just over two days now in 32 CPU form.
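All the render-time figures above follow from one simple model: the total work is fixed, so render time scales inversely with aggregate FLOPS. Using the 80 MFLOPS baseline and 365-day years, which is what reproduces the post's numbers:

```python
# Toy Story render-time scaling from the "43 years on one CPU" claim.
baseline_gflops = 0.080        # the single Cray-1-class CPU in the claim
total_days = 43 * 365          # 43 years of nonstop rendering

def days_for(aggregate_gflops):
    """Days to render, assuming perfect linear scaling with FLOPS."""
    return total_days / (aggregate_gflops / baseline_gflops)

print(round(days_for(300 * 0.080), 1))  # Pixar's 300-CPU farm: ~52.3 days
print(round(days_for(17.6), 1))         # one Opteron 275:      ~71.3 days
print(round(days_for(2 * 17.6), 1))     # one dual-CPU node:    ~35.7 days
print(round(days_for(16 * 17.6), 1))    # 16-CPU farm:          ~4.5 days
print(round(days_for(32 * 17.6), 1))    # 32-CPU farm:          ~2.2 days
```

Perfect linear scaling is optimistic for a real cluster, but it's the same assumption the original "52 days and eight hours" estimate makes.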

-- Each of the movie's more than 1,500 shots and 114,000 frames were rendered on the RenderFarm, a task that took 800,000 computer hours to produce the final cut. Each frame used up 300 megabytes of data -- the capacity of a good-sized PC hard disk -- and required from two to 13 hours for final processing.

300MB? That's it? Okay, granted, the hard disk I was using in 1995 was 420MB. I could fit 300MB in a small corner of my RAM, let alone a modern hard disk. My boot disks are microdrives 6GB in size and that's a desktop hard disk size from 1998. (again, obsolete data. I planned the microdrives when I was going to use the Verari nodes as-is. I have 160GB disks now, which makes this even more funny.)

My desktop machine has 20GB of RAM... and I find 32GB of DDR4 appealing for an upgrade, if not 64GB. That I can come that close to the same RAM in my whole cluster today is pretty remarkable.

-- In addition to the high-resolution final rendering, the RenderFarm was also used to generate the test images animators needed to plan and evaluate lighting, texture mapping and animation. Since fast response is key in doing tests, RenderMan could produce test frames in as little as a few seconds.

The film was rendered at 1536x922 pixels. I'm really only going for 1280x720, which is 65% as many pixels. I don't know how big of a deal the number of pixels actually is anymore.

It is a big deal now. If it wasn't, 4K wouldn't be a thing.

-- Scalability is built-in: the RenderFarm can be upgraded (with more processors and disk storage) to a nearly four-fold performance level, without requiring any additional space. The RenderFarm also integrates seamlessly with Pixar's existing computer network containing different types of machines.

Scalability isn't a luxury these days--it's a requirement. I wouldn't be worth a damn if I couldn't connect a lot of computers and have them cooperate. A 24-port gigabit switch will allow care and feeding of sixteen nodes with eight lines to anything else I want to link to, such as servers, workstations and NAS boxes. Two ports for a NAS box, two ports for a server and four ports for a shotgunned link to the workstation switch. I doubt I'll realistically need more than twenty nodes. With technology moving the way it is, I'll be able to replace the motherboards by the time the cluster is inadequate. Only the power supplies may need changing.

Stuffing a GPU into each node is really my only concern for the future.

Now Dreamworks:
Shrek render farm 2001

Product - Quantity - OS - CPUs - Description

SGI Origin200 - 406 - IRIX 6.5 - Dual R10000 180MHz - 512MB SGI RAM, 3U + 1 rackmount, 9GB HDD
PC Vendor #1 - 292 - Linux - Dual PIII 450MHz - 1GB PC-100 SDRAM, 2U rackmount, 39GB HDD
SGI 1200 - 324 - Linux - Dual PIII 800MHz - 2GB PC-133 SDRAM, 2U rackmount, 39GB HDD
PC Vendor #2 - 270 - Linux - Dual PIII 800MHz - 2GB PC-133 SDRAM, 1U rackmount 39GB HDD
SGI O2 - 190 - IRIX 6.5 - Single R10000 - 256-512MB SGI RAM, 9GB HDD


1482 cpus - 836 boxes - 443 dual processor Linux boxes, 203 dual O200s, and 190 O2s.

Pentium 3 CPUs manage one FLOP per cycle.

P3-450 boxes: 131.4 GFLOPS
SGI P3-800s: 259.2 GFLOPS
P3-800 boxes: 216 GFLOPS

SGI R10000 CPUs manage 2 FLOPs per cycle.

180MHz Origin 200s: 146.16 GFLOPS
200MHz (estimated) SGI O2s: 76 GFLOPS

828.76 GFLOPS total

By the same standard of manufacturer design numbers, my Opterons nuke four FLOPs per clock cycle.

16 2.2GHz DC Opterons: 281.6 GFLOPS

My farm is 34% the speed of the 2001 Dreamworks farm. That's a bit of perspective there. I would need 24 nodes of the same config to have that kind of performance. I'll keep what I've got. (note: No, I won't. 14 nodes gives me 60% of the Shrek farm's performance.)

16 node form is 563.2GFLOPS. Single machines are approaching that level of performance.
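Putting the whole Shrek-farm tally in one place. Note the quantities in the hardware list above are CPU counts, which is how 1482 CPUs works out to 836 boxes:

```python
# Shrek (2001) render farm vs. the Opteron cluster, recomputed.
def gflops(cpus, mhz, flops_per_cycle):
    """Aggregate GFLOPS from CPU count, clock in MHz, and FLOPs per cycle."""
    return cpus * mhz * flops_per_cycle / 1000

shrek = (
    gflops(292, 450, 1)    # vendor #1 P3-450s   -> 131.4
    + gflops(324, 800, 1)  # SGI 1200 P3-800s    -> 259.2
    + gflops(270, 800, 1)  # vendor #2 P3-800s   -> 216.0
    + gflops(406, 180, 2)  # Origin 200 R10000s  -> 146.16
    + gflops(190, 200, 2)  # O2s (200MHz est.)   ->  76.0
)

# Opterons: 4 FLOPs/cycle per core, 2.2GHz, two cores per CPU.
farm_16cpu = gflops(32, 2200, 4)   # 32 cores -> 281.6 GFLOPS
farm_28cpu = gflops(56, 2200, 4)   # 56 cores -> 492.8 GFLOPS

print(round(shrek, 2))                  # ~828.76
print(round(farm_16cpu / shrek * 100))  # ~34 (%)
print(round(farm_28cpu / shrek * 100))  # ~59 (%)
```

The 28-CPU ratio lands at just under 60%, matching the "14 nodes gives me 60% of the Shrek farm's performance" note above.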

This does not include my i7 rig, but I could recruit it.