He/she doesn't fold for us, but they sure have some major firepower at their disposal.
http://folding.extremeoverclocking.com/user_summary.php?s=&u=551390
So I am breaking in three racks; each rack has 4 chassis, and each chassis has 14 blades. Each blade is a dual-processor machine with 6 cores per processor and 36 GB of RAM. I will probably stop running the folding first thing Monday, as I am just trying to see whether I can provoke any more failures. About 26 out of the 168 have an issue of some sort, and the supplier will be in at some point to fix them. Each chassis is drawing about 3 kW right now.
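Quick back-of-envelope check on those figures (just a sketch from the numbers posted above; the per-chassis power is the ~3 kW stated, nothing measured independently):

```python
# Back-of-envelope totals for the farm described above (figures as posted).
racks = 3
chassis_per_rack = 4
blades_per_chassis = 14
failed_blades = 26               # "have an issue of some sort"
power_per_chassis_kw = 3.0       # ~3 kW per chassis while folding

total_chassis = racks * chassis_per_rack             # 12
total_blades = total_chassis * blades_per_chassis    # 168
failure_rate = failed_blades / total_blades          # ~0.155
total_power_kw = total_chassis * power_per_chassis_kw  # ~36 kW

print(total_blades, "blades,", f"{failure_rate:.1%}", "with issues,", total_power_kw, "kW total")
```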
"About 26 out of the 168 have an issue of some sort" - that's 15% failures within a week!
I thought there was no difference between bigadv under Windows and Linux recently?
So each blade is putting out around 38k PPD. I want to see bigadv numbers with Linux and the wrapper. 10 million points a day?
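Rough math behind that guess (a sketch assuming the ~38k PPD per blade above and all 168 blades folding; the 10 million is just the target being floated, not a measured number):

```python
# PPD estimate for the whole farm at ~38k PPD per blade (non-bigadv).
blades = 168
ppd_per_blade = 38_000

farm_ppd = blades * ppd_per_blade          # ~6.4 million PPD
ppd_needed_for_10m = 10_000_000 / blades   # ~60k PPD per blade

print(f"farm total: ~{farm_ppd / 1e6:.1f}M PPD")
print(f"per-blade PPD needed for 10M/day: ~{ppd_needed_for_10m / 1e3:.0f}k")
```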
Neil Young said: It's better to burn out than to fade away
From what I've heard it is quite the norm. I think the usually expected failure rate is just above 10% or thereabouts, CMIIW. 15% failure in hardware isn't as bad as some may think.
wtf, that's a lot of ppd right there. damn.
Imagine if the likes of Dell and HP did this for their burn-in testing (Patriot, you know what to do): 48 hours of folding on every server shipped.
Maybe over 3 years (with drives), but this is within a week, and isn't the idea of blades fewer moving parts (less to go wrong)? No PSUs, no fans, possibly no drives or interface cards <- all the main suspects.
um... if we ever peak past 5% we are gonna be pissed...
No, 2% is the 3-year acceptable failure rate... and that is on regular ProLiant... dunno about other manufacturers' blades... for the most part, ours have a pair of drives with a RAID card.
If you do double-density blades... 32 blades per enclosure... then you need a 2U storage pool to draw drives from... but yes, you could have 32 dual-hex machines in a 10U space... might need to have all 8 PSUs installed and all 10 fan units... but that would be some horsepower.
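For scale, the density math under those assumptions (32 double-density nodes in one 10U enclosure, each dual hex-core as above, four enclosures per rack as in the earlier post) works out like this:

```python
# Core count for a double-density setup: 32 nodes in one 10U enclosure.
nodes_per_enclosure = 32
cores_per_node = 2 * 6            # dual hex-core, as described above
enclosures_per_rack = 4           # 4 x 10U, matching the racks described earlier

cores_per_enclosure = nodes_per_enclosure * cores_per_node      # 384
cores_per_rack = cores_per_enclosure * enclosures_per_rack      # 1536

print(cores_per_enclosure, "cores per 10U enclosure")
print(cores_per_rack, "cores per rack of 4 enclosures")
```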
Normally I would agree that 2% should be the failure rate, but over the past year I've seen the failure rate of the HP blades go through the roof, while the IBM blades have been rock solid at a 0% failure rate.
Dunno what the difference in supply is, but HP seems to be cutting corners
We've seen the QC issue on both Intel- and AMD-based BL 400-series blades (BL460/BL490), but they had been really solid in reliability up until last May; right around then, all the new blades started having a really high failure rate. QC just isn't there on the new ones: memory and motherboard issues on about a third of all the new blades we get in = maddening.
and just like that, he was gone....
If he's a new folder he may not be aware of the ramifications. Many long-term folders don't even know the reasons why it's not good for the research to terminate WUs before completion. Another possibility is that his work may have required him to shut down these systems within a prescribed period. I don't know if that's the case, but if I had known beforehand that I only had a very limited amount of time to run the systems, I don't think I'd use them this way. And 168 SMP units have to time out... wish he could have -oneunit'd those clients and taken them offline cleanly instead of pulling the plug.
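For anyone who hasn't used it: -oneunit tells the classic v6 console client to finish the work unit it's on and then exit, so a box can be drained before shutdown instead of orphaning the WU. A minimal sketch of what winding one client down might look like, assuming the Linux SMP console client; the binary name, flag combination, and directory here are illustrative, not taken from his setup:

```python
# Sketch: run the v6 SMP console client so it finishes its current WU and
# then exits, instead of being killed mid-unit.  The "./fah6" binary name
# and the client directory are assumptions -- adjust for your own install.
import subprocess

subprocess.run(
    ["./fah6", "-smp", "12", "-bigadv", "-oneunit"],  # -oneunit: stop cleanly after one WU
    cwd="/opt/folding/client01",                      # hypothetical client directory
    check=True,
)
```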
I totally agree, and to think this project has been in operation for a decade... I really like that other DC projects, such as GIMPS and projects using BOINC, make it very easy to abort WUs so they can be immediately reassigned. For a project as popular as F@H, it has some of the worst client software and infrastructure of any DC project.
4M today so far... Looks like they are at it again. I wonder what the deal is?
That's 2016 cores and 6 TB of RAM.
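Those totals line up with the specs in the first post (168 blades, 12 cores and 36 GB each); a quick check:

```python
# Verify the "2016 cores and 6 TB of RAM" claim from the posted specs.
blades = 168
cores_per_blade = 12
ram_per_blade_gb = 36

total_cores = blades * cores_per_blade        # 2016
total_ram_gb = blades * ram_per_blade_gb      # 6048 GB, i.e. ~6 TB

print(total_cores, "cores,", total_ram_gb, "GB RAM")
```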
Now set to "-bigadv -smp 12". Thanks...
Well, it looks like he's shut down for the week.
Those numbers are pretty scary indeed. It's a lot of science for sure, and we can use the competition from lower-ranked teams to get us more motivated. Although I've seen people hit above the 1M mark before, I don't recall anyone producing this much. Plus, while it's good for the project, I do hope he's not putting out those numbers long term, haha.