The FAHBench numbers from the AT article are plenty to make a pretty good estimate of performance. It looks like the 290X's PPD will be right in line with a 780's, but it will use 30-40% more power.
Looks like a great gaming card and a terrible Folding card.
Even though this is an enthusiast board, it's built on a server foundation. The headers are designed for server fans (5W+ each), so running multiple normal fans (like the Noctuas that draw ~0.5W) is no problem.
The core 'worked' fine on AMD during alpha testing (by the end, anyway; there were lots of problems early on), but TPFs were more than double those on Windows, so they decided not to support AMD on Linux.
It's card dependent.
The single biggest thing you can do for stability on Core_17 units is underclock the memory. I have my cards at -500 (so a net 5000). Memory speed has zero effect on TPFs and a huge effect on stability. I could barely overclock my Titans at stock memory (6000), with the...
For folding, a 770 is a 680. You might be able to OC the 770 a little more than the 680 on average.
680s get 75-90 depending on how high you can OC and still finish 8900s.
Core_17 is hard-coded to do a checkpoint every 2%; it's not adjustable (and is separate from the global checkpoint setting). This 'feature' may or may not make it out of beta (I suspect it will).
Nvidia using a full CPU core is a driver issue. It won't change until Nvidia changes the default...
Where did you get this info?
The manual states that slots 1-4 are provided by CPU1 (slots 1/3 run at x16 if slots 2/4 are both empty, x8 otherwise; slots 2/4 always run at x8), while slots 5-7 are provided by CPU2 (slots 5/7 always run at x16; slot 6 runs at x8).
I don't think it matters.
I've tried getting fancy, splitting/pooling slots based on CPU and assigning the GPU core to particular CPUs, and I've never gotten any improvement in performance (and in many cases made it significantly worse).
I've never used a display for my Linux box, but I know...
He has said that, and I'm not sure where it came from, because no one I know of saw any speed advantage during the internal test of the Linux client (on Nvidia; AMD is a CF in Linux).
GPU Boost works in Linux (on every card I've tested). There's no appreciable difference between Windows and Linux in terms of TPF (clock for clock). So, all things being equal (OS costs not among them...), Windows makes more sense for a dedicated GPU box just because of the ease of...
I know the 16-DIMM-slot SM 2P boards say they will run with 128GB worth of UDIMMs (16 x 8GB), but I wouldn't count on it. That's asking a lot of the memory controllers (and note speed is limited to 1333 in that arrangement).
I use that motherboard and do bigadv with 4 GPUs in Linux. How many cores you give the GPUs depends on how many you have and how fast they are. Core_17 has a decent amount of overhead, but the total amount of overhead depends on how fast your CPU is (not an issue in the scenario you're...
Officially, the Phi requires a Xeon. I don't know if that's actually true or not. Also, the motherboard has to support 64-bit PCIe addressing, and most consumer boards do not. The boards you listed all support it, but you may still need a Xeon to drive it.
9 kW is the rated capacity (actually it's 8.88; the panels are rated at 240 W each and there are 37 of them). Since conditions are never ideal for all the panels (slightly different angles to the sun, etc.), my system peaks at ~7.8 kW. The system is integrated into the grid. On sunny days, during the day, I...
The total system is ~9 kW, which covers the roof that faces away from the street (it faces SW). When it's generating more than we're using, the meter spins backwards. My folding uses ~2.7 kW, so even in an ideal month the solar does not cover it. Add in an electric car, a son who lives to play...
It's a NUMA issue with multiple CPUs. The fix requires a kernel rewrite, which the Linux team is working on, but there's no ETA. I don't know how Windows deals with it, or whether it's an issue there.
I have solar; last month (May), it generated 1,600 kWh, enough to bring my power bill down to $350 or so. It's very sunny here, but electricity is ridiculous.
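To put the folding load in the same units (rough numbers, assuming a 30-day month and the ~2.7 kW folding draw I mentioned):

```python
folding_kw = 2.7                       # continuous folding draw
monthly_kwh = folding_kw * 24 * 30     # ~1944 kWh in a 30-day month
generated_kwh = 1600                   # what the array produced in May

print(f"folding alone: ~{monthly_kwh:.0f} kWh/month")
print(f"shortfall vs a sunny month: ~{monthly_kwh - generated_kwh:.0f} kWh")
```

So the folding draw by itself outruns the array even in a good month, before the house and the car get counted.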
It is in beta, so problems are to be expected. That being said, the vast majority of failures are OC related. 8900 is very hard on GPUs, and any OC (especially on Nvidia) is potentially too much (even factory overclocks). If you're running Nvidia, I'd recommend dropping to reference clocks, at...
I've heard rumors of as low as $450, but ~$500-550 is more common. But, there's also a significant sales history on ebay now, so the metric may have moved a little.
You can force Core_17 to use less than a full core (for Nvidia), but how much less (before you degrade performance) will depend on your CPU and GPU. Core_17 actually does a lot more than previous cores. There is still debug code, and it writes a checkpoint every even %. How much work Core_17...
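One generic way to do it on Linux (plain OS-level CPU affinity, nothing FAH-specific) is to pin the FahCore_17 processes onto one shared core so each only gets a slice of it. A rough sketch, assuming the core shows up as "FahCore_17" in /proc, that core 0 is the one you want shared, and that you run it as the same user as the client (or root):

```python
import os

TARGET = "FahCore_17"   # assumed process name of the GPU core
SHARED_CORE = {0}       # logical CPU the cores should share

for pid in filter(str.isdigit, os.listdir("/proc")):
    try:
        with open(f"/proc/{pid}/comm") as f:
            name = f.read().strip()
        if TARGET in name:
            # restrict this process to the shared core
            os.sched_setaffinity(int(pid), SHARED_CORE)
            print(f"pinned {name} (pid {pid}) to CPU(s) {sorted(SHARED_CORE)}")
    except (FileNotFoundError, ProcessLookupError, PermissionError):
        continue  # process exited or isn't ours; skip it
```

The driver still wants a full core (see above); pinning just limits how much of the machine it can take, and whether that's worth it depends on your CPU/GPU combo.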
Can you see how high it will go? There's plenty of thermal headroom there. I'd also consider dropping the memory OC; it doesn't help for folding and adds heat/stability issues.
It will reach an equilibrium, where the temperature will no longer change at a given point in the loop, but that does not mean the temperatures will be equal at all points in the loop.
Temperature delta is usually pretty small (assuming adequate flow), on the order of a few degrees C.
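To put a number on "a few degrees": the steady-state delta across a heat source is just heat divided by (flow rate x specific heat of water). A quick check with assumed but typical numbers (600 W dumped into the loop, 1 GPM of flow):

```python
heat_w = 600.0                        # assumed heat load on the loop, watts
flow_gpm = 1.0                        # assumed coolant flow, gallons per minute
flow_kg_s = flow_gpm * 3.785 / 60     # water is ~1 kg/L, so ~0.063 kg/s
cp_water = 4186.0                     # specific heat of water, J/(kg*K)

delta_t = heat_w / (flow_kg_s * cp_water)
print(f"coolant delta-T across the load: ~{delta_t:.1f} C")   # ~2.3 C
```

Which is why, with decent flow, the water coming off the blocks is only a couple of degrees warmer than the water going in.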
TPC does not measure actual temperature (AMD chips do not report it).
If you read the thread, they comment that it was boosting to a max of 1202. That's the same max most Titans will hit. Clock speed is limited by temperature and a hard power-draw limit. Since the 780 shares the same PCB and the same limits as the Titan, I'm guessing without a modified...
You probably dropped below 80% successful with all the errors (assuming you correctly entered your passkey when you re-installed things). So, two options: successfully complete enough WUs to get back above 80%, or request a new passkey and successfully complete 10 work units (in the long run...
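For reference, the rule behind that 80% figure boils down to a simple check (this is just an illustration of the eligibility rule as I understand it, not FAH's actual code):

```python
def qrb_eligible(wus_with_passkey: int, wus_assigned: int, wus_successful: int) -> bool:
    """Quick Return Bonus eligibility, roughly: at least 10 WUs completed
    with your passkey, and at least 80% of assigned WUs returned successfully."""
    if wus_with_passkey < 10 or wus_assigned == 0:
        return False
    return wus_successful / wus_assigned >= 0.80

# a run of errors can drop you under the line:
print(qrb_eligible(wus_with_passkey=50, wus_assigned=70, wus_successful=54))  # False (~77%)
```

Either way, the fix is stacking up successful WUs.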
There are two possible Phi usage scenarios:
1: As an OpenCL device. In this setting, the PPD will likely be in the range of top-end GPUs (really hard to predict, but based on the relative performance of functioning OpenCL programs on the Phi and on Intel CPUs, and comparing CPU performance to GPUs on the...
At the moment, there's no way to use a Phi for FAH. Intel's OpenCL driver does not currently play nice with OpenMM (which the new GPU core, Core_17, uses). Intel and FAH have work to do to fix this. At some point it will happen (the motivation for FAH is not the Phi per se, but adding...