P7810

AndyE

One of my AMD 7970 (GE) cards picked up a P7810 unit today.

For those who are interested:
TPF (P7810) is 2m30s. According to the bonus calculator this equates to 76k ppd.
For comparison: the very same card achieved 105k ppd with P8900 units.

ppd of P7810 seems to be approx 72% of P8900 on this AMD GPU.

Is this in line with other folders' experience?
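
For anyone curious how the bonus calculator gets from a TPF to a ppd figure, here's a rough Python sketch of the quick-return-bonus maths as I understand it. The base credit, k factor and timeout below are placeholders rather than the real P7810 values; the actual numbers come from the project summary.

Code:
import math

def estimate_ppd(tpf_seconds, base_credit, k, timeout_days, frames=100):
    """Rough QRB estimate: credit = base * max(1, sqrt(k * timeout / elapsed))."""
    elapsed_days = tpf_seconds * frames / 86400.0
    credit = base_credit * max(1.0, math.sqrt(k * timeout_days / elapsed_days))
    return credit / elapsed_days  # credit per WU times WUs completed per day

# Example: the 2m30s TPF above, with made-up project constants
print(round(estimate_ppd(tpf_seconds=150, base_credit=1800, k=2.0, timeout_days=3.0)))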
 
There are reports of NVidia cards suffering a drop in points over at FF, but I have yet to see an AMD report.
 
I don't know if this is a similar project, but one of my 670's just picked up a 7811. So far, it appears to be averaging slightly higher than an 8900 in PPD!

If I had to guess, it's about 3K PPD higher. That particular GPU usually scores about 85K PPD on 8900s and is currently averaging 88K on the 7811. TPF averages out at about 1:39, so it looks as though it's a bit smaller and faster than an 8900 as well.
 
Finally got one of these bad boys (P7810), and I have to say they are perfect, points- and runtime-wise, for my two 660's.


Under 6 hours to finish a unit, totaling ~10K per WU (the client currently says 5.5 hours).


Currently 7% in and showing a 3:30 TPF for an estimated 10,798 points on my very first one of these WU's. When both my 660's were folding P8900 units it was 12 hours per WU and a solid 85K PPD in the bank each day. Already I'm seeing a 3K boost, and probably reaching 90K PPD if I could get both GPU's to pick them up steadily (which I doubt).

*EDIT* FYI, all in all this is up from the 75K PPD I was getting back on the old P7663 WU's. Things are definitely improving for my mediocre GPU's :).

*EDIT2* Already seeing it climb even further. Too early to tell if it's clock skew, but now I'm going from 2 x 660's = 85K on P8900 units to 102K PPD with one P8900 and one P7810. The TPF dropped to 2:40 :eek:
 
Just got a 7810 after finishing that last 7811. PPD is even higher than on the 7811. It is currently being run by a GTX 670 and has an avg. TPF of 2:12 for about 92K PPD.

EDIT: One of my 7970's just picked one of these up. It currently looks like it's about 6K PPD less than an 8900. I'm getting an avg. TPF of 1:58 for about 108K PPD.
 
OK, the TPF is bouncing all over the place. Usually it takes about an hour or two to stabilize after a reboot, which I was forced to do in order to update the Core and get it to fold the new WU. But it originally started at 3:45, then dropped to 2:42, then went back up to 4:16, and it's constantly bouncing between exactly 2:42 and 4:16, nothing less or more.

Hopefully it irons out the craziness before I check out for the night in a couple of hours, but it's been doing it for the past few hours. The estimate fluctuates from 2+ hours to 5+ hours remaining, and PPD swings between 75K and 105K.

Now that both 660's have picked up P7810 units it should be 70K at the lowest and 120K at the highest, but until the damn TPF stops bouncing I can't know for sure. I'm pretty sure it's going to land toward the higher end, though. Damn, I hate restarts for this very reason lol.


*EDIT* 3 hours later, both GPU's are still bouncing between the same numbers every few minutes. I'll be curious to see how this plays out tomorrow on the PPD side of things. Nothing changed besides a normal restart. I wonder if the WU's are just really that inconsistent, or if anyone else is experiencing similar issues.
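
One possible reason for the bouncing (a guess at the mechanics, not how the client actually computes it): the TPF readout is an estimate over recent frame times, and the frame that spans a restart takes much longer than normal, so a short averaging window swings until enough clean frames push the slow ones out. A toy rolling average shows the idea:

Code:
from collections import deque

def rolling_tpf(frame_times, window=4):
    """Average the last few frame times, the way a simple TPF estimate might."""
    recent = deque(maxlen=window)
    for t in frame_times:
        recent.append(t)
        yield sum(recent) / len(recent)

# two 4:16 (256 s) frames around a restart, then normal 2:42 (162 s) frames
frames = [256, 256, 162, 162, 162, 162, 162, 162]
print([round(t) for t in rolling_tpf(frames)])
# the estimate starts high and only settles once the slow frames age out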
 
Thanks guys. Don't know why my 7970 went so low.

Just got another P7810 on one of my Titans. Based on the first 15% of execution, the average TPF is 1m25s, giving 178k ppd. This is about 8-10% higher than the P8900 units on this stock-clocked Titan.
 
The P7810 finished on the Titan in 2h20m50s
avg TPF = 1m24.5s
= approx 180k ppd (based on Bonus Calculator)

A P7811 finished on the Titan in 1h48m53s.
avg TPF = 1m5s
= approx 166k ppd

All numbers are with a stock-clocked Titan.
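
For anyone checking the arithmetic: these GPU WUs report 100 frames (1% each), so the average TPF is just the total run time divided by 100, and PPD follows from the credit actually awarded. The credit figure below is a placeholder; use whatever the client log shows.

Code:
def tpf_and_ppd(run_seconds, credit_awarded, frames=100):
    tpf = run_seconds / frames                     # seconds per frame
    ppd = credit_awarded * 86400.0 / run_seconds   # points per day at that pace
    return tpf, ppd

run = 2 * 3600 + 20 * 60 + 50                      # the 2h20m50s P7810 run above
tpf, ppd = tpf_and_ppd(run, credit_awarded=17500)  # credit value is made up
print(f"avg TPF = {tpf:.1f} s, approx {round(ppd / 1000)}k ppd")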

Andy
 
Checked my other GPU systems. About 40 WUs with 7810/7811 were processed yesterday.

Some numbers, based on completion times and ppd calculated with the Bonus Calculator
All cards on stock frequencies

GTX Titan, 7810: TPF = 1:25 to 1:28, ppd = 170k to 178k
GTX Titan, 7811: TPF = 1:05 to 1:08, ppd = 155k to 166k

GTX 780, 7810: TPF = 1:42, ppd = 135k
GTX 780, 7811: TPF = 1:15, ppd = 134k

AMD 7970 GE, 7810: TPF = 2:06, ppd = 99k
AMD 7970 GE, 7811: TPF = 1:36, ppd = 93k

Andy
 
The ppd on my 7970s seems to drop about 15% on 7810, based on the client's estimate.
 
Prior to this morning my 780 SC's were getting:

7810 - avg. TPF of about 1:28 for a PPD of about 169K
7811 - avg. TPF of about 1:06 for a PPD of about 162K

But then this morning, while analyzing the logs from these cards, I noticed the dreaded
Code:
Bad State detected... attempting to resume from last good checkpoint
error. Nothing terrible occurred from it aside from the client restarting from a previous checkpoint... and it only happened once so far. But, being slightly OCD about these things, I'm reducing the factory OC on these cards until I see they're completely stable. So the results I posted above may not be my final results from these cards. :(
 
Nothing terrible occurred from it aside from the client restarting from a previous checkpoint... and it only happened once so far.

Depending on when someone notices an OC-induced failure, the restarted WU will end up with lackluster points. It is for this very specific reason that I stopped OCing any of my GPUs, to maximize the points produced over a longer period of time. Since I stopped OCing, my daily production has gone up :)
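
To put rough numbers on that, here's the same placeholder QRB sketch applied to a failure: every frame redone after falling back to a checkpoint stretches the total elapsed time, which shrinks the bonus on top of the lost throughput. The constants are still made up, not real project values.

Code:
import math

def wu_credit(elapsed_days, base_credit=1800, k=2.0, timeout_days=3.0):
    return base_credit * max(1.0, math.sqrt(k * timeout_days / elapsed_days))

clean  = 100 * 150 / 86400.0          # 100 frames at 2m30s each
redone = (100 + 15) * 150 / 86400.0   # same WU with 15 frames repeated after a bad state
print(round(wu_credit(clean)), round(wu_credit(redone)))
# the restarted WU earns fewer points *and* took longer, so ppd takes a double hit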
 
Depending on when someone notices an OC-induced failure, the restarted WU will end up with lackluster points. It is for this very specific reason that I stopped OCing any of my GPUs, to maximize the points produced over a longer period of time. Since I stopped OCing, my daily production has gone up :)

True... but thankfully it has only happened twice in the past week or so.

I'm still backing off the OC little by little until I find a point at which it doesn't do it anymore. I'm still not 100% convinced this isn't driver-related, as there have been numerous reports of instability with the 320.xx series of drivers lately. But I guess I would have to revert my cards to stock in order to test that fully.

I know what you're saying though Andy. They don't do factory OCs with 24/7 folding in mind, that's for sure.
 
True... but thankfully it has only happened twice in the past week or so.

I'm still backing off the OC little by little until I find a point at which it doesn't do it anymore. I'm still not 100% convinced this isn't driver-related, as there have been numerous reports of instability with the 320.xx series of drivers lately. But I guess I would have to revert my cards to stock in order to test that fully.

I know what you're saying though Andy. They don't do factory OCs with 24/7 folding in mind, that's for sure.


I noticed the same. Core_17 is a lot more finicky; I used to overclock my GTX 260 while folding on Core_15 with no problems and no bad WUs. That completely screwed me when I first got my 660, because so many bad WU's got thrown out, and it would take another WU or two before things fixed themselves. I haven't had any driver crashes or anything with the 600 Series GPU's. On the GTX 260/Core_15 I knew when something happened, because it would slow anything GPU-related to a crawl, pop up the NVidia warning, and flicker the screen with Aero kinda half-in, half-out.

Fast forward to my upgrade to the 600 Series: I never see a driver crash, nothing flickers on the screen when something goes awry, and I barely notice a slowdown if I happen to be playing a video when it happens. I look at GPU-Z and it's running at 2D core speeds; restart, failed WU, new WU, things are good until it happens again. Rinse and repeat. I'm also only using the 314.22's, and it basically forces me to watch F@H every 10 minutes to make sure it doesn't crash. It doesn't matter if it's just the factory OC, an additional OC, or running below the factory OC; eventually it'll crash at least once a week, and often I happen to be on to catch it.
 
Just a heads up about Bad_State_Detected: I was getting them every so often, like everyone else, when I was running the 780s overclocked to 1110 MHz.

I've now reflashed the cards with a modified BIOS that disables Turbo Boost and fixes the voltage, so the cards run permanently at 1110 MHz, and I haven't seen a Bad_State_Detected since.

I really think the Turbo Boost/vcore drop every 2% on these WUs leads to a higher failure rate with overclocks. It's not an option for everyone, but it works wonders for me. I'm also running the 320.49 drivers. As a very curious note, though: the Gigabyte BIOS, when locked to those settings, still infrequently caused a glitch, but the EVGA BIOS, which has a higher revision number, is the one that has been running well for me.
 
The finickiness of Core 17 must be related to certain cards, generations of cards, or drivers, though. My 670's, which were factory OC'ed and which I OC'ed much further, have never had any issues with Core 17 units. Of course, I am still running driver version 311.06 on that system, which has been extremely stable and has given me no reason to upgrade. So I hope you can understand where I'm coming from when I say that OC may not have everything to do with it.
 
Just a heads up about Bad_State_Detected: I was getting them every so often, like everyone else, when I was running the 780s overclocked to 1110 MHz.

I've now reflashed the cards with a modified BIOS that disables Turbo Boost and fixes the voltage, so the cards run permanently at 1110 MHz, and I haven't seen a Bad_State_Detected since.

I really think the Turbo Boost/vcore drop every 2% on these WUs leads to a higher failure rate with overclocks. It's not an option for everyone, but it works wonders for me. I'm also running the 320.49 drivers. As a very curious note, though: the Gigabyte BIOS, when locked to those settings, still infrequently caused a glitch, but the EVGA BIOS, which has a higher revision number, is the one that has been running well for me.

I just dropped my core by 10 MHz when the last error occurred, and both of my cards appear stable so far with no discernible change in TPFs or PPD. Of course, it was about 12 hours or more before I encountered the first "Bad State detected" error... so I still have a lot of observation to go before I call things fixed. I hadn't actually thought of giving it a bit more vCore to see if that helped. I'm not quite at the point of wanting to disable Turbo yet.

As for the 320.49 drivers, I found them the worst: driver crashes, infrequent but random. I even tried rolling back farther than 320.xx, going so far as to INF-mod the 314.22 installer so it would install. Those appeared more stable for the short time I ran them, but my TPFs and PPD suffered quite a bit, so I stopped using them. I'm currently on 320.18, which has been mostly stable for me aside from the infrequent "Bad State detected" errors. If I get things stable on 320.18, I may move back to 320.49 if NVidia doesn't release something better in the meantime.
 
Just a heads up about Bad_State_Detected: I was getting them every so often, like everyone else, when I was running the 780s overclocked to 1110 MHz.

I've now reflashed the cards with a modified BIOS that disables Turbo Boost and fixes the voltage, so the cards run permanently at 1110 MHz, and I haven't seen a Bad_State_Detected since.

I really think the Turbo Boost/vcore drop every 2% on these WUs leads to a higher failure rate with overclocks. It's not an option for everyone, but it works wonders for me. I'm also running the 320.49 drivers. As a very curious note, though: the Gigabyte BIOS, when locked to those settings, still infrequently caused a glitch, but the EVGA BIOS, which has a higher revision number, is the one that has been running well for me.



I've been meaning to do this myself, but I'm a complete n00b when it comes to flashing video card BIOSes, and the risk just seems greater than with flashing most other hardware components. Then again, I have two video cards, so it should be worth looking into. I just can't believe NVidia wouldn't include this ability in the drivers without requiring a BIOS modification. I figured they would have added that with the 700 series, but nope, a forced BIOS mod is the only way.

I think it's the fluctuating clock rates that really cause a lot of the instability myself, but I can't know for sure. It just seems that if I could overclock previous cards while folding with no problems, and I can't on the new core + 600/700 Series, then that's the obvious culprit.
 
I've been meaning to do this myself, but I'm a complete n00b when it comes to flashing video card BIOSes, and the risk just seems greater than with flashing most other hardware components. Then again, I have two video cards, so it should be worth looking into. I just can't believe NVidia wouldn't include this ability in the drivers without requiring a BIOS modification. I figured they would have added that with the 700 series, but nope, a forced BIOS mod is the only way.

I think it's the fluctuating clock rates that really cause a lot of the instability myself, but I can't know for sure. It just seems that if I could overclock previous cards while folding with no problems, and I can't on the new core + 600/700 Series, then that's the obvious culprit.

It's card dependent.

The single biggest thing you can do for stability on core_17 units is underclock the memory. I have my cards at -500 (so a net 5000). Memory speed has zero effect on TPFs and a huge effect on stability. I could barely overclock my Titans at stock memory (6000); with the memory underclocked, they all run at 1200+ on the core. (I use Linux, so I set the clocks via a BIOS flash... I just use the stock BIOS and adjust the memory/boost clocks.)
 
On P8900 I think there was a noticeable performance drop when I had the memory down as far as it would go in Afterburner on my 7970 (600ish?). I have memory back at stock now.
 
Odd, I don't remember it making any appreciable difference when I adjusted the memory up or down on my 7970.

However, if you set your memory as low as it will go, maybe you've stumbled on some sort of threshold below which performance suffers...?
 
On P8900 I think there was a noticeable performance drop when I had the memory down as far as it would go in Afterburner on my 7970 (600ish?). I have memory back at stock now.

+1 to this; I too noted a dramatic increase in TPF when the memory was underclocked. It appears that core_17 is the first core to suffer from this, as standard practice for me had always been to lower memory clocks to reduce power consumption and heat output.

During the alpha testing of core_17, I started up a box with an existing profile that did just this. The project at the time had a typical TPF of ~3 minutes; mine was taking ~30 minutes. Once I worked out that the previous profile had been applied and changed it, the TPF dropped to ~3 minutes as expected.
 