EvilAlchemist
Projects can sometimes take a long time to complete. Even then, Stanford might replace the 2665s with similar or even slower WUs worth fewer points; we don't know. The best option right now, if we want to keep our PPD up, is to install different client types. That might require purchasing more hardware, but it will also mean more stable production output. A diverse portfolio is probably better than the eggs-in-one-basket approach.
So the more we get done, the sooner we get a new project.
Well, look on the bright side: some of us are getting dual 2665 WUs on our quads and not a mix of different WUs (just assuming this based on my own experience).
I am not getting dual 2665 on all my boxen.
I get a 3065 / 3062 every so often, but very rarely.
I did catch a few Stanford people saying that 2665 scales horribly on FahCore_a1.
It was really designed for FahCore_a2. That is why 2662 runs so much better.
They need to get that core out in the wild, instead of keeping it in the lab.
I agree. They need to get the FahCore_a2 outa' the damn lab and into the "wild and wooly" public arena so's we can break it. (Only kidding about the "damn" and "break it" parts; I know the lab people at Stanford are only doin' what's right.)
I gots to say, I hate the WU 2665 under FahCore_a1, but I've done about 5 of 'em and I'll do however many more. While the pernts are great, I'm more worried about the science.
Please release the FahCore_a2 when you think it's ready, just please hurry up.
Yes, the site has been down since last night. I don't know the problem, but they have experienced issues a number of times in the past few months.
+1 for getting the A2 core out ASAP. After about 34 project 2665 WUs I'm ready for some more efficient crunching. Quads have been around for a while now and are only getting more numerous. We need a FAHcore that will scale efficiently to all 4 cores and beyond. It's in Stanford's best interest to ensure that the software we're using is as efficient as possible; otherwise they're just wasting their own resources.
Off topic- has anyone else noticed that the extremeoverclocking stats website has been down since last night? Are they doing some maintenance or is something else at work here?
Only 34??
http://fah-web.stanford.edu/cgi-bin/main.py?qtype=userpagedet&username=Sunin&teamnum=33&prange=2000
Try over 150... lol.... ouch.. that doesn't include the 40 or so I churned during the Chimp Challenge!
The only issue with the 2665 I have seen, which others have reported as well, is the inordinately long result upload time. There's a reason for this, and it was mentioned at the FCF.
I'm getting a *lot* of recurring EUEs with the 2665 series WUs, on bone-stock machines, and am getting pretty tired of dealing with them, myself. Are the rest of you guys (and gals) seeing issues with this run, or am I just riding the bad-luck train these days?
These WUs keep taking so long on my system that they end up just starting over again...
What? So after a while they start over?
Twice the 2665 has gotten up to ~90% after taking a day or two, and then BOOM! It starts deleting the work load, or whatever, and starts the same or different WU again. One time it started a different WU, but the other times it started the 2665 one again.
Right now, one of my SMP clients that is running the 2665 has been working since June 11th... and it's at 92%... It's been about 2 1/2 days since it started?
Also, it's JUST that WU that does it. Everything else works great and completes without a problem.
@echo off
cls

:: Edit the folder paths below to match your own setup.
:: xcopy switches: copy subdirectories (including empty ones), continue on
:: errors, copy hidden/system files, create the destination dir if missing,
:: overwrite read-only files, and suppress prompts.
set backupcmd=xcopy /s /c /d /e /h /i /r /k /y

:: Build a timestamped folder name (YYYY_MM_DD_HH_MM). The %date% substring
:: offsets depend on your system's date format, so adjust them if needed;
:: the "if" line zero-pads single-digit hours, which %time% pads with a space.
set hour=%time:~0,2%
if "%hour:~0,1%"==" " set hour=0%time:~1,1%
set folder=%date:~10,4%_%date:~4,2%_%date:~7,2%_%hour%_%time:~3,2%

echo ### Backing up directory...
set drive=D:\Folding@Home\BackupSMP1
%backupcmd% "D:\Folding@Home\FoldSMP1" "%drive%\%folder%"

echo ### Backing up directory...
set drive=D:\Folding@Home\BackupSMP2
%backupcmd% "D:\Folding@Home\FoldSMP2" "%drive%\%folder%"

echo You have done a Killer Backup of your folding files. Congrats!
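For anyone curious, the timestamped-folder naming used in the batch script above can be sketched in Python for clarity (the helper name here is hypothetical, not part of the original script):

```python
from datetime import datetime

def backup_folder_name(now=None):
    # Mirror the batch script's YYYY_MM_DD_HH_MM pattern. strftime
    # zero-pads every field, which is what the hour-padding "if" line
    # in the batch script has to do by hand.
    now = now or datetime.now()
    return now.strftime("%Y_%m_%d_%H_%M")

print(backup_folder_name(datetime(2008, 6, 13, 9, 5)))  # 2008_06_13_09_05
```

Because the fields are zero-padded and ordered year-first, the backup folders sort chronologically in Explorer, which makes finding the most recent snapshot easy.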
It may have been a new WU but the same project. I guess you're receiving a lot of Early Unit Ends, then? Perhaps your system is overclocked a little high for the SMP client and this time of year; lowering the frequency a bit might help with stability. It could also be that your combination of hardware doesn't like this particular project; see my reply at the bottom.
The 2665 WUs process very slowly, especially if only two cores are working on each one, as appears to be the case with your dual-client setup. That's why Stanford gave the 2665 a longer 6-day deadline instead of 3 or 4. Don't worry, 2 1/2 days isn't that long for a 2665; I have a system that takes nearly double that time.
Other people have reported 2665s crashing a lot too, so you're not the only one. Evil Alchemist posted about his issues and is understandably upset about it. For some reason I've had fewer problems with this WU than with the 30xx project WUs, which were plaguing most of my systems with EUEs a while back regardless of system architecture or frequency. The 2665 takes a long time, but it is stable and always completes on my systems... :shrug:
Hmm, I know my system itself is very stable. My Q6600 is only at 3.00GHz, the temps go from 20C to 50C (idle to load), and I don't have any problems with any other work units or programs. This WU should finish tomorrow; hopefully it will go through fine, heh. We shall see!

OK, then keep everything as it is for now. You probably received a bad crop of WUs, like I did a while back with the other projects. I knew my systems were stable because they had been battle-tested for nearly a year, lol...