I'm about fed up....

Crosshairs

I gotta tell ya, I'm having nothing but trouble with this new 6.22 shit....it crashes on all my machines and I'm tired of trying to figure out why.......this might just be the end for me..I'm fucking pissed.....

I get EUEs, file I/O errors, and all sorts of other shit.......if I ever get to 2 million, I'm calling it quits until this shit is fixed....

I need a friggen beer....




 
I pulled the SMP client off all of my boxes for that reason... GPU for the win!
Sheesh, sell 1 quad, add $50, buy 2 GPUs and have 9k PPD. Win!! No beta!!

 
If the box is not for anything but folding, I recommend notfred's disks. I've never had any real problems with them, even after this update. Otherwise, GPU FTW!

 
If the box is not for anything but folding, I recommend notfred's disks. I've never had any real problems with them, even after this update. Otherwise, GPU FTW!


That's because notfred's uses the Linux client.

 
I have four systems out of commission and don't have time to futz with them. The new clients and heat-related reboots have forced me to just shut them down. This couldn't have come at a worse time for me. Way too much to do and too little time. It's both SMP and GPU problems, and I just can't spare the time right now. Very frustrating!!
 
I'm too lazy to update the Linux clients on one machine so I'm hoping that nothing happens to the machine anytime soon so I don't lose folding time. It's using old beta clients at the moment.

 
I gotta tell ya, I'm having nothing but trouble with this new 6.22 shit....it crashes on all my machines and I'm tired of trying to figure out why.......this might just be the end for me..I'm fucking pissed.....

I get EUEs, file I/O errors, and all sorts of other shit.......if I ever get to 2 million, I'm calling it quits until this shit is fixed....

I need a friggen beer....





Trust me, you are not alone. There is a cure, though, and it's an easy one.

http://foldingforum.org/viewtopic.php?f=8&t=4369

At the top of that page you will find two download links. One is the installer for the old client, but improved, and the second is a replacement .exe file. Install the client, delete fah.exe, drop in the new file, and bingo, you are folding again.
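If you would rather script the swap across a few boxes, here is a minimal sketch in Python. The install and download paths are only examples, so point them at your own folders, and stop the client before running it:

Code:
import shutil
from pathlib import Path

install_dir = Path(r"C:\FAH")                # example install location
replacement = Path(r"C:\Downloads\fah.exe")  # the replacement .exe from the linked thread

old_exe = install_dir / "fah.exe"
if old_exe.exists():
    # keep a backup instead of deleting the old exe outright
    shutil.move(old_exe, install_dir / "fah.exe.bak")
shutil.copy2(replacement, old_exe)
print("Replacement dropped in; restart the client and you're folding again.")

Doing it by hand is just as quick for one or two machines; something like this only pays off if you have a pile of them.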

Contrary to the info we have received and what's posted on the download site, there is NO penalty for using this extended-date client. The implied penalty is that this client won't work with the new version two core (you remember, the one they promised about a year ago, I believe).

The main point is that what I linked is an effective client and won't give you all the grief of the current POS.

When and if the new core is released, and if it actually works with some client, I'm sure we will hear about it pretty fast and then we can change over……..but then I'm an optimist ;):D




 
I pulled the SMP client off all of my boxes for that reason... GPU for the win!
Sheesh, sell 1 quad, add $50, buy 2 GPUs and have 9k PPD. Win!! No beta!!


YIKES! That's scary; I just installed the SMP client to augment my GPU client......


SR

 
18 installations of 6.22, only 1 had issues... and for that I downloaded the update 5.91 and it worked...

/shrug

Seems to work most of the time! :)

 
18 installations of 6.22, only 1 had issues... and for that I downloaded the update 5.91 and it worked...

/shrug

Seems to work most of the time! :)


My 6.22s ran for two weeks problem-free, then promptly started with nothing but EUEs. I don't believe relic, along with many others, ever got 6.22 to work.

At any rate, as you found, there is a cure;)

 
I am also getting EUEs galore and haven't upgraded to the new SMP client. The temperatures here have dropped quite a bit over the past week, so I can't figure it out except for Stanford sending unstable WUs. I switched entirely to Linux SMP clients and will determine if the situation improves. The GPU client is not sufficient when you don't have any free PCI-E slots available. Another thing that's very annoying is the way the GPU client affects system performance. It's far from invisible; I experience constant freezes and slowdowns. Things have become so convoluted that running even a small farm with all the different clients and configurations is becoming a full-time job. It just doesn't make much sense to me unless things get better organized and, more importantly, stable. :rolleyes:

 
I am also getting EUEs galore and haven't upgraded to the new SMP client. The temperatures here have dropped quite a bit over the past week, so I can't figure it out except for Stanford sending unstable WUs. I switched entirely to Linux SMP clients and will determine if the situation improves.


If this is true, it's not a smart thing to do when you come out with a new client. I would think that, as scientists, they would know to change only one thing at a time.

 
WTF??? EUEs on my GPU but not the console.... Of course it happens 2 seconds after I leave for work, so I lose a day of GPU work, effectively...

No amount of close/restart would stop the EUEs. Temp was down today, so it's not heat related. Dropped the shaders 100MHz with no luck. Rebooted and all seems normal after three more close/restarts.

At this rate, breaking into the top 1000 [H] folders is taking an extra three days...AAAAAAAAARRRRRRRRRRRRRRRRRRRRRRRRGGGGGG

More beer, near here, please.
 
WTF??? EUEs on my GPU but not the console.... Of course it happens 2 seconds after I leave for work, so I lose a day of GPU work, effectively...

No amount of close/restart would stop the EUEs. Temp was down today, so it's not heat related. Dropped the shaders 100MHz with no luck. Rebooted and all seems normal after three more close/restarts.

At this rate, breaking into the top 1000 [H] folders is taking an extra three days...AAAAAAAAARRRRRRRRRRRRRRRRRRRRRRRRGGGGGG

More beer, near here, please.

/toss beer

I've been waiting on an access point to show up since Thursday (friggen UPS). I need to get a signal to the other side of the street so I can get the foster boxen up and running.... this whole weekend I have a GPU sitting there doing nothing. :(

 
I just lost a 3062 that was about 75% done on an SMP client for no apparent reason. :rolleyes: I'm being reminded why I left F@H in the first place. Maybe I'll just let the GPUs fold and do something else on the CPUs.
 
I just lost a 3062 that was about 75% done on an SMP client for no apparent reason. :rolleyes: I'm being reminded why I left F@H in the first place.
Indeed, I lost a P2665 at 90% today. That was the last straw for me and as of now, I'm almost completely running the Linux SMP client except for one machine which isn't 64-bit. The only other CPU clients running in Windows are processing standard units, but they will be phased out before the end of the year if not earlier.
 
Listen to BillR. He knows what he is talking about. If you are having trouble with the new SMP client, just drop in that replacement .exe file and delete your old fah.exe, and you will be good to go. We'll all know when the Windows A2 core comes out....someday.....but it probably won't be anytime soon.

 
Listen to BillR. He knows what he is talking about. If you are having trouble with the new SMP client, just drop in that replacement .exe file and delete your old fah.exe, and you will be good to go. We'll all know when the Windows A2 core comes out....someday.....but it probably won't be anytime soon.
I don't know if you're addressing the issues I posted about, but I was running the old client with the new exe. Nonetheless, the number of EUEs in the past week has steadily increased. I don't know why. They were all P2665 WUs that had proved very stable up until just recently. I only have a couple of GPU clients and most of my production is still accomplished on CPUs. I got fed up with the P2665s today. They take too long to complete, hog the network when they upload results, and their worth is absurdly low for all the trouble. They should be valued at least as much as the P3065s, more even. Because of that, I replaced almost all my Windows clients with their Linux counterparts. I hope never to see another 2665 again, gods willing. I hate running VMs, but at this point there was no other choice. :mad:

 
I also had a few bad P2665s about a week and a half ago. One was especially bad, and its pestilence is still listed in my log file:
Project: 2665 (Run 2, Clone 545, Gen 34)
It crapped out on me for 3 days and kept restarting. The only problem was I didn't realize it....when I did, it took me many tries to get rid of it because it kept downloading back to my system. Other than that, I have been reasonably lucky with them and they tend to complete OK.
 
Linux SMP under VMWare FTW !!!


Damn straight. It's the only right way to fold SMP. 2605s galore for very nice SMP PPD. It's the only way I've been able to keep my current PPD.

 
Damn straight. It's the only right way to fold SMP. 2605s galore for very nice SMP PPD. It's the only way I've been able to keep my current PPD.
Agreed, and I would have done so much earlier if points were my sole concern, but they aren't. The reason I haven't done it until now was the complexity and memory requirements of running as many as 4 VMs per machine, and the lack of a simple client monitoring solution. The associated delays in accessing the machines running VMware over the network are another reason. It's very slow to view the clients over a 100Mbit network with a remote viewer, and without a monitoring app set up yet, that's the only way I can see if all the clients are running OK.

Everything just seems to run slower when there are many VMs configured. My main machine has only one VM running and the slowdowns are very noticeable. Combine that with 2 GPU clients and it's slide-show city. Over the network it's atrocious. One machine takes a full half hour to get all the VMs running with their clients from a cold boot because its hard drive is a slow mobile unit (to save on power). There are many drawbacks to running VMs that people don't often post about, but I've known about them for close to ten years from all my research into alternative computing.
 
VMWare under Vista x64 is a pain due to the unsigned drivers issue.

6.22 is taking a dump at the end of every SMP WU that I do, so I'm just frustrated with it all at the moment.

I'll probably go back to 5.91 soon enough, but for now... I guess my office will stay below 80F...
 
Agreed, and I would have done so much earlier if points were my sole concern, but they aren't. The reason I haven't done it until now was the complexity and memory requirements of running as many as 4 VMs per machine, and the lack of a simple client monitoring solution. The associated delays in accessing the machines running VMware over the network are another reason. It's very slow to view the clients over a 100Mbit network with a remote viewer, and without a monitoring app set up yet, that's the only way I can see if all the clients are running OK.

Everything just seems to run slower when there are many VMs configured. My main machine has only one VM running and the slowdowns are very noticeable. Combine that with 2 GPU clients and it's slide-show city. Over the network it's atrocious. One machine takes a full half hour to get all the VMs running with their clients from a cold boot because its hard drive is a slow mobile unit (to save on power). There are many drawbacks to running VMs that people don't often post about, but I've known about them for close to ten years from all my research into alternative computing.

I'm running two VMs on the rig in my sig and don't really notice any type of slowdowns. I'm sure the 4 gig of RAM helps but I still don't use all my RAM as it is.

I really haven't had much trouble getting FahMon to monitor my Linux clients. I set up the F@H folder as a share using Samba, and once that is done, it's easy enough to get FahMon to find the network share and start monitoring. I haven't had much trouble using VNC over 100Mbit or 1Gbit networks, even with VMs. I just set up the host OS with a VNC program and I can access the VMs on the machine from there.

I do understand how much of a pain it is to get the VMs started from a boot on the host OS. I have the same problem and I don't understand it. When I first started messing with VMs I was using openSUSE 10.2 or 10.3 as the host OS and the same thing as the guest OS while using VMWare. I did not have the super long startup times for the guest OSes. I thought a slow hard drive may have been the cause of it as well, but I still have the trouble on all my machines no matter what mix of OSes is being used and I'm using quick modern desktop drives on two out of three of the machines.

Still, even with the problems, I don't worry about it. Once the VMs are set up and the initial start time for the guest OSes has passed, I'm trouble-free. Besides, it's a hell of a lot better than trying to mess with dual unstable Windows SMP clients.

 
VMWare under Vista x64 is a pain due to the unsigned drivers issue.

Do what I do: don't reboot. I'm currently at 16 days of uptime on my Vista64 main box. The last reboot was due to the annoying Windows updates and the godawful constant popup telling me to reboot the computer because of the updates.

 
Yep.. thank god for notfred's Linux client and VMware. I've neither the time nor the inclination to fight the SMP clients or learn Linux.

The only real issue I had was getting FahMon to pick up the VMs on a reliable basis, but FahSpy took care of that.


 
I'm running two VMs on the rig in my sig and don't really notice any type of slowdowns. I'm sure the 4 gig of RAM helps but I still don't use all my RAM as it is.
My main machine has the same amount of RAM, but it's in a 32-bit environment. Effective memory is closer to 3GB, but there's only one VM. Granted, 95% of the slowdowns I notice are attributable to the GPU clients in XP. It sucks bad, as you know very well from your own horrible experience, IIRC.

I really haven't had much trouble getting FahMon to monitor my Linux clients. I set up the F@H folder as a share using Samba, and once that is done, it's easy enough to get FahMon to find the network share and start monitoring. I haven't had much trouble using VNC over 100Mbit or 1Gbit networks, even with VMs. I just set up the host OS with a VNC program and I can access the VMs on the machine from there.
VNC is a great app. It's extremely light, fast, feature-rich and, best of all, free. But whenever I want to view a machine's VMs, it takes a long time for the guest OS window to appear. This could be VMware, since it's not much faster when I'm working directly on the systems. Either way, VMs on my current infrastructure are much slower to view remotely than a native client setup, irrespective of the actual cause.

I do understand how much of a pain it is to get the VMs started from a boot on the host OS. I have the same problem and I don't understand it. When I first started messing with VMs I was using openSUSE 10.2 or 10.3 as the host OS and the same thing as the guest OS while using VMWare. I did not have the super long startup times for the guest OSes. I thought a slow hard drive may have been the cause of it as well, but I still have the trouble on all my machines no matter what mix of OSes is being used and I'm using quick modern desktop drives on two out of three of the machines.
Maybe it's just the free version of VMware that's like this? I can't say for sure without trying anything else, but it doesn't make any sense to me either. Imagine what it's like trying to boot 4 VMs on a 4200RPM drive... :eek:

Still, even with the problems, I don't worry about it. Once the VMs are set up and the initial start time for the guest OSes has passed, I'm trouble-free. Besides, it's a hell of a lot better than trying to mess with dual unstable Windows SMP clients.
I'm here now, so I'm not going back to Win SMP after all the hassles despite the sluggishness of it all. If I don't notice improvements in client stability, I'm probably going to stop my CPU clients until Stanford does something about the issues.
 
Agreed, and I would have done so much earlier if points were my sole concern, but they aren't. The reason I haven't done it until now was the complexity and memory requirements of running as many as 4 VMs per machine, and the lack of a simple client monitoring solution. The associated delays in accessing the machines running VMware over the network are another reason. It's very slow to view the clients over a 100Mbit network with a remote viewer, and without a monitoring app set up yet, that's the only way I can see if all the clients are running OK.
This actually brings up something I've been thinking about... I hate having to use Samba to monitor F@H. Would others be interested in a small app which collected monitoring data and sent it over a TCP socket to a "server" (for lack of a better word)? Much easier to set up and maintain (in theory). I've been considering writing Linux and Windows clients, and a Windows server for my own use. If others are interested, let me know and I'll start a separate thread to discuss wants/needs in such an app (some of the things I want/need may not be what others want/need).

Also, SSH + PuTTY + "tail -f FAHlog.txt" is your friend.
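To give a rough idea of what I mean, here is a minimal Python sketch of the client side: it tails FAHlog.txt and pushes any new lines to a collector over a plain TCP socket. The log path, host and port are made up for the example, and a real version would need reconnect logic and some way to tag which box the data came from.

Code:
import socket
import time

LOG_PATH = "FAHlog.txt"            # path to the client's log (example)
SERVER = ("192.168.1.10", 5555)    # wherever the collector listens (example)

def tail(path):
    """Yield lines appended to the file, roughly like tail -f."""
    with open(path, "r", errors="replace") as f:
        f.seek(0, 2)               # start at the end of the file
        while True:
            line = f.readline()
            if line:
                yield line
            else:
                time.sleep(5)      # nothing new yet; wait and retry

with socket.create_connection(SERVER) as sock:
    for line in tail(LOG_PATH):
        sock.sendall(line.encode("utf-8", "replace"))

The "server" end could be as dumb as a script that accepts connections and appends whatever arrives to one file per box, so the FahMon-style parsing happens in one place instead of over a pile of Samba shares.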

 
The author stated it doesn't support Linux. Do you have Wine installed?

It's not running under Linux. I'm running notfred's Linux ISO in VMware Server, since I'm running GPU clients on my boxen as well, so the host is XP. Sorry for the confusion.


 
I use FahMon and FahSpy with my Linux VMs... They just read the log file over Samba (Windows file sharing for Linux)... just map a network drive and point either one at it... They freak out occasionally (usually a time issue on my boxen), but otherwise they work fine.
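For anyone curious what the monitors are actually doing with that share, it's nothing fancy: they parse FAHlog.txt for the progress lines. A minimal sketch, assuming a mapped drive letter (Z: here is just an example) and the usual "Completed x out of y steps" wording, which may differ between cores:

Code:
import re
from pathlib import Path

log = Path(r"Z:\FAHlog.txt")   # Z: mapped to the VM's shared F@H folder (example)

progress = None
for line in log.read_text(errors="replace").splitlines():
    # Gromacs-style cores log lines like "Completed 125000 out of 250000 steps (50%)"
    match = re.search(r"Completed (\d+) out of (\d+) steps", line)
    if match:
        progress = 100 * int(match.group(1)) / int(match.group(2))

if progress is None:
    print("No progress lines found yet.")
else:
    print("Last reported progress: %.0f%%" % progress)

As far as I can tell, PPD estimates are just that same line plus the timestamps between frames, so once the share is mapped there isn't much more to it.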

 
My main machine has the same amount of RAM, but it's in a 32-bit environment. Effective memory is closer to 3GB, but there's only one VM. Granted, 95% of the slowdowns I notice are attributable to the GPU clients in XP. It sucks bad, as you know very well from your own horrible experience, IIRC.

VNC is a great app. It's extremely light, fast, feature-rich and, best of all, free. But whenever I want to view a machine's VMs, it takes a long time for the guest OS window to appear. This could be VMware, since it's not much faster when I'm working directly on the systems. Either way, VMs on my current infrastructure are much slower to view remotely than a native client setup, irrespective of the actual cause.

Maybe it's just the free version of VMware that's like this? I can't say for sure without trying anything else, but it doesn't make any sense to me either. Imagine what it's like trying to boot 4 VMs on a 4200RPM drive... :eek:

I'm here now, so I'm not going back to Win SMP after all the hassles despite the sluggishness of it all. If I don't notice improvements in client stability, I'm probably going to stop my CPU clients until Stanford does something about the issues.

I don't think I've successfully used VMware on a 32-bit Windows OS, to tell you the truth. I tried it a while back, but for some reason it would not install properly on two XP32 installations I did, one of which was a clean install done just to try VMware. However, the problem could have been the fact that I had used nLite to slipstream SP3, and there seem to be some problems with that.

Otherwise, my experience with VMware is on 64-bit OSes, most of it with openSUSE and just recently with Vista64. Hell, I'm only using Vista64 because of the GPU client, and yes, I was the person with a horrible experience with the GPU client on XP32. I could hardly do anything with the GPU client running; that was the source of all my slowdowns and problems. Since going back to Vista64 I have been relieved of almost all the problems and most of the slowdowns while using the GPU client. I still have some, but they aren't bad enough that I can't live with them.

As I suggested earlier, try installing the VNC software on the host OS and then use VMware's console to check out the VMs. It's still a bit slow, but I've found it isn't too bad. I've also noticed most of the slowdowns with the VMs are due to not having VMware Tools installed. I have an XP32 installation running in a VM with VMware Tools installed on one of my machines for a couple of automated tasks, and it's not that slow compared to a native XP installation. I'm too lazy to do a compile on my Linux VMs to get the Tools installed, and they are a good bit slower because of this. However, I rarely access them, so it's not a big deal for me.

I use FahMon and FahSpy with my Linux VMs... They just read the log file over Samba (Windows file sharing for Linux)... just map a network drive and point either one at it... They freak out occasionally (usually a time issue on my boxen), but otherwise they work fine.


It's a lot easier to set up FahMon to monitor Linux clients via Samba: all you have to do is use Samba to share the F@H folder on the Linux install and FahMon will have no trouble finding it and picking it up. FahSpy is a totally different story, though. Having to map the shares as network drives is a pain that I don't see the need for. I actually prefer FahSpy of the current monitors, but I don't feel like doing all the extra work involved. I would still prefer to use EMIII; it had all the information I wanted, was easy to use, and had no trouble picking up Samba shares. I really wish LPerry had kept updating it.

 
My 6.22s ran for two weeks problem-free, then promptly started with nothing but EUEs. I don't believe relic, along with many others, ever got 6.22 to work.

At any rate, as you found, there is a cure;)


Yes, but that is what we are folding for...:D

 
Thanks guys, some good advice in this thread....I'll give it all a try...

If nothing else..maybe GPU folding is the way to go.....I have 3 running. I just gotta get more..:)
 
Just a quick update..I got one machine stable with the SMP client....I'm done fucking with the other ones.....
If Stanford wants my contribution, they can come over here and get it working themselves..otherwise I'll let them sit until I find another use for them.

4 years ago when I started folding, it was set it and forget it.....now it's like a friggen job keeping it going......:)
 
I feel ya man!

My production has dropped to 2 borged console clients somewhere out there.

I attempted the initial upgrade...on 30ish clients @ work, only to have most bomb that day...the rest surprisingly submitted a few units, then bombed shortly after. I would've been OK with the old one if it wasn't for WSUS. I don't have the time to tinker with this shit either, especially @ work.
 