Notfred VM error

k96gnome

At the exact instant two of my VMs running SMP clients hit 100% on their WUs, they started outputting the following.
Code:
attempt to access beyond end of device
hda1: rw=1, want=2096656, limit=2088387

I looked briefly for help on resolving this error, but so far all I can find is one person in this thread who says rebooting the VM resolves it temporarily. I don't like the sound of that, because I fear losing progress or my finished WUs. Has anyone else encountered this problem, know of another solution, or know a way to prevent this in the future?

Update: Well, since I was too scared to reboot them, I left them alone, and now FahMon shows their progress at 2% with an * at the end of the PPD amount... Since the PRCGs for those two SMP clients look the same as they were before, I'm guessing those WUs were lost and restarted? Great.
 
How much RAM do you have allocated to each VM? If you have less than 1GB each, try changing them all to 1GB by editing the folding.vmx files with Notepad and changing the memsize line to 1024. I've found that running with 1GB per VM prevents a lot of the weird errors from happening.
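The edited line should end up looking like this (it's the only line that changes; the value is in MB):
Code:
memsize = "1024"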
 
I get the same error all the time and have my VMs set to use 1024MB RAM. I just let it go, as there doesn't seem to be any slowdown in folding. If I check a VM and it's doing this while only a couple % into a WU, I'll reboot it, because I have lost a WU by rebooting further along in the past.
I'm far from a Linux expert, but isn't "hda1" a reference to the virtual hard drive?
 
I'm far from a Linux expert, but isn't "hda1" a reference to the virtual hard drive?
Yes, but I don't think anyone is quite sure exactly why that message shows up, or even if it really indicates any serious issues.
 
How much RAM do you have allocated to each VM? If you have less than 1GB each, try changing them all to 1GB by editing the folding.vmx files with Notepad and changing the memsize line to 1024. I've found that running with 1GB per VM prevents a lot of the weird errors from happening.

I have the memsize set to auto. When all 4 VMs are running, my total system RAM usage peaks at 5.11GB out of the 6GB available. This is probably in addition to the ~1.8GB that appears to be used by Windows at idle. Should I reconfigure them all to have a forced memsize of at least 1024MB to be safe? I really didn't run into the issue until the VERY end of the WUs.
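If I force all four, the worst case pencils out roughly like this (back-of-the-envelope, using my ~1.8GB Windows idle figure above):
Code:
4 VMs x 1024MB   = 4096MB
Windows at idle  ~ 1843MB
                   ------
Total            ~ 5939MB of the 6144MB installed
So it should still fit, with a couple hundred MB to spare.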
 
Alright, I set the memory size and rebooted. I'll cross my fingers when these new WUs hit 100% tomorrow :). Thanks for all the helpful advice, Zero. Are there any other VM setting tweaks I should know about?
 
I get those same errors all the time. They can run for days and days like that and continue to complete work units with no issues. It used to freak me out, but you get used to it. Every once in a while I reboot just for good measure.
 
With the setting of 1024MB, I got the same error around the 50% mark for all 4 VMs this time. As long as this doesn't carry a risk of losing WUs, I think I'll just try to ignore it. :(
 
I got that error all the time when I was running my VMs.

It's because in Notfred the virtual HDD is set to a specific max size, and after a while something runs past it; the message above is the kernel saying a write (rw=1) wants block 2096656 when hda1 only goes up to 2088387.
But it never seemed to affect the WUs, so I just ignored it and kept on trucking.
 
I have the memsize set to auto. When all 4 VMs are running, my total system RAM usage peaks at 5.11GB out of the 6GB available. This is probably in addition to the ~1.8GB that appears to be used by Windows at idle. Should I reconfigure them all to have a forced memsize of at least 1024MB to be safe? I really didn't run into the issue until the VERY end of the WUs.


If you can get access to VM Workstation, you may be able to resolve some of your memory issues. I loaded up VM Workstation on my i7 (which is what I assume you are using). I only have to run 2 VMs to max out all 8 cores (physical & virtual) and my memory usage is way down. In addition, going to Win7 reduced it even further.
 
If you can get access to VM Workstation, you may be able to resolve some of your memory issues. I loaded up VM Workstation on my i7 (which is what I assume you are using). I only have to run 2 VMs to max out all 8 cores (physical & virtual) and my memory usage is way down. In addition, going to Win7 reduced it even further.

I thought the Linux SMP client didn't scale well past 2 cores. I know I saw that in more than one guide, but maybe that information is outdated. I'm running Win7 64-bit right now on my i7 without any overclock and am getting roughly 7000 PPD with my 4 VMs. How much PPD are you getting with just those 2 VMs?
 
I think I was getting 10K PPD @ 3.9GHz. I'm not currently running VMs, as I need to replace my waterblock to keep up with the heat. I'll have it back up and running in a week or two.
 
I thought the Linux SMP client didn't scale well past 2 cores.
The Windows SMP client is the one with issues scaling to more than two cores. Running two 2-core VMs does produce slightly more points than one 4-core VM, but it's only a small difference and the scaling is still much better than WinSMP (with which I didn't see a difference between running it on 2 cores versus 4 cores on my quad).
 
The only difference for me is the amount of memory used by the VMs. At 1024MB per VM, I save 2GB of RAM by dropping from 4 VMs to 2.
 
Thanks for the info. I just got a 4850 in today (I was previously running an X1800) and I would like to get it in on the folding action too, but I don't know the best way to mingle it with my CPU clients. I set it up with the Windows GPU2 console client with default settings + verbosity 9 for now, so I'll see what PPD that gets me.
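(For reference, that just means appending the flag when launching the console client, something like the line below; the exact exe name depends on which GPU2 console build you downloaded:)
Code:
Folding@home-Win32-GPU.exe -verbosity 9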
 
Thanks for the info. I just got a 4850 in today (I was previously running an X1800) and I would like to get it in on the folding action too, but I don't know the best way to mingle it with my CPU clients. I set it up with the Windows GPU2 console client with default settings + verbosity 9 for now, so I'll see what PPD that gets me.
Follow this guide: http://www.hardforum.com/showthread.php?t=1419852

The ATI GPU client at stock will use up a ton of CPU power, so your GPU client will be starved by the VMs. Use this guide, and afterward set up WinAFC to raise the FahCore_11.exe priority to Above Normal and lower the vmware-vmx.exe priority to Idle.
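If you want to test the effect before installing WinAFC, the same priorities can be applied one-off from a command prompt (WinAFC's advantage is that it reapplies them automatically whenever the processes restart). In WMI's numbering, 32768 is Above Normal and 64 is Idle:
Code:
wmic process where name="FahCore_11.exe" CALL setpriority 32768
wmic process where name="vmware-vmx.exe" CALL setpriority 64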
 
Whoa whoa whoa. After following that guide, my GPU client is now moving at half the speed it once was. I'm topping out at 900 PPD on it right now. I realize the guide puts great emphasis on using the Catalyst 9.4 drivers, but I assumed that was because of the thread's posting date. I'm running Catalyst 9.7 right now.
 
Hmm, I'm running 9.6 and it's working fine. Try deleting the FLUSH_INTERVAL variable.
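Assuming you set it as a per-user environment variable the way the guide describes, this removes it from a command prompt (reboot or log off afterward so the change sticks):
Code:
reg delete "HKCU\Environment" /v FLUSH_INTERVAL /f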
 
Ugh. After deleting just FLUSH_INTERVAL and rebooting, it's now taking 3x as long per frame and dropped to 600 PPD. I think I'm just going to delete all those settings and reinsert the ATI DLLs.
 
Alright, give that a shot. See what kind of CPU usage you're getting from the GPU client as well.
 
With those environment variables set, I was getting 2% CPU usage from the GPU client and 900 PPD. Without them I get anywhere from 5-12% CPU usage with a PPD of 1600.
 
5-12% means it's using up an entire logical core (one of the i7's eight threads is 12.5%), which is what I was trying to avoid. But if using the environment variables kills your PPD, there might not be another option. Can you monitor the PPD you're getting on all of your VMs and see if one of them drops when the GPU client is running?
 
FahMon shows the PPD of all the VMs being roughly the same as it was before I added the GPU client, but it's reporting all the PPD calculations with *'s next to them. The GPU client is also the only one with a green status indicator, while the 4 VMs are yellow (despite all of them progressing through frames normally in the log file). Neither the *'s nor the seemingly false yellow indicators occurred when I had the environment variables set or when I was running the CPU clients alone. I'm not sure what to make of this.

Update: I guess I didn't give them enough time since my last reboot. Once all the clients had progressed 3 frames, the indicators turned green and the PPD calculation for each of my VMs dropped by 150 PPD. This still puts me at 1000 PPD more in total than what I normally get with just the VMs running, and 400 PPD more in total than what I was getting with those environment variables set up.
 