Error with notfred's in VMware


Supreme [H]ardness
May 27, 2007
So I've got the Q6600 up and running and I'm trying to get notfred's running through VMware. I'm getting the following error when I run the iso:

Setting up instance 1
mount: mounting /dev/hda1 on /hda failed: No such device or address
Disk image is corrupt, not restoring

And it just hangs there. Is there anything special that needs to be done that I'm completely missing here?


Mar 25, 2005
I had the same problem; it would be nice to have a fix for it. In the meantime I'm folding under CentOS instead.
Apr 3, 2008
I encountered the same message. I think it has something to do with notfred's OS looking for an IDE drive (/dev/hda) and then for its first partition (/dev/hda1) to save the WUs to, but VMware virtual disks, for whatever reason (even when you select IDE as the disk type), show up to notfred's OS as SCSI drives (/dev/sda1).
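You can check this yourself from the VM's console. This is just a diagnostic sketch (assuming notfred's BusyBox shell and a mounted /proc, which is standard), not anything from notfred's own boot scripts:

```shell
# Which block devices did the kernel actually find?
cat /proc/partitions
# If sda1 is listed but hda1 is not, the disk came up as SCSI, and you
# could try mounting it by hand to where notfred expects it:
#   mount /dev/sda1 /hda
```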

From my experience, if you point your browser at the IP address notfred serves the RAMdisk from, or browse the Samba share in My Network Places > Entire Network > DISKLESS, you'll see that despite the mounting error message, the folding process is still working. The only problem is that if you need to restart your VMs or the host OS, you lose all progress on the WUs you're working on, since they're only being saved to a RAMdisk and not to non-volatile storage.

As a result of this problem, I've decided to run Xubuntu natively with two instances of the SMP Linux client, configured for "normal" memory WUs (instead of big memory WUs), and I'm steadily getting Project 2605 units--the same projects given out to dual-processor boxes, or to quads running 2x VMs. Running Xubuntu natively, instead of Windows as host with 2x notfred VMs, I'm actually getting a 100 ppd/client boost (from 2400 ppd/client to 2500 ppd/client, for a total of 5000 ppd on my Q6600).
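For what it's worth, the arithmetic on that boost works out like this (numbers taken straight from my results above):

```shell
# PPD math for a Q6600 running two SMP clients, per the figures above.
vm_ppd=$((2 * 2400))        # two notfred VMs under VMware on Windows
native_ppd=$((2 * 2500))    # two native SMP clients under Xubuntu
echo "VMware total: $vm_ppd ppd"
echo "Native total: $native_ppd ppd"
echo "VMware cost:  $((native_ppd - vm_ppd)) ppd"
```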

In sum, VMware eats processor cycles to the tune of about 100 ppd per instance. If you have a quad and you're only using VMware to get dual-processor-type units and avoid the quad-only ones, it appears you can skip that efficiency hit by running Linux natively with 2x SMP clients in separate directories, and steer your clients away from those "chewy" WUs by setting your options to the "normal" memory WU type.
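A rough sketch of the two-directory setup (the fah6 binary name and its -smp and -local flags are from memory of that era's SMP client, so treat them as assumptions and double-check against ./fah6 -help). This only prepares the directories and prints the launch commands rather than actually starting the clients:

```shell
# Sketch only: set up two separate client directories.
# fah6 and its flags are assumptions; verify against your client's -help.
base="${TMPDIR:-/tmp}/fah-demo"
mkdir -p "$base/fah1" "$base/fah2"
for d in "$base/fah1" "$base/fah2"; do
    # each instance needs its own copy of the client binary and its own
    # config, otherwise the two clients would clobber each other's files
    echo "would run: (cd $d && ./fah6 -smp -local &)"
done
```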