I came across the same problem, which I think is caused by the lack of a 64-bit application for GFAM. I've posted the steps I used to resolve this on the DC Vault thread.

Got 20 cores (3 instances) running FAH SMP.
Anybody try to run BOINC?
I got as far as attaching to WCG via the command line interface and downloading 1 GFAM WU, but it wouldn't start running.
I left town for a few days, and when I came back my dead instance was fine once I dropped the public IP and put it back on...

Looks like things are not quite stable, or at least in zone US West 2 AZ2. I have had three server instances go south. I cannot log into them from the outside and cannot reboot them from the HP Manage page. I had to terminate the instance and create a new server. HP support has zero explanation for why they get into this state.
I had the same error... I opened a trouble ticket with them. Support said they would email when it's resolved.

Tried doing 2x8core + 1x4core on the AZ2 server, yet both 8-core systems say "error". Anyone know why, even though it fits the 20-core limit?
What's the limit on the second server? Several of you say only 4 instances fit on there... 4x4core?
You set up Apache correctly?

Man, I think HP is getting hammered. I'm having a ton of issues. I can't create any 8-CPU instances; it says I already have my max # of those allotted, even though I don't have any 8-CPU instances running.
I'm also having trouble getting HFM to connect for some odd reason.
What kind of PPD are people getting from their 4 CPU instances?
You can run 5 (4-core) instances in each availability zone, so that's a total of 100k PPD from HP... works for me. I ran the corehack on the 8-core instances; they were around 40-50k PPD each, but I only got a couple of bigadv WUs before being assigned regular units again. Not sure what the deal is, but I don't care after hearing that the corehack is a bad thing if I don't get the units in on time.

Probably an Apache issue.
10k PPD is hardly worth the hassle. I think I'll wait till the 8-core instances are available again.
They must be dynamically changing allocations.
You can't get 1 more 4-core instance? You're given 20 cores per availability zone.
I could only initialize two 4-core instances per zone. So all I have going is four 4-core instances, generating a total of 20k PPD.
We are pleased to have your participation in our private beta program, and we appreciate your continued support of HP Cloud Services. The response has been tremendous, and we continue to incorporate your feedback into our offerings and services.
On February 27th, we will be limiting private beta users to 20GB of RAM total for all instances within an availability zone. We are implementing these limits during the free private beta to accommodate a wider variety of private beta customers. Greater diversity in our private beta phase will help us plan for the massive scalability we intend to offer over time in public beta.
We are contacting you first because you are currently using 80GB or more of combined RAM across deployed instances. We would appreciate your cooperation in reducing your combined RAM usage to 20GB or less per availability zone as soon as possible.
Determine Your Usage
At any time, you can calculate how much RAM you are using by listing your instance types or flavors in the management console or with tools such as the novaclient, and referencing the following table. By February 27th, your combined instances should not exceed 20GB of RAM per availability zone.
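The calculation the email describes boils down to summing the RAM of your deployed flavors per zone. As a rough sketch (the flavor names and RAM sizes below are illustrative assumptions, not HP's actual table, which isn't reproduced in this thread):

```python
# Sketch of the usage check described above: sum RAM across the instances
# deployed in one availability zone and compare against the 20GB cap.
# NOTE: flavor names and RAM figures here are assumptions for illustration;
# use the table from the email / management console for the real values.
FLAVOR_RAM_GB = {
    "standard.small": 2,    # assumed
    "standard.medium": 4,   # assumed
    "standard.large": 8,    # assumed
    "standard.xlarge": 16,  # assumed
}
ZONE_LIMIT_GB = 20

def zone_ram_gb(instances):
    """instances: list of flavor names deployed in one availability zone."""
    return sum(FLAVOR_RAM_GB[f] for f in instances)

deployed = ["standard.large", "standard.large",
            "standard.small", "standard.small"]
total = zone_ram_gb(deployed)
print(total, "GB -", "OK" if total <= ZONE_LIMIT_GB else "over the limit")
```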
As shown in the table, standard.2xlarge instances will no longer be available during private beta. Users will need to spin up smaller instances instead.
Please note that these limits only apply during our private beta period and will be increased over time during public beta. With the increase of limits, standard.2xlarge instances will again be available for all accounts.
We’re Here to Help
The new limits will help us deliver the best possible experience for all HP Cloud Services private beta customers, but we understand that in some cases these limits may not meet your needs. If you’d like to request an exception, please contact our Support team and let us know how you’re using HP Cloud Services. We will review requests on a case-by-case basis and do our best to find a solution that accommodates your needs.
Once again, thank you for participating in our private beta program. We welcome your feedback and suggestions and encourage you to contact our Sales team to let us know how we’re doing. We look forward to improving and expanding our offerings in the future.
The HP Cloud Services team
I've got four of the 2xlarge instances and two xlarge instances. Just going to let it run out, then I'll reconfigure in a couple weeks.

This appears to be an official implementation of what they have already been doing. I've only been able to spin up four 4-CPU instances with 8GB each. No more.
I'm running WCG with this setup (2x4s, 2x2s) on both zones. I'm getting 11-12K BOINC credits/day, 16-17K with all of my boxes.

Looks like this is going to be not nearly as useful for folding. Starting Feb 27th, each zone will be able to run the following to adhere to the 20GB limit:
- (2) 4 vCPU 8GB instances
- (2) 2 vCPU 2GB instances
Or maybe running (10) 2 vCPU 2GB instances might be best. I think I might be dropping the service here....
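For what it's worth, the mixes above can be sanity-checked against the 20GB cap with a quick script; the vCPU-to-RAM sizes are taken from the configurations listed above (4 vCPU -> 8GB, 2 vCPU -> 2GB), anything beyond that would be an assumption:

```python
# Quick check of which per-zone instance mixes fit under the 20GB RAM cap,
# using the sizes quoted in this thread: 4 vCPU -> 8GB, 2 vCPU -> 2GB.
RAM_GB = {"4vcpu": 8, "2vcpu": 2}
LIMIT_GB = 20

def fits(mix):
    """mix: {flavor: count}; True if the zone's total RAM is within the cap."""
    return sum(RAM_GB[f] * n for f, n in mix.items()) <= LIMIT_GB

print(fits({"4vcpu": 2, "2vcpu": 2}))  # 2*8 + 2*2 = 20GB -> True
print(fits({"2vcpu": 10}))             # 10*2 = 20GB -> True
print(fits({"4vcpu": 2, "2vcpu": 3}))  # 16 + 6 = 22GB -> False
```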