Folding on HP Cloud Services - already happening!

What speeds are you guys getting on downloads? I just tried 1 GB test files from a few EU servers and I get 3-4 MB/s max.
 
Best I got so far was 6 MB/s; most of the time it averages 2-3 MB/s.
 
Got 20 cores (3 instances) running FAH SMP.

Anybody try to run BOINC?

I got as far as attaching to WCG via the command line interface and downloading 1 GFAM WU, but it wouldn't start running.

I ran into the same problem, which I think is caused by the lack of a 64-bit application for GFAM. I've posted the steps I used to resolve it in the DC Vault thread.
 
Looks like the vCPUs are 2.66GHz with 4MB cache... or at least that's what /proc/cpuinfo is reporting. Eight of those beat a pair of E5440s by about 6,000 PPD :D
 
Looks like things are not quite stable, at least in zone US West 2 AZ2. I have had three server instances go south: I cannot log into them from outside and cannot reboot them from the HP Manage page. I had to terminate the instances and create new servers. HP support has zero explanation for why they get into this state.
 
At least it's really easy to chat with them, and their rep sent a follow-up email about a similar problem. And at least you can terminate the instance; I couldn't, and had to wait for them to figure it out.

 

I left town for a few days and came back and my dead instance was fine once I dropped and put back on the public IP...

I've noticed I'm getting far better PPD out of AZ2 than AZ1.
 
I only have 3 instances up on AZ1. Is anyone having issues with running 3 on both servers?
 

No problems here. I have 3 on AZ1 and 4 on AZ2. HP updated the notice to private beta customers to say that the 20 vCPU limit is per Availability Zone.
 
Tried running 2x 8-core + 1x 4-core on the AZ2 server, yet both 8c instances say "error". Anyone know why, even though it fits the 20-core limit?

What's the limit on the second server? A few of you say only 4 instances there... 4x 4-core?
 
On both AZ1 and AZ2 I am now running (2) 8-vCPU instances and (1) 4-vCPU instance.

What install image are you using? Looks like they have added several non-OS images, one of which is the default.
 
This sounds interesting. I've submitted my application; wait and see...
 
After the first round of problems with not being able to terminate or boot, mine have been folding strong since Saturday, in the 85k-95k PPD range.
 

I had the same error... I opened a trouble ticket with them. Support said they would email when it's resolved.

I was able to make smaller instances, just not the 8 core one.
 
From the Service Status tab ...

Private Beta Customers:
We’ve had an overwhelming response to our Private Beta Cloud Compute offering, which we are very excited about! With this great response we’ve seen high utilization of the Cloud Compute resources we have deployed for private beta testing. This high utilization may lead to some customers experiencing errors when attempting to create larger instances. We are asking that all Private Beta users terminate any unused instances and use the smaller instance sizes/flavors when possible. By doing so, this will allow for a greater number of customers to also share in the Private Beta experience. Overall instance creation limits are any combination of 20 instances, 20 floating IPs, 20 VCPUs, 200 GB Ram, or 1000 GB HardDrive. This is currently at a per Availability Zone level.

Customers currently in the Private Beta can now use BOTH US West 2 - AZ1 & AZ2 for all Compute Instance builds.
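For reference, the per-AZ limits quoted in the notice can be sanity-checked with a short script. The RAM/disk figures per instance in the example below are illustrative, not HP's actual flavor sizes:

```python
# Sketch: check whether a set of instances fits HP's stated private-beta
# per-Availability-Zone limits: 20 instances, 20 vCPUs, 200 GB RAM, 1000 GB disk.
LIMITS = {"instances": 20, "vcpus": 20, "ram_gb": 200, "disk_gb": 1000}

def fits_limits(instances):
    """instances: list of (vcpus, ram_gb, disk_gb) tuples."""
    totals = {
        "instances": len(instances),
        "vcpus": sum(i[0] for i in instances),
        "ram_gb": sum(i[1] for i in instances),
        "disk_gb": sum(i[2] for i in instances),
    }
    return all(totals[k] <= LIMITS[k] for k in LIMITS)

# The 2x 8-vCPU + 1x 4-vCPU mix several posters ran hits exactly 20 vCPUs
# (RAM/disk values here are assumed, not official flavor specs):
print(fits_limits([(8, 32, 320), (8, 32, 320), (4, 16, 160)]))  # True
```

This makes it clear the vCPU cap, not RAM or disk, is what the 2x8+1x4 mix is bumping against.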
 
Yeah, I'm running 5x 4-core machines on AZ2 right now. I'll get around to deleting two of the 4c ones to see if the 8c instances work again (if so, I'll delete two more and get the last 8c going).

But yeah, got a total of 40 of HP's cores pumping on work :D Giving me a nice boost in PPD for now... I wonder how long the beta will go on. I'm not getting the "50k" PPD some people report on their 8c machines (yes, I ran the corehack)... there are newer guides available, though; I'll need to get around to starting some instances from scratch to see if that helps. (And yes, my passkey is in there, and yes, it's done 10 SMP WUs already.)

For some reason FAH likes to split its work into multiple processes that each use less CPU (yet still total 800%). I've restarted the instance, used top to confirm no FAH processes were running, and started FAH again: it begins as one process at 800% and slowly breaks up into more processes that use less CPU and fluctuate a lot. Oh well, I think re-following some of the better guides will help. Unless that's supposed to happen...
 
200 GB RAM... seems out of place compared to the other numbers.
 
How much does enabling advmethods help with PPD? I've always enabled it so far, but I'm not entirely sure what it does.
 
Man, I think HP is getting hammered. I'm having a ton of issues: I can't create any 8-CPU instances. It says I already have my max number of those allotted, even though I don't have any 8-CPU instances running.

I'm also having trouble getting HFM to connect, for some odd reason.

What kind of PPD are people getting from their 4 CPU instances?
 

Did you set up Apache correctly?

I'm getting around 10k ppd per 4-core instance.
 
Probably an Apache issue.

10k PPD is hardly worth the hassle. I think I'll wait until the 8-core instances are available again.
 

You can run five 4-core instances in each availability zone, so that's a total of 100k PPD from HP... works for me. I ran the corehack on the 8c instances; they were around 40-50k PPD each, but I only got a couple of bigadv WUs before being assigned regular units again. Not sure what the deal is, but I don't care after hearing that the corehack is a bad thing if I don't get the units in on time.
 
I am getting 5k and 6k ppd on my 4 core instances. Between 14k and 25k ppd on the 8 core instances.
 
They must be dynamically changing allocations.

I could only initialize two 4 core instances per zone. So all I have going is four 4-core instances, generating a total of 20k ppd.
 

You can't get 1 more 4 core instance? You're given 20 cores per availability zone.

EDIT: I know my 5th instance took a while before it was available, though.
 
It appears I'm limited to 10 cores per zone.
 
Just FYI for those that didn't see the email today:

We are pleased to have your participation in our private beta program, and we appreciate your continued support of HP Cloud Services. The response has been tremendous, and we continue to incorporate your feedback into our offerings and services.

On February 27th, we will be limiting private beta users to 20GB of RAM total for all instances within an availability zone. We are implementing these limits during the free private beta to accommodate a wider variety of private beta customers. Greater diversity in our private beta phase will help us plan for the massive scalability we intend to offer over time in public beta.

We are contacting you first because you are currently using 80GB or more of combined RAM across deployed instances. We would appreciate your cooperation in reducing your combined RAM usage to 20GB or less per availability zone as soon as possible.
Determine Your Usage

At any time, you can calculate how much RAM you are using by listing your instance types or flavors in the management console or with tools such as the novaclient and referencing the following table. By February 27th, your combined instances should not exceed 20GB of RAM per availability zone.

Table snipped

As shown in the table, standard.2xlarge instances will no longer be available during private beta. Users will need to spin up smaller instances instead.

Please note that these limits only apply during our private beta period and will be increased over time during public beta. With the increase of limits, standard.2xlarge instances will again be available for all accounts.
We’re Here to Help

The new limits will help us deliver the best possible experience for all HP Cloud Services private beta customers, but we understand that in some cases these limits may not meet your needs. If you’d like to request an exception, please contact our Support team and let us know how you’re using HP Cloud Services. We will review requests on a case-by-case basis and do our best to find a solution that accommodates your needs.

Once again, thank you for participating in our private beta program. We welcome your feedback and suggestions and encourage you to contact our Sales team to let us know how we’re doing. We look forward to improving and expanding our offerings in the future.

Sincerely,
The HP Cloud Services team
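The usage check the email describes (sum RAM across your instance flavors and compare to the new 20 GB cap) amounts to something like the sketch below. The flavor-to-RAM mapping is purely illustrative, since the actual table was snipped:

```python
# Hypothetical flavor -> RAM (GB) table; the real values were in the snipped table.
FLAVOR_RAM_GB = {
    "standard.small": 2,
    "standard.large": 8,
    "standard.xlarge": 16,
    "standard.2xlarge": 32,
}

def ram_in_use(flavor_counts, limit_gb=20):
    """Sum RAM for a {flavor: count} mix and compare it to the new per-AZ cap."""
    total = sum(FLAVOR_RAM_GB[f] * n for f, n in flavor_counts.items())
    return total, total <= limit_gb

# Two large (8 GB) plus two small (2 GB) instances lands exactly on the cap:
print(ram_in_use({"standard.large": 2, "standard.small": 2}))  # (20, True)
```

With these assumed sizes, a single standard.2xlarge (32 GB) would already blow the 20 GB budget, which matches the email's note that 2xlarge instances are going away during private beta.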
 
This appears to be an official implementation of what they have already been doing. I've only been able to spin up four 4-CPU instances with 8 GB each. No more.
 

I've got four 2xlarge instances and two xlarge instances. I'll just let them run out, then reconfigure in a couple of weeks.
 
I wonder what they will do to those of us who are going to wait until the deadline.
 
Looks like this is going to be not nearly as useful for folding. Starting February 27th, each zone will be able to run the following to stay within the 20 GB limit:
  • (2) 4 vCPU / 8 GB instances
  • (2) 2 vCPU / 2 GB instances

Or maybe running (10) 2 vCPU / 2 GB instances would be best. I think I might be dropping the service here...
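A quick brute-force check of which 4-vCPU/2-vCPU mixes fit both caps (assuming the 8 GB and 2 GB flavor sizes mentioned above) confirms that the all-2-vCPU option keeps the most cores:

```python
# Enumerate mixes of (4 vCPU / 8 GB) and (2 vCPU / 2 GB) instances that fit
# both the 20 vCPU and the new 20 GB RAM per-zone caps.
valid = [
    (big, small)
    for big in range(6)      # count of 4-vCPU / 8 GB instances
    for small in range(11)   # count of 2-vCPU / 2 GB instances
    if 4 * big + 2 * small <= 20 and 8 * big + 2 * small <= 20
]

# Pick the mix with the most total vCPUs for folding throughput.
best = max(valid, key=lambda m: 4 * m[0] + 2 * m[1])
print(best)  # (0, 10): ten 2-vCPU instances, 20 vCPUs in 20 GB
```

So under the new RAM cap, ten small instances recover the full 20 vCPUs per zone, while any mix with 8 GB instances leaves cores on the table.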
 

I'm running WCG with this setup (2x 4s, 2x 2s) on both zones and getting 11-12k BOINC credits/day, 16-17k with all of my boxes.
 