Need opinions on how to speed up a site

iansilv

Limp Gawd
Joined
Jun 10, 2004
Messages
335
OK,
I need some help here. I am running the following server on Rackspace:

Athlon 3200 (2GHz), 2GB of RAM. I have my OS and my program on separate partitions. I have a website running on it, and I want to know what the best way to speed things up is: a new, faster server, splitting the SQL and program files onto two servers, etc. Any thoughts?
 
You have a slowdown with only one website running on the server? What type of site? If it has a huge DB, it might make sense to offload that to another server. However, it could be as easy as compressing your images better, enabling caching, and offloading resources where possible (e.g., images onto Amazon's S3). Oh, and also optimize the web app itself: if it's a hand-built CMS or something along those lines, make sure your code is as streamlined as possible.
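
Since image weight is often the cheapest win, here's a minimal sketch of recompressing a JPEG with the Pillow library (my assumption; the thread doesn't name a tool, and the file names are placeholders):

```python
# Recompress a JPEG at a lower quality setting using Pillow (assumed
# installed: pip install Pillow). File names are invented for the demo.
from PIL import Image

# Stand-in for an existing photo; in practice you'd open your real image.
Image.new('RGB', (800, 600), 'navy').save('hero.jpg', 'JPEG', quality=95)

im = Image.open('hero.jpg')
im.save('hero_small.jpg', 'JPEG',
        quality=70,      # lower quality means fewer bytes over the wire
        optimize=True)   # extra encoding pass to shrink the file further
```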
 
OK, it is a pretty extensive DB; right now, I am looking at 48 tables. What counts as a truly huge DB?
 
How many rows are there per table?

I'll 2nd that question.

However, unless there are millions of rows total, I wouldn't really consider a 48-table DB 'huge'.

Joomla, for instance, a pretty large (and generally quick) CMS, has in the range of 40 tables, not counting third-party extensions.

Oh, BTW: has the site always run slowly, or is this a new occurrence? If it's always been slow, I would have Rackspace take a look at your machine's setup. There might be something set up incorrectly in Apache.

Edit: okay, 'millions' is a gross exaggeration. However, I do not believe a few thousand rows is enough to cause a slowdown, at least not in a well-designed database.
 
There's no one-size-fits-all answer to any broad optimization problem. Proper optimization involves first discovering what's slow.

If you don't have enough network bandwidth, nothing you can do to the hardware is going to fix it; the same goes if your pages are loading 40 separate JavaScript/CSS files and the 'slowness' is just latency.

If the database is the weak link, you might be able to achieve acceptable performance by tuning your queries, adding some indexes, partitioning data, using stored procedures, or storing precomputed data in denormalized tables.
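
To make the index point concrete, here's a self-contained sketch using Python's standard-library sqlite3 (the table and column names are invented; the thread doesn't say what database the site uses):

```python
# Time the same selective query before and after adding an index.
import sqlite3
import time

con = sqlite3.connect(':memory:')
con.execute('CREATE TABLE orders (id INTEGER, customer_id INTEGER, total REAL)')
con.executemany('INSERT INTO orders VALUES (?, ?, ?)',
                ((i, i % 5000, i * 0.1) for i in range(200000)))

def timed(label):
    t0 = time.perf_counter()
    con.execute('SELECT SUM(total) FROM orders '
                'WHERE customer_id = 42').fetchone()
    print(label, round(time.perf_counter() - t0, 5), 'seconds')

timed('full table scan:')   # no index yet: every row gets examined
con.execute('CREATE INDEX idx_orders_customer ON orders (customer_id)')
timed('with index:     ')   # same query, now a quick index lookup
```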

The problem might be that the code is just inefficient: you could be using inefficient algorithms, repeatedly querying the database for the same data, recalculating the same results, repeatedly opening new DB connections, or not offloading enough work to the database.
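
As a sketch of the "stop re-asking the database" point, here's one shared connection plus a tiny in-process cache (sqlite3 stands in for the real database, and all the names are invented):

```python
# Cache a lookup so repeat calls never touch the database again.
import sqlite3

con = sqlite3.connect(':memory:')   # opened once, reused for every request
con.execute('CREATE TABLE categories (id INTEGER, name TEXT)')
con.execute("INSERT INTO categories VALUES (1, 'News')")

_cache = {}

def category_name(cat_id):
    """Query the DB on the first call, answer from memory afterwards."""
    if cat_id not in _cache:
        row = con.execute('SELECT name FROM categories WHERE id = ?',
                          (cat_id,)).fetchone()
        _cache[cat_id] = row[0] if row else None
    return _cache[cat_id]

print(category_name(1))   # hits the database
print(category_name(1))   # served from the cache, no query at all
```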

If you're using CGI scripts, the slowness might just be the startup overhead of your environment; FastCGI could be the solution.
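
For illustration, here's roughly what that looks like in Python with the third-party flup package (an assumption on my part; the thread doesn't say what language the site is written in). The process stays resident, so the per-request interpreter startup cost of plain CGI disappears:

```python
# Serve a minimal WSGI app over FastCGI using flup (pip install flup).
from flup.server.fcgi import WSGIServer

def app(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'Hello from a persistent FastCGI process\n']

if __name__ == '__main__':
    WSGIServer(app).run()   # loops handling requests; no restart per hit
```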

The solution might just be something as simple as modifying the webserver configuration to allow more (or fewer?) simultaneous requests. Similarly, there may be some tweak you could make to the DB config to speed things up.

BTW - 48 tables is nothing. I've worked on systems with thousands of tables/views and dozens of gigabytes of data.
 
The best way to speed something up is to figure out what's slow. Nobody here can give you a sensible answer until you explain your architecture, share what you've measured (and what you haven't), and so on. The only other viable alternative is to enumerate suggestions for things to go and measure, which amoeba has already done ...
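
In that spirit, here's a minimal sketch of where to start measuring, using only Python's standard library (the URL and the handler are placeholders for your real site and code):

```python
# Step 1: time a whole page fetch to see end-to-end latency.
# Step 2: profile the server-side code to see where the time actually goes.
import cProfile
import time
import urllib.request

t0 = time.perf_counter()
urllib.request.urlopen('http://example.com/').read()   # placeholder URL
print('full page fetch:', round(time.perf_counter() - t0, 3), 'seconds')

def page_handler():
    # Stand-in for your real request handler / page-rendering code.
    return sum(i * i for i in range(10 ** 6))

cProfile.run('page_handler()')   # prints a per-function time breakdown
```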
 