MySQL server caching

bubbles

I have an Apache server and a separate MySQL server. Quite often 1000 people request the same thing at once, and the server slows down when it sends the data to the Apache server (over the internal gigabit network). query_cache is turned off. From what I read, it should probably be turned ON to fix this.

The problem is that the tables are constantly changing, with new inserts/updates all the time. Would caching work with that, or would it serve stale data?
 
http://dev.mysql.com/doc/refman/5.1/en/query-cache.html
The query cache can be useful in an environment where you have tables that do not change very often and for which the server receives many identical queries

What is your traffic pattern like? As in, what is the relationship between updates and selects? Even if you can cache 1 out of 5 queries, that still represents a significant savings.

I'd take some benchmarks now, then implement query caching, and take a second set of benchmarks. It's really the only way you're going to know.
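For reference, enabling the query cache for the benchmark is just a my.cnf change. The values below are illustrative, not tuned for this workload:

```ini
[mysqld]
query_cache_type  = 1     # ON: cache eligible SELECT results
query_cache_size  = 64M   # total memory reserved for the cache
query_cache_limit = 1M    # skip results larger than this
```

After the cache has been on for a while, `SHOW STATUS LIKE 'Qcache%'` shows hits, inserts, and prunes, which tells you whether the constant updates are invalidating entries faster than they can be reused.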
 

It updates about every 5-30 secs, I guess, and there is a sort of polling system where the clients check every few seconds for changes; if there is a change, it does more queries and sends the new info back. The problem is that when it sends the updates, it seems to bottleneck. CPU usage also goes up to 80-90% when it happens, but the load average is still low, so I don't think the CPU is the bottleneck.
 
Can you post your machine specs and your my.cnf file?

Have you tried running mysqltuner.pl?

Can you use memcache on your Apache server? That's a huuuuge boost. Huge. If you can use it, the requests will never even go to your DB server.
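The memcached idea is the classic cache-aside pattern: check the cache, and only go to MySQL on a miss. A minimal Python sketch of the idea; `FakeMemcache` is an in-memory stand-in for a real memcached client (e.g. pylibmc), and `query_db` stands in for the real MySQL round trip:

```python
import json
import time

class FakeMemcache:
    """In-memory stand-in for a memcached client (get/set with TTL)."""
    def __init__(self):
        self._store = {}

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires = entry
        if expires is not None and time.time() > expires:
            del self._store[key]
            return None
        return value

    def set(self, key, value, ttl=0):
        expires = time.time() + ttl if ttl else None
        self._store[key] = (value, expires)

mc = FakeMemcache()

def query_db(sql):
    # Placeholder for the real MySQL query.
    return [{"id": 1, "title": "hello"}]

def cached_query(sql, ttl=5):
    """Cache-aside: serve from the cache, fall back to MySQL on a miss."""
    key = "sql:" + str(hash(sql))
    hit = mc.get(key)
    if hit is not None:
        return json.loads(hit)      # cache hit: no DB round trip at all
    rows = query_db(sql)
    mc.set(key, json.dumps(rows), ttl=ttl)
    return rows
```

With a short TTL, 1000 identical requests in the same window collapse into one DB query; the other 999 are served straight from the web tier.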
 
What does your query look like? (Show the generated SQL) If you are not running a stored proc, can it be turned into one?

Is this a 3rd party product/schema, or your (or someone else's) homegrown site and DB?

Aside from the memcache suggestion, is there anything that can be statically cached in the website code during spin-up to reduce redundant chatter?
 
As far as I can tell, MySQL caching stinks. You can move to a real RDBMS like Oracle or SQL Server, or you can modify your application to do the caching in the application server layer or with memcached or somewhere else. Polling a database for changes is a pretty bad design pattern, so it sounds like your application could use some refactoring anyway.

You're spiking to 90% CPU, but you don't think you're CPU-bound. Why is that?
 
Because, as you said, his code needs refactoring. These are spikes and not continuous high loads, and most likely a good refactoring would cure the problem.
Polling a DB is not necessarily a bad design pattern, depending upon many factors we don't know. Maybe an external process is taking care of a particular update. What could be done is to cache the DB polling query and execute it as seldom as possible. One query per few seconds would be nothing and I doubt he is executing it more often.
 
That the code needs refactoring isn't an indication of a CPU-bound process. That the load isn't continuous doesn't exclude a diagnosis of a CPU-bound process.

Since we don't know the query, we don't know if one query per few seconds is something or nothing -- a poorly written query against a terribly implemented database can take minutes or hours to run, and multiple instances of that query might end up running at the same time if the polling frequency is too high.
 
I think the problem might have to do with thread creation. When there is something new, the number of running threads jumps from about 3 to 500. thread_cache_size was set at 16; I put it up to 50 to see what happens. I guess it probably needs to be higher? It's using persistent connections.

Normal:

load average: 0.39, 0.59, 0.61 mysqld 5.1.67 up 7 day(s), 1:54 hrs
845 threads: 2 running, 50 cached. Queries/slow: 97/0 Cache Hit: 100.00%

Something new in the db:

load average: 0.70, 0.65, 0.63 mysqld 5.1.67 up 7 day(s), 1:55 hrs
948 threads: 575 running, 0 cached. Queries/slow: 113/0 Cache Hit: 100.00%
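If thread churn is the culprit, the relevant my.cnf knobs look something like this (illustrative values, to be benchmarked, not prescriptions):

```ini
[mysqld]
thread_cache_size = 128    # keep finished threads around for reuse
max_connections   = 1500   # must cover Apache's MaxClients plus headroom
```

`SHOW STATUS LIKE 'Threads_created'` is the thing to watch: if it keeps climbing during the spikes, the cache is too small and every burst is paying thread-creation cost.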
 

Polling is pretty much the only way to do it with Apache, and it works great during off-peak hours. It's just during the highest peak time, when there is something new that all the clients need, that the CPU spikes happen. When everyone is just polling, the load is very low and everything is snappy as hell.
 
Huh? Polling is the only way to do what? Event-driven programming is always an alternative.
 
I don't think Apache can do Comet the proper way, so you are limited to polling. I will upgrade to lighttpd at some point, but I still need to have Apache working until then.
 
To me the obvious way of doing this is to cache it in the code running on Apache, using something like memcached or Redis.

The way I do this in PHP is (very simplified):

- Client requests page
- Check memcached for the key
- If it doesn't exist, query MySQL for the data and save it in memcached with a key and a tag, e.g. photos
- Serve the page up

This way, when you upload a new photo...
Photo data is inserted into the table
Clear all memcached entries with the previous tag

The next user to request the page will trigger the 'If it doesn't exist, query MySQL...' step again.
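The tag step above can be implemented with a version number folded into the key, so "clearing" a tag is one counter bump instead of tracking every key. A minimal Python sketch of the idea; `TaggedCache` is a hypothetical helper, not a library API, and a real setup would sit on top of a memcached or Redis client:

```python
class TaggedCache:
    """Tag-based invalidation: the tag's version is part of every key."""
    def __init__(self):
        self._store = {}
        self._versions = {}

    def _key(self, tag, key):
        v = self._versions.get(tag, 0)
        return f"{tag}:{v}:{key}"

    def get(self, tag, key):
        return self._store.get(self._key(tag, key))

    def set(self, tag, key, value):
        self._store[self._key(tag, key)] = value

    def invalidate(self, tag):
        # Bumping the version orphans every old entry under this tag;
        # the stale entries simply expire instead of being deleted.
        self._versions[tag] = self._versions.get(tag, 0) + 1

cache = TaggedCache()
cache.set("photos", "page1", ["a.jpg", "b.jpg"])
```

After `cache.invalidate("photos")`, every lookup under the photos tag misses and falls through to MySQL once, exactly as in the flow above.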

I tried to use mySQL caching once, gave up on it as it's **** in my opinion!
 
I think I narrowed it down a bit more. max_connections on MySQL is set to 1500, but Apache's MaxClients was only 1000; that's why it would only hover around 1000 threads and slow down a lot. However, when I raise Apache's MaxClients I get 'can't create new thread' errors, and then I have to killall apache because it won't even shut down.

They are on separate servers, so I wonder if it runs out of sockets or something?

I raised the max processes and max open files limits for the mysql user on the OS.
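For completeness, the OS limits that usually matter here are open files, processes, and the ephemeral port range. A sketch of the relevant settings (illustrative values; the exact files and mechanisms vary by distro):

```
# /etc/security/limits.conf -- per-user limits for the mysql user
mysql  soft  nofile  65535
mysql  hard  nofile  65535
mysql  soft  nproc   16384
mysql  hard  nproc   16384

# /etc/sysctl.conf -- widen the ephemeral port range in case the
# web box runs out of outbound sockets to the DB server
net.ipv4.ip_local_port_range = 15000 65000
```

You can check the current values with `ulimit -n` and `ulimit -u` as the mysql user, and `cat /proc/sys/net/ipv4/ip_local_port_range` for the port range.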
 
There is probably a problem with your configuration if your web server is spawning 1k threads.
 