SSD High Load

turbo2ltr

Member
Just moved to an SSD server and it's significantly faster.

I monitor load on my servers, and all of a sudden the load went through the roof. Page response was very slow. But nothing is going on! What's up with that?

Code:
top - 13:46:58 up 5 days, 18:02,  1 user,  load average: 5.38, 3.79, 1.94
Tasks:  44 total,   1 running,  43 sleeping,   0 stopped,   0 zombie
Cpu(s):  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:   5242880k total,  4562012k used,   680868k free,        0k buffers
Swap:        0k total,        0k used,        0k free,  4019812k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
  962 mysql     20   0  497m 302m 4456 S  0.3  5.9  27:12.32 mysqld
    1 root      20   0  2900  932  752 S  0.0  0.0   0:00.49 init
    2 root      20   0     0    0    0 S  0.0  0.0   0:00.00 kthreadd/12258
    3 root      20   0     0    0    0 S  0.0  0.0   0:00.00 khelper/12258
  134 root      16  -4  2464  504  260 S  0.0  0.0   0:00.00 udevd
  570 root      20   0 37000 1656  744 S  0.0  0.0   0:02.49 rsyslogd
  583 named     20   0  331m  41m 1776 S  0.0  0.8   1:01.44 named
  680 root      20   0  8940  904  400 S  0.0  0.0   0:00.07 sshd
  687 root      20   0  3264  696  508 S  0.0  0.0   0:00.00 xinetd
  697 root      20   0  6268 1232 1024 S  0.0  0.0   0:00.02 mysqld_safe
1025 root      20   0 69076  18m 6504 S  0.0  0.4   0:54.77 httpd
1031 root      20   0  9024 1452  996 S  0.0  0.0   0:00.63 pure-ftpd
1033 root      20   0  9856 1100  772 S  0.0  0.0   0:00.54 pure-authd
1040 root      20   0  7184 1176  568 S  0.0  0.0   0:05.65 crond
1050 root      20   0  2988  504  360 S  0.0  0.0   0:00.00 atd
1175 root      20   0 18384 9792 1788 S  0.0  0.2   0:13.10 cpsrvd-ssl
6693 root      20   0 13768 7676 1080 S  0.0  0.1   0:11.05 lfd - sleeping
11105 root      20   0 11856 3228 2512 S  0.0  0.1   0:00.01 sshd
11119 cffdev    20   0 11992 1572  836 S  0.0  0.0   0:00.02 sshd
11133 cffdev    20   0  6404 1688 1408 S  0.0  0.0   0:00.01 bash
11175 cffdev    20   0  2568 1088  880 R  0.0  0.0   0:00.20 top
11325 root      20   0 12624 6912 2548 S  0.0  0.1   0:00.14 leechprotect
11328 nobody    20   0 75612  24m 5632 S  0.0  0.5   0:00.27 httpd
11331 nobody    20   0 69476  17m 4468 S  0.0  0.3   0:00.04 httpd
11346 nobody    20   0 69476  17m 4476 S  0.0  0.3   0:00.10 httpd
11347 nobody    20   0 80160  34m  11m S  0.0  0.7   0:02.14 httpd
11349 nobody    20   0 80228  38m  15m S  0.0  0.8   0:02.36 httpd
11351 nobody    20   0 77872  26m 6028 S  0.0  0.5   0:00.21 httpd
11354 nobody    20   0 77080  32m  12m S  0.0  0.6   0:01.71 httpd
11355 nobody    20   0 69476  17m 4480 D  0.0  0.3   0:00.08 httpd
11389 nobody    20   0 69476  16m 4280 S  0.0  0.3   0:00.03 httpd
11390 nobody    20   0 69476  16m 4280 D  0.0  0.3   0:00.02 httpd
22531 root      20   0  3096  948  712 S  0.0  0.0   0:00.59 dovecot
22533 dovenull  20   0  7220 2200 1584 S  0.0  0.0   0:00.04 pop3-login
22534 dovenull  20   0  7364 2452 1768 S  0.0  0.0   0:00.41 imap-login
22535 dovecot   20   0  2952  852  708 S  0.0  0.0   0:00.23 anvil
22536 root      20   0  3080  996  724 S  0.0  0.0   0:00.24 log
22538 dovenull  20   0  7228 2188 1584 S  0.0  0.0   0:00.03 pop3-login
22539 root      20   0  3760 1608  848 S  0.0  0.0   0:00.53 config
22540 dovenull  20   0  7224 2408 1768 S  0.0  0.0   0:00.24 imap-login
22568 mailnull  20   0 11668 1152  584 S  0.0  0.0   0:00.86 exim
25122 root      20   0  6980 3980  956 S  0.0  0.1   0:06.70 queueprocd - wa
25191 root      20   0 14500 7464 1108 S  0.0  0.1   0:40.97 tailwatchd
25201 root      38  18  5196 2080  696 S  0.0  0.0   0:00.36 cpanellogd - sl

As I write this, the load is coming down. I wish VPSs were more sandboxed as far as hardware resources go; load on other VPSs shouldn't affect my VPS when it's sitting idle. But I guess there's only so much you can sandbox on shared hardware. Maybe I see a dedicated server in my future...
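
For anyone curious, the monitoring side of this doesn't have to be fancy. Here's a minimal sketch of a cron-driven load check that can grab a snapshot while a spike is still happening; the threshold and alert address are placeholders, not anything specific to this host, and mail assumes a working MTA on the box:

Code:
#!/bin/sh
# Minimal load-check sketch; THRESHOLD and ADDR are placeholders.
THRESHOLD=4
ADDR=admin@example.com

# The 1-minute load average is the first field of /proc/loadavg.
LOAD=$(cut -d' ' -f1 /proc/loadavg)

# Compare the integer part of the load against the threshold.
if [ "${LOAD%.*}" -ge "$THRESHOLD" ]; then
    # Mail the uptime line plus a process snapshot.
    { uptime; top -bn1 | head -n 20; } | mail -s "High load: $LOAD" "$ADDR"
fi

Dropped into cron every minute (* * * * * /root/loadcheck.sh, or wherever you keep it), that's enough to capture a top snapshot from inside the spike instead of after it.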
 
This definitely sounds odd, and the output you posted confirms your VPS is pretty much idle. Do you have a ticket number I can reference?
 
No, it's back down to its normal < 0.2. I'll watch it and start a ticket if it happens again. I'm on SSD14 if you want to keep an eye on it.

Thanks!
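
For anyone following along, "watching it" can be as simple as a timestamped load log; a one-liner like this (run it in screen or tmux so it survives logout) gives you something to attach to a ticket later:

Code:
# Append a timestamped load sample every 60 seconds.
while sleep 60; do echo "$(date '+%F %T') $(cat /proc/loadavg)"; done >> ~/load.log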
 