cPanel VPS Optimized

Status
Not open for further replies.
Pmcwebs,

Thanks again for the info! Yep, after a little time conversion here too, the times you list match up perfectly, as do our graphs. I'm thinking it's time for someone from higher up in KH to be directed to this thread, as the support guys and gals are clearly making stuff up as they go along. They're performing "fixes" to my VPS (when it clearly isn't the problem to begin with), and saying that they aren't seeing any spikes server-wide, and no elevated load server-wide at all.

Again, the one last night wasn't THAT big of a deal for me, because it happened here very late US time, but the one last week lasted almost 5 hours during prime time, fixed itself, then took another hit just a few hours later. There's obviously something going on with the server.


[Attachment: awswd.net_bryan_loadpics_1.jpg (1-minute load average graph)]

[Attachment: awswd.net_bryan_loadpics_5.jpg (5-minute load average graph)]

[Attachment: awswd.net_bryan_loadpics_15.jpg (15-minute load average graph)]
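(The 1-, 5-, and 15-minute graphs above correspond to the three load-average fields Linux exposes in /proc/loadavg. A minimal sketch of reading them — the sample line is made up, not from Bryan's server:)

```python
def parse_loadavg(text: str):
    """Return the 1-, 5-, and 15-minute load averages from a
    /proc/loadavg-style line as floats. The remaining fields
    (running/total processes, last PID) are ignored."""
    one, five, fifteen = text.split()[:3]
    return float(one), float(five), float(fifteen)

# Sample line in /proc/loadavg format (made-up values; on a live
# system you would read the real file instead):
sample = "0.03 0.10 0.08 1/123 4567"
print(parse_loadavg(sample))  # -> (0.03, 0.1, 0.08)
```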
 
Ahhhh, amazing coincidence once again. ;) Yep, that's exactly the one I'm talking about.

June 25th my time
[Attachment: awswd.net_bryan_loadpics_6_25.jpg (June 25 load graph)]

June 26th my time
[Attachment: awswd.net_bryan_loadpics_6_26.jpg (June 26 load graph)]
 
ppc - thanks for your email pointing to this thread.

Bryan, pmcwebs - I was aware of the end-of-June problem; it was related to an abusive account and was resolved once we traced the activity to the specific account/script running there and got it corrected. As for the recent spikes - I will check with the maintenance guys to find out whether this has been traced to specific activity yet. As of right now I can think of two general possibilities:
- Backups. There is a good chance that the same problem we recently had in TX has started to happen in LA. We're in the process of building a redesigned backup system in LA to match what we've built in TX, since the new backup structure in TX apparently resolved the problem on systems based on Tyan motherboards with the nVidia chipset. The current ETA for completion of the new backup system in LA is around 2-3 weeks;
- An abusive VPS/script. As I mentioned, I'll check with maintenance to see if any account has already been identified as responsible for the possible filesystem / fs journal lock. If it has, it will be moved away so the problem can be resolved outside of the production machine. If not, work will continue, and in the meantime we can move your VPSs somewhere else at least until the cause is identified and fixed. If you want to be moved, please feel free to PM me or submit a ticket and ask for it to be escalated to me.

pmcwebs - I'm quite confused as to how two of your VPSs ended up on the same physical machine. When we provision accounts we usually check where a customer's other VPSs are located so we can distribute their accounts across multiple physical machines for stability and uptime purposes. I believe this needs to be resolved: one of your VPSs should be moved to a different machine to avoid a situation where, say, both VPSs go down due to a hardware problem with a single machine. Please PM/ticket me with the IP of the VPS you'd like moved.

And in general - if you're having trouble and feel that support isn't delivering correct or expected solutions, please feel free to PM me on this forum with the problem description/ticket #, ask support to escalate the ticket to me, or just drop me an email at paul (at) knownhost.com.

US VPS hosting Services from KnownHost.com.
 
Paul,

Definitely appreciate the reply! Again, it's a little frustrating to have 5 to 6 hours of issues and see the Support guys on a wild goose chase, fixing "problems" that weren't problems to begin with.

What I guess I was more or less wondering with this thread (and thanks again pmcwebs for helping with the confirmation) is why Support didn't recognize that the problem was happening on a server-wide level. In the June instance, I just got fed up with Support after hours of tickets and hours of getting nowhere, and on the July 5th issue, I just gave up after I saw the first reply to my ticket. Apparently, there were quite a few people who were complaining about the issues, yet every person was told that there was nothing going on at the server level.

Anyway, thanks again Paul. I'll tough it out on this server and hopefully everything will get taken care of. I've been VERY happy with the server so far (heck...look at the other loads on those graphs...darn near 0.00 all day long :D). Certainly not trying to bust anyone's chops over this or get any of the Support guys/gals in trouble, but it would be great if they would actually research a problem completely first, before just randomly "fixing" things.

Still a very happy KH customer,

Bryan
 
I'll tough it out on this server

You do realize the move takes minutes, and there are no changes needed on your part whatsoever. It's all done outside of the VPS container. I'm just curious why you would want to stay on a server that's causing you issues. This is the beauty of Virtuozzo.
 
Thanks PPC, yeah I know. I've been really happy with this server (except for the 2 issues). Like I said above, my main problem isn't so much that the server went down for a bit (it happens...the uptime here has been absolutely incredible over the last almost 2 years), but more that the Support guys had no idea what they were even looking for, and just randomly went off trying to fix things that didn't need to be fixed. If they had diagnosed the problem appropriately, it might have been fixed a lot quicker.

Also, I did change servers quite some time ago (within the same datacenter...not the move from SJ to LA) and it required a change in IP addresses. Normally not a big deal, but we have some apps programmed to use specific IPs. It wouldn't be THAT big a deal, but it would still require some programming on my part.
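(One common way to soften that kind of dependency — a sketch with hypothetical names, not a description of Bryan's actual apps — is to key endpoints on hostnames in a single config, so an IP change after a server move only needs a DNS update rather than code changes:)

```python
# Hypothetical endpoint config: hostnames instead of literal IPs,
# so a migration that changes the server's IP only requires
# updating DNS, not every app that connects.
SERVICES = {
    "db": ("db.example.com", 3306),
    "mail": ("mail.example.com", 25),
}

def endpoint(name: str) -> str:
    """Build a host:port string from the config; the hostname is
    resolved at connect time, so no IP is ever hardcoded."""
    host, port = SERVICES[name]
    return f"{host}:{port}"

print(endpoint("db"))  # -> db.example.com:3306
```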
 
Bryan, there might have been a disconnect between the support and maintenance teams. Basically, support first tries to resolve the problem inside the specific VPS (which covers 99% or more of tickets) and then passes the problem to maintenance if it appears the issue might be outside the VPS. At the same time, if maintenance sees a global problem that can't be fixed or worked around right away, the support team gets notified about the possible issue with the specific machine so tickets can be handled appropriately. When that happens, the problem also gets escalated to me so I can create a thread on our forums. This process might not be the best, and I'll see what can be adjusted to make it better.

VPS migration within the same DC doesn't require an IP change. During the move to another machine, the customer will experience anywhere from a couple of seconds of network downtime (best case) while the memory dump, TCP connection state, and changed files are copied over to the destination server, to a couple of minutes of downtime (worst case) while the VPS is stopped, changed files are copied over, and the VPS is started on the destination machine. In most cases, even your SSH sessions won't drop during the migration and will remain active.
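(For the curious, the migration Paul describes matches OpenVZ/Virtuozzo live migration. A sketch of what the operator runs on the source node — the container ID and destination hostname here are assumptions for illustration, and the command needs root on a Virtuozzo node, so this sketch only prints it:)

```shell
# Hypothetical live migration of container 101 to a second
# hardware node. "vzmigrate --online" keeps the container running
# while memory state and changed files are synced, then switches
# over, which is why SSH sessions usually survive.
CTID=101                        # assumed container ID
DEST=hw-node2.example.com       # assumed destination node

# Illustration only: print the command rather than running it.
echo "vzmigrate --online ${DEST} ${CTID}"
```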
Could you please PM me the ticket # where you were forced to change IPs during a VPS migration to another machine within the same DC? I can't really imagine why this would be required unless you wanted a clean system, a system with a different control panel, or to switch from CentOS 4 to CentOS 5.
 
Could you please PM me the ticket # where you were forced to change IPs during a VPS migration to another machine within the same DC? I can't really imagine why this would be required unless you wanted a clean system, a system with a different control panel, or to switch from CentOS 4 to CentOS 5.

Now that I think back (and look through tickets and my welcome email), I believe I was thinking of our old host...with changing IPs when switching servers. I don't know...these years just blend together. That was my mistake. :eek:

Bryan, there might have been a disconnect between the support and maintenance teams. Basically, support first tries to resolve the problem inside the specific VPS (which covers 99% or more of tickets) and then passes the problem to maintenance if it appears the issue might be outside the VPS. At the same time, if maintenance sees a global problem that can't be fixed or worked around right away, the support team gets notified about the possible issue with the specific machine so tickets can be handled appropriately. When that happens, the problem also gets escalated to me so I can create a thread on our forums. This process might not be the best, and I'll see what can be adjusted to make it better.

Sounds good to me. Like I said, I've been EXTREMELY satisfied here. Anything to make things even better would just be an added bonus.

Thanks again Paul!
 