Blog Performance Improvements

As many “first responder” Scot’s Newsletter subscribers found out when the last newsletter went out on December 6, access to the database that underpins the Scot’s Newsletter Blog is limited to 50 simultaneous database calls by my webhost. Every shared webhost that I’ve come across has a similar MySQL access limit. So the limit was no surprise.

But I was not expecting so many readers to be unable to read any of the stories at all, or to have to wait “minutes” for pages to load, during the first couple of hours after the newsletter mailed. Normally, the blog is very fast, snappy even. So the problems that many people saw were temporary. But I experienced the slow performance myself. Even though the problem lasted only about two hours until peak demand wound down, the experience for the people who respond right away to the newsletter is unacceptable.

In case you’re wondering why this happened, there are some underlying technical issues that I’ll come back to shortly, but the primary cause is that the change to a blog focus means that the newsletter no longer contains whole articles; it describes articles and links to them on the blog site. Everyone who reads a blog post by clicking a link in the newsletter is making a call to the database. Scot’s Newsletter has never worked this way in the past. So it’s a learning experience.

Since the last newsletter mailed on December 6, I’ve been working on solutions to the problem — and I’ve made some headway. Here’s a more detailed explanation of the technical issues and the solutions I’ve put in place since my last post on the topic:

Problem 1: Out of the box, the WordPress software has manual caching options but they’re turned off by default and not very easy to use. Without any form of caching, every single request to read a story requires a direct call to the database. I wasn’t aware that there was no default caching at all. I had read up on a WordPress add-on called WP-Cache that some people have had issues with. When I first installed WordPress, the available version of WP-Cache was older than the version of WordPress I installed. And the WP-Cache site does not list the WordPress versions it supports. I decided the better part of valor was to keep it simple at the beginning and not install too many WordPress plug-ins. The result, of course, was a WordPress installation that relied on database calls for every display of the content.
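The difference a page cache makes can be sketched with a toy in-memory version (the names here are hypothetical; WP-Cache itself writes rendered pages to files on disk, but the principle is the same — repeat requests for the same story skip the database):

```python
import time

def fetch_from_database(slug):
    """Stand-in for an expensive MySQL query that renders a post."""
    return f"<html>post: {slug}</html>"

class PageCache:
    """Toy page cache with a time-to-live, loosely modeled on what
    a plug-in like WP-Cache does with files on disk."""
    def __init__(self, ttl_seconds=3600):
        self.ttl = ttl_seconds
        self.store = {}   # slug -> (rendered_page, cached_at)
        self.db_calls = 0

    def get_page(self, slug):
        entry = self.store.get(slug)
        if entry is not None:
            page, cached_at = entry
            if time.time() - cached_at < self.ttl:
                return page          # cache hit: no database call
        self.db_calls += 1           # cache miss: one database call
        page = fetch_from_database(slug)
        self.store[slug] = (page, time.time())
        return page

cache = PageCache()
for _ in range(50):                  # 50 readers request the same story
    cache.get_page("blog-performance-improvements")
print(cache.db_calls)                # 1 database call instead of 50
```

Without the cache, those 50 readers would be 50 simultaneous database calls — exactly the host's limit. With it, only the first reader touches the database until the cached copy expires.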

Problem 2: WordPress (in fact, most blog software) as well as Invision Power Board (and most forums software) require MySQL databases. For nearly five years, the Scot’s Newsletter Forums has had a large MySQL database. When I launched the blog I set up a second database for WordPress. What I didn’t know was that by setting up the database with the same user name that I use for the Forums, I was effectively forcing both databases to share the 50-concurrent-MySQL-database-calls limitation.

Fix 1: As it happens, WP-Cache was updated very shortly after I installed the blog. So after some more research on the built-in WordPress caching capabilities and WP-Cache, I opted to go with the plug-in. It installed cleanly and easily (although it could use more detailed installation instructions). I experienced none of the problems that some people reported after installation. And I can see from its control panel that it quickly indexed and cached the stories.

Fix 2: I also wanted to separate the connection limit on the forums database from the connection limit on the blog database. My thought was: Could I get a second account from my webhost and move one or the other to that account? Doing that would require me to change the URL of either the blog or the forums — for very good reasons on both sides, I didn’t want to make either change if I could avoid it.

It took several days of back and forth with IX Webhosting, my webhost, before the tech reps finally understood my problem and one specific rep (Alexander, who has been a great help in the past) finally understood all the aspects of what I was asking for and gave me a useful answer. The answer when it came was worth the several days of low-grade frustration that IX’s less-experienced tech-support reps put me through. All I had to do was create a new user name for the blog database and remove that database’s access from the shared user name originally created for the forums database — and each database would have its own separate 50-concurrent-connection limit. There was no added charge for this, I didn’t need a separate account, and most importantly, I would not be forced to make a very tough URL-change decision. It was also an easy thing to accomplish — less than 10 minutes after I read Alexander’s post in my trouble ticket, I had made the suggested change.
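The effect of splitting the user names can be sketched abstractly (the numbers and user names below are hypothetical; in real MySQL the cap is enforced per account via the host's `max_user_connections` setting):

```python
# Toy model of a per-user concurrent-connection cap. MySQL enforces
# the real thing per account; here a Counter stands in for the server.
from collections import Counter

LIMIT = 50  # the host's cap on simultaneous connections per user

open_connections = Counter()  # user name -> currently open connections

def try_connect(user):
    """Grant a connection only if this user is under its cap."""
    if open_connections[user] >= LIMIT:
        return False  # the "too many connections" error readers saw
    open_connections[user] += 1
    return True

# Before: forums and blog share one user name, so they share one cap.
for _ in range(30):
    try_connect("shared_user")   # 30 forum readers connect first
shared_blog_ok = sum(try_connect("shared_user") for _ in range(30))
print(shared_blog_ok)            # only 20 of 30 blog readers get in

# After: each database has its own user, so each gets its own cap.
open_connections.clear()
for _ in range(30):
    try_connect("forums_user")
split_blog_ok = sum(try_connect("blog_user") for _ in range(30))
print(split_blog_ok)             # all 30 blog readers get in
```

Same host, same per-user limit — but with separate user names the forums traffic can no longer starve the blog of connections, which is exactly what the one-line account change bought.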

My hope is that, with an effective increase in the upper limit on concurrent database connections (since the blog and forums no longer share one) and the addition of cache software that cuts down on database calls, the blog will hold up under the steep initial demand for stories during the first few hours after the newsletter mails.

It all sounds good, and the blog seems snappier to me. But we’re also going to need the proof before I dust off my hands. I’m currently thinking that the next newsletter will mail sometime the first full week of January, so I’ll be monitoring the situation next time to see whether the problems have been solved or whether additional measures are called for.

Notification List Subscribers: Let me know if you run into performance issues when trying to reach the blog during the first few hours after the next newsletter mails.

4 Responses to “Blog Performance Improvements”

  1. ruirib Says:

IMHO, the 50 concurrent connection limit is imposed by the host but it’s not that common. I find that there are no reasons for that, especially with a database system that costs… well nothing. There is no justification for that limit, pure and simple, except to force you to pay a few more bucks per month… It’s ridiculous.

  2. Scot Says:

    The last four hosts I’ve had have all had simultaneous-MySQL-database-call limits. They don’t put this spec on their site details pages, but they have them. One of those hosts kicked me off its service because I was exceeding the unpublished limit too frequently. It said it couldn’t handle the demand of the forums. Now, *that* was ridiculous.

    I disagree with you a bit, ruirib. I can tell you that at my day job, where among other things I’m editorially responsible for a large, high-volume website, we have heavy duty caching software in place to cut down on the number of simultaneous calls to our Oracle database. Because, otherwise, service would eventually crawl to a halt. You can’t just run everything real-time out of a database — especially during peak demand times.

    Where I agree with you is that 50 concurrent calls is a pathetically low number.

By the way, my webhost doesn’t have a more expensive service or offer an upgrade for a few dollars a month. I just got rid of a webhost that played those games. So I’m not saying you’re wrong. Many hosts do do that kind of stuff. But IX does not. Of course, IX ain’t perfect either. 😉

    — Scot

  3. cgilkison Says:

I can’t tell you how much time this post just saved me. With five WordPress databases under one user, I was constantly running into this problem, but no one at Dreamhost seemed to be able to tell me why. Now I actually look forward to the Monday surge to see if this fix works. Thanks!

  4. Scot Says:

    Glad to be of service. Not sure whether it was the cache tool or the webhost solution you were referring to, but I added a link to WP-Cache in the post in case that helps.

    It’s not clear to me that the concurrent user limitation will be something that will be universally helped by separate user names for the databases. But if that’s what you’re doing, I hope you’ll come back and let us know whether it worked.

    — Scot
