WordPress is arguably the most popular content hosting platform on the web, and many high-traffic websites have come to rely on the WordPress / LAMP stack. Various cloud-hosted solutions have stepped up to accommodate the infrastructure load and security implications of high-profile, e-commerce websites.
With Great Power Comes... Increased Risk
Many developers have built unnecessarily complex environments that work against the way LAMP-based websites were designed. And while it might seem like a good idea to break page caching out from the database and web servers, doing so creates multiple additional points of failure that can hinder performance and even produce more frequent downtime.
Terms like “Firewall” and “Load Balancer” are promoted as if, by default, they will somehow improve the performance of websites. There seems to be a belief that load balancing will magically allow infinite horizontal scaling of web applications. This *sounds* like it makes sense, except it ignores how WordPress is built. WordPress uses file-based session caching and cookies to store user sessions. This adds complexity to an environment where the “web server” is being replicated among load-balanced hosts. How are user sessions replicated? Are tmp folders broken out onto separate servers and mounted on each web server? How is the cache distributed?
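One answer, sketched here purely as an illustration, is to move PHP's file-based sessions into a shared store such as memcached so every load-balanced node sees the same session state. This assumes the php-memcached extension is installed; the host and port are placeholders:

```ini
; Illustrative php.ini fragment -- assumes the php-memcached extension;
; the host:port is a placeholder for your environment.
session.save_handler = memcached
session.save_path    = "sessions.internal:11211"
```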
When not-logged-in guests visit a WordPress site with even basic page caching, a conservatively spec'ed server can easily sustain a couple million unique visitors per month. This is because the load is pushed off to Apache serving static cached files, requiring little RAM and few CPU cycles. What makes the situation even better is that HTTP is stateless, so it makes little difference whether 100 or 1,000 people are browsing the site at the same time (not logged in, with page caching). There is little processing overhead from PHP and MySQL, and disk I/O remains minimal.
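For reference, the mod_rewrite approach used by plugins like WP Super Cache looks roughly like this (an illustrative .htaccess sketch, not the plugin's exact rules, with illustrative cache paths): anonymous GET requests are served a pre-rendered HTML file straight from disk, so PHP and MySQL never run.

```apacheconf
# Illustrative .htaccess sketch (not the exact WP Super Cache rules):
# serve a pre-rendered static file to anonymous GET requests.
RewriteEngine On
RewriteCond %{REQUEST_METHOD} GET
RewriteCond %{HTTP_COOKIE} !wordpress_logged_in [NC]
RewriteCond %{DOCUMENT_ROOT}/wp-content/cache%{REQUEST_URI}index.html -f
RewriteRule .* /wp-content/cache%{REQUEST_URI}index.html [L]
```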
With that said, you may still need DDoS protection, but services like Cloudflare or a decent IDS help with that, actively blocking IPs participating in attacks. Note: a firewall alone doesn't do anything for you in these cases.
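An IDS-style ban can be as simple as a fail2ban jail watching the Apache access log. Everything below (jail name, log path, thresholds) is an assumption to be adapted, and it presumes a matching filter definition exists:

```ini
# Illustrative fail2ban jail.local entry -- thresholds and paths are
# placeholders; a matching filter regex must be defined separately.
[wordpress-login]
enabled  = true
port     = http,https
filter   = wordpress-login
logpath  = /var/log/apache2/access.log
maxretry = 10
findtime = 60
bantime  = 3600
```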
Scale Down Complexity
The real action comes from logged-in visitors loading un-cached, unique pages. You begin to stress MySQL while racking up concurrent Apache processes as PHP renders un-cached pages. While it is possible to leverage query caching and PHP caching, all these solutions have their advantages and drawbacks. You can only cache so much rendered HTML when the content displayed per visitor is unique (although JavaScript with AJAX calls can help get around many of these issues).
At this point it *can* make sense to break out the webserver from the DB server. The goal would be to better accommodate RAM and CPU load incurred by Apache while relieving the primary webserver from I/O, CPU and RAM load caused, in turn, by high MySQL query traffic.
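Splitting the two is trivial from WordPress's side; a sketch of the wp-config.php change (the address is a placeholder for your database host):

```php
// Illustrative wp-config.php excerpt: point WordPress at a separate
// database server instead of localhost. The address is a placeholder.
define('DB_HOST', '10.0.0.21:3306');
```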
Even with that said, a simple two-web-server / one-database-server, round-robin load-balanced configuration can relieve most bottlenecks and accommodate large amounts of traffic with minimal resources.
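A minimal sketch of such a setup, assuming HAProxy in front of two web servers (hostnames, addresses, and the cookie-based stickiness for logged-in sessions are all illustrative):

```
frontend www
    bind *:80
    default_backend wp_web

backend wp_web
    balance roundrobin
    # Cookie-based stickiness keeps a logged-in visitor on one node,
    # sidestepping the session-replication problem described earlier.
    cookie SRV insert indirect nocache
    server web1 10.0.0.11:80 check cookie web1
    server web2 10.0.0.12:80 check cookie web2
```

Both web servers would then point at the single database server via DB_HOST in wp-config.php.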
No Easy Way Out
There is no "canned" solution! You still need to run tests on any environment and tweak the following resources, because each web application, combined with unpredictable traffic patterns, will impose unique load characteristics on the overall infrastructure. Some suggestions...
- RAM: Enough to accommodate high volume, un-cached traffic peaks.
- CPU: Enough to accommodate the PHP (mod_fcgid) and Apache processes incurred by concurrent traffic. Note: I recommend mod_fcgid because it runs visitor process threads under an account dedicated to the virtual host, preventing a potential infection from spreading to the rest of the server in case of a system-level hack through the website.
- Disk: Usually, lots of spindles (5+ in redundant RAID 5) suffice for the web server. Two SSDs mounted in RAID 1 on the same server (or on a separate database server) are more than sufficient for MySQL. Of course, in a pure-cloud environment, this terminology can be thrown out the window, as you might be dealing with "containers", "instances", "slices", "shared SAN", etc.
- MySQL configuration: This can ONLY be done properly with at least a couple weeks' load testing. A simple tool like MySQLtuner can provide all the information you need to configure table caches and various other critical variables which will allow you to optimize performance without inducing a resource-choked kernel panic.
- PHP configuration: Take care adjusting max process memory and the total number of concurrent fcgid threads.
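To make the tuning targets concrete, here is the kind of my.cnf excerpt MySQLtuner will steer you toward. Every value below is a placeholder to be validated against your own load tests, not a recommendation:

```ini
# Illustrative my.cnf excerpt -- placeholder values, to be tuned
# against MySQLtuner output and real load testing.
[mysqld]
innodb_buffer_pool_size = 2G     # keep the hot working set in RAM
table_open_cache        = 2000
max_connections         = 150    # cap so web traffic can't exhaust MySQL
```

On the PHP side, the analogous knobs are mod_fcgid's FcgidMaxProcesses and php.ini's memory_limit.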
The goal should be to provide the least amount of *configured* resources needed to get the job done, so that simple resource reallocation can quickly accommodate rapid growth. While growth overhead can be accommodated by maximizing hardware infrastructure, judiciously sparing resources through initially conservative configuration is critical to preventing sudden disasters from unforeseen traffic / load spikes.
Overbuild and under-configure. This leaves you room to quickly scale up before hitting a hardware wall, and if you build for HA (High Availability), DR (Disaster Recovery) is only a step away!
This article is updated regularly to fix errors and highlight new technology.