Scaling on Demand – Sulu as a Cloud-Ready CMS

Sulu is a cloud-ready CMS, making it ideal for applications that scale to suit demand at short notice. You can drop in your choice of database solution, asset storage, and many other components. Sulu stores all configuration strictly in code and not the database — so hosting your application in the cloud is more straightforward than with some other CMSs. In this guide, we walk you through the steps necessary to leverage cloud infrastructure and understand the difference between vertical and horizontal scaling.

Dealing with heavy and volatile traffic by scaling in the cloud

Whether you have consistently high traffic or experience spikes under exceptional circumstances, modern scaling strategies allow you to adapt your hosting infrastructure without your users or editors noticing any lag or downtime.

You can scale a Sulu application to solve or anticipate performance issues, with options ranging from simple virtualization to a more complex, horizontally scalable, containerized setup. Before you decide, you need to get a grasp of where performance bottlenecks are most likely to occur.

Identify the bottleneck before you begin

If you’re having performance problems with an existing application, or want to make sure it can react to future peaks in traffic, identify the bottleneck with tools like ApacheBench and Siege. Locust can help you in more complex scenarios. If you’re at the planning stage for a new application, you need to consider how much traffic you are expecting and how much it will fluctuate.

Parts of your application that can be the weakest link in the chain are:

  • The database - This is usually the bottleneck if you have database-intensive operations due to difficult-to-cache content.
    • In this case, use a cloud-hosted database service from a provider such as AWS, Google Cloud, or Microsoft Azure.
  • The server - If your content is difficult to cache and involves lots of personalization, or resource-intensive business logic such as custom PDF generation.
    • We go into more detail about approaches to scaling your server below, in particular using a technique called horizontal scaling.
  • The HTTP cache - Often the weakest link, and a common single point of failure when you have high traffic but cacheable content.
    • If necessary, the cache can also be scaled horizontally to speed up page loading times and take the unnecessary load off your server.

Here are two illustrative examples from our own work:

Example 1 - Allianz Cinema

  • Extremely high seasonal traffic
  • Content — mainly personalized tickets — largely uncacheable
  • Horizontally scaled server using Kubernetes


Example 2 - Küchengötter

  • Content highly cacheable using Varnish
  • Extremely high, but constant, traffic levels all year round
  • No need to scale performance
  • Simpler setup with just a few servers coordinated with Deployer

We’re here to help

If you would like help setting up these sorts of applications, join us in the Sulu Slack channel or check out the Sulu documentation. For a fast track to success, find out about our workshops, consulting, and coding services.


The difference between horizontal and vertical server scaling

Traditionally, if an application was pushing the limits of the server’s performance, the only option was to add RAM and CPU to give it a boost. This approach is known as “vertical” scaling and is still a viable option for some applications. But there is a practical limit to how much RAM and CPU you can put in one server. Upgrading the server also requires you to reboot, causing unwanted downtime.

“Horizontal” scaling is a more recent approach. Instead of upgrading the existing server, you add more servers as you need them, spreading the load and eliminating downtime. One possible solution is to encapsulate your server configuration and application code in a Docker container and use Kubernetes, or another orchestration tool, to manage your container setup. You can then spin up fresh copies of your container and add them to a network of servers that share the load.

Distributed state is one of the biggest challenges you will need to address. Unless you use sticky sessions (which might make the load balancer the new bottleneck), the containers are completely stateless. This could lead to consecutive user requests being sent to different servers, which wouldn’t know the user had already visited the application. The lack of consistency — think login state, losing shopping basket contents, or missing comments, for example — would deliver a terrible user experience. We explain how to solve this below.

  • If you have user-specific content and irregular traffic, you could benefit from horizontal scaling. It enables you to increase performance when you need it, even if only for a few hours or days.
  • Building a horizontally scalable solution from the beginning is advisable if you’re expecting substantial growth in traffic over time.
  • You can run a site on a few servers without a complex Kubernetes setup.
  • Are you still unsure? You can start with a single server for a faster time-to-market and later convert your setup to horizontal scaling. Be aware that it will be more work overall if you make the change later. You need to balance how likely you are to need horizontal scaling with the additional upfront cost.

Building a containerized, i.e., horizontally scalable, setup requires up-front investment — mostly the time it takes to learn how to do it — but it pays off in the long run because it gives you much more flexibility to scale up and down as necessary.
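
To make "scale up and down as necessary" concrete: in a Kubernetes-based setup (covered in more detail below), scaling usually means adjusting the number of replicas, either manually or automatically with a HorizontalPodAutoscaler. The following is only an illustrative sketch; it assumes a Deployment named sulu-app like the one shown further down, the thresholds are placeholders you would tune to your own traffic, and the autoscaling/v2 API requires a reasonably recent cluster (older clusters use autoscaling/v2beta2).

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: sulu-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: sulu-app          # assumed name of your application Deployment
  minReplicas: 2            # never drop below two instances
  maxReplicas: 10           # cap the scale-out during traffic spikes
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add replicas once average CPU usage passes 70%
Horizontal Pod Autoscaler (illustrative)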

What a containerized setup looks like

Sessions must be synchronized between servers, not stored locally. Usually, when a user loads your website, the request is routed to a single server. In a distributed setup, the request can be handled by any one of many identical instances. A user might communicate with several containers in a single session, so the containers need to offload the handling of these sessions to a central resource.

Add servers to improve performance. You need a blueprint that gets cloned for each virtual server you add — for example, a Docker container with the code for your Sulu application.

Automate container synchronization. Differences between containers can cause your application to break, or confuse users by serving different versions of the site during a single visit. Intentional differences, such as during a Blue-Green Deployment, are a potential exception.

Sulu supports containerization and makes it technically straightforward in two ways. Configuration and code are stored in files, not in the database, so new containers can be exact copies of your application. And Sulu prevents editors and users from adding or configuring extensions through the browser, so individual instances can't be changed and put out of sync.

Use Kubernetes for container orchestration, Deployer for simpler setups

If you have just a few machines and don’t expect much future growth, you can use Deployer, which essentially runs scripts to execute commands such as deploying code, restarting the server, clearing the cache, and so on.

As soon as you have a more complex setup, we recommend Kubernetes, the industry standard for orchestrating Docker containers in a horizontal configuration; a sample Deployment manifest follows the Deployer configuration below.

<?php

namespace Deployer;

require 'vendor/deployer/deployer/recipe/sulu2.php';

// Servers
host('web01.sulu.io')
    ->user('deploy')
    ->set('deploy_path', '/var/www/sulu.io')
    ->stage('prod');

host('web02.sulu.io')
    ->user('deploy')
    ->set('deploy_path', '/var/www/sulu.io')
    ->stage('prod');

// Configuration
set('repository', 'git@github.com:sulu/sulu-demo.git');
set('composer_options', '{{composer_action}} --verbose --prefer-dist --no-progress --no-interaction --no-dev --optimize-autoloader --apcu-autoloader --no-scripts');

// If the deploy fails, automatically unlock.
after('deploy:failed', 'deploy:unlock');
Deployer Configuration
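
For the Kubernetes route, the starting point is a Deployment manifest that tells the cluster how many identical copies of your container to run. Here is a minimal sketch; the image name, port, and replica count are placeholders, and a real setup would also need a Service and Ingress (or a load balancer) in front of the pods.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sulu-app
spec:
  replicas: 3                     # run three identical instances of the application
  selector:
    matchLabels:
      app: sulu-app
  template:
    metadata:
      labels:
        app: sulu-app
    spec:
      containers:
        - name: sulu
          image: registry.example.com/sulu-app:1.0.0   # placeholder: your own application image
          ports:
            - containerPort: 8080                      # placeholder: the port your web server listens on
Kubernetes Deployment (illustrative)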

Limitations of horizontal scaling

Not a silver bullet. Horizontal scaling applies only to your application servers. The relational database and assets remain centralized — both represent potential single points of failure and require special attention when updating and upgrading code.

Lots of work upfront. You need to make sure the benefits warrant the extra investment in the long run.

Next steps for a containerized setup

Sulu itself is entirely scalable. You need to centralize many parts of the application to keep your containers in sync.

First and foremost, session management must not be handled locally. The same goes for content stored in the database and for assets such as images and videos. You need to offload these to third-party cloud services so that each container has access to the same content. We're going to walk you through the major steps.

Choose your hosting – Platform as a Service or Managed Cloud

Unlike SaaS cloud CMSs, with Sulu you own the code. You can make the modifications you need to reflect your business logic and data structures. You do have to choose a hosting provider to host your application online — usually either a platform as a service (PaaS) or a more customized solution.

Platform as a service (PaaS). These days, many people choose a convenient PaaS solution because it makes their life easier and enables developers to work on code, not the finer points (or the grunt work) of infrastructure. Be sure the provider you choose has servers located in or near the physical locations of your users.

  • Suggested providers:
    • DDEV
    • Platform.sh
    • Heroku

Cloud hosting providers. In some situations, you might need to tailor your hosting to specific requirements; in that case, choose a cloud hosting provider and manage more of the setup yourself. You will need to do more work setting up your deployment processes, but you'll have more flexibility and freedom of choice, as mentioned above.

  • Suggested providers:
    • Amazon EC2
    • Google Cloud
    • Microsoft Azure
    • DigitalOcean

Set up caching

Session handling. With Sulu running in several containers, you can't know which container will service a given request from the browser, so centralize your session management. That way, any server will be able to pick up where the last one left off if a user interacts with more than one during a single session.

  • Suggested tool: Redis is a powerful but relatively simple key-value store (see the configuration sketch below)
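
Because Sulu is built on Symfony, one way to centralize sessions is Symfony's RedisSessionHandler. The following is a minimal sketch; it assumes the phpredis extension is installed and that REDIS_HOST and REDIS_PORT environment variables point at your shared Redis instance.

# config/services.yaml
services:
    Redis:
        class: Redis
        calls:
            - [connect, ['%env(REDIS_HOST)%', '%env(int:REDIS_PORT)%']]

    Symfony\Component\HttpFoundation\Session\Storage\Handler\RedisSessionHandler:
        arguments:
            - '@Redis'

# config/packages/framework.yaml
framework:
    session:
        handler_id: Symfony\Component\HttpFoundation\Session\Storage\Handler\RedisSessionHandler
Redis Session Handler (illustrative)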

HTTP caching. Varnish is the most popular tool for HTTP caching and can also run in a distributed setup. Getting caching right is critical: it not only saves users time but also protects your application from unnecessary load. If you usually rely on Symfony's basic HTTP caching, be aware that it won't work in a distributed setup because it caches pages locally on each instance, so use Varnish instead.
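
Sulu ships with an HTTP cache integration built on FOSHttpCache that can talk to a Varnish instance and invalidate stale pages for you. As a rough sketch of what the configuration can look like (the exact keys depend on your Sulu version, and the server address is a placeholder):

# config/packages/sulu_http_cache.yaml
sulu_http_cache:
    cache:
        max_age: 240
        shared_max_age: 240
    proxy_client:
        varnish:
            enabled: true
            servers:
                - '%env(VARNISH_SERVER)%'   # e.g. the address of your Varnish service, such as "varnish:80"
HTTP Cache Configuration (illustrative)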

Store your assets centrally

File system. Don't store anything locally, especially media assets like images and videos. What exactly needs to be offloaded varies with the specifics of the application: the Allianz Cinema website, for example, lets editors generate a list of invoices and download it from their cloud storage. Sulu integrates with most cloud-based storage solutions, for example Amazon S3, Google Cloud Storage, and Azure Blob Storage.

composer require league/flysystem league/flysystem-aws-s3-v3
Install Dependencies
sulu_media:
    storage: s3
    storages:
        s3:
            key: 'your aws s3 key'
            secret: 'your aws s3 secret'
            bucket_name: 'your aws s3 bucket name'
            path_prefix: 'optional path prefix'
            region: 'eu-west-1'
Storage Configuration

Database. Similarly, the database needs to be offloaded to the cloud for all instances to access. 

Again, Sulu is very flexible in supporting cloud-based providers such as Amazon, Google, and Azure. You need to make sure the database is powerful enough for your requirements, as it's a potential single point of failure, especially if your site needs to write to the database a lot. 
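
In practice, switching to a managed database is mostly a matter of pointing the connection string at the managed instance; the Doctrine configuration itself stays identical in every container. A minimal sketch (host name and credentials are placeholders):

# config/packages/doctrine.yaml
doctrine:
    dbal:
        # Every container reads the same DATABASE_URL, set per environment, e.g.:
        # DATABASE_URL="mysql://sulu:secret@your-managed-db.example.com:3306/sulu?serverVersion=8.0"
        url: '%env(resolve:DATABASE_URL)%'
Database Configuration (illustrative)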

Bear in mind that updates can be complicated, especially if you're looking to avoid downtime. "Rolling" code updates can help, taking one server down at a time without users noticing. However, if an update requires changes to the database schema, it can break the older versions of the code that are still online during the process. One advantage of managed services is that they can deal with backups and scalability for you.
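
In a Kubernetes setup, rolling code updates are the default behaviour of a Deployment, and you can tune how many replicas may be taken offline or added at once. The values below are illustrative and belong in the Deployment's spec (note that this only covers the code rollout; schema changes still require backward-compatible migrations, as described above).

# excerpt: rollout strategy inside the Deployment spec
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # take at most one replica offline at a time
      maxSurge: 1         # allow one extra replica while the new version rolls out
Rolling Update Strategy (illustrative)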

Why not give it a try?

Devs are chomping at the bit to containerize their applications. It’s not always necessary, but it opens up more possibilities in the future. When used appropriately, horizontal scaling saves you money by allowing you to scale up resources seamlessly when you need them. It avoids lost revenue through server outages during peak times when all eyes are on your site. 

If you’re ready to take the next steps and need more details, join us in the Sulu Slack channel or check out the Sulu documentation. For a fast track to success, find out about our workshops, consulting, and coding services.

Last updated: 6 January 2021
