Serverless Website Deployments with AWS Fargate and RDS

Many people host their sites on dedicated servers or virtual hosts. This can be one of the fastest and easiest ways to get started, and it’s the setup most software assumes: WordPress, MediaWiki, phpBB, etc. Despite that, it may not be the easiest way to run a site in the long run. You have to worry about site security, software upgrades, problems with your host, sudden increases in traffic, backups, and many other issues. Doing it all yourself can cause headaches, and things like software upgrades or backups often get left by the wayside. If you can afford it, cloud deployments, especially serverless ones, can be one of the most reliable and least stressful ways to handle all of this, but it takes some initial footwork.

For my own sites, I’ve deployed using Docker and serverless technologies several times now, and this site is deployed and hosted the same way. Unfortunately, there’s not a ton of great documentation for such a common and practical use case, so I’m writing this article as a simple breakdown and tutorial for moving from single-box hosting to cloud deployments in a way that scales with your traffic. First, let’s take a look at the overall architecture for what we want to build:

Serverless site architecture using AWS services

From this we can see there are a few major components: the database in RDS, the containers in Fargate, and the internet-connected load balancer. Traffic comes in through the load balancer and is distributed to one of multiple containers deployed in Amazon ECS using Fargate. When needed, these containers communicate with a backend database in Amazon RDS. While I’m not going to cover it here, one really common modification to this architecture is adding some extra, non-database storage. While you can use volumes for this purpose, I usually use Amazon S3 because it’s stateless and architecture-agnostic, which results in an overall architecture like this:

Often the configuration for storing and retrieving objects in S3 is done at the application level — within the containers. As a result, it’s a little different for each type of application, which is why I’m not going to cover it in depth here. Just know that it exists and is something you usually want to use if you’re containerizing and scaling your site.

Amazon RDS

This could be an Aurora Serverless database or it could be your average MySQL/Aurora MySQL instance. The protocol for communication from your container instances doesn’t change, so you don’t have to worry about this part too much; how you handle the database is up to you. Migrating an existing database into RDS is a somewhat complicated subject, though, so I’ll likely write a full tutorial for it at a later time. The main takeaway is that you need to migrate your database from your current server to RDS somehow in order to support this architecture. A common method, if it’s MySQL-based, is to dump to a file, transfer that file to a temporary EC2 instance, and then connect from there and restore it into RDS, as sketched below. There is also a migration service that AWS offers, but I personally haven’t had a lot of luck with it. Once you’ve finished the migration, you can move on to moving your application itself into AWS.
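For the dump-and-restore route, a minimal sketch looks like the following; every hostname, username, and database name here is a placeholder, so substitute your own:

# On your current server: dump the database to a file
mysqldump -u youruser -p --single-transaction yourdb > yourdb.sql

# Copy the dump to a temporary EC2 instance in the same VPC as your RDS instance
scp -i path/to/your/key.pem yourdb.sql ec2-user@your-ec2-host:~/

# From that EC2 instance: restore the dump into RDS
# (create the database first if it doesn't exist yet)
mysql -h yourdb.xxxxxxxxxxxx.us-east-2.rds.amazonaws.com -u admin -p yourdb < yourdb.sql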

Elastic Container Service and Fargate

Getting your site running on Fargate is the bulk of the work. This requires you to do two things: first, you need to Dockerize your application. Having done this myself, believe me when I say: you generally do not want to be the one to do this. Depending on the services involved, it can be really tricky to do correctly and securely. As a result, whenever possible, I tend to use Bitnami’s Docker containers. You may wonder why I would recommend their containers over something provided by the company behind whatever application you are using. A good example is the official MediaWiki Docker container, which up until last year or so explicitly said it wasn’t intended to be used for production. Even now they imply that it is intended to be extended from rather than used directly. Conversely, Bitnami’s containers are deployment-ready, usually quite configurable, and have a good number of other people using them, which helps catch issues.

Once you’ve Dockerized your application, the second thing is to push it to the Elastic Container Registry (ECR), set up an Elastic Container Service (ECS) cluster, and set up a Service to deploy your container. This part is honestly just a lot of configuration and clicking through menus. The Load Balancer is usually set up alongside the Service, so once you’ve finished this portion, you’re usually able to just point your DNS at the Load Balancer’s public domain and you’re done, though there could be some additional things you need to fix as well, depending on the application.

With that said, let’s get into the specifics.

Deploying a Temporary EC2 Instance

Once you’ve found the base Docker image you want to use, you need to configure it, build it, test it, and then push it to ECR. The easiest way to do this is to boot up a minimalistic Amazon Linux 2 AMI instance and run the container from there. First, select whatever region you migrated your database to, if any:

Here I’ve selected us-east-2

Next, we need to choose the AMI.

This AMI is standard at the moment.

Next is the instance type. I usually try to keep costs as low as possible while configuring and testing my Dockerfile, build scripts, and image, so I tend to go with t2.nano, but you can choose whatever you need here. If you have a heavier-weight application, you can select something larger. If it turns out that you need something beefier later, you can always stop the instance, change the type, and then start it again without losing any data.

For the next part, there are a lot of settings, but the most important thing to remember to do here is select the same VPC and subnet as your database instance. If you are using Aurora Serverless, the subnet may not matter, but at least the VPC does! If you are not using Aurora Serverless, it is generally better to try to keep your instances and databases using the same availability zones to prevent data transfer overhead charges as much as possible. This is a frequent cause of AWS bill surprises, so be aware! Granted, if you are scaling to multiple instances on either side here, there’s only so much you can do to avoid it.

In this example, my database is in this VPC and using this particular availability zone

Other than that, you can pretty much leave the rest of the settings alone unless you have some special requirements. For the Add Storage page, I generally don’t change anything for this test instance. Add Tags is totally up to you. When you get to the Configure Security Group page, you want to make sure that you whitelist SSH from your own IP, plus any ports that need to be publicly accessible. If this is a web service, usually that’s 80 for HTTP and 443 for HTTPS.

You may get some warnings if you didn’t restrict the IPs able to access some ports. If it’s just 80 or 443, it’s probably okay, but if you really don’t need anyone but yourself to be able to access it while testing, it’s always a good practice to make your whitelists as restrictive as you can.

When you finally go to launch the instance, you’ll be prompted for what key pair to use. If you are new to AWS, this is a public/private key pair file that is used to log in to your instance. You can always add more keys once you’re logged in, if you need to, but AWS does not allow password login via SSH, so if you lose this keyfile without any other keys available, you will lose the ability to log in to this instance. There are some recovery methods, but they’re beyond the scope of this tutorial. Don’t lose your key! Back up your keys!
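If you do want a backup key, adding one once you’re logged in is just a matter of appending another public key to authorized_keys. A quick sketch, where the key path and hostname are placeholders:

# Generate a second key pair locally
ssh-keygen -t ed25519 -f ~/.ssh/backup-key

# Append the new public key to the instance's authorized_keys
cat ~/.ssh/backup-key.pub | ssh -i path/to/your/key.pem ec2-user@your-instance-hostname "cat >> ~/.ssh/authorized_keys"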

Anyhow, once you’ve either created or selected a key, you can click Launch Instances and your instance will be launched after some time. Once it’s launched, you need to look up the public IP or DNS for the instance, which can be viewed in the instance details from the EC2 Dashboard.

Preparing your Container

Once you’ve got the IP or hostname for your temporary EC2 instance, it’s time to set up your container and push it to ECR. Since this temporary instance is brand new, you’ll need to set up Docker before we can continue.

First, log in to the instance.

ssh -i path/to/your/key.pem ec2-user@ec2-1-2-3-4.us-east-2.compute.amazonaws.com

The above assumes us-east-2, but use whatever your instance’s hostname actually is. Just make sure to use ec2-user for the account name.

Next, we need to set up Docker, which you can do with the following series of commands:

sudo yum update -y
sudo amazon-linux-extras install docker
sudo service docker start
sudo usermod -a -G docker ec2-user
exit

This will update the packages for the instance, install Docker, start the service, and add ec2-user to the group for using Docker. You’ll notice the exit at the end logs you back out. This is intentional: the group modification only takes effect on a new login session. Log back in using the ssh command from before. At this point, you need to get your Dockerfile onto this instance and configure it to run in whatever way is needed. You can simply copy and paste the contents of your Dockerfile into a new file using vi or nano, for example, and then build and run it. If there happen to be some other files you need to copy to the instance to bake into the image, like a zip file with a skin for WordPress, you can use scp from a different terminal that isn’t logged in:

scp -i path/to/your/key.pem path/to/your/file.zip ec2-user@ec2-1-2-3-4.us-east-2.compute.amazonaws.com:~/file.zip
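Alternatively, you can create the Dockerfile directly on the instance with a heredoc. This is only a rough sketch; the base image and file paths are placeholders for whatever your application actually needs:

cat > Dockerfile <<'EOF'
# Placeholder base image; substitute the Bitnami image for your application
FROM bitnami/wordpress:latest

# Example of baking an extra file (like that skin zip) into the image
COPY file.zip /tmp/file.zip
EOF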

Returning to preparing your container, if you’re using a container that binds HTTP to 8080 and HTTPS to 8443 (many do), then you’ll need to run something like this:

docker build -t container-instance-name .
docker run -di -p 80:8080 -p 443:8443 --name container-instance-name container-instance-name

This will run the container in detached mode, so you’ll have to use the docker logs command to retrieve the logs from the container if you need to check why something went wrong. It also binds ports 80 and 443 of the instance to ports 8080 and 8443 of the container, respectively. Assuming nothing went wrong, you should be able to access your container from the browser using the same hostname you used for SSH. If your container needs database access and you’re getting errors about that, make sure the RDS instance’s security group allows traffic from the EC2 instance (or its security group) on the correct port. If it’s MySQL-compatible, that’s probably port 3306.
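For reference, following the logs of the container started above looks like this:

docker logs -f container-instance-name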

If you need to troubleshoot connecting to MySQL from this temporary instance, you can install the MySQL client with the following commands:

sudo yum install -y https://dev.mysql.com/get/mysql57-community-release-el7-11.noarch.rpm
sudo yum install -y mysql-community-client
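Then a quick connectivity smoke test against your RDS endpoint (the hostname here is a placeholder) will tell you whether the security groups are the problem:

mysql -h yourdb.xxxxxxxxxxxx.us-east-2.rds.amazonaws.com -u admin -p -e "SELECT 1;"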

Anyhow, assuming everything is working with your container and you’re happy with it, now we need to push it to ECR.

Pushing your Container Image to Elastic Container Registry

First, we’ll need to make a new repository. Luckily, this is really, really easy. Go to the main ECR page and click Create Repository, then give your repository a name. Most likely you want it to be private (the default), so unless you have a reason to, you don’t have to change any of the other settings and can just click Create Repository again.

After you’ve done this, you can click the radio button to the left of your newly created repository and then click View push commands. The first one is the most important: it’s the command that logs Docker in so you can push your image to the new repository. What it doesn’t clearly tell you is that before you can use this command, you need to create an IAM user that is configured for programmatic access. If you’re just making this one user for yourself to handle ECR things, it’s okay to attach either the AmazonEC2ContainerRegistryFullAccess or AmazonEC2ContainerRegistryPowerUser managed policy directly to this new user.
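If you already have an admin-level CLI session somewhere, the console steps above have a rough CLI equivalent; the user name is a placeholder:

# Create the user and attach the ECR push policy
aws iam create-user --user-name ecr-push-user
aws iam attach-user-policy --user-name ecr-push-user \
    --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryPowerUser

# Generate the access key pair for programmatic access
aws iam create-access-key --user-name ecr-push-user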

At the end of the user creation, it will give you an Access Key ID and Secret Access Key to use. You can only see that secret key once, so don’t lose it. Once you have these, you need to run the following command and provide everything except the default output format, which is optional:

$ aws configure
AWS Access Key ID [None]: AKIAI44QH8DHBEXAMPLE
AWS Secret Access Key [None]: je7MtGbClwBF/2Zp9Utk/h3yCo8nvbEXAMPLEKEY
Default region name [None]: us-east-2
Default output format [None]: 

Once you’ve input your access key ID and secret key in this config, you can actually use the push commands that were previously provided. First, let’s log in:

aws ecr get-login-password --region us-east-2 | docker login --username AWS --password-stdin 111111111111.dkr.ecr.us-east-2.amazonaws.com

After you’ve successfully logged in, you need to commit the running container to an image using the repository name and a given tag. I usually keep a constantly-updated latest tag, so in this example, if my repository name was example-repository, I’d run it like this:

docker commit container-instance-name example-repository:latest

Then you need to tag it with the full repository path, too:

docker tag example-repository:latest 111111111111.dkr.ecr.us-east-2.amazonaws.com/example-repository:latest

After which point, you can actually push it to your repository:

docker push 111111111111.dkr.ecr.us-east-2.amazonaws.com/example-repository:latest
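You can confirm the image actually landed in the repository with:

aws ecr describe-images --repository-name example-repository --region us-east-2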

Once this is done, you can finally move on to the last piece: the deployment.

Setting up Deployment on Fargate

If you’ve never used ECS at all before, first you need to create a new cluster. Don’t worry too much about what that means; you can use one cluster for several applications if you want, so you may end up only having to do this once. We are using Fargate, so choose Networking only:

Next you need to give the cluster a name. You also most likely want to create a VPC for it, so you can check that box. I also personally think CloudWatch Container Insights is really valuable to have, but it’s up to you.

Once you’ve created the cluster, we need to create a task definition to actually run our container.

Creating the Task Definition

On the left, click Task Definitions and then click Create, then select Fargate.

If you want to enable S3 access via role permissions, you can attach a role here, but you don’t need to.

For the task size, you can select whatever you need, but since we’re going to be scaling up with more containers as load increases, I tend to start with 1 GB of memory and 0.25 vCPU.

Next, you’ll need to add the container definitions for this task. You can have a multi-container task if desired, which is why these are defined in a subsection. This is where we’ll tell the task to grab the image from ECR you pushed earlier and run it. Give the container a name (this doesn’t need to match anything from earlier) and then paste the full repository and image name.

The nice thing about using the latest tag is that you can push an update to ECR and simply relaunch the tasks to redeploy if you want. If you need less volatile deployments, you can use explicit version tags and task revisions properly. Also make sure to set the port mappings correctly or you won’t be able to reach the services running in the container.
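Once the service exists (we’ll create it below), that relaunch is a CLI one-liner; the cluster and service names here are placeholders:

# Start fresh tasks, which will pull the newest image for the latest tag
aws ecs update-service --cluster example-cluster --service example-service \
    --force-new-deployment --region us-east-2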

Once you’ve finished creating the container definitions, you can finish up and complete the task definition. One minor note: by default, the container definitions also create a log group in CloudWatch. If you have to troubleshoot something in this guide not working, don’t delete that log group unless you want a headache; it’ll break several other things.

From here, we have to go back to the cluster page and create a new service by clicking Deploy.

Select Service, then under Family choose your newly created task definition and its latest revision. Assign a name and the desired number of tasks to run. For most small deployments, unless you have a really good reason to be running multiple tasks from the start, you probably just want to leave this at 1. You can also usually leave the Deployment options section at the defaults.

From here, we want to set up a new Application Load Balancer to prepare for auto scaling the number of containers when traffic gets higher. Create a new one, set the name for the load balancer and the target group, then the port you want to listen on. I usually start with 80 here and then set up 443 later when setting up SSL certs. You’ll also need to set a relative path to a page that can be used as a health check, and the grace period before the health checks actually start marking containers unhealthy. Some containers don’t need a grace period, others do. It depends on how yours works!

The next part is a bit tricky. If you’re using RDS, you probably want to pick the same VPC as your RDS database, otherwise you’re not going to be able to reach it from the containers. Likewise, you’ll need your security groups to be configured to allow this, and you will need to select 2 or more subnets. I emphasize that because if you don’t do it, the deployment will fail, and you’ll have to re-enter all of the information for the service that you’ve done up until this point. 😊

When you’re done, click deploy. When it’s finished, you’ll need to locate the public URL for the load balancer you just created, since it’s needed to access the containers and will be needed to configure DNS for your domain. One big, gigantic caveat here is that if you are deploying a site for your root-level domain (and, honestly, I’d be willing to bet money that if you are reading this you are), your DNS provider needs to support CNAME flattening or an equivalent like ALIAS records, because normally the root domain is expected to be set using an A record; you can’t use a plain CNAME record for it. If you happen to be using Cloudflare for their CDN and WAF, you’re in luck: they do. However, if you’re using another provider, you’ll need to explicitly check.

With that said, you can get the URL by going to EC2 > Load Balancers and clicking on the one you just created. Once you get the URL and set it in your DNS, your site should be accessible via your domain. At this point, if you also want to use HTTPS, there are other things you need to configure, like the load balancer targets and the certificate. If you’re using Cloudflare and you want to force traffic to be over HTTPS, you may need to create a Page Rule and use the “Automatic HTTPS Rewrites: On” or “Always Use HTTPS” settings. There could also be some configuration changes you need to make to your container, depending on what application it is. I plan to write some other articles documenting settings I’ve used for different container types in the future, but for now it’s out of the scope of this tutorial.
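A quick sanity check once your DNS is set: with CNAME flattening working, your root domain should resolve to the same addresses as the load balancer’s DNS name. Both names below are placeholders:

dig +short example.com
dig +short your-alb-1234567890.us-east-2.elb.amazonaws.com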

Anyhow, if everything is working, you technically could stop here, since your site should be accessible via your domain! However, we went through all this trouble to set up running a Docker container on Fargate… we want scalability! So, the next section will cover how to reconfigure your service to scale the number of tasks automatically with usage.

Auto Scaling

I could not find a way to do this next part using the new UI, so I had to revert to the classic UI. You will need to do the same (though if someone knows how to get it working in the new UI, please tell me). Go to your cluster and click your service, then click Update. Then click Next Step until you reach the Auto Scaling page:

Here you can set the min, max, and initial task count to use. This lets you limit how many resources you can consume, while also making sure that you start with a proper number of containers running for whatever load you have. At the bottom, you need to click Add scaling policy, which will allow you to scale by a few metrics. I usually scale by CPU and memory usage; a CPU-based policy, for example, tracks a target average CPU utilization across your tasks.
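If you’d rather script this than click through the classic UI, the same configuration can be done with the Application Auto Scaling API. A rough sketch, with placeholder names and values:

# Register the service's desired count as a scalable target (min 1, max 4 tasks)
aws application-autoscaling register-scalable-target \
    --service-namespace ecs \
    --resource-id service/example-cluster/example-service \
    --scalable-dimension ecs:service:DesiredCount \
    --min-capacity 1 --max-capacity 4

# Add a target-tracking policy that holds average CPU around 75%
aws application-autoscaling put-scaling-policy \
    --service-namespace ecs \
    --resource-id service/example-cluster/example-service \
    --scalable-dimension ecs:service:DesiredCount \
    --policy-name cpu-target-tracking \
    --policy-type TargetTrackingScaling \
    --target-tracking-scaling-policy-configuration '{"TargetValue":75.0,"PredefinedMetricSpecification":{"PredefinedMetricType":"ECSServiceAverageCPUUtilization"}}'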

You can set whatever values you’d like, but once you’re done with your auto scaling policies, you can click Next Step and Update Service to finish. Now your site will scale with your traffic! Behold, the beauty of serverless!

With all that said, I hope this tutorial helps someone else. I plan to make some shorter guides for configuring the Dockerfile for several common applications, such as MediaWiki and WordPress, and for setting up SSL to work with those. That’s it for now!
