Friday, June 22, 2018

5 Best Practices for the Perfect AWS Deployment

While deploying an application, there are some standard practices and procedures that you need to consider. Having a predefined set of procedures helps you deploy quickly and improves the overall development workflow. Practices that work well on one platform might not translate to another, but good techniques will help you deploy rapidly and continuously.

This post talks about the 5 best practices that you should know about while deploying your application using AWS. “Why AWS?”, you might ask. AWS is one of the most popular cloud platforms at the moment. It offers tons of services and it makes automating deployments super easy. Some of these best practices are not hard rules, but techniques that you can use in your project to get the best possible results. Here are the top 5 best practices.

Use a Deployment and Operational Checklist

In my opinion, creating a checklist upfront can save you time which otherwise would be spent on fixing unpredictable bugs and other issues. Deployment is not just about pushing your code into the cloud and then hoping everything works as expected. There are lots of things that you need to consider such as verifying whether the security features are enabled, checking whether the performance optimizations and scalability factors are in place etc.

AWS provides a few checklists, such as the Basic Operations checklist, the Enterprise Operations checklist, and the Auditing Security checklist. But from a practical perspective, if I were going to move a project from a LAMP server to EC2, here are a few things I would consider while creating a checklist.

  1. Identify the region where your users reside. If most of your users are in a particular area, you can reduce latency by placing your servers in the AWS region closest to them.
  2. Make sure you have a good understanding of the scalable services Amazon offers, so that you can split parts of your application among them for the best performance.
  3. Use a standard AMI that supports your stack rather than a fresh install to speed up deployment tasks.
  4. Attach CloudWatch to essential services to keep track of important metrics.
  5. Keep an active health check and configure your ELB so that if a running instance fails, a new instance is automatically launched.
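To make item 5 concrete, the decision an ELB health check makes can be sketched as a tiny function. This is purely an illustration of the logic, not an AWS API; in ELB itself, the number of consecutive failures is the "unhealthy threshold" setting, and the names below are my own:

```python
# Illustrative sketch (not an AWS API): an instance is considered
# unhealthy, and a replacement launched, once it fails a configurable
# number of consecutive health checks.

def is_unhealthy(check_results, unhealthy_threshold=2):
    """Return True if the last `unhealthy_threshold` checks all failed.

    check_results: list of booleans, True = check passed, oldest first.
    """
    if len(check_results) < unhealthy_threshold:
        return False
    return not any(check_results[-unhealthy_threshold:])

# An instance that failed its last two checks should be replaced.
print(is_unhealthy([True, True, False, False]))  # True
print(is_unhealthy([True, False, True, False]))  # False
```

With auto scaling behind the ELB, this replace-on-failure behavior is what keeps capacity steady without manual intervention.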

Use RDS to Avoid Database Bottlenecks

In production, how well the database scales and how fast it recovers from failure deserve due consideration. RDS is Amazon's managed relational database service: everything from monitoring your DB to setting up security parameters can easily be accomplished with it. The service also offers features such as automated DB backups, Amazon CloudWatch integration, on-demand scaling capabilities, and database recovery on failure, all of which are hard to configure on a fresh EC2 instance.

With RDS, you are not limited to MySQL. You can also use alternatives like MariaDB, PostgreSQL, Oracle, and Amazon's own Aurora. Here are a few best practices to get the best out of RDS.

  1. Allocate enough RAM so that your working set resides in memory. You can use IOPS metrics in CloudWatch to decide if everything is working as expected.
  2. Keep your RDS secure by creating IAM security policies and providing individual IAM accounts to users who need to access RDS. Never use the AWS root account to manage RDS.
  3. Identify OS related problems using Enhanced Monitoring. It’s available for all popular database engines.
  4. Understand the key metrics and use them to identify performance issues and trigger alerts.
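As a rough illustration of item 1: if the working set fits in RAM, most reads are served from the buffer pool, so the ReadIOPS metric in CloudWatch stays low; a sustained spike suggests reads are going to disk and you need more memory. The function and baseline below are my own illustrative sketch, not an AWS API, and the threshold is an assumption you would tune per workload:

```python
# Illustrative sketch: decide whether ReadIOPS datapoints (e.g. pulled
# from CloudWatch) suggest the working set has outgrown memory.
# baseline_iops is an assumed, workload-specific threshold.

def working_set_fits_in_memory(read_iops_datapoints, baseline_iops=100):
    """Heuristic: if average ReadIOPS stays near the baseline, most
    reads are coming from memory rather than disk."""
    avg = sum(read_iops_datapoints) / len(read_iops_datapoints)
    return avg <= baseline_iops

print(working_set_fits_in_memory([40, 55, 60]))       # True
print(working_set_fits_in_memory([900, 1200, 1100]))  # False
```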

Automate Deployment in AWS

Automation is an integral part of any modern workflow. Continuous Integration and Continuous Deployment, for instance, require every part of the process to be automated. Although there are no predefined rules on how you should do this, you will need a CI tool, a CD tool, and an application repo to push the code through. For example, you can use Jenkins for CI, AWS CodeDeploy for CD, and CodeCommit for the repository.

  1. Jenkins runs the tests and, if they pass, triggers the build.
  2. Jenkins pushes the code into an S3 bucket in a zip format and calls AWS CodeDeploy to do its job.
  3. CodeDeploy pulls the zip and deploys it to the auto-scaled servers.
  4. Tests are executed again, this time in production; if they fail, the previous version of the code is redeployed.
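For reference, CodeDeploy learns where to copy files and which lifecycle scripts to run from an appspec.yml file bundled inside the zip. A minimal sketch for an EC2 deployment might look like this; the destination path and script names are hypothetical placeholders for your own project:

```yaml
version: 0.0
os: linux
files:
  - source: /
    destination: /var/www/myapp        # hypothetical install path
hooks:
  ApplicationStop:
    - location: scripts/stop_server.sh
      timeout: 60
  AfterInstall:
    - location: scripts/install_dependencies.sh
      timeout: 300
  ApplicationStart:
    - location: scripts/start_server.sh
      timeout: 60
  ValidateService:
    - location: scripts/run_smoke_tests.sh   # step 4: tests in production
      timeout: 300
```

The ValidateService hook is a natural place for the post-deployment tests mentioned in step 4.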

AWS CodeDeploy automates software deployments to compute services such as EC2 and Lambda. The deployment can be configured to push your app to thousands of EC2 instances if required. And as we just saw, CodeDeploy also makes it easy to integrate into an existing delivery pipeline.

Blue-Green Deployment Strategy to Reduce Downtime

Blue-Green is a popular deployment strategy that tries to reduce the downtime by running two identical production environments in parallel. They can be two different EC2 instances or two containers. We’ll call them Blue and Green.

At any given time, one environment is live and the other runs idle. Let's say Green is serving all the traffic and Blue is idle. When you're about to deploy your next release, Blue is where you stage it. It goes through the regular rounds of testing, and once you're happy with its stability, you can move it to production. One common approach is to place Green in read-only mode, make the switch, and then restore read-write mode on the new live environment.
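The switch itself can be modeled as a tiny router that always sends traffic to whichever environment is live. In AWS this role is played by an ELB or a Route 53 record, not by application code; the class and environment labels below are purely illustrative:

```python
# Illustrative model of a blue-green switch: two identical environments,
# exactly one of which is live at any time.

class BlueGreenRouter:
    def __init__(self):
        # Green serves the current release; Blue holds the staged one.
        self.environments = {"green": "v1", "blue": "v2-staged"}
        self.live = "green"

    def serve(self):
        """Route a request to the live environment's release."""
        return self.environments[self.live]

    def switch(self):
        """Flip traffic to the idle environment; the old live
        environment becomes the instant rollback target."""
        self.live = "blue" if self.live == "green" else "green"

router = BlueGreenRouter()
print(router.serve())   # v1 (green is live)
router.switch()
print(router.serve())   # v2-staged (blue is now live)
```

Because the old environment stays intact after the switch, rolling back is just calling the switch again.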

The Blue-Green strategy helps mitigate most of the issues that crop up while deploying new releases. You can also use Blue-Green as a backup plan: if one environment fails, you can smoothly shift traffic to the other. Martin Fowler has explained the Blue-Green strategy in depth, and you can read more about it on his blog.

Use AWS CloudWatch Logs for Debugging

CloudWatch is amazingly useful for debugging and identifying issues with your EC2 instances, RDS instances, and other AWS services. It provides near real-time logs and historical data that you can use to monitor the status of your application. I'd recommend making monitoring a high priority before things get out of hand. Create a monitoring plan on AWS that covers all the cloud services you are using, so that when something goes wrong you have the data to find out what happened. Automate these monitoring tasks where possible and free your team to focus on more important work.
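In practice you would attach a metric filter to a CloudWatch Logs group so that an alarm fires when error events cross a threshold. The idea can be sketched locally as a simple filter over log lines; the function name and the "ERROR" pattern are assumptions about your log format, not a CloudWatch API:

```python
# Illustrative sketch of what a CloudWatch Logs metric filter does:
# count log events matching a pattern, so an alarm can fire when the
# count crosses a threshold.

def count_matching_events(log_lines, pattern="ERROR"):
    """Count log lines containing the given pattern."""
    return sum(1 for line in log_lines if pattern in line)

logs = [
    "2018-06-22 10:00:01 INFO request served",
    "2018-06-22 10:00:02 ERROR db connection timed out",
    "2018-06-22 10:00:03 ERROR db connection timed out",
]
print(count_matching_events(logs))  # 2
```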


From a development team's perspective, deployment should be the easiest part of the process. The whole thing should be automated, and the point of automation is to keep the probability of human error close to zero. This post discussed some of the best practices that will help you automate the deployment process and stay on the safe side of things.

I hope that you’ve enjoyed this article. Share your thoughts in the comments.