Saving $8,220 a year and gaining high availability by migrating the main app to containers

One of the first infrastructure changes I made was migrating the main app from 3 EC2 servers to ECS Fargate. It was the outcome of a conversation with the CTO in which I proposed helping with some DevOps tasks.
Several reasons pushed me to commit to this. First, when I joined the company I got access to AWS and discovered that our servers were oversized and we were paying too much for them. The main server, which was in charge of running a bunch of cron jobs, sat under 5% CPU usage, and the other two servers were under 2%. RAM usage was similarly low.
Another reason to migrate was that I had spent almost two weeks setting up my local environment, so I worked after hours on a Docker setup for local development. After that, new devs were able to get their local environment running in a few hours. Later, with some additional improvements using the Make pattern, I ended up with a single command that sets up the entire environment in a few minutes. This structure became the de facto standard for new apps and for those planned for migration.
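To give an idea of what that single command hides, here is a minimal sketch of the bootstrap flow. The real project uses a Makefile, which isn't part of this post; this Python wrapper is only illustrative, and the compose service name `app` and the migration placeholder are assumptions, not the project's actual values.

```python
#!/usr/bin/env python3
"""Rough sketch of a one-command local bootstrap (the real project uses a Makefile).

Assumes a docker-compose.yml at the repo root and a service called "app";
both are illustrative names, not taken from the actual project.
"""
import subprocess
import sys

STEPS = [
    ["docker", "compose", "build"],       # build the same images used in every environment
    ["docker", "compose", "up", "-d"],    # start the full stack in the background
    ["docker", "compose", "exec", "app",  # placeholder for DB migrations inside the app container
     "sh", "-c", "echo 'run migrations here'"],
]

def main() -> int:
    for step in STEPS:
        print(f"==> {' '.join(step)}")
        if subprocess.run(step).returncode != 0:
            print("Step failed, aborting bootstrap.", file=sys.stderr)
            return 1
    print("Local environment is up.")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```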
Environment standardization is also cool because everyone now uses the exact same environment, and the “It works on my local” excuse became just a funny phrase with no real impact. All our environments, from local to production, are the same. They obviously differ in resources, but other than that they run the same versions of everything, which makes it simpler to understand what is really happening because server differences are no longer a thing. If something fails locally, it will fail in production, and vice versa, which makes debugging easier when required.
Even though we had 3 oversized EC2 servers behind a load balancer, the main concern was the lack of standardization and documentation on how those servers were configured. Recovering any of them after a failure, or in the worst case all of them, would have taken a long while just to deploy and configure new servers. After the migration, the containers are distributed across different Availability Zones (AZs), making the application highly available.
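For reference, spreading a Fargate service across AZs mostly comes down to giving the service subnets in different zones and a desired count above one. This is a rough boto3 sketch, not our actual configuration; every name, ARN, and port below is a placeholder.

```python
"""Sketch: creating a Fargate service spread across multiple AZs with boto3."""
import boto3

ecs = boto3.client("ecs")

ecs.create_service(
    cluster="main-app-cluster",
    serviceName="main-app",
    taskDefinition="main-app:1",   # immutable revision built by CI/CD
    desiredCount=3,                # one task per AZ keeps the app highly available
    launchType="FARGATE",
    networkConfiguration={
        "awsvpcConfiguration": {
            # private subnets located in three different Availability Zones
            "subnets": ["subnet-aaa", "subnet-bbb", "subnet-ccc"],
            "securityGroups": ["sg-0123456789abcdef0"],
            "assignPublicIp": "DISABLED",
        }
    },
    loadBalancers=[{
        "targetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/main-app/abc",
        "containerName": "main-app",
        "containerPort": 8080,
    }],
)
```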
A great feature that came with containers is that we no longer need to worry about a server crashing. If, for some reason, a container fails, ECS automatically launches a new one to meet the minimum desired capacity, and the autoscaling rules let the application scale out only when it is really needed.
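The self-healing comes from the service's minimum desired capacity, and the scaling from Application Auto Scaling target tracking. A hedged boto3 sketch of that setup, with placeholder names, capacities, and thresholds, looks like this:

```python
"""Sketch: CPU-based target tracking autoscaling for an ECS service."""
import boto3

autoscaling = boto3.client("application-autoscaling")

# Tell Application Auto Scaling which service it may resize, and within which bounds.
autoscaling.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId="service/main-app-cluster/main-app",
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=3,   # ECS always replaces failed tasks up to this minimum
    MaxCapacity=9,
)

# Scale out when average CPU stays above the target, scale back in when it drops.
autoscaling.put_scaling_policy(
    PolicyName="main-app-cpu-target",
    ServiceNamespace="ecs",
    ResourceId="service/main-app-cluster/main-app",
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 60.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
        },
        "ScaleOutCooldown": 60,
        "ScaleInCooldown": 300,
    },
)
```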
The migration also let us streamline our deployment process. We went from some sort of Gitflow to trunk-based deployments. The old implementation used Ansible, and nobody really knew what it was doing. Now even the environment variables are encrypted and stored safely inside AWS. The same artifact we build locally is the one built by CI/CD and deployed to our different environments (DEV, QA, PRE-PROD, AB-TESTING, PROD).
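On the encrypted environment variables: with ECS, the usual approach is to reference encrypted values from the task definition so they are injected into the container at runtime. The sketch below assumes SSM Parameter Store SecureStrings (the post doesn't say whether it is Parameter Store or Secrets Manager), and every ARN, image tag, and name is a placeholder.

```python
"""Sketch: a Fargate task definition whose secrets live encrypted in AWS."""
import boto3

ecs = boto3.client("ecs")

ecs.register_task_definition(
    family="main-app",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="512",
    memory="1024",
    executionRoleArn="arn:aws:iam::123456789012:role/main-app-execution",
    containerDefinitions=[{
        "name": "main-app",
        # the exact image produced by CI/CD, identical in every environment
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/main-app:2024-05-01-abc123",
        "portMappings": [{"containerPort": 8080, "protocol": "tcp"}],
        # plain, non-sensitive configuration
        "environment": [{"name": "APP_ENV", "value": "prod"}],
        # sensitive values stay encrypted in AWS and are injected at runtime
        "secrets": [{
            "name": "DB_PASSWORD",
            "valueFrom": "arn:aws:ssm:us-east-1:123456789012:parameter/main-app/prod/DB_PASSWORD",
        }],
    }],
)
```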
Another nice advantage is that we can roll back with a single click. The moment we detect that something went wrong with a deployment, since every artifact we build is kept, deploying an old one takes minutes. We could even roll back to what we deployed back in 2023; can you do that?
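Under the hood, that "one click" is roughly equivalent to pointing the service at a previously registered task definition revision. A minimal boto3 sketch, with placeholder cluster, service, and revision values:

```python
"""Sketch: rolling back an ECS service to an older task definition revision."""
import boto3

ecs = boto3.client("ecs")

ecs.update_service(
    cluster="main-app-cluster",
    service="main-app",
    taskDefinition="main-app:118",  # any previously registered revision still works
    forceNewDeployment=True,        # replace running tasks with the older artifact
)
```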
Finally, the reason you came here: how much did the bill go down? The old servers were 3 x t2.2xlarge, each costing around $0.4416 per hour.
| Instance type | Cost per hour | Hours per month | Total instances | Total in USD |
|---|---|---|---|---|
| t2.2xlarge | $0.4416 | 744 | 3 | $985.65 |
With our current container configuration, we pay under $300 per month across all the environments we have. That represents around $685 in monthly savings, for a grand total of $8,220 per year.
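For anyone who wants to check the arithmetic, the numbers reduce to a few lines (744 hours is a 31-day month, and $300 is the rounded upper bound of our Fargate bill):

```python
# Quick sanity check of the numbers above.
hourly_cost = 0.4416        # USD per t2.2xlarge instance per hour
hours_per_month = 744       # 31 days * 24 hours
instances = 3

ec2_monthly = hourly_cost * hours_per_month * instances   # 985.6512 -> ~$985.65
fargate_monthly = 300                                      # "under $300" across all environments

monthly_savings = ec2_monthly - fargate_monthly            # ~$685 per month
yearly_savings = 685 * 12                                  # $8,220 per year, as in the title

print(f"EC2: ${ec2_monthly:.2f}/month, savings: ~${monthly_savings:.0f}/month, ${yearly_savings}/year")
```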
Although this post’s title focuses on cost savings, the truth is that the reliability, scalability, availability, and other important aspects of developing our main product improved a lot just by adopting containers.
Some devs are terrified of creating or using containers, a few others are just lazy and do not want to leave their flaky local environment behind, and another group is unwilling to learn about this technology because it is not their responsibility. I’m glad there is also a group of high-quality devs committed to continuous learning and always striving for excellence.
My personal rule is to avoid installing software directly on the OS unless there is no Docker image already available, or building one is too complex to be worth the time.
Get quality content updates by subscribing to the newsletter. Zero spam!