Optimizing NGINX is a challenging task on its own, let alone coupling it with resource sizing in Kubernetes. The solution? Automated tuning! This blog post is based on a webinar co-hosted by Shashi Raina, a Partner Solutions Architect at AWS, and Tomer Morad, Co-Founder and CEO of Concertio. It is divided into three parts. In this first part, based on Shashi Raina’s presentation, we explore containers, VMs, and Kubernetes on AWS. In the next part, we’ll dive into automatic optimization and how to optimize an NGINX website hosted on Amazon EKS, or more specifically, on Fargate. In the final part, we’ll cover post-optimization steps to generate insights, as well as integrating Continuous Optimization into the Continuous Delivery process.

What are containers?

In application development, containerization is a major step towards modernization. It enables companies to achieve automation in deployment and operation at an efficiency level they have never seen before.

A container is an atomic, self-contained package of software that includes everything it needs to run (code, runtime, libraries, packages, etc.). Containers are the solution to the problem of how to get software to run reliably when moving from one computing environment to another – from a developer’s laptop to a test environment, from a staging environment into production, or perhaps from a physical machine in a datacenter to a virtual machine in the cloud.

Containers consist of an entire runtime environment – application dependencies, libraries, and binaries – all packaged into one. By containerizing the application, differences in platforms and dependencies across operating systems and underlying infrastructure can be abstracted away. A single container might be used to run anything from a small microservice or software process to a much larger application. Containers take a different approach than virtual machines to solving the same problem.
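As a minimal illustration of this packaging, an NGINX website can be containerized with a Dockerfile that bundles the server and its static content into one image. The base image tag and file paths below are assumptions for the sketch, not values from the webinar:

```dockerfile
# Minimal sketch: package NGINX and static site content into one image.
# The base image tag and the ./site/ path are illustrative assumptions.
FROM nginx:1.25-alpine

# Copy the site's static files into the default NGINX web root.
COPY ./site/ /usr/share/nginx/html/

# NGINX listens on port 80 inside the container.
EXPOSE 80

# Run NGINX in the foreground so the container stays in the foreground process.
CMD ["nginx", "-g", "daemon off;"]
```

Built once with `docker build`, the same image runs unchanged on a developer’s laptop, a test VM, or a production cluster – which is exactly the portability described above.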

Containers are very lightweight, making them easy to spin up with much less overhead. They offer a logical packaging mechanism in which applications can be abstracted away from the environment in which they run. This decoupling allows a container-based application to be deployed easily and consistently regardless of whether the target environment is a private data center, a public cloud, or even a personal laptop. In other words, containerization provides a clean separation of concerns as developers focus on the application logic and dependencies while IT operation teams focus on deployment and management without worrying about things like application details, software versions, OS versions, and so on.

While there are some similarities between containers and VMs, they are quite different, especially when it comes to overhead. Virtual machines run in a hypervisor environment, where each virtual machine must include its own guest operating system along with the related binaries of its applications. This consumes a large amount of system resources, especially when multiple VMs are running on the same physical server. Containers, in contrast, need no per-instance guest OS: containers on the same host share the host operating system’s kernel and resources.

Containers offer a far more lightweight unit for developers and IT Ops teams. Each container shares the host OS kernel, binaries, and libraries, which reduces resource overhead. In short, containers are lighter weight and more portable than VMs.

Above is a snapshot of the AWS container ecosystem. There is a managed image registry, Amazon ECR, to store images. Then there is a compute layer that contains EC2 instances and Fargate (serverless). On top of that, orchestration engines like ECS and EKS complete the system.
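To make the registry layer concrete, here is a sketch of pushing a locally built image to Amazon ECR. The account ID, region, and repository name are placeholders, and the commands assume Docker and configured AWS credentials:

```shell
# Placeholders, not real values: substitute your own account, region, and repo.
AWS_ACCOUNT=123456789012
REGION=us-east-1
REPO=my-nginx-site

# Authenticate the local Docker client to the ECR registry.
aws ecr get-login-password --region "$REGION" \
  | docker login --username AWS --password-stdin "$AWS_ACCOUNT.dkr.ecr.$REGION.amazonaws.com"

# Create the repository (a one-time step), then tag and push the image.
aws ecr create-repository --repository-name "$REPO" --region "$REGION"
docker tag "$REPO:latest" "$AWS_ACCOUNT.dkr.ecr.$REGION.amazonaws.com/$REPO:latest"
docker push "$AWS_ACCOUNT.dkr.ecr.$REGION.amazonaws.com/$REPO:latest"
```

Once the image is in ECR, both ECS and EKS can pull it onto EC2 or Fargate compute.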

Amazon EKS is a highly available, scalable, and secure Kubernetes service on AWS. It is AWS’s managed Kubernetes service that makes it easy to deploy, manage, and scale containerized applications using Kubernetes on AWS. It runs the Kubernetes management infrastructure across multiple Availability Zones (AZs) to eliminate a single point of failure.

Kubernetes has become a popular choice for a number of reasons, but one of the greatest motivators for customers is portability and modularity, which offer a compelling path to application modernization. In terms of the sequence of steps, the first step is to containerize your application into a standard Docker container (which may or may not involve decoupling). Once the applications are containerized, Kubernetes enables immutable, architecture-as-code deployments, and it has significant operational tooling for running these containers at production scale.
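The "architecture as code" idea becomes concrete in a Kubernetes manifest. The sketch below deploys the containerized NGINX site with explicit resource requests and limits – the resource sizing mentioned at the start of this post. Names, image tag, and resource figures are illustrative assumptions:

```yaml
# Illustrative Deployment manifest; names, image, and resource figures
# are assumptions for the sketch, not values from the webinar.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-site
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-site
  template:
    metadata:
      labels:
        app: nginx-site
    spec:
      containers:
        - name: nginx
          image: nginx:1.25-alpine
          ports:
            - containerPort: 80
          resources:
            requests:           # what the scheduler reserves per pod
              cpu: "250m"
              memory: "128Mi"
            limits:             # hard caps enforced at runtime
              cpu: "500m"
              memory: "256Mi"
```

On Fargate in particular, the pod’s resource requests determine the size of the isolated compute it is placed on, so right-sizing these figures directly affects both performance and cost.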

These are some use cases for Kubernetes:

  • Microservices – Either greenfield development or refactoring your existing code into microservices.
  • Platform as a Service – Primarily for large enterprises that are building some tooling on top of Kubernetes.
  • Enterprise App Migration – Applications such as Java apps sitting in a datacenter can be lifted and shifted into Kubernetes.
  • Machine Learning – One of the fastest-growing use cases for Kubernetes (e.g., TensorFlow using GPUs).

Amazon EKS is Kubernetes certified, which means that all the tooling and plugins from partners and the community are easily accessible and usable. Applications running on any standard Kubernetes environment are fully compatible and easily migrated to Amazon EKS. To fine-tune and optimize EKS installations, we need to monitor and understand many metrics. These include pod capacity, control plane metrics, and resource metrics such as CPU utilization and memory usage per node, just to name a few.
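Assuming a cluster with the Kubernetes Metrics Server add-on installed and a configured kubeconfig, the per-node and per-pod figures mentioned above can be inspected directly with kubectl (the namespace and node name below are placeholders):

```shell
# Requires the Metrics Server add-on; namespace and node name are placeholders.
kubectl top nodes                     # CPU and memory usage per node
kubectl top pods -n default           # CPU and memory usage per pod

# Compare requested resources against node capacity.
kubectl describe node <node-name> | grep -A 8 "Allocated resources"
```

Watching these numbers over time is what reveals whether pods are over- or under-provisioned – the manual, error-prone step that automatic optimization aims to replace.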

This is exactly where Concertio comes in – with a solution using its automatic optimization engine. In the next part, we will dive into how to optimize an NGINX website that is hosted on Amazon EKS, or more specifically, on Fargate.
