Every DevOps or performance engineer today is aware of the concept of performance tuning. But performance tuning can mean many different things. In the domain of configuration tuning, there are three main variants: static, continuous and dynamic optimization.

Are they all used for the same purpose? No.

Can they be used together? Yes.

While these variants look similar, there are differences in when each can be applied and what exactly they can optimize.

Static optimization is the process of discovering a configuration that optimizes a target metric for a specific benchmark (see this blog for a more elaborate discussion of automatic static optimization). Consider, for example, a containerized environment with microservices comprising a web server, an application server and a database, deployed on a Kubernetes cluster in the public cloud. The resources and replicas of each service container should be chosen to support the expected application load, and there is a fine balance between price and performance in choosing them. Static optimization can be used to discover and recommend the optimal resources to use; the recommendations can then be applied in production. When the microservice code, the infrastructure or the load characteristics change, static optimization should be run again to update the recommendations. Concertio’s Optimizer Studio implements static tuning.
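As a minimal sketch of the idea, the loop below searches a grid of container resource and replica settings and keeps the configuration with the best performance per dollar. The `run_benchmark` function is a hypothetical, deliberately toy stand-in: a real static-optimization run would deploy each candidate configuration, drive the expected load against it, and measure actual throughput and cost.

```python
import itertools

def run_benchmark(cpu_millicores, memory_mb, replicas):
    """Toy stand-in for a real benchmark run (illustrative numbers only).
    In practice this would deploy the configuration and measure it under load."""
    # Throughput grows with resources but saturates; cost grows linearly.
    rps = min(cpu_millicores * replicas * 0.8, 5000.0) * min(memory_mb / 512.0, 2.0)
    hourly_cost = (cpu_millicores * 0.00005 + memory_mb * 0.00001) * replicas
    return rps, hourly_cost

# Candidate settings for each knob (hypothetical search space).
CPU_MILLICORES = [250, 500, 1000, 2000]
MEMORY_MB = [256, 512, 1024, 2048]
REPLICAS = [1, 2, 4, 8]

def recommend():
    """Exhaustively evaluate the grid and return the best (config, score)."""
    best_config, best_score = None, float("-inf")
    for cpu, mem, rep in itertools.product(CPU_MILLICORES, MEMORY_MB, REPLICAS):
        rps, cost = run_benchmark(cpu, mem, rep)
        score = rps / cost  # optimize performance per dollar
        if score > best_score:
            best_config, best_score = (cpu, mem, rep), score
    return best_config, best_score

print(recommend())
```

Exhaustive grid search is shown only because it is the simplest to read; with thousands of knobs a real optimizer replaces it with a smarter search strategy.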

Continuous optimization is the application of static optimization within an automated process such as Continuous Integration (CI) or Continuous Deployment (CD). For instance, in continuous optimization of compiler flags (see this blog for more information), static tuning is implemented in the CI pipeline. When new code is committed, a typical CI flow triggers a build step and a testing step. In continuous optimization, after the functional testing step completes, the CI flow triggers an optimization pipeline to discover the best-performing compiler flags for the build. In each iteration, a different set of flags is used to build an alternative binary and measure its performance. When well-performing flags are found, they are committed back into the repository for later use. More information about continuous optimization of compiler flags using Optimizer Studio is available in this webinar.
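The iteration described above can be sketched as a small CI step: build the benchmark with each candidate flag set, time it, and keep the fastest. This is a hedged illustration, not Optimizer Studio's actual implementation; the `bench.c` source file and the candidate flag sets are assumptions for the example.

```python
import subprocess
import time

# Candidate GCC flag sets the pipeline will try (illustrative choices).
FLAG_SETS = [
    ["-O2"],
    ["-O3"],
    ["-O3", "-funroll-loops"],
    ["-O3", "-march=native", "-flto"],
]

def build_and_time(flags, source="bench.c", runs=3):
    """Compile the benchmark with the given flags and return the best-of-N runtime."""
    subprocess.run(["gcc", *flags, "-o", "bench", source], check=True)
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(["./bench"], check=True)
        timings.append(time.perf_counter() - start)
    return min(timings)  # best-of-N reduces run-to-run noise

def pick_best(flag_sets, measure=build_and_time):
    """Return the flag set with the lowest measured runtime."""
    return min(flag_sets, key=measure)
```

A real pipeline would then commit the winning flags back to the repository, as the paragraph above describes, so subsequent builds pick them up automatically.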

Dynamic optimization is a technique in which configuration settings are optimized in real time on production systems. Its main advantage is that computing systems tune themselves according to the workloads actually running. Dynamic optimization can outperform static optimization because, in many cases, applications pass through numerous execution phases, each with its own optimal settings, whereas static optimization attempts to fit a single configuration to all program phases.

Concertio’s Optimizer Runtime runs on production systems and implements dynamic optimization of settings in real time. To detect execution phases, Optimizer Runtime continuously measures system metrics and builds classification models on the fly. It can also use pre-trained models in environments where the workloads are known. However, the knobs that dynamic optimization can tune are limited to those that will not disrupt the normal operation of the production systems. For example, BIOS settings will typically not be optimized dynamically, since applying them requires a reboot.
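To make the phase-detection idea concrete, here is a minimal sketch of one iteration of such a control loop. Everything in it is an assumption for illustration: the two-phase classifier is a trivial stand-in for a real on-the-fly model, and the setting names are hypothetical knob values, not Optimizer Runtime internals.

```python
from collections import namedtuple

# One metrics sample from the running system (fractions in [0, 1]).
Sample = namedtuple("Sample", ["cpu_util", "io_wait"])

# Hypothetical knob value per detected phase (illustrative only).
SETTINGS = {"io_heavy": "mq-deadline", "cpu_heavy": "none"}

def classify_phase(sample):
    """Tiny stand-in for an on-the-fly classification model."""
    return "io_heavy" if sample.io_wait > sample.cpu_util else "cpu_heavy"

def tuning_step(sample, current_setting, apply_setting):
    """One loop iteration: classify the current phase and retune only on change."""
    desired = SETTINGS[classify_phase(sample)]
    if desired != current_setting:
        apply_setting(desired)  # e.g. write the value into a /sys tunable
    return desired
```

Note that the step only applies a setting when the detected phase changes; a knob that is cheap to read but expensive to flip should not be rewritten on every sample.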

How to benefit from static, continuous and dynamic optimization

Enterprises can use one or more optimization techniques in tandem. The easiest way to start benefiting from automated configuration tuning is with dynamic tuning. Optimizer Runtime can be deployed on production systems, both bare metal and VMs, to optimize OS and CPU settings for significant speedups, with no human effort beyond a straightforward Linux package installation.

If a Continuous Integration pipeline is already in place, then users will benefit from continuous optimization of compiler flags. Optimizer Studio can be used to discover the best GCC or LLVM settings that maximize performance.

If a Continuous Delivery pipeline is in use, then users will benefit from continuous optimization of resources (e.g. Kubernetes container sizes), runtimes (e.g. JVM parameters), application server configurations (e.g. NGINX configuration files, Apache httpd, etc.) and database configurations (e.g. MongoDB, PostgreSQL, MySQL). Optimizer Studio, with its thousands of embedded knobs, can improve performance and performance-per-dollar metrics by up to 5x.

For generating insights, or if continuous optimization cannot be used, users can benefit from static optimization. Optimizer Studio provides users with a full suite of capabilities that cater to DevOps engineers, IT professionals, and hardcore performance engineers. These capabilities include an extensive library of tunables, advanced optimization algorithms, native variability handling, a web-based experiment management system, sensitivity analysis and more. More information about automating static performance tuning is available in this blog.
