The computing world is dominated by general-purpose systems that need to perform well across a wide variety of applications. Take servers as an example: there are tens of millions of servers in the world running many families of applications, and each is expected to deliver acceptable performance for every one of them. These servers are highly configurable and expose many tunables: you can find them in the BIOS settings, in the processor settings, in the firmware, in the middleware, in the applications, and even in the choice of compiler flags used to build the binaries.
The crux of the matter is this: if you tailor these settings to your application, you get significantly higher performance at reduced cost.
It used to be easy when there were only a handful of tunables – a performance engineer could spend a few hours trying a few combinations and eventually achieve good performance. Unfortunately, those days are gone. Today we are already dealing with hundreds of interdependent settings, which means the number of ways to configure a system is enormous. In fact, the number of possible configurations can reach 10^40, and if you also explore things like compiler flags, well over 10^300. With this many possibilities, it is simply not practical to tune systems with an exhaustive search.
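To get a feel for these numbers, here is a back-of-the-envelope calculation. The tunable counts below are illustrative assumptions (not a measured inventory of any real system), but they show how quickly even simple on/off choices blow past any exhaustive search:

```python
# Illustrative assumption: ~133 binary system tunables, and ~1000 binary
# compiler flags on top of them. Each independent on/off choice doubles
# the size of the search space.
system_configs = 2 ** 133    # ~1.1e40 -- on the order of 10^40
with_flags = 2 ** 1000       # ~1.1e301 -- well over 10^300

# Even testing a billion configurations per second, enumerating just the
# 10^40 system configurations would take on the order of 10^23 years.
seconds_per_year = 60 * 60 * 24 * 365
years = system_configs / (10 ** 9 * seconds_per_year)
print(f"{system_configs:.1e} configs, {years:.1e} years to enumerate")
```

This is why the rest of this post treats brute force as a non-starter: the search space has to be sampled intelligently, not enumerated.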
So what do many large companies do? They hire engineers who are experts across many domains, from hardware to software. One challenge is that these experts are very hard to come by, which puts this approach out of reach for smaller organizations. The tuning process is also expensive, and not only because of the time it takes: for every change in the hardware, like moving your workload to another instance type in the cloud, or in the software, such as a new commit or a package update, the optimal point might shift and you'll have to retune.
On top of that, there's also the issue of program phases. Applications go through different phases of execution, each of which may last from a few seconds to a few hours. Each phase might require different optimal settings, but if you optimize once for the whole application, you're essentially optimizing for the average phase rather than for the individual phases.
It is thus difficult to tune systems optimally and to keep them tuned over time. Despite the high rewards of performance tuning, these challenges, coupled with the fact that optimization is not a priority or focus for the vast majority of organizations, mean that performance tuning is usually not performed at all. This leads to underperforming production systems and bloated costs. Concertio, with its automated performance optimization products, takes away the hassle of performance tuning. We help organizations tackle the performance challenges they meet from the development stage all the way to deployment, using three optimization techniques: static, continuous, and dynamic optimization.
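As a minimal sketch of what automated static tuning looks like in principle (an illustrative random search over a made-up tunable space, not Concertio's actual algorithm), the core loop is simple: sample a configuration, benchmark it, and keep the best one seen so far:

```python
import random

# Hypothetical tunable space; real systems expose hundreds of such knobs.
TUNABLES = {
    "cpu_governor": ["performance", "powersave", "ondemand"],
    "transparent_hugepages": ["always", "madvise", "never"],
    "hw_prefetcher": ["on", "off"],
}

def run_benchmark(config):
    """Stand-in for running the real workload and measuring throughput.

    In practice this would apply `config` to the system, run the
    application, and return a performance metric. Here it is a
    deterministic toy score so the sketch is self-contained.
    """
    return sum((i + 1) * len(value) for i, value in enumerate(config.values()))

def random_search(trials=50, seed=0):
    rng = random.Random(seed)
    best_config, best_score = None, float("-inf")
    for _ in range(trials):
        config = {knob: rng.choice(values) for knob, values in TUNABLES.items()}
        score = run_benchmark(config)
        if score > best_score:
            best_config, best_score = config, score
    return best_config, best_score

best_config, best_score = random_search()
print(best_config, best_score)
```

Real tuners replace the random sampling with smarter search strategies (such as Bayesian optimization or genetic algorithms) and re-run the loop whenever a hardware or software change shifts the optimum, which is exactly the retuning burden described above.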