After a long and rigorous process, competing against 175 companies from 23 states and 11 countries, we're excited to have won one of the six awards of the 76West Clean Energy Competition, earning Concertio a $250k prize. The ceremony was held yesterday at...
Concertio has been selected as a semifinalist in the 76West competition! Come and see Tomer Morad present Concertio and the opportunity to cut data center emissions at the event @ Cornell University in Ithaca, NY. The event is open to the public....
Last week we visited DCD in NYC, and we were very excited to discuss the potential of dynamic tuning with industry leaders. Read the following interesting write-up by Mark Welsko from WES that mentions our "game changing technology" in the context of reducing...
From supercomputers to cell phones, every device and software system in our digital panoply has a growing number of settings that, if not optimized, constrain performance, wasting precious cycles and watts. In the fast-growing field of AI, optimized systems yield faster training times and require less infrastructure. But the tuning process can be tedious, and it requires specialized skills. Startup Concertio, creator of the performance optimization toolkit Optimizer Studio, is asking the question, “can we relieve data scientists from the need to understand their specific underlying infrastructure and from the need to optimize the performance of their models?”
A company’s technology stack functions much like an orchestra: many parts working together in harmony, but if one instrument is too slow or out of tune, the entire company runs less effectively. Concertio, previously known as DatArcs, serves as an AI-driven maestro that optimizes and orchestrates your software and hardware deployments. When deploying and maintaining your servers, there are myriad options to configure, and optimizing them across your technology stack ensures that the show runs more smoothly.
While the role of performance engineer will not disappear anytime soon, machine learning is making tuning systems—everything from CPU settings to application-specific parameters—less of a burden. Despite the highly custom nature of systems and applications, reinforcement learning is enabling new leaps in time-saving tuning: software learns what works best for user applications and architectures, freeing up performance engineers to focus on the finer points of system behavior.
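To give a flavor of the idea, here is a minimal sketch of reinforcement-style tuning as an epsilon-greedy bandit over a handful of configuration knobs. Everything here is illustrative: the setting names, the simulated benchmark, and the algorithm are assumptions for demonstration, not Concertio's actual parameters or method.

```python
import random

# Hypothetical tunable knob values; illustrative only, not real product settings.
SETTINGS = ["prefetch_off", "prefetch_conservative", "prefetch_aggressive"]

def measure_throughput(setting):
    """Stand-in for a real benchmark run; returns a noisy throughput score.

    In a real tuner this would apply the setting and run the workload.
    The baseline scores below are made up for the sketch.
    """
    base = {"prefetch_off": 1.0,
            "prefetch_conservative": 1.3,
            "prefetch_aggressive": 1.1}[setting]
    return base + random.uniform(-0.05, 0.05)

def tune(trials=300, epsilon=0.1, seed=0):
    """Epsilon-greedy bandit: mostly exploit the best-known setting,
    occasionally explore a random one, and keep running mean rewards."""
    random.seed(seed)
    counts = {s: 0 for s in SETTINGS}
    means = {s: 0.0 for s in SETTINGS}
    for _ in range(trials):
        if random.random() < epsilon:
            s = random.choice(SETTINGS)        # explore
        else:
            s = max(means, key=means.get)      # exploit best estimate so far
        reward = measure_throughput(s)
        counts[s] += 1
        means[s] += (reward - means[s]) / counts[s]  # incremental mean
    return max(means, key=means.get)

if __name__ == "__main__":
    print("best setting:", tune())
```

Production tuners deal with far larger, interacting parameter spaces and noisier measurements, but the core loop — try a configuration, measure, update an estimate, bias toward what worked — is the same.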
A year and a half ago I wrote about a start-up working on dynamically-tuned, self-optimizing Linux servers. That company is now known as Concertio and they just launched their “AI powered” toolkit for IT administrators and performance engineers to optimize their server performance.
There was a simpler time when system tuning consisted of adjusting relatively few knobs, a manual but not overly demanding task that brought out the best in system performance. But now, as we move toward accelerated enterprise systems networked from data center to public cloud to sensor-equipped devices at the edge, simplicity in systems tuning is long gone.