Komodor is adding new cost optimization capabilities to its Kubernetes management platform, enabling organizations to reduce spend while maintaining reliability across their Kubernetes environments.

According to Komodor, IT teams often overspend on compute because right-sizing workloads is difficult, owing to a lack of expertise, the sheer number of factors to consider, and the risk involved in making changes. The latest updates make right-sizing practical without introducing additional risk, the company explained.

The platform now provides unified views of costs across cloud, hybrid, and on-prem environments, with the option to dive deeper into clusters, services, and namespaces.

It also offers AI-powered recommendations based on CPU and memory usage, throttling, and scheduling signals, and it can automatically reserve and manage extra compute resources as needed.
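
In practical terms, right-sizing means aligning a workload's resource requests with observed usage plus some headroom. The sketch below is illustrative only and is not Komodor's algorithm; the function name, headroom factor, and sample data are assumptions chosen to show how a simple percentile-based recommendation could be derived from usage signals.

```python
# Illustrative sketch only: a simple percentile-based right-sizing heuristic.
# This is NOT Komodor's algorithm; the helper name, headroom factor, and
# sample data are assumptions for demonstration.
from statistics import quantiles

def recommend_request(usage_samples, headroom=1.2, percentile=0.95):
    """Suggest a resource request from observed usage samples.

    usage_samples: observed usage values (e.g. CPU millicores).
    headroom: safety multiplier so the request sits above typical peak usage.
    percentile: the usage percentile the request should cover.
    """
    if len(usage_samples) < 2:
        raise ValueError("need at least two usage samples")
    # quantiles(n=100) returns 99 cut points; index 94 corresponds to the 95th percentile.
    cut_points = quantiles(usage_samples, n=100)
    target = cut_points[int(percentile * 100) - 1]
    return target * headroom

# Example: CPU usage sampled in millicores over a day.
cpu_millicores = [120, 150, 180, 210, 300, 160, 140, 175, 220, 190]
print(f"Suggested CPU request: {recommend_request(cpu_millicores):.0f}m")
```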

Additionally, Komodor can resolve placement blockers like Pod Disruption Budgets and affinity rules, and uses advanced autoscaling to improve node utilization, reduce fragmentation, and accelerate scaling. 

Finally, IT teams can set safety thresholds and customize optimization profiles based on conservative, moderate, or aggressive approaches.

“In large scale Kubernetes environments, cutting costs without visibility into application behavior is a recipe for downtime,” said Itiel Shwartz, co-founder and CTO of Komodor. “What organizations need is a way to optimize cost and performance—across the full scope of infrastructure and application operations. That’s what we’ve built.”