While application performance monitoring offerings can provide valuable insight into the user experience of your application, there is one aspect of performance monitoring and maintenance that Matt Chotin, senior director of tech evangelism at APM provider AppDynamics, says businesses don’t put the proper resources behind: measuring how the application’s underlying infrastructure affects its performance.

“The weaknesses that you essentially have are about the fact that containers and the infrastructure supporting them are introducing more abstraction and separation,” said Chotin. “So in the olden days, when you might have had a bigger, monolithic application running on solid infrastructure on-premises, not virtualized hardware, you had this direct tie to understanding what’s going on. You might know ‘Hey, this CPU is spinning up, it’s probably impacting my application.’”

With much of today’s infrastructure moved off-premises and into the cloud, Chotin recommends a top-down approach to determining where issues lie, but says the approach needs to extend to the infrastructure regardless of its location or implementation.

“As you get into containers and the infrastructure that supports them, there’s more of this disconnect where you don’t know what’s actually impacting what, so you need to take this application-centric lens to make sure, first, what’s happening to the application and how it’s impacting the users of that application, then look down at the infrastructure and the containers and say, ‘Are they the problem? What’s the correlation?’” Chotin said.
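To make that concrete, here is a rough sketch in Python of the kind of top-down check Chotin describes. The metric names and numbers are invented for illustration: first confirm that the user-facing application metric is actually degraded, and only then look at which infrastructure metrics move with it.

```python
# Hypothetical sketch of an application-centric, top-down check.
# Metric names and values are invented for illustration.
import pandas as pd

# One row per minute: a user-facing application metric plus infrastructure metrics.
metrics = pd.DataFrame({
    "app_p95_latency_ms": [210, 215, 220, 480, 500, 230],
    "container_cpu_pct":  [45, 47, 50, 92, 95, 52],
    "container_mem_pct":  [60, 61, 60, 62, 61, 60],
})

# Step 1: is the application degraded from the user's point of view?
latency_baseline = metrics["app_p95_latency_ms"].median()
degraded_minutes = metrics["app_p95_latency_ms"] > 1.5 * latency_baseline
print("Minutes flagged as degraded:", int(degraded_minutes.sum()))

# Step 2: only then look down the stack and ask which infrastructure
# metric actually moves with the application problem.
correlations = metrics.corr()["app_p95_latency_ms"].drop("app_p95_latency_ms")
print("Correlation with application latency:")
print(correlations.sort_values(ascending=False))
```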

One way to ease the burden of figuring out where along the application delivery chain performance hiccups or bottlenecks occur is with properly trained machine learning algorithms, according to Chotin.

“The better the data that you feed into your machine learning model, the better the data out, which allows you to make better decisions,” Chotin said. “So what you can do is take all of the data that you’re gathering from the application down through the infrastructure and feed that into a machine learning model. Good machine learning algorithms are able to develop correlations and baselines so that you understand how a system is performing under ideally normal circumstances.”
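As a rough illustration of the baseline idea, the sketch below learns what “normal” looks like from historical samples of a single metric and scores new samples against it. It uses a simple mean-and-standard-deviation baseline, which is just one of many possible approaches and is not AppDynamics’ own model.

```python
# A simple, hypothetical baseline: learn "normal" from history, then score
# new samples by how far they sit from it. Not AppDynamics' actual model.
import numpy as np

rng = np.random.default_rng(0)
history_ms = rng.normal(loc=300, scale=20, size=1_000)  # e.g. past response times

baseline_mean = history_ms.mean()
baseline_std = history_ms.std()

def anomaly_score(sample_ms: float) -> float:
    """Standard deviations between a new sample and the learned baseline."""
    return abs(sample_ms - baseline_mean) / baseline_std

print(anomaly_score(310.0))  # close to normal -> small score
print(anomaly_score(650.0))  # far from normal -> large score
```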

Chotin says the correlations gleaned from these properly trained models can make it that much easier for your team to spot problem areas in delivery, removing some of the abstraction between the performance your infrastructure reports and the application performance users actually experience.

“If there’s a divergence in metrics that are normally correlated, you might say ‘Hm, something is going on,’” Chotin explained.
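A minimal, illustrative version of that divergence check might watch the rolling correlation between two metrics that normally track each other, such as request rate and CPU, and flag the point where the correlation collapses. The series, window size and threshold below are made up for the example.

```python
# Illustrative divergence check: two metrics that normally move together
# (request rate and CPU here) stop correlating after an injected issue.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
requests = pd.Series(rng.normal(1_000, 50, 200))
cpu = requests * 0.05 + rng.normal(0, 0.5, 200)  # CPU normally tracks load
cpu.iloc[150:] = rng.normal(90, 2, 50)           # issue: CPU decouples from load

window = 30
rolling_corr = requests.rolling(window).corr(cpu)

# Flag windows where the usually strong correlation collapses.
diverged = rolling_corr < 0.5
print("First sample where the metrics diverge:",
      diverged.idxmax() if diverged.any() else "none")
```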

These developer-digestible performance analytics are the last piece of the puzzle, Chotin says.

“Leveraging all of these pieces together correctly, with strong monitoring, end-to-end, starting at the application and building depth all the way down through the infrastructure, is going to provide you the best opportunity to, one, understand what’s going on in your system, two, remediate any issues that potentially occur and, three, ensure that you are prioritizing all of that to what’s actually impacting the application and therefore your business,” Chotin said.