New Relic has introduced New Relic AI Monitoring (AIM), an application performance monitoring (APM) solution designed specifically for AI-powered applications.

According to the company, AIM gives engineers enhanced visibility and insights across the AI application stack, making it easier to troubleshoot and optimize AI applications for performance, quality, cost, and responsible AI use. It includes more than 50 integrations and features such as LLM response tracing and model comparison.

These integrations cover components such as LangChain for orchestration, large language model (LLM) providers like OpenAI and Hugging Face, popular machine learning libraries like PyTorch and TensorFlow, and model-serving platforms such as Amazon SageMaker and Azure ML, spanning AI infrastructure components from Azure, AWS, and GCP.

“With every organization integrating AI into their products and processes, AI workloads are now part of modern organizations’ application architectures,” said Manav Khurana, the chief product officer at New Relic. “With AI monitoring, we have applied our deep expertise from inventing cloud APM to providing end-to-end visibility into AI-powered applications to help businesses manage performance, costs, and the responsible use of AI.”

New Relic AI Monitoring introduces several key features and use cases to enhance the monitoring and optimization of AI-powered applications. One is auto-instrumentation: New Relic agents come pre-equipped with AIM capabilities, streamlining the setup process.

This provides comprehensive visibility into the AI stack, response tracing, and model comparison with quick and easy implementation. AIM offers a holistic perspective through full AI-stack visibility, covering the application, the infrastructure, and the AI layer itself. That view encompasses AI-specific metrics such as response quality and token usage alongside traditional APM golden signals.

Deep trace insights for every large language model (LLM) response are another crucial aspect of AIM. This feature lets users trace the entire lifecycle of complex LLM responses, offering insights to address performance issues and quality problems, including concerns related to bias, toxicity, and hallucination. AIM also facilitates the comparison of performance and costs across all models in a single view, enabling optimization through insights on frequently asked prompts, chain of thought, and prompt templates and caches.
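To make the cost-and-performance comparison concrete, the sketch below aggregates token usage, cost, and latency per model from a list of recorded LLM calls. This is not New Relic code; the model names and per-token prices are illustrative assumptions, and real pricing varies by provider.

```python
from dataclasses import dataclass

# Hypothetical per-1K-token prices for two illustrative models.
PRICES_PER_1K_TOKENS = {
    "model-a": 0.03,
    "model-b": 0.002,
}

@dataclass
class LLMCall:
    model: str
    prompt_tokens: int
    completion_tokens: int
    latency_ms: float

def summarize(calls):
    """Aggregate call count, cost, and mean latency per model --
    the kind of single-view comparison a monitoring tool surfaces."""
    summary = {}
    for c in calls:
        total_tokens = c.prompt_tokens + c.completion_tokens
        cost = total_tokens / 1000 * PRICES_PER_1K_TOKENS[c.model]
        s = summary.setdefault(c.model, {"calls": 0, "cost": 0.0, "latency_ms": 0.0})
        s["calls"] += 1
        s["cost"] += cost
        s["latency_ms"] += c.latency_ms
    for s in summary.values():
        s["latency_ms"] /= s["calls"]  # convert total to mean latency
    return summary

calls = [
    LLMCall("model-a", 500, 200, 1200.0),
    LLMCall("model-b", 500, 200, 400.0),
]
print(summarize(calls))
```

In practice these per-call records would come from response traces rather than being constructed by hand, but the aggregation logic is the same.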

AIM also supports responsible AI use by helping teams ensure that responses are appropriately tagged as AI-generated and checked for bias, toxicity, and hallucinations, using insights obtained from response trace analysis.
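As a rough illustration of the tagging idea (not New Relic's implementation), an application might attach provenance metadata and a simple screening flag to each model response before it reaches users. The field names and the keyword check here are toy assumptions; production screening would use dedicated classifiers.

```python
# Toy blocklist standing in for a real toxicity/bias classifier.
FLAGGED_TERMS = {"guaranteed cure", "always true"}

def tag_response(content: str, model: str) -> dict:
    """Wrap a model response with provenance metadata and a naive
    quality flag, so downstream consumers can see it is AI-generated."""
    flagged = any(term in content.lower() for term in FLAGGED_TERMS)
    return {
        "content": content,
        "ai_generated": True,   # explicit provenance label
        "model": model,
        "needs_review": flagged,
    }

print(tag_response("The capital of France is Paris.", "model-a"))
```

A monitoring layer can then alert on the rate of `needs_review` responses per model, tying the responsible-AI signal back to the same per-model view used for cost and latency.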

AIM is now available in early access to New Relic users across the globe. Users can sign up on New Relic's site to request early access, which is included as part of New Relic's simplified, consumption-based pricing.