New technologies emerge like baggage at an airport carousel. Every so often, a new one pops out from under that rubber flap we all stare at as we wait for our luggage. Cloud, virtualization, software-defined networks. They’ve all been around the carousel. 

One of the most recent to emerge is artificial intelligence (AI) and its matching suitcase, machine learning (ML). The two terms are often used interchangeably, especially when discussing big data, predictive analytics, and other digital transformation topics.

AI is the use of technologies to build systems that mimic or improve on cognitive functions associated with human intelligence. ML is a subset of AI that enables a machine or system to automatically learn and improve from experience.

AI has been a subject of fascination for decades, dating back to 1950, when Alan Turing introduced the Turing test to assess a machine's capacity to demonstrate intelligent behavior. From the mid-1970s onward, significant research activity created the foundations of modern AI, producing well-known theoretical tools such as fuzzy logic, Bayesian networks, Markov models, and neural networks. Concurrently, new programming languages like Prolog, LISP, and Smalltalk set the scene for many of the modern interpreted languages we use today.

Today’s reemergence of AI can be attributed to a remarkable convergence of events akin to a celestial alignment. First, the ongoing digital transformation has converted nearly every company into a software company. Second, the advent of big data has provided convenient access to virtually any amount of unstructured information. And last but not least, the cloud has enabled exponential growth in storage and processing power at affordable costs. That’s why AI is now recognized as a critical component for increasing business efficiency and growth instead of being perceived as a mere novelty.

Deep learning is a subset of ML that uses artificial neural networks to reproduce, as closely as possible, the learning process of the human brain. There is an abundance of cloud-based AI/ML services (from Amazon, Google, Microsoft, IBM, and others) that allow large datasets to be easily ingested and managed to train algorithms at scale and at lower cost.
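
To make the distinction concrete, here is a minimal, hypothetical training sketch using TensorFlow's Keras API on synthetic data. A managed cloud AI/ML service runs this same pattern – ingest data, define a model, train it – just against far larger, real datasets and on managed infrastructure.

```python
# Minimal deep learning sketch: a small feed-forward neural network trained on
# synthetic data. A stand-in for the far larger jobs run on cloud AI/ML services.
import numpy as np
import tensorflow as tf

# Toy dataset: 10,000 samples of 20 features each, with a binary label.
X = np.random.rand(10_000, 20).astype("float32")
y = (X.sum(axis=1) > 10).astype("float32")

# A small stack of artificial neuron layers, loosely mimicking how the brain learns.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Training is the data- and compute-hungry phase; in the cloud it scales out
# across managed storage and accelerators instead of a single machine.
model.fit(X, y, epochs=3, batch_size=256, validation_split=0.1)
```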

Even ChatGPT – the much-talked-about AI chatbot that promises, somewhat wistfully, to answer questions on nearly any subject and even understand customer intent – is hosted on Microsoft Azure infrastructure. And as you may have noticed, this article was not written by ChatGPT.

Almost every discussion about the benefits of AI is balanced by one about the risks. And one of the main risks is cybersecurity: the potential for bad actors to use AI to generate fraudulent services, gather harmful information, disclose personal data, and produce malicious text, malicious code, and offensive content.

However, even though cybercriminals may increasingly employ AI to carry out sophisticated and targeted attacks, this may not be the most immediate risk to your network. In fact, the multiplication of enterprise use cases leveraging cloud-enabled AI services can be a more immediate concern for your network performance.

There are three reasons for this:

  • Cloud-hosted deep learning frameworks such as TensorFlow consume vast amounts of network bandwidth – for example, when training or updating large models. As an extreme reference point, GPT-3, the model behind the original ChatGPT, was trained on roughly 45TB of textual data (see the quick calculation after this list).
  • Real-time applications of AI, such as image processing, anomaly detection, or fraud detection, can vastly increase network utilization, simply because of the sheer volume of source data that must be analyzed.
  • Using AI services for inference, which is the process of using pre-trained models to analyze new data, is usually less of a concern than the training phase. However, as applications tend to move toward the network edge, closer to the end-user devices, bottlenecks may occur within the WAN infrastructure or the cloud interconnect.
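
As a rough, back-of-the-envelope illustration of that first point, the hypothetical snippet below estimates how long moving a 45TB training corpus would take over WAN links of different speeds. The figures ignore protocol overhead, retransmissions, and compression, and are only meant to show the order of magnitude involved.

```python
# Back-of-the-envelope transfer times for a 45TB dataset over various link speeds.
# Illustrative only: ignores protocol overhead, retransmissions, and compression.

DATASET_TB = 45
DATASET_BITS = DATASET_TB * 1e12 * 8          # terabytes (decimal) -> bits

LINK_SPEEDS_GBPS = [1, 10, 40, 100]           # typical WAN / cloud-interconnect speeds

for gbps in LINK_SPEEDS_GBPS:
    seconds = DATASET_BITS / (gbps * 1e9)     # transfer time at full line rate
    print(f"{gbps:>4} Gbps link: ~{seconds / 3600:,.1f} hours to move {DATASET_TB} TB")
```

Even a fully saturated 10 Gbps link needs around ten hours to move that much data once, which is why training data is usually staged and processed inside the cloud rather than shipped repeatedly across the WAN.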

This concern about network bandwidth is nothing new. The WAN traffic generated by earlier generations of VoIP and video applications foreshadows what today's AI/ML applications bring: both are characterized by bandwidth and latency sensitivity, bursty traffic patterns, quality-of-experience considerations, scalability issues, and other challenges.

Understanding these similarities can help enterprises design efficient, optimized SD-WAN architectures that are future-proofed for the growing wave of emerging AI/ML use cases.

Think of it another way. Without taking action on bandwidth management, your AI/ML innovations may be delayed or dropped. That’s like being the last person left waiting at the airport carousel, thinking your bags have disappeared to a different destination.

Learn how to gain insights into end-to-end performance and hybrid WAN architectures that support your AI initiatives here.