Artificial intelligence (AI) is upending almost every sector – from health care and education to manufacturing and retail. It’s even creeping into comedy. Ameca – billed by UK start-up Engineered Arts as the ‘world’s most advanced humanoid robot’ – recently attempted to tell a joke. The punchline is so poor that we will leave the gag until the end of this article. Suffice it to say that the AI joke fails miserably.

However, the growth of AI is no joke. Worldwide spending on AI is expected to reach $300 billion by 2026, according to International Data Corporation. For its part, PwC estimates that AI could transform productivity and GDP potential, contributing $15.7 trillion to the global economy by 2030.

It looks like we’re reaching escape velocity with AI. And for good reason. It has the potential to meaningfully improve people’s lives, accelerate decisions, improve the customer experience, and increase competitiveness. Despite the controversy we hear nearly every day, AI is unlikely to take over the world or permanently destroy jobs, as Prof. Yann LeCun recently argued.

With most AI and machine learning (ML) use cases being deployed in the cloud, network teams are under more pressure than ever to integrate and support networks operated by third parties. According to the latest EMA Research Megatrends, “While 99% of enterprises have adopted public cloud, only 18% describe their tools as ‘very effective’ at monitoring the cloud.”

Now more than ever, network professionals must prepare wide-area networks (WANs) to support the rising tide of AI-enabled technology. Shifting from traditional WAN to SD-WAN technologies can help, but it also introduces new management complexity. Network operations can quickly be overwhelmed as they grapple with the growing number of network paths connecting remote branches to critical AI cloud services. These three simple recommendations can help operations teams stay ahead of the change:

  1. Streamline Operational Workflows: Enterprises that integrate network operations into a cross-domain operations center experience greater success than those that maintain a standalone network operations center (NOC). With the prevalence of cloud and SaaS models, and the resulting lack of visibility, network teams need to route user-experience metrics through standard operational workflows. This is key for organizations to successfully deliver AI-enabled applications supported by network services that span from the edge into multiple cloud providers.
  2. Validate Deployments End-To-End: Network teams face significant challenges in getting continuous feedback for validating the rapid, dynamic changes in their SD-WAN environment. As organizations are pressured to increase the agility of operational processes, traditional network monitoring practices can be a roadblock to AI adoption. Without explicit pre- and post-change validation of the end-user experience, risk-averse network teams will be wary of deploying required infrastructure changes for supporting AI-enabled use cases because of the potential risk to the quality of experience.
  3. Keep Transport and Cloud Providers Accountable: Network teams are now responsible for the end-user experience along the entire network path, regardless of infrastructure ownership. While network teams may have basic, binary insight into whether a service is being successfully delivered from one end of the network to the other, they won’t necessarily be able to identify all of the handoffs between ISPs and other network owners. By demonstrating that performance issues originate within environments owned by ISPs and third-party providers, network teams can reduce operational costs and realize the actual return on investment in cloud-based AI.
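The pre- and post-change validation described in the second recommendation can be sketched in code. The snippet below is a minimal, hypothetical example – the endpoint names, sample count, and 20% tolerance threshold are assumptions, not part of any particular monitoring product – that uses TCP connect time as a rough proxy for the end-user experience and compares medians before and after a change:

```python
import socket
import statistics
import time

def tcp_connect_ms(host: str, port: int = 443, timeout: float = 2.0) -> float:
    """Measure one TCP connect time in milliseconds (a crude proxy
    for the user experience of reaching a cloud AI service)."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.perf_counter() - start) * 1000.0

def sample(host: str, n: int = 5) -> list[float]:
    """Collect n latency samples for one endpoint."""
    return [tcp_connect_ms(host) for _ in range(n)]

def regression(baseline: list[float], post_change: list[float],
               tolerance_pct: float = 20.0) -> bool:
    """Flag a regression if the post-change median latency exceeds the
    baseline median by more than tolerance_pct percent (an assumed
    threshold -- tune it to your own service-level targets)."""
    before = statistics.median(baseline)
    after = statistics.median(post_change)
    return after > before * (1 + tolerance_pct / 100.0)
```

In use, a team would call `sample("ai-service.example.com")` before pushing an SD-WAN policy change, repeat it afterward, and only keep the change if `regression()` returns `False` – giving the explicit pre- and post-change evidence the recommendation calls for.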
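For the third recommendation, per-hop latency can help localize where along the path a slowdown is introduced. The sketch below parses hypothetical traceroute-style lines (real traceroute output varies by platform, so the format here is an assumption) and flags the hop with the largest round-trip-time jump – a rough pointer, not proof, to the segment, often an ISP handoff, that adds the delay:

```python
import re

def largest_rtt_jump(trace_output: str):
    """Parse lines shaped like ' 3  203.0.113.1  12.4 ms' and return
    ((hop_number, address), rtt_jump_ms) for the hop where round-trip
    time increases the most over the previous hop."""
    hops = []
    for line in trace_output.splitlines():
        m = re.match(r"\s*(\d+)\s+(\S+)\s+([\d.]+)\s*ms", line)
        if m:
            hops.append((int(m.group(1)), m.group(2), float(m.group(3))))
    worst_hop, worst_jump = None, 0.0
    prev_rtt = 0.0
    for hop, addr, rtt in hops:
        jump = rtt - prev_rtt
        if jump > worst_jump:
            worst_hop, worst_jump = (hop, addr), jump
        prev_rtt = rtt
    return worst_hop, worst_jump
```

Evidence like this – showing that the big latency step occurs at a hop inside a provider’s network – is what lets a team put a performance issue on the ISP’s desk rather than absorbing it as their own operational cost.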

So, what’s the key takeaway? As more organizations introduce cloud-based AI innovations, they will face new challenges in managing their WAN. Bandwidth pressure is not entirely new, as most network teams already manage voice, video, and other latency-sensitive applications. Traditional performance management and traffic prioritization can help, but the real focus will be on managing the genuine digital experience.

So, back to the AI joke. The robot Ameca was asked to think of the most entertaining thing to tell a researcher. Ameca replied: ‘A scientist was showing off his new AI robot to a group of people. He asked the robot, what is 2+2? The robot replied 4.’

Understandably, the researcher is unimpressed, questioning: ‘Why is that funny? What’s the punchline?’ Ameca seems unfazed, simply adding: ‘The punchline is that the scientist was so impressed he asked the robot what 4+4 was. The robot replied 8.’

In one final desperate attempt, the researcher asks: ‘What happened next?’, to which Ameca disappointingly replies: ‘The scientist was so impressed he asked the robot what 8+8 was. The robot replied 16,’ adding: ‘That’s the end of the joke.’

AI is clearly not going to take over the comedy scene any time soon, but it is no joke that it is creeping into your network infrastructure and your business – an excellent reason to get ready early on.