While F5 Networks claimed its recent deal to acquire NGINX will “bridge the divide” between DevOps and NetOps, the acquisition also promises to accelerate both companies’ goals to create new service mesh architectures.

In agreeing to acquire NGINX for $670 million, F5 is paying a premium for a rival with $26 million in revenues last year. But NGINX offers an alternative to F5’s traditional load balancers and web application firewalls. And NGINX has made no secret that its modern approach to application delivery and security was targeting F5’s lucrative business.

In January, two months before the deal was reached, NGINX CMO Rob Whiteley showcased the opportunity to replace F5’s hardware-based load balancers and application delivery controllers with its modern, open-source, software-based approach. Comparing F5’s load balancing and web application firewall appliances to fax machines, as analog devices in a digital world, Whiteley asked in a blog post: “Why are you still using your F5 hardware load balancers?”

Noting that hardware load balancers have been an integral part of data-center architecture for more than two decades, he added that “times have changed and F5 appliances are now the fax machines of the modern application world.” The two companies take different approaches to application-centric load balancing and network security, though there is some overlap, notably that each was looking to accelerate its own emerging service mesh architecture.

Mike Fratto, senior analyst at 451 Research, said that F5 sought more than just NGINX’s service mesh technology, which the company is just rolling out. “With NGINX’s ubiquity among web and application developers, it provides a way in for F5 that was otherwise difficult to broach,” Fratto said. “There are other benefits as well, such as complementary engineering cultures, and both companies have a strong focus on application delivery.”

The anticipated rise of network automation for modern multi-cloud environments, enabled by APIs and service mesh architectures, was not lost on F5, where the bulk of its $2.1 billion in revenues last year came from its load balancers, application delivery controllers (ADCs) and web application firewall (WAF) appliances. F5 shares fell 8 percent the day after the deal was announced, amid skepticism among financial analysts about how NGINX would affect F5’s top and bottom lines.

Despite the disparity in revenues, NGINX grew 65 percent last year, while F5 posted flat sales growth, though its profits grew in 2018. A year ago, F5 president and CEO François Locoh-Donou presented a multiyear plan for growth with a strategy called Horizon. The long-term plan focused on becoming less dependent on F5’s high-margin hardware appliance business and on addressing the shift toward modern, multi-cloud environments, where applications are built as containerized microservices and designed for serverless environments.

Specifically, F5 set its sights on growing demand for cloud-based versions of its load balancers and ADCs that deliver more dynamic security and can manage the scaling of service levels.

In a conference call announcing the deal, Locoh-Donou said 62 percent of enterprises surveyed for F5’s recently published 2019 State of Application Services report are implementing automation and orchestration in their IT ops processes wherever possible. According to the report, 42 percent are exploring modern application architectures, including containerization and microservices. More than half, 52 percent, said this shift is changing how they develop applications, he noted, adding that 72 percent frequently use open-source technology. “F5 believes that every organization can benefit from the agility and flexibility enabled by modern technologies without compromising on the time-tested fundamentals of security, manageability and reliability,” he said.

Service Meshes Still Formative

NGINX recently acknowledged that only a small percentage of IT ops professionals have experimented with service mesh architectures at this point. But industry experts widely agree that this year will bring major developments in service mesh architectures designed to support new cloud-native applications built as microservices for containerized and serverless environments.

Among them is the newest release of HashiCorp’s Consul service mesh, which offers an alternative to traditional load balancers and ADCs with a central dynamic registry that is aware of all the services and where they’re running. Others include Linkerd 2.0, now a Cloud Native Computing Foundation (CNCF) project that brings together Linkerd and Conduit. Perhaps the most prominent is the Istio project, sponsored by Google, IBM and Lyft with broad industry support; version 1.1 of the project was released earlier this month.
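To illustrate the registry-centric model, here is a minimal sketch using HashiCorp’s official Go client for Consul: a service instance registers itself with the local agent, and consumers ask the registry for healthy instances at request time instead of relying on a statically configured load balancer pool. The service name, port and local-agent defaults are illustrative assumptions, not details drawn from the article.

    // A minimal sketch of Consul's registry-based approach, using the official
    // Go client (github.com/hashicorp/consul/api); the service name, port and
    // local-agent defaults are illustrative.
    package main

    import (
        "fmt"
        "log"

        consul "github.com/hashicorp/consul/api"
    )

    func main() {
        client, err := consul.NewClient(consul.DefaultConfig())
        if err != nil {
            log.Fatal(err)
        }

        // Register this instance with the local agent instead of adding it
        // to a static load balancer pool.
        if err := client.Agent().ServiceRegister(&consul.AgentServiceRegistration{
            Name: "web",
            Port: 8080,
        }); err != nil {
            log.Fatal(err)
        }

        // Consumers query the registry for healthy instances at request time.
        entries, _, err := client.Health().Service("web", "", true, nil)
        if err != nil {
            log.Fatal(err)
        }
        for _, e := range entries {
            fmt.Printf("web instance at %s:%d\n", e.Service.Address, e.Service.Port)
        }
    }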

Even before F5 revealed its Horizon blueprint a year ago, the company was already investing in service mesh architectures. In 2017, four engineers at F5 created Aspen Mesh as an incubation project for the company; the project now has 15 employees. Aspen Mesh released the first open beta of its Istio-based service mesh in December and issued an update to that release earlier this month.

The plan is to keep it in open beta through this year. “As enterprises started to get on the Kubernetes bandwagon last year, containerizing their applications and building modern application environments, we realized there’s a problem to be solved here around helping companies get better visibility and control of these microservices environments,” Randy Almond, who leads market development at Aspen Mesh, said in a recent interview before the F5-NGINX deal was announced. “We think Istio is a really powerful technology pattern that we can build upon.”

An F5 spokesman said it’s too early to discuss how NGINX will affect Aspen Mesh before the deal closes, which the company expects to happen next quarter. “From an overall portfolio perspective, we are finding that there are more complementary solutions we both offer to customers than there is overlap,” the spokesman said.

Nevertheless, while NGINX affirmed after the deal was announced that it sees Istio as the leading implementation of a service mesh, the technology is not mature yet, according to senior product director Owen Garrett, who noted that many service meshes today are built on home-grown solutions.

“A more universal approach is emerging, described as the ‘sidecar proxy’ pattern,” he noted. “This approach deploys Layer 7 proxies alongside every single service instance; these proxies capture all network traffic and provide the additional capabilities – mutual TLS, tracing, metrics, traffic control, and so on – in a consistent fashion.”
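To make the sidecar pattern concrete, here is a minimal sketch in Go of the shape such a proxy takes: a small Layer 7 reverse proxy deployed next to one service instance that captures its traffic and adds a cross-cutting capability, in this case request logging. The ports are illustrative, and a real mesh deploys a full-featured proxy such as Envoy or NGINX that also handles mutual TLS, tracing, metrics and traffic control.

    // A sketch of the sidecar proxy pattern: an L7 proxy deployed alongside a
    // service instance, capturing its traffic and adding behavior uniformly.
    package main

    import (
        "log"
        "net/http"
        "net/http/httputil"
        "net/url"
        "time"
    )

    func main() {
        // The application the sidecar fronts, reachable only on localhost.
        upstream, err := url.Parse("http://127.0.0.1:8080")
        if err != nil {
            log.Fatal(err)
        }
        proxy := httputil.NewSingleHostReverseProxy(upstream)

        handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            start := time.Now()
            proxy.ServeHTTP(w, r)
            // Telemetry the service gets "for free" from the sidecar; a real
            // mesh would also terminate mutual TLS and emit traces and metrics.
            log.Printf("%s %s took %s", r.Method, r.URL.Path, time.Since(start))
        })

        // All traffic to the service instance enters through the sidecar.
        log.Fatal(http.ListenAndServe(":15001", handler))
    }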

Tom Petrocelli, a research fellow at Amalgam Insights, is among those who believe it’s premature to determine whether the improved multi-clustering support in the new Istio 1.1 release will become the preferred approach. “It’s good to get ahead of the curve since service mesh technology is an enabling technology for large clusters,” Petrocelli said.

“However, it’s also a bit of catching up to other solutions that already do multi-cluster and have been deployed in very large clusters,” he added. Both A10 and HashiCorp Consul claim multi-data center, multi-cluster capabilities, and several offerings, including NGINX and Consul, support hybrid apps that don’t involve Kubernetes.

Meanwhile, there are different views as to how important a role Istio will play in the future of service mesh architectures. It’s hard to dismiss a technology that is supported by the likes of Google, IBM, VMware, Huawei, Red Hat, Cisco, SAP, Salesforce, Pivotal, SUSE, Datadog and LightStep.

“Istio certainly has the most momentum around the service mesh space,” said IBM Cloud CTO Jason McGee. “As I talk to clients, I think the capabilities that Istio embodies – the ability, through policy and configuration control, to get security, visibility into applications and traffic telemetry that shows who’s talking to who, and to have more control over deployments – really resonate.”

Yet others say that while Istio is well suited to Kubernetes, and despite its growing popularity, there are service mesh requirements beyond Kubernetes. “Istio really only works in Kubernetes, versus, in our view, you’re going to have heterogeneous sets of runtimes,” said HashiCorp co-founder and CTO Armon Dadgar.

“Some of your apps are going to run on Kubernetes but others are going to run on Pivotal Cloud Foundry, some are going to run in Amazon serverless environments like a Lambda and Fargate and others. The challenge is all these things have to talk to each other. So if I’m all in 100 percent in Kubernetes, that’s great, a system like Istio makes perfect sense. But the reality for most people is they’re going to have mixed environments where some of my apps are Kubernetes and some are legacy and some are serverless and so how do all these things interoperate together? That’s what our goal was for Consul and really how it’s differentiated.”

Google Cloud senior product manager Dan Ciruli, who sits on the Istio working group, believes Istio will evolve into the leading service mesh architecture. “We think that the service mesh community will follow the same pattern as container orchestration,” Ciruli said. “Five years ago, there were lots of competing offerings, but the industry came to realize that innovation would happen faster if everyone consolidated around one open-source project — Kubernetes.”

Ciruli noted that there are already developers from dozens of companies working on Istio. The benefits of Istio are twofold, he added. “First, more developers means faster innovation and second, it encourages the industry to standardize on APIs, which will create an ecosystem of telemetry, policy and routing vendors that can easily be dropped in,” he said. “We know we have a way to go with usability and simplicity, but unless someone is deploying a Kubernetes infrastructure with no workloads that communicate with other workloads – say, a pure batch computing use case – they will want to govern the communication between those services.”

NGINX Service Mesh

Major software providers are driving the development of service meshes: distributed architectures that manage the dynamic exchange of traffic among microservices while ensuring granular security, management and monitoring of those services. While F5 has Aspen Mesh, the NGINX service mesh looks beyond Istio.

NGINX, which handed off development and support of its nginMesh project to the open-source community, recently added what it described as enterprise-grade service mesh capabilities to its NGINX Application Platform, an architecture that also provides load balancing, API management, a content cache, a web server, a WAF and a polyglot app server.

The core service mesh capabilities are provided in NGINX Unit, a dynamic application server designed to run apps written in multiple languages and frameworks. NGINX Unit is designed to run with the NGINX Controller management plane and NGINX Plus, the company’s integrated software-based load balancer, web server and content cache built on its open-source platform.
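As a rough illustration of what an application prepared for NGINX Unit can look like, the sketch below assumes Unit’s Go language module, which supplies a drop-in replacement for net/http’s ListenAndServe so Unit can manage the application’s processes; the import path, port and fallback behavior noted in the comments are assumptions for illustration rather than details from the article.

    // A minimal Go app written to run under NGINX Unit, assuming Unit's Go
    // language module; Unit's process manager, rather than the app itself,
    // decides how instances are started and scaled.
    package main

    import (
        "fmt"
        "net/http"

        unit "unit.nginx.org/go"
    )

    func main() {
        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprintln(w, "served by NGINX Unit")
        })
        // Drop-in replacement for http.ListenAndServe; when run outside Unit
        // it is assumed to fall back to a normal standalone listener.
        unit.ListenAndServe(":8080", nil)
    }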