The 30th anniversary of Linux gives us a chance to reflect on the evolution of open source and how it has transformed the corporate landscape of technology makers.  While open-source software was originally seen as a democratization of technology and a threat to traditional corporations, both disruptive startups and the tech giants that adapted successfully have found new value and competitive advantage in participating in the OSS community.  Linux proved that the power of community and open standards could create a commercially successful tool, as it became the operating system of the Internet.  Meanwhile, access to infrastructure grew rapidly as we moved from single, local supercomputers to billions of globally distributed cloud instances.

Today, we expect a wide variety of sizing options, effectively infinite capacity, and global distribution from any cloud provider, something that would have been unthinkably difficult 30 years ago.  Wide accessibility and a variety of competing providers commoditized infrastructure, and server provisioning became largely boring.  Early entrants entrenched themselves with a competitive advantage that only grew with scale, leaving other providers little room to win on price.  Without the ability to compete on the accessibility, availability, or price of infrastructure, the industry was forced to find a new layer of abstraction.  As an industry, we had to ask, "what's all this infrastructure FOR?"  Though certainly one day we shall find The Met adorned with beautiful infrastructure system diagrams, nobody (save a few of our most creative artists) was building global systems for the sake of art.

Infrastructure is only a means to an end: delivering applications and services.  In addressing the purpose of infrastructure, a new world was born in which the servers were no longer the product; the product became the interfaces and the orchestration of that infrastructure.  Providers found ways to abstract even the three golden resources (CPU, memory, and storage) from each other and provide them as individual services, backed by the same infrastructure they had been selling previously.  They built services on top of those that managed common patterns of deployment, monitoring, authentication & authorization, analytics, and even application logic.  In this endeavor, they narrowed the gap between infrastructure and product, making DevOps accessible not just to server users, but directly to product builders.

In the same way that purchasing, protecting, and maintaining the physical servers behind Amazon.com grew to such an unprecedented scale that it beckoned forth the first infrastructure APIs and abstractions, which became AWS, the scale of container orchestration behind Google.com beckoned forth an abstraction of its own: enter Borg.  Just as servers were commoditized by cloud providers and application configuration was standardized into the container by Docker, Borg provided a configurable abstraction for the rest of Google's operations pipelines.

By this point, the power of abstractions and standards to put computation in the hands of product-focused teams was clear.  It was then that Google took a giant leap of faith and open-sourced the lessons of Borg as Kubernetes, seeding the CNCF.  Why release such incredibly powerful proprietary work to the world, and invite others to enjoy the spoils of engineering that could only have happened at Google?  Clearly, Google saw something even more valuable than the orchestration engine they had created.  By empowering engineers across the world to spend more time writing applications and less time worrying about how to run them, they could amplify the market for the assets they really wanted to sell: compute, memory, and storage.

Of course, this would be good not just for Google, but for VMware, Amazon, IBM, and all of their other competitors.  It is uninteresting where intention ended and boon began, but the result is clear.  It also had one particularly beneficial effect for Google: an immediate advantage in the world of Kubernetes.  It had, after all, grown out of systems built to run their internal services, and their engineers knew it better than anybody.  Other providers also wanted this advantage, and began dedicating their own resources toward the development of Kubernetes.  Corporate competitors became unified behind a mission to empower the industry, and forced each other to build sensible abstractions that could benefit them all.

The CNCF became the vehicle of both growth and governance, and itself began to grow quickly.  The existing players continued to repeat this process for more tooling, starting projects like the Istio service mesh.  The world would never be the same.

Cloud providers continued to compete by selling dev-friendly abstractions and APIs as services; deployment, orchestration, networking, and even application logic became increasingly soldered onto the cloud infrastructure we were buying.  In crept the fear of vendor lock-in.  If it was already hard to change our provider of servers, it became nearly impossible to change direction on what had consumed our entire operations stack.  The CNCF framework can be seen as the antidote to this poison.  Open source technologies built collaboratively by startups and cloud providers could democratically set standards.  Each cloud provider could release its own managed service integrated with its own suite, while startups could build highly specialized platforms without worrying about the traditional problems of deploying and managing infrastructure.
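To make that portability concrete, here is a minimal sketch (not from the original piece) of what "no lock-in" looks like in practice, assuming the official kubernetes Python client and a kubeconfig pointing at whichever conformant cluster you happen to use; the workload, namespace, and resource figures are invented for illustration.

```python
# A rough sketch of provider-agnostic deployment, assuming the official
# `kubernetes` Python client. Workload, namespace, and resource values are
# invented for illustration.
from kubernetes import client, config

# Read credentials from the current kubeconfig context; nothing here is
# specific to any one provider's managed Kubernetes.
config.load_kube_config()

web = client.V1Container(
    name="web",
    image="nginx:1.25",  # placeholder application
    resources=client.V1ResourceRequirements(
        requests={"cpu": "250m", "memory": "128Mi"},  # the golden resources, declared
        limits={"cpu": "500m", "memory": "256Mi"},    # rather than provisioned by hand
    ),
)

pod = client.V1Pod(
    api_version="v1",
    kind="Pod",
    metadata=client.V1ObjectMeta(name="web", labels={"app": "web"}),
    spec=client.V1PodSpec(containers=[web]),
)

# The same call works against any conformant cluster the kubeconfig points at.
client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```

Nothing in the spec names a provider; moving clusters means changing the kubeconfig context, not rewriting the application.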

Many became so good at solving problems in their space that it behooved the big players to pay top dollar to acquire their talent, experience, and assets; acquisition by a cloud provider gave the original creators the scale they needed to thrive.  I won't go so far as to say that the CNCF model made market competition and M&A more collaborative and less cutthroat, or that it led to more winners and fewer losers, but it's impossible to deny the strangeness with which 1980s CEOs might look at partnerships between the major players and their disruptors.

The CNCF framework captured the attention of technical artists, fans, and investors alike.  And so, more and more tools were born in this landscape, and entire companies were reborn around OSS they would donate to the CNCF, like Buoyant's Linkerd.  Those who had been present at the cloud providers during the turn of the era left to start companies that revolved entirely around this model.

The CNCF model grew so popular that, even as competing companies began to collaborate on technology, competing standards emerged.  My personal favorite example is the modern observability space.  OpenTracing and OpenCensus were separate efforts that the CNCF was able to merge around common ground into what is now OpenTelemetry.  Honeycomb, Lightstep, and now Nobl9 have sprung up around these efforts, along with the new OpenSLO.  You won't find these companies trashing each other on the KubeCon floor, as one might expect; rather, they've carved out their own lanes and seek simply to balloon the value of their shared field.  This is vastly different from the winner-takes-all mentality of the tech giants at the birth of the personal computer.
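To see what that shared ground buys in practice, here is another minimal sketch (again, not from the original piece) using the OpenTelemetry Python SDK; the service and span names are invented, and the console exporter stands in for an OTLP exporter pointed at Honeycomb, Lightstep, or any other backend.

```python
# A rough sketch of vendor-neutral instrumentation on the merged standard,
# assuming the opentelemetry-api and opentelemetry-sdk packages. Service and
# span names are invented for illustration.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# Wire spans to a console exporter here; pointing an OTLP exporter at
# Honeycomb, Lightstep, or any other backend is a small configuration swap.
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("checkout-service")

# Application code talks only to the standard API, never to a vendor SDK.
with tracer.start_as_current_span("charge-card") as span:
    span.set_attribute("order.total", 42.50)
```

Because the instrumentation targets the standard rather than any vendor, switching backends is an exporter change, not a rewrite.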

It remains to be seen whether multicloud will be a necessary next layer of abstraction, whether for resilience or for competitive agility. Regardless, we can all look forward to greater flexibility as the democratization of DevOps continues to be driven by the CNCF model.

To learn more about cloud native technology innovation, join KubeCon + CloudNativeCon North America 2021, which takes place October 11-15.