Google Cloud is adding a fully managed serverless offering that handles infrastructure management for DevOps teams, taking low-level provisioning and deployment tasks off their hands.

The new Google Cloud Run, launched at this week’s Google Cloud Next conference in San Francisco, aims to let developers move any type of code into Docker containers and let IT ops teams deploy those containerized workloads on Google Cloud Platform (GCP) or Google Kubernetes Engine (GKE) without having to consider the underlying cloud-compute infrastructure. Together with the new Google Anthos hybrid cloud platform, also announced at this week’s Google Cloud Next, the company is making a decided push toward establishing GCP and GKE as multi-cloud services.

Cloud Run, based on the Knative API set and runtime environment for enabling developer and workload portability in Kubernetes clusters, automatically handles the provisioning, configuration and scaling of workloads as a managed service, billed in sub-second increments of usage time. It initially serves HTTP and push requests and can run any code packaged into a Docker container.
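In practice, the contract Cloud Run imposes on a container is small: it must run a Linux process that listens for HTTP requests on the port supplied in the `PORT` environment variable. The sketch below, using only Python’s standard library, is one minimal example of such a service (the greeting text and file layout are illustrative assumptions, not anything Google prescribes):

```python
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

def greeting() -> bytes:
    # The response body; trivial on purpose.
    return b"Hello from Cloud Run\n"

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = greeting()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Cloud Run injects the port to listen on via $PORT; 8080 is the
    # conventional default when running the container locally.
    port = int(os.environ.get("PORT", "8080"))
    HTTPServer(("", port), Handler).serve_forever()
```

Because the service is stateless and answers individual requests, the platform can scale the number of container instances horizontally with traffic, which is exactly the scaling model described below.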

“The beautiful thing about Cloud Run is it just takes a Dockerfile, and that means it’ll run anything, and by anything I mean, you name it, the craziest, whacked-out thing you can think of,” said Oren Teich, director of product management at Google Cloud, during a briefing at this week’s conference. “You can take the exact same code that you have written and deploy it anywhere else you want.”
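The “just takes a Dockerfile” claim can be made concrete with a sketch. Assuming a single-file Python HTTP service saved as `app.py` (the base image and file names here are assumptions for illustration), a Dockerfile for Cloud Run can be as short as:

```dockerfile
# Any Linux base image works; python:3-slim is just one choice.
# Cloud Run only requires that the container listen on $PORT.
FROM python:3-slim
WORKDIR /app
COPY app.py .
CMD ["python", "app.py"]
```

Nothing in this file is Cloud Run-specific, which is the portability point Teich is making: the same image can be deployed to Cloud Run, GKE, or any other environment that runs containers.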

In a separate session, a Google engineer demonstrated deploying a large, 16-year-old Java application, moved into a Docker container and then onto Cloud Run without modification, apart from a wrapper to enable it to work with Python. “What’s beautiful about the system is you’re paying by the hundredth of a millisecond for what you use only,” Teich said. “And it scales up horizontally to many, many thousands.” The service can scale to thousands of cores in a matter of seconds, he added.

The current iteration, now in preview, does have some constraints, Teich acknowledged. Each instance is capped at one gigabyte of memory and a single core. “So, it’s horizontal scaling, not vertical scaling,” he said. “And each process has to respond to an HTTP 1.1 [or pull] request, with a maximum of 15 minutes.”

Because it is based on the open-source Knative runtime, developed by Google, IBM, Pivotal, Red Hat and SAP on top of Kubernetes and the Istio service mesh to enable workload portability, organizations will be able to run serverless apps across those vendors’ compatible services as they roll out, said Pali Bhat, VP of product management at Google. “We actually have partnerships with them and they’re all building support for Knative,” Bhat told IT Ops Times.

While Knative is still quite new (it was announced at last year’s Google Next conference), the expectation is that managed serverless services will become more popular as organizations face fewer administrators and a growing portfolio of IT business services. “With Cloud Run, you don’t have to manage that much, but you do have to make certain decisions about how much memory you’re allocating per container, what your scaling factor is for each container, and getting visibility to cold starts are all big things we’ve heard about from our customers,” said Sajid Mehmood, cloud platform director at Datadog, one of the application performance management (APM) providers that worked with Google on Cloud Run.
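The per-container decisions Mehmood describes (memory allocation and scaling behavior) surface as deploy-time settings. A hedged sketch of what such a deployment might look like with the `gcloud` CLI, where the service name, project and values are placeholders, not recommendations:

```shell
# Deploy a container image to Cloud Run, setting per-instance memory
# and how many concurrent requests one instance may handle.
gcloud run deploy my-service \
  --image gcr.io/my-project/my-service \
  --memory 512Mi \
  --concurrency 80
```

Settings like these, together with request latency around cold starts, are exactly the signals APM tools such as Datadog aim to make visible.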