Serverless computing is the latest trend the IT industry is hopping on, but there is still some confusion about what the technology really means. Despite its name, serverless doesn’t mean servers are no longer necessary. It means organizations no longer need to provision and manage servers themselves; instead, the cloud provider runs, scales and manages them on the organization’s behalf.
According to Rich Sharples, senior director of product management at Red Hat, serverless technology takes care of the necessary servers, middleware, infrastructure and network traffic so developers just have to focus on their code.
Serverless is often measured against containers because the two technologies provide similar benefits, such as low overhead, rapid development and a low barrier to entry. However, Sharples says it is important to note that one technology does not negate the other.
“As with any ‘next big thing,’ the conversation around serverless is jumping immediately to what ‘formerly big thing’ the technology will kill. Most people have identified containers as serverless’ likely victim. However, as with most ‘next big things,’ it’s not about a one thing replacing another, but rather about how the new fits in with the (relatively) old,” he said.
ITOps Times recently talked to Sharples about how serverless can fit in a containerized world.
ITOps Times: How does serverless compare to containers?
Sharples: Serverless is a cloud computing code execution model in which the cloud provider fully manages code execution at the granularity of individual function invocations, versus the more traditional long-running application server.
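To make the per-invocation model concrete, here is a minimal sketch of a serverless function, written against the common AWS Lambda-style handler signature (the event payload shape is an assumption for illustration):

```python
import json

def handler(event, context):
    """Invoked by the platform once per event; there is no server
    process for the developer to run or manage.

    `event` carries the request payload; `context` (unused here)
    carries invocation metadata supplied by the platform.
    """
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

The provider handles provisioning, routing and scaling (including down to zero between invocations); the developer ships only the function above.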
There are many similarities to, and many differences between, Linux containers and serverless technology.
Virtualization, cloud computing and even containers were designed to empower business-class IT to scale and deliver applications faster than ever before. These technologies have led us to concepts like serverless and function-as-a-service (FaaS), which seek to abstract away the need for the traditional ‘always on’ server system sitting behind an application.
But, even serverless needs to run on a computer somewhere.
It’s this massively scalable infrastructure that helps to enable all of the goodies in the current generation of abstraction, from reusable services and composable applications to rapid application development and digital transformation.
Is serverless being set up to replace containers?
It can be a clean and compelling image to present serverless as the most recent advancement in a long arc stretching from bare-metal, dedicated hardware, to virtualization, then containerization and finally serverless. It can also be a useful image to describe how serverless differs from other ways the industry has developed and deployed applications over the years.
But it is wrong to interpret this image to mean that serverless has taken over and everything before it has been superseded. There are rarely clean transitions in technology – adoption by the mainstream often happens many years after the early adopters have moved on to the next shiny object.
Do the two technologies work better together or alone?
Similar to containers, serverless functions are well-suited for short-running projects with low overhead and a low barrier to entry. Serverless is not necessarily suitable for long-running tasks, such as risk analysis or a CRM system, or for workloads that require a significant amount of memory. It can provide cost savings and reduce wasted time and resources. Serverless is also highly scalable and offers high concurrency, and its single-purpose focus means that it can be easier to optimize and prioritize specific tasks.
Traditional containerized microservices give you complete control over how and where your service executes; with serverless, you give up that control but gain developer productivity.
Developing a microservice-based architecture using containers and a container orchestration layer (and in the future a service fabric like Istio) can give more flexibility and control to the developer in defining how their application is deployed and how it behaves at runtime. Under the covers, the serverless services provider needs to perform some complex orchestration of language runtimes, such as activation and passivation, pre-loading and warming to give the illusion of instant execution and enable low latencies. Typically that orchestration is going to be managed by an orchestration engine like Kubernetes. So even if the serverless developer does not have to think too much about container packaging and orchestration, they can indirectly benefit from advances in container technology.
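The pre-loading and warming that Sharples describes can be sketched in miniature. The code below is a toy model, not any real platform’s implementation: all names are hypothetical, and real providers do this inside an orchestrator such as Kubernetes. The idea is simply that keeping a pool of already-initialized runtimes lets an invocation skip the expensive cold start:

```python
import time
from collections import deque

COLD_START_SECONDS = 0.05  # stand-in for loading a language runtime

class Runtime:
    """A pre-loaded language runtime, ready to execute a function."""
    def __init__(self):
        time.sleep(COLD_START_SECONDS)  # simulate expensive initialization

    def invoke(self, fn, event):
        return fn(event)

class WarmPool:
    """Keeps initialized-but-idle runtimes to hide cold-start latency."""
    def __init__(self, size):
        # Pre-warm: pay the initialization cost up front, before traffic.
        self._pool = deque(Runtime() for _ in range(size))

    def invoke(self, fn, event):
        # Reuse a warm runtime if one is available; otherwise cold-start.
        runtime = self._pool.popleft() if self._pool else Runtime()
        try:
            return runtime.invoke(fn, event)
        finally:
            self._pool.append(runtime)  # passivate the runtime for reuse

pool = WarmPool(size=2)
start = time.perf_counter()
result = pool.invoke(lambda e: e["x"] * 2, {"x": 21})
warm_latency = time.perf_counter() - start
```

Because the warm invocation never pays the initialization cost, its latency stays well below the cold-start time, which is the “illusion of instant execution” the interview refers to.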
What does the future of serverless look like?
In short, serverless is a simpler execution model and can be a good fit for services that have a very tightly scoped input and output and limited resource requirements (memory, CPU, time), that need to execute at scale, and that do not require complex flows of events and information. Currently, serverless is not a convenient all-purpose programming model, and developers need to understand where it can yield benefits and where other approaches are more appropriate.
As adoption continues to increase, we’ll see enhancements to, and more integrations with, new technologies and tools, including:
- Event chaining, pipelines
- Execution optimization
- Better developer experience
- Templating / generators for connecting event sources
- Broader language support
- Instant deployment from a (Web) IDE
- Better debugging, monitoring, diagnostics