Infrastructure and operations (I&O) leaders globally are facing demand from the C-level to use containers to speed up application delivery. But the rapid adoption of container technology does not necessarily mean that it is a fit for every organization.

Containers can help enterprises modernize legacy applications and create new cloud-native applications that are both scalable and agile. Container engines such as Docker and orchestration frameworks such as Kubernetes provide a standardized way to package applications — including the code, runtime and libraries — and to run them in a consistent manner across the entire software development life cycle.
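As an illustration, a minimal Kubernetes Pod manifest (the names and image below are placeholders) shows how a packaged image, which bundles the code, runtime and libraries, can be run unchanged across development, test and production clusters:

```yaml
# Illustrative Pod manifest; the image itself encapsulates the application
# code, runtime and dependencies, so this same definition runs consistently
# in any cluster.
apiVersion: v1
kind: Pod
metadata:
  name: example-app            # placeholder name
  labels:
    app: example-app
spec:
  containers:
    - name: example-app
      image: registry.example.com/example-app:1.4.2   # pinned, immutable image tag
      ports:
        - containerPort: 8080
```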

By 2022, more than 75% of global organizations will be running containerized applications in production, a significant increase from fewer than 30% today. However, the container ecosystem is still immature, and organizations must ensure that the business case justifies the additional complexity and cost of deploying containers in production.

Despite growing interest and rapid adoption, running containers in production involves a steep learning curve, owing to technology immaturity and a lack of operational know-how. I&O teams will need to ensure the security and isolation of containers in production while simultaneously addressing operational concerns around the availability, performance and integrity of container environments.

Here are six key elements that should be part of a container platform strategy to help I&O leaders mitigate the challenges of deploying containers in production environments.

Security and governance
Security should be embedded in the DevOps process, and the containerized environment must be secured across the entire life cycle. This includes the build and development process, deployment and the run phase of an application. To prevent malicious activities, I&O leaders should invest in security products that provide whitelisting, behavioral monitoring and anomaly detection.
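In Kubernetes, for example, some of these run-phase controls can be expressed declaratively in the Pod's security context. The sketch below uses illustrative names and an assumed image:

```yaml
# Hypothetical hardened Pod spec: run-time privileges are restricted so that
# common attack paths (root execution, privilege escalation, filesystem writes)
# are blocked by the platform itself.
apiVersion: v1
kind: Pod
metadata:
  name: hardened-app           # illustrative name
spec:
  securityContext:
    runAsNonRoot: true         # refuse to start containers that run as root
    seccompProfile:
      type: RuntimeDefault     # apply the runtime's default syscall filter
  containers:
    - name: app
      image: registry.example.com/app:1.0.0   # placeholder image
      securityContext:
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true          # block writes to the container filesystem
        capabilities:
          drop: ["ALL"]                       # drop all Linux capabilities
```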

Monitoring
The deployment of cloud-native applications shifts monitoring from host-based to container-specific and service-oriented approaches, to ensure compliance with resiliency and performance service-level agreements. Focus on monitoring individual containers and on monitoring across containers at the service level, and monitor the applications rather than the physical hosts.
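A minimal sketch of application-level health monitoring in Kubernetes, assuming the application exposes /healthz and /ready endpoints on port 8080 (both endpoints, the name and the image are illustrative):

```yaml
# Illustrative Pod with application-level probes: the platform checks the
# application itself rather than relying on host metrics.
apiVersion: v1
kind: Pod
metadata:
  name: monitored-app          # illustrative name
spec:
  containers:
    - name: app
      image: registry.example.com/app:1.0.0
      livenessProbe:           # restart the container if the application stops responding
        httpGet:
          path: /healthz       # assumed health endpoint
          port: 8080
        initialDelaySeconds: 10
        periodSeconds: 15
      readinessProbe:          # keep the pod out of service traffic until it is ready
        httpGet:
          path: /ready         # assumed readiness endpoint
          port: 8080
        periodSeconds: 5
```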

Storage
Consider two separate cases for storage. If the primary use case is “lift and shift” of legacy applications, there may be little change in storage needs. However, if the goal is to refactor an application or create a new, microservice-oriented one, the organization needs a storage platform that is integrated with the developer workflow and maximizes the agility, performance and availability of that workload.
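As a sketch of storage integrated with the developer workflow, a Kubernetes workload can request storage declaratively through a PersistentVolumeClaim alongside its own definition; the storage class, size and names below are assumptions:

```yaml
# Illustrative PersistentVolumeClaim; the storage class and size are placeholders.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: fast-ssd       # hypothetical class provided by the platform
  resources:
    requests:
      storage: 10Gi
---
# Pod that consumes the claim, so developers request storage with the workload
# rather than through a separate provisioning process.
apiVersion: v1
kind: Pod
metadata:
  name: stateful-app
spec:
  containers:
    - name: app
      image: registry.example.com/app:1.0.0
      volumeMounts:
        - name: data
          mountPath: /var/lib/app
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: app-data
```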

Networking
The portability and short life cycle of containers can overwhelm the traditional networking stack, and the native container networking stack lacks sufficiently robust access and policy management capabilities. I&O teams must therefore eliminate manual network provisioning within containerized environments, enable agility through network automation, and provide developers with the proper tools and sufficient flexibility.
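In Kubernetes, for instance, access rules can be declared as NetworkPolicy objects and kept in version control rather than provisioned manually; the namespace, labels and port below are illustrative:

```yaml
# Illustrative NetworkPolicy: declarative, automatable network access rules
# that travel with the application definition.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-api
  namespace: shop              # hypothetical namespace
spec:
  podSelector:
    matchLabels:
      app: api                 # policy applies to the API pods
  policyTypes: ["Ingress"]
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend    # only frontend pods may connect
      ports:
        - protocol: TCP
          port: 8080
```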

Life cycle management
Containers present the potential for sprawl even more severe than that seen in many virtual machine deployments, a complexity often intensified by the many layers of services and tooling involved. Container life cycle management can be automated through a close tie-in with continuous integration/continuous delivery (CI/CD) processes; combined with continuous configuration automation tools, these pipelines can automate infrastructure deployment and operational tasks.
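A hypothetical pipeline sketch (GitHub Actions syntax, with placeholder registry and image names, and registry/cluster credentials omitted) illustrates the tie-in: every merge rebuilds the image and rolls it out, so the container life cycle follows the delivery process rather than manual steps:

```yaml
# Hypothetical CI/CD workflow; not a definitive implementation.
name: build-and-deploy
on:
  push:
    branches: [main]
jobs:
  release:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build and push image          # registry login omitted for brevity
        run: |
          docker build -t registry.example.com/app:${GITHUB_SHA} .
          docker push registry.example.com/app:${GITHUB_SHA}
      - name: Roll out to cluster           # assumes kubectl access is configured
        run: |
          kubectl set image deployment/app app=registry.example.com/app:${GITHUB_SHA}
```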

Orchestration
The key functionality for container deployment is provided at the orchestration and scheduling layers. The orchestration layer interfaces with the application, keeps the containers running in the desired state and maintains service-level agreements. The scheduling layer places containers on the best-suited hosts in a cluster, as prescribed by the requirements of the orchestration layer.
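A minimal Kubernetes Deployment illustrates both layers: the controller maintains the desired replica count, while the resource requests give the scheduler the information it needs to place each container on a suitable host (the names and values are illustrative):

```yaml
# Illustrative Deployment: desired state for the orchestration layer,
# resource requests as input for the scheduler.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 3                     # desired state maintained by the controller
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: app
          image: registry.example.com/example-app:1.4.2
          resources:
            requests:
              cpu: "250m"         # scheduling input: guaranteed minimum resources
              memory: "256Mi"
            limits:
              cpu: "500m"
              memory: "512Mi"
```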

Kubernetes has emerged as the de facto standard for container scheduling and orchestration, with a vibrant community and support from most of the leading commercial vendors. Customers should decide on the right consumption model for Kubernetes by carefully evaluating the tradeoffs between CaaS (containers as a service) and PaaS (platform as a service), as well as between hybrid and cloud-native services.