In today’s rapidly evolving technological landscape, organizations are constantly seeking ways to optimize the infrastructure that runs their applications. This often leads to vertical silos across departments, each with its own compute and application stack. For instance, a quality control team might deploy its own infrastructure to monitor and control quality, while cybersecurity, manufacturing execution systems, and process control systems each have their own dedicated setups. The result is redundant hardware and data silos that prevent seamless integration and data sharing.

In a recent ITOps Times Live! webinar with Penguin Solutions, Rudy de Anda, who leads strategic alliances, and Ken Espiau, principal solutions architect, joined ITOps Times news editor Jenna Barron to explore the shift toward leveraging virtualization, particularly in Operational Technology (OT) environments.

Virtualization, they said, allows for the consolidation of different departments and functions into a single compute environment, while still maintaining data flow and application responsibility within those departments.

“This is especially important in OT environments,” de Anda said, “because many times those applications may not be able to be on a modern or current operating system. A lot of those applications that we run into in the OT environments have dependencies of legacy operating systems. Those dependencies are tied to different hardware that you have in your environment. They have different update and patching cycles. So really, being able to take those applications and move them into software and run them on current, more reliable hardware has become a really popular and beneficial thing to be able to do in these OT environments.”

The IT world is moving beyond virtualization to cloud-native architectures enabled by containerization and Kubernetes. While VMs offer complete isolation and are ideal for legacy applications, containers provide agility and portability. Containers can boot in seconds, making them highly efficient for rapid restarts and continuous integration. However, they share a single operating system, meaning a crash in the underlying OS can affect all containers. Running containers within virtual machines, as many hyperscalers do, combines the benefits of both: the isolation of VMs with the agility of containers. This hybrid approach allows organizations to support legacy systems while simultaneously embracing modern, data-driven applications.
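
To put the boot-time point in perspective, here is a minimal Python sketch, assuming only a local Docker daemon and the `docker` SDK (`pip install docker`), that times how quickly a throwaway container starts, runs, and exits:

```python
# Minimal sketch: time a short-lived container's full start-run-exit cycle.
# Assumes Docker is running locally and the `docker` Python SDK is installed.
import time

import docker

client = docker.from_env()

# Pull the image first so the timing measures container startup,
# not a one-time image download.
client.images.pull("alpine", tag="latest")

start = time.perf_counter()
# detach=False (the default) blocks until the container exits and returns its logs.
output = client.containers.run("alpine:latest", ["echo", "container is up"], remove=True)
elapsed = time.perf_counter() - start

print(output.decode().strip())                      # "container is up"
print(f"Start-to-exit took {elapsed:.2f} seconds")  # typically well under a second
```

That speed is what makes containers attractive for continuous integration and rapid restarts; the trade-off, as noted above, is that every container on the host shares that host’s kernel.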

“A container just has the pieces of the operating system necessary to support a particular function,” Espiau explained. “It’s broken up into these small, digestible chunks that are the containers and microservices there. And with the microservices, it makes it much, much easier to patch and update, because you don’t have to update all of the code. In many cases, you can shut down the code, switch it out to do the update, and bring it back up, and you may not even notice that anything has happened.” That, he said, is the big advantage of containerization.
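
That pattern is what Kubernetes exposes as a rolling update. As a minimal sketch, assuming a reachable cluster, the official `kubernetes` Python client, and a hypothetical Deployment named `quality-api` in a namespace `ot-apps`, swapping the container image is enough to have pods replaced gradually while the service stays up:

```python
# Minimal sketch: trigger a rolling update by patching a Deployment's image.
# Assumes a reachable cluster and the official `kubernetes` Python client;
# the Deployment name, namespace, and image below are hypothetical examples.
from kubernetes import client, config

config.load_kube_config()  # inside a pod, use config.load_incluster_config()
apps = client.AppsV1Api()

patch = {
    "spec": {
        "template": {
            "spec": {
                "containers": [
                    {
                        "name": "quality-api",
                        "image": "registry.example.com/quality-api:1.4.2",
                    }
                ]
            }
        }
    }
}

# Kubernetes replaces pods a few at a time (per the Deployment's rolling-update
# strategy), so the service keeps answering requests throughout the change.
apps.patch_namespaced_deployment(name="quality-api", namespace="ot-apps", body=patch)
```

How disruptive the swap is comes down to the Deployment’s update strategy (for example, its maxUnavailable setting), which is the knob behind the “you may not even notice” experience Espiau described.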

The integration of VMs and containers presents both opportunities and challenges, they said. OT environments prioritize supporting legacy systems, minimizing downtime, and ensuring mission-critical operations, so updates are often infrequent due to the disruptive nature of changes. In contrast, IT environments prioritize constant updates to support data-driven applications, where continuous improvement is key.

Bridging these two worlds requires addressing challenges related to ownership, dependencies, and security. Questions of who owns the servers, the software lifecycle, and the patching schedules become critical when IT and OT converge. Dependencies, both workflow-related and technological, also need careful consideration, especially for real-time OT applications where latency can have significant consequences. Cybersecurity at the edge is paramount as well, as more intelligent decisions are made closer to where data is generated.

Penguin Solutions’ offerings, such as Stratus ztC Edge and ztC Endurance, are fault-tolerant platforms that integrate virtualization and containerization. Designed to run in harsh environments and deliver high availability, they let organizations consolidate workloads and run both legacy OT applications and modern containerized IT applications on a single, reliable server. This approach enables IT and OT teams to collaborate more effectively, leveraging the strengths of both virtualization and containerization to achieve improved scalability, fault tolerance, and more agile operations.

Listen to the full webinar here.