Tim Armandpour, SVP of engineering at PagerDuty
Forget reliability: with the adoption of resilience engineering and the proper use of automation, operators can expect a 20% reduction in unplanned work. Today’s organizations are fixated on the reliability of their technology, but any developer can tell you the question is not if it will fail, but when. The success metric will shift to resilience, or how quickly you can recover from failure.

Ben Sigelman, CEO and co-founder, LightStep
In 2020, we will (finally) understand what observability is *for*. 2019 was a breakout year for “observability” as a term, but most of our industry still doesn’t know why they need it or how to develop a strategy around it. In 2020, we’ll hear more about how high-quality observability enables more frequent releases, a faster end-user app experience, reduced downtime, and other critical product and engineering objectives. At the same time, we will stop confusing high-quality observability with high-quality telemetry: in 2020, the CNCF’s OpenTelemetry project will both standardize and automate the collection of traces, metrics, and logs, moving our evaluation of effective observability beyond the bits and bytes of the raw data, and instead towards the real-world use cases that actually drive business value for the enterprise.
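
To make the telemetry-versus-observability distinction concrete, here is a minimal sketch of emitting a trace span with the OpenTelemetry Python SDK; the service and span names are placeholders, and module layout may vary between releases of the project.

```python
# Minimal sketch: emitting a trace span with the OpenTelemetry Python SDK.
# Service, span, and attribute names below are illustrative placeholders.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

# Wire a tracer provider to a console exporter for demonstration purposes.
trace.set_tracer_provider(TracerProvider())
trace.get_tracer_provider().add_span_processor(
    SimpleSpanProcessor(ConsoleSpanExporter())
)

tracer = trace.get_tracer("checkout-service")  # hypothetical service name

with tracer.start_as_current_span("process-order") as span:
    # The raw data is easy to produce; the observability question is
    # what product or engineering decision this span actually informs.
    span.set_attribute("order.items", 3)
```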

Ashish Thusoo, co-founder and CEO of Qubole
Kubernetes comes to big data and data lakes. Kubernetes has swept through the enterprise to quickly become the de facto standard for containerized applications, but its impact on big data analytics has been minimal so far. Next year, we will start to see enterprises deploy Kubernetes to support decision engines, data lakes and other parts of the big data environment, both on-premises and in the cloud. Kubernetes and containers allow for faster deployment times, more efficient resource utilization and greater portability across clouds. It’s only natural that these benefits will come to the world of analytics and big data.
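
As a hedged sketch of what this looks like in practice, the example below submits a Spark job to a Kubernetes cluster using Spark’s native Kubernetes support (available since Spark 2.3); the API server address, container image, and job name are placeholders.

```python
# Sketch: launching a Spark analytics job on a Kubernetes cluster via
# spark-submit. The API server URL and container image are placeholders.
import subprocess

subprocess.run([
    "spark-submit",
    "--master", "k8s://https://k8s-apiserver.example.com:6443",  # placeholder
    "--deploy-mode", "cluster",
    "--name", "bigdata-etl",                 # hypothetical job name
    "--conf", "spark.executor.instances=4",  # executors run as pods
    "--conf", "spark.kubernetes.container.image=example/spark:2.4.4",  # placeholder
    "local:///opt/spark/examples/src/main/python/pi.py",
], check=True)
```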

Kunal Agarwal, CEO of Unravel Data
NoOps falls short: The concept of NoOps gained some steam in 2019. Using AI to automate does make operations far more efficient, but the notion that organizations can leverage cloud services and AI to eliminate all IT operations is a pipe dream. The reality is that you need DevOps and DataOps in the cloud just like you do on-premises. The cloud is an ideal destination for many workloads, especially data workloads, but the operational challenges from on-premises deployments don’t just disappear when you get to the cloud. In fact, new challenges emerge, such as re-configuring apps for improved performance in the cloud and monitoring workloads for overuse (and increased costs). There are AI tools that significantly simplify these efforts, but organizations will need human operations teams to leverage those tools correctly. The cloud is great, but there need to be guard rails in place to ensure it’s delivering on cost and performance. The NoOps trend reminds me of a time 5-8 years ago when people thought the cloud would be the panacea for everything. Instead, it’s clear that a hybrid model has won out, with the cloud ideal for many apps while others remain best left on-premises.

Chris Patterson, senior director of product management at Navisite
Multi-cloud management will emerge as a top priority for IT. Multi-cloud strategies are now commonplace, and, in 2020, more organizations will place a laser focus on the next cloud phase: tying these environments together and asking the critical question, “How can we unify management?” This will involve taking stock of all applications and where they reside – whether that’s on-premises, on a hyperscale cloud or anything in between – and implementing a governance framework that ensures interoperability between platforms as well as common management and connectivity planes.

Maty Siman, founder and CTO at Checkmarx
Infrastructure as Code: Until recently, organizations’ security spend focused primarily on protecting traditional IT infrastructure. Today, that infrastructure is flexible, with organizations scaling up and down as needed, thanks in part to infrastructure as code. This has immense benefits, but in 2020, we can expect to see attackers abusing developers’ missteps in these flexible environments.
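
To make the risk concrete, here is a hypothetical sketch of such a misstep using Pulumi’s Python SDK; the bucket names are placeholders, and the same pattern applies to any infrastructure-as-code tool.

```python
# Sketch of an infrastructure-as-code misstep, using Pulumi's Python AWS SDK.
# Resource names are placeholders; the point is that one line of code now
# determines a security posture that used to be a manual network decision.
import pulumi_aws as aws

# Risky: "public-read" exposes every object in the bucket to the internet.
leaky = aws.s3.Bucket("reports-bucket", acl="public-read")

# Safer default: keep the bucket private and grant access explicitly.
safe = aws.s3.Bucket("reports-bucket-private", acl="private")
```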

With the introduction of infrastructure as code, network and security architectures are being defined in software. That will lead to more dollars allocated toward software and application security, which previously accounted for only around 10% of IT security budgets, drastically shifting traditional security spend.

Steve Burton, DevOps evangelist for Harness
APM warfare will ensue between Cisco/AppDynamics, New Relic, Dynatrace, Datadog and Splunk/SignalFx. 2019 saw the IPOs of both Dynatrace and Datadog, the latter valued at almost twice as much as the former. As this market intensifies, I expect these vendors to integrate, partner and differentiate with adjacent DevOps vendors in areas like Continuous Delivery, so their value spans more of the software delivery lifecycle.

Paul Dix, co-founder of InfluxData
Multi-cloud is BS for the enterprise, but critical for vendors. I think that in 2020 enterprise tech customers will finally realize that pursuing a multi-cloud strategy is proving to be worthless. It takes enormous effort and adds a lot of complexity to build systems that can switch between different public clouds for the relatively meager benefit of hedging against outages and vendor lock-in. Of course, technology vendors must continue to build solutions that work across all major public clouds in order to satisfy the demands of a diverse base of customers that each choose cloud providers based on their specific needs. But for tech customers, the goal of hedging against failures is just not meaningful when prolonged outages among major cloud providers, the kind that would require a company to shift operations to another cloud, have been practically non-existent. As for avoiding vendor lock-in, it ends up being more expensive for end-users to build the same system in multiple clouds than to build for a chosen cloud and then move to another if the terms or functionality deteriorate.

Peter Guagenti, CMO of MemSQL
Operational workloads move to the cloud and embrace AI and ML: businesses will accelerate the migration of their operational data away from legacy providers like Oracle and SAP to cloud-native database management solutions. The need for data management systems purpose-built for AI and ML functions will skyrocket. This will continue the shift of workloads to Google, AWS and multi-cloud service providers.

Toby Coleridge, VP of product at HiveIO
Servers get smaller: Looking at servers over the last three to five years, one thing that hasn’t changed is the price. For example, customers are still paying the same dollar value for a server that can virtualize 80-100 desktops. To scale out the architecture, more servers are added; even if a user only needs 10 additional virtual machines, a full server is purchased. Bucking a trend that has held for the past 10 years, enterprises may follow some of the cloud providers and move back toward single-socket servers in 2020. These servers can be more efficient from a resource-utilization and cost perspective in many use cases.
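
A back-of-the-envelope sketch of that arithmetic (every figure below is an illustrative assumption, not vendor pricing):

```python
# Illustrative arithmetic only; all prices and densities are assumptions.
full_server_cost = 20_000    # assumed dual-socket box, ~100 desktops
small_server_cost = 11_000   # assumed single-socket box, ~50 desktops

extra_desktops = 10          # the incremental need from the example above

# Either purchase covers the 10 new desktops; the smaller box simply
# strands less capital in capacity that sits idle.
print("idle capacity, dual-socket:  ", 100 - extra_desktops, "desktop slots")
print("idle capacity, single-socket:", 50 - extra_desktops, "desktop slots")
print("capital saved with single-socket: $", full_server_cost - small_server_cost)
```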

Ofer Bezalel, founder & chief development officer at HiveIO
IoT security attacks: Equifax was breached in 2017, Marriott took the biggest hit in 2018, and in 2019, First American leaked hundreds of millions of title insurance records. At this point, large-scale breaches and ransomware attacks are nothing new. However, in 2020 we expect these attacks will shift from large corporations to IoT devices. Everyday household items like refrigerators, televisions, doorbell cameras, and even washing machines will be points of entry for hackers, and unfortunately, we don’t expect consumer product manufacturers to put the necessary processes in place until another breach occurs.

Chris Patterson, senior director of product management at Navisite
IT will begin to take a more methodical approach to achieving cloud native status. Running cloud native applications is an end goal for many organizations, but the process of getting there can be overwhelming – especially because many companies believe they have to refactor everything at once. More IT departments will realize they don’t need to take an “all or nothing” approach, and a process founded on “baby steps” is the best way to achieve cloud native goals. In other words, we’ll start to see more IT teams forklift applications into the cloud and then implement a steady, methodical approach to refactoring them.

Don Boxley, CEO and co-founder of DH2i
Enterprises will combine Raspberry Pi (RasPi) and software-defined perimeters (SDP) to create secure, low-cost IoT networks. All over the world, people are using Raspberry Pis to learn about and build Internet of Things (IoT) devices. Raspberry Pi is a great platform for IoT – it’s a very cheap computer that runs Linux and provides a set of open GPIO (general purpose input/output) pins that allow you to control electronic components. Software-defined perimeter (SDP) software improves the security of data flows between devices by removing an IoT device’s network presence, eliminating the potential attack surfaces created by using a traditional virtual private network (VPN). In 2020, enterprises will take advantage of the ubiquity of RasPi and the security of SDP software to enhance product differentiation with high-value IoT networks.
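
For context on the GPIO side, controlling a component from Python on a Pi looks roughly like the sketch below; RPi.GPIO is one common library choice, and the pin number is arbitrary.

```python
# Sketch: blinking an LED from a Raspberry Pi's GPIO pins with the
# widely used RPi.GPIO library. Pin 18 is an arbitrary example choice.
import time
import RPi.GPIO as GPIO

GPIO.setmode(GPIO.BCM)    # address pins by Broadcom SoC numbering
GPIO.setup(18, GPIO.OUT)  # configure pin 18 as a digital output

try:
    for _ in range(5):
        GPIO.output(18, GPIO.HIGH)  # drive the pin high (LED on)
        time.sleep(0.5)
        GPIO.output(18, GPIO.LOW)   # drive it low (LED off)
        time.sleep(0.5)
finally:
    GPIO.cleanup()        # release the pins on exit
```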

Nikhil Handigol, co-founder of Forward Networks
SDN deployment will see more adoption of white box switches – The journey toward software-defined networking involves implementing technologies aimed at centralizing control and increasing network visibility. White box switches offer the decoupling that forward-thinking enterprises need to gain further control over network design, allowing users to purchase network software and network hardware from different vendors. This shift improves reliability, decreases complexity, and significantly lowers overall costs. Because SDN technology allows simple governance over network functions, white box switches pair well with an SDN environment, making it easier to isolate issues to either the software or the hardware. As SDN continues to proliferate in the market, white box switches promise the flexibility needed to complement an SDN environment.

Vijay Pullur, CEO of WaveMaker
Companies will adopt an immersive and adaptive IT approach, one that is embedded and connected. As organizations shape-shift, they will nurture a “fluid IT capability” in which the boundaries between IT and business fade. Enterprises will embrace agile development practices to ensure better collaboration between business and IT. To achieve that agility, businesses will work toward connecting people, applications, and devices seamlessly.

With the increasing need to bridge silos, the demand to deliver enterprise applications on faster release cycles will drive increasing adoption of low-code platforms. The fact that the low-code development platform market is growing at 40 percent and is expected to reach $21.2 billion by 2022 confirms the potential of modernization using emerging technologies.

Bob Moul, CEO of machine data intelligence platform Circonus
Increased complexity in monitoring infrastructure – We’re seeing a large rise in the volume of metrics, driven by DevOps practices such as blue-green deployment. When you combine those practices with rapid CI/CD, you see some agile organizations doing upwards of a dozen releases a day. There will be a need for significant changes in tooling to support these use cases.
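
A hedged back-of-the-envelope illustrates the pressure: during a blue-green deploy, both environments report metrics at once, and frequent releases mean the monitoring system spends much of the day in that doubled state (all figures below are illustrative assumptions).

```python
# Illustrative arithmetic: how blue-green deploys and frequent releases
# multiply active metric series. Every figure here is an assumption.
hosts = 200               # assumed fleet size
metrics_per_host = 500    # assumed per-host series count
blue_green_factor = 2     # blue and green both report during a deploy
releases_per_day = 12     # "upwards of a dozen releases a day"

steady_series = hosts * metrics_per_host
deploy_series = steady_series * blue_green_factor

print(f"steady-state series:        {steady_series:,}")
print(f"during a blue-green deploy: {deploy_series:,}")
print(f"deploy windows per day:     {releases_per_day}")
```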