Sophisticated cyberattacks are proliferating globally. Today, with the expansion of the Internet of Things (IoT) and device connectivity, cyberattack targets extend beyond defense and IT to critical infrastructure, aerospace, automotive, healthcare, heavy industry, transportation, and communications—virtually any segment in which there is digital information to steal or misuse, or where there is potential for operational disruption or damage.

Protecting critical systems from network-borne threats and preventing the deployment of infected systems are priorities for both government and industry. Technologies are available today that can give security engineers a considerable advantage in combating threats. First, though, let’s review the current model for cybersecurity research and development.

Cyberdefense: Deconstructing Attacks
Developing, deploying, and testing effective cyberdefenses in embedded devices is particularly challenging. Embedded devices typically have resource constraints such as limited compute power and processing capacity. They are often designed for a single, unique purpose and employ less widely used buses and interfaces. Setting up test labs to perform system-level cyber testing on a representative set of devices at scale poses logistical and cost challenges. It is also difficult to perform security tests on live systems without “freezing” them entirely, which is rarely feasible since most systems need to be available at all times. In addition, there is often no backup or redundant service available. While it may be possible to shut down one hardware node and keep the rest of the systems running, doing so may distort system behavior and therefore fail to indicate how a security measure will perform in a real attack scenario.

Testing cyberdefenses entails techniques such as fuzz testing, in which automated tooling injects invalid, unexpected, or random data into a system to determine what makes it fail, and penetration testing (a “pen test”), in which testers attack a system to uncover security weaknesses, gain access to data, or take over or disable system functions, and then report their findings to the system owner.
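The fuzzing loop described above can be sketched in a few lines. The sketch below is illustrative and not tied to any particular tool: `parse_packet` is a hypothetical stand-in for the system under test, and a real harness would drive an actual device interface or protocol stack rather than a local function.

```python
import random

def parse_packet(data: bytes) -> dict:
    """Toy packet parser standing in for the system under test:
    expects a 2-byte big-endian length header followed by a payload."""
    if len(data) < 2:
        raise ValueError("packet too short")
    length = int.from_bytes(data[:2], "big")
    payload = data[2:]
    if len(payload) != length:
        raise ValueError("length mismatch")
    return {"length": length, "payload": payload}

def fuzz(target, iterations=1000, max_len=64, seed=0):
    """Feed random byte strings to `target`, collecting any input that
    raises something other than the expected ValueError rejection."""
    rng = random.Random(seed)  # fixed seed keeps runs reproducible
    failures = []
    for _ in range(iterations):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(max_len)))
        try:
            target(data)
        except ValueError:
            pass  # clean rejection of malformed input: acceptable
        except Exception as exc:
            failures.append((data, exc))  # unexpected failure: report it
    return failures
```

A real campaign would run millions of iterations and persist every failing input for the forensics team; the fixed seed makes any crash reproducible on demand.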

System operators may not even realize they are under attack. Sophisticated attacks can unfold over a long period, through scattered events that appear harmless in isolation but that collectively, over time, can cause damage. The cyber chase can be elusive: smart attacks may initially present as random, simple bugs. Cyberdefense teams must develop countermeasures that are constantly active, that can detect and prevent attacks, and that report attempted attacks to the security team.

Forensics is essentially a form of reverse engineering—investigators work their way backward to identify the root cause of an attack. But many sophisticated attacks are designed to prevent reverse engineering—they burrow and hide below the OS level, in the BIOS or firmware. These attacks may also delete traces of themselves so there is little left for a forensics team to find once the attack becomes exposed. In some cases, attacks can even detect whether they are being analyzed, and change behavior to avoid discovery of their true nature.

Investigating Attacks and Developing Defenses in a Virtual Environment
So how can you perform forensics if sophisticated malware is designed to thwart attempts to investigate? How can you detect and remedy vulnerabilities in critical infrastructure systems composed of special-purpose embedded devices? How, in effect, can you become smarter than intruders?

If expense were no object, you could build a so-called “cyber range,” a completely isolated network of physical computers whose sole purpose is testing cyber malware and countermeasures—comparable to a golf range for swing practice or a firing range for target practice. But this undertaking is usually very expensive, requiring physical equipment—whether that’s an entire aircraft cockpit, power plant equipment, or operating room instruments—all wired together in a lab. The cost and physical nature of a cyber range limit its capacity, which is often significantly lower than the actual need. Furthermore, cyber ranges typically require special skills associated with the unique characteristics and interfaces of a particular system. Given these constraints and the limited value they yield, a physical cyber range is neither sufficient nor cost-effective for many organizations.

A less costly, more flexible, and more effective alternative is to use virtual hardware and full system simulation technology, such as Wind River Simics.

Secure Deployment
Developers need to be sure that new software and the products it enables have not been compromised before being deployed—that the system boots and operates securely initially, as well as after an update.

The simple answer would be to test every part of the software before deployment and at every update. The problem is that security testing is difficult to scale correctly. The more complex the software and computer system, the larger the test matrix, and the more difficult it becomes to achieve the relevant test variation at production scale. Not testing at full scale puts the production system at risk, and this risk is exacerbated by the unrelenting demand for faster deployments. Unfortunately, the common solution has been to forgo complete test coverage and test only the most critical use cases on available platforms. Cyberattackers will find the places that were not fully tested.

Fuzz testing is one method that can be applied to evaluate security prior to deployment. For example, engineers can randomly vary inputs to a device, introduce random communication, apply protocol variations, perform range and boundary checks, or check for buffer and register overflows. Randomized testing, however, requires substantial compute capacity, which again raises the issue of scalability.
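As a concrete illustration of the range and boundary checks mentioned above, the minimal sketch below generates classic boundary values for an unsigned protocol field. The specific bit widths and the idea of probing one past the maximum are illustrative assumptions, not a prescribed test plan.

```python
def boundary_values(bit_width: int) -> list[int]:
    """Classic boundary-test integers for an unsigned field of the
    given bit width: zero, one, the values at and just below the
    maximum, and one past the maximum (to probe overflow handling)."""
    maximum = (1 << bit_width) - 1
    return [0, 1, maximum - 1, maximum, maximum + 1]

# Candidate inputs for hypothetical 8-, 16-, and 32-bit protocol fields
test_matrix = {width: boundary_values(width) for width in (8, 16, 32)}
```

Each out-of-range value (such as 256 for an 8-bit field) should be cleanly rejected by the device under test; acceptance, truncation, or a crash would all be findings worth reporting.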

Solving the Challenge of Scale
Security testing requires scalability. Compromises on test variation and test coverage need to be eliminated. Solving this problem requires two key capabilities: automation and parallelization. It is critical to have as much automation as possible, not only to speed up the testing process, but also to achieve repeatability and to be able to report and log results automatically. Running tests in parallel also helps save time—but parallelization is difficult. Not all types of test software can be run in parallel; some are by nature serial. And test parallelization requires the existence of several instances of the same hardware, which is not always practical or affordable.
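For the tests that are independent rather than inherently serial, parallel execution with automatic logging can be sketched as follows. This is a minimal illustration using Python's standard library; `runner` is a hypothetical stand-in for whatever actually exercises the target.

```python
import logging
from concurrent.futures import ThreadPoolExecutor

def run_suite(cases, runner, workers=4):
    """Run independent test cases in parallel and log every result,
    so a failed overnight run still leaves a usable record behind.

    `cases` is a list of (name, payload) pairs; `runner` returns True
    on pass and False (or raises) on failure.
    """
    def one_case(case):
        name, payload = case
        try:
            return name, bool(runner(payload))
        except Exception:
            return name, False  # a crash in the runner counts as a failure

    results = {}
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for name, ok in pool.map(one_case, cases):
            results[name] = ok
            logging.info("case %s: %s", name, "PASS" if ok else "FAIL")
    return results
```

A thread pool suffices when the runner mostly waits on a (real or simulated) target; CPU-bound runners would use a process pool instead, at the cost of requiring picklable case data.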

Instant Replication of Test Assets
Simulation and virtual hardware solve both the automation and the parallelization problems. When hardware is virtual, any amount of target hardware can be instantiated, in any system configuration, instantly. A virtual hardware lab can complement a physical hardware lab, enabling engineers to create the target systems on demand. An automated test system can also be programmed to create new hardware instances and system setups (of both hardware and software) automatically.
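To make the idea concrete, here is a minimal sketch of on-demand target instantiation. `VirtualTarget` and its fields are hypothetical stand-ins: a real setup would go through the simulator's own configuration interface (Simics, for instance, has its own scripting API) rather than a plain Python class.

```python
from dataclasses import dataclass

@dataclass
class VirtualTarget:
    """Hypothetical model of one simulated board; the field names and
    defaults here are illustrative, not any real product's schema."""
    name: str
    cpu: str = "armv8"
    ram_mb: int = 512

def instantiate_lab(base_name: str, count: int, **overrides):
    """Create `count` identically configured virtual targets at once --
    the step that would require racks of duplicate boards, cabling, and
    setup time in a physical lab."""
    return [VirtualTarget(name=f"{base_name}-{i}", **overrides)
            for i in range(count)]
```

Because each instance is just data plus simulation state, an automated test system can create a fresh, known-good lab for every test run and discard it afterward, eliminating configuration drift between runs.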

Automating the Impossible
An important and often overlooked aspect of virtual hardware is that it is more stable and reliable than physical hardware. Physical labs and their equipment tend to be susceptible to failures and sensitive to disturbance. The larger the lab, the more sensitive it can become as complexity increases. Checking results from an automated test system after overnight testing, one may find that tests were broken or interrupted, costing hours or days of delay. Engineers may also have to spend time analyzing a reported problem to determine whether the issue lies with the system being developed or with the test system itself, which cuts into productivity. With virtual hardware running on stable servers, the test system becomes more trustworthy, and all test teams, regardless of location, can save time that would otherwise be lost when test automation runs on physical hardware alone.

Increasing automation, growing volumes of digital information, and the interconnection of critical systems all raise the complexity of developing and maintaining secure systems. Developers of critical systems need tools that can help them stay a step ahead of increasingly sophisticated attackers. System simulation technology provides an efficient and effective means of researching, analyzing, and testing a wide variety of attack methods and security countermeasures in a flexible and scalable environment, and in ways that would simply not be feasible with physical systems. In a world that is ever more dependent on the safe and reliable performance of interconnected systems, simulation gives cyber professionals a way to gain the upper hand.