Here's Why Today's Cybersecurity is Insufficient


"Cybersecurity" is a broad topic and means different things to different people. In the cybersecurity industry, vendors have tended to tout whatever they know how to do as "security" and leave it at that. However, what vendors tout as sufficient cybersecurity still leaves systems vulnerable to attack.

By first examining the current cybersecurity landscape, we will illustrate exactly how and why systems continue to have a large attack surface that today’s cybersecurity tools do not address. 

Existing, mature security products

There are two broad classes of technology that we categorize as mature, and are widely marketed as a way to create secure systems. Cryptography and compartmentalization are arguably both necessary for end-to-end security of a computing system, but combined, they still leave a large attack surface vulnerable to exploitation. 

Cryptography. If data is not signed and/or encrypted in flight and at rest, that data is at risk of modification and/or pilfering. While there are many documented cases of cryptography being incorrectly implemented or deployed, it is safe to say that, in general, cryptography is a mature and well understood method for protecting data at rest and in flight. There are several vendors in the RISC-V and Arm ecosystems with excellent products supporting cryptography.
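As a concrete illustration of integrity protection in flight, here is a minimal sketch using Python's standard-library HMAC support; the key, helper names, and message are made up for the example, and a real deployment would obtain keys from a key-management system:

```python
import hmac
import hashlib

# Illustrative only: the shared key would come from a real
# key-management system, not a hard-coded constant.
KEY = b"example-shared-secret"

def sign(message: bytes) -> bytes:
    """Compute a SHA-256 HMAC tag over the message."""
    return hmac.new(KEY, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes) -> bool:
    """Recompute the tag and compare in constant time."""
    return hmac.compare_digest(sign(message), tag)

msg = b"sensor reading: 42"
tag = sign(msg)
assert verify(msg, tag)                        # untampered message passes
assert not verify(b"sensor reading: 99", tag)  # any modification is detected
```

Any in-flight modification of the message (or the tag) makes verification fail, which is the property the paragraph above describes.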

Compartmentalization. Compartmentalization solutions prevent compromised software in one compartment from corrupting data or control in another compartment. In other words, compartmentalization does not prevent an attack, but it limits the scope of damage from one. Techniques such as virtual memory, virtual machines, and vendor-specific features such as Intel's SGX and Arm's TrustZone do a good job of separating one collection of software from another at runtime. If the software in one compartment is successfully attacked via exploitation of a software vulnerability, the data and control within that compartment are compromised, but the attacker is prevented from leveraging the initial compromise to attack other compartments. In the RISC-V ecosystem, there are several products focused on compartmentalization, including WorldGuard from SiFive and MultiZone from Hex Five. The RISC-V specification also supports virtual memory and physical memory protection (PMP), and there is ongoing work on so-called Trusted Execution Environments (TEEs), all of which protect one collection of software from another via compartmentalization.
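The most familiar compartmentalization mechanism is the OS process: each process has its own address space, so a crash or corruption in one cannot reach another's memory. This sketch (a conceptual analogy, not any vendor's product) launches a "compartment" that immediately misbehaves and shows the damage is contained:

```python
import subprocess
import sys

# The parent's state, which a compromised compartment must not be able to touch.
parent_data = {"secret": "intact"}

# Launch a "compartment" (a child process) that immediately misbehaves and dies.
result = subprocess.run(
    [sys.executable, "-c", "raise RuntimeError('compromised')"],
    capture_output=True,
)

assert result.returncode != 0             # the compartment failed...
assert parent_data["secret"] == "intact"  # ...but the damage was contained
```

The same containment idea underlies virtual machines, TrustZone worlds, and PMP-enforced regions: the attack succeeds inside the compartment, but stops at its boundary.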

Remaining attack surface after cryptography and compartmentalization

So, what about the attacks that neither cryptography nor compartmentalization can cover? 

There are two well-known classes of cyber attack that are not adequately addressed by either of the mature solutions available: 

  1. Side channel attacks: A system has well-defined inputs and outputs: data enters via input ports, is processed according to the CPU's instruction set, and exits via output ports. Side channel attacks use information outside of that defined behavior (such as timing, power consumption, or cache state) to extract information that was not intended to be exposed. Recent attacks such as Spectre and Meltdown build on a long line of research dating back to at least the 1990s, initially focused on exfiltrating cryptographic keys via timing analysis. There is a huge amount of research into mitigating side channel attacks; the citations of the original Meltdown and Spectre papers give a good overview.

  2. Software error exploitation: You need only look at MITRE's CVE (Common Vulnerabilities and Exposures) and CWE (Common Weakness Enumeration) databases to realize that bugs in software present a vast attack surface. The ever-increasing amount of software that we all depend on, and the seemingly inevitable vulnerabilities that come along with that software, have been a large and growing cybersecurity problem since the earliest days of networked systems. When Microsoft releases a new security patch, do you have any confidence that all of the security vulnerabilities have been patched? Of course not. When you add a new networked appliance to your work or home, are you confident that the device cannot be broken into? Of course not. So, how can we prevent bugs in software from being exploited to begin with?
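To make the first class concrete, here is a sketch of the classic timing side channel: an early-exit comparison's running time depends on how many leading bytes of a guess are correct, which an attacker can measure to recover a secret byte by byte. The function names and secret are invented for the example; `hmac.compare_digest` is Python's standard constant-time comparison:

```python
import hmac

SECRET = b"s3cr3t-token"

def naive_equal(a: bytes, b: bytes) -> bool:
    """Early-exit comparison: run time depends on where the first
    mismatch occurs, leaking the secret's prefix through timing."""
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def safe_equal(a: bytes, b: bytes) -> bool:
    """Constant-time comparison: run time is independent of contents."""
    return hmac.compare_digest(a, b)

# Both agree on the answer; only their timing behavior differs.
assert naive_equal(SECRET, b"s3cr3t-token") and safe_equal(SECRET, b"s3cr3t-token")
assert not naive_equal(SECRET, b"guess!!!!!!!") and not safe_equal(SECRET, b"guess!!!!!!!")
```

Note that the two functions are functionally identical; the vulnerability lives entirely in behavior outside the defined input/output contract, which is exactly what makes side channels hard to rule out.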
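To make the second class concrete, here is one CWE entry in miniature: code injection via `eval` (CWE-95). The function names are invented for the example; `ast.literal_eval` is Python's standard safe alternative, which rejects anything that is not a plain literal:

```python
import ast

def parse_value_unsafely(text: str):
    # CWE-95: eval() executes arbitrary attacker-supplied code.
    return eval(text)  # DO NOT do this with untrusted input

def parse_value_safely(text: str):
    # ast.literal_eval accepts only Python literals; function calls,
    # attribute access, etc. raise ValueError instead of executing.
    return ast.literal_eval(text)

assert parse_value_safely("[1, 2, 3]") == [1, 2, 3]
try:
    parse_value_safely("__import__('os').getcwd()")
    raise AssertionError("injection attempt was not rejected")
except ValueError:
    pass  # the code-injection payload is rejected, not executed
```

Each CWE class has the same shape: an implementation shortcut that works on expected inputs but hands control to an attacker on crafted ones.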

How to prevent bugs in software from becoming exploits

Obviously, it is better to remove exploitable software bugs before deployment. The most desirable approach is to formally prove that exploitable bugs are absent from the software. Because formal proof is generally not possible, it is of course advisable to test software before deployment, removing every security vulnerability encountered. Testing, by its nature, is incomplete, and therefore there must be runtime monitoring to detect and prevent the exploitation of any latent software errors. We describe each approach, proof, testing, and monitoring, in more detail below.

Formal methods and static analysis

The formal methods community has made great strides (e.g., the seL4 kernel, the CompCert C compiler) in ruling out bugs a priori. Memory-safe languages such as Rust, Go, or Java also go a long way toward reducing the single largest class of exploitable bugs, namely memory errors. The holy grail of formal methods applied to cybersecurity is that the combination of (1) a security-focused development process, (2) strongly-typed programming languages, and (3) sufficiently smart static analysis will prove that software is immune to attack. Unfortunately, attaining that holy grail is still far in the future. The best we can do right now is slowly and incrementally increase the amount of software that is proven secure, and the range of attacks that can be checked statically. For the foreseeable future, we will rely heavily on software that is riddled with security holes.
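What memory safety buys can be shown in a few lines: in a memory-safe language, an out-of-bounds access is a well-defined error, not silent corruption of adjacent memory as in C. A minimal sketch (using Python's bounds-checked `bytearray` purely as an illustration):

```python
# An 8-byte buffer; the classic C off-by-one would silently
# overwrite whatever lives just past the end of it.
buffer = bytearray(8)

try:
    buffer[8] = 0xFF  # one past the end
    raise AssertionError("out-of-bounds write was not caught")
except IndexError:
    pass  # the runtime bounds check turned a potential exploit into an error

# The buffer itself is untouched.
assert len(buffer) == 8 and all(b == 0 for b in buffer)
```

An attacker can still crash such a program, but cannot use the overflow to hijack control flow, which removes the most heavily exploited rung of the attack ladder.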

Testing

Thorough testing aims to reduce the number of bugs in deployed software. There are good testing and analysis tools (e.g. Veracode, LLVM's Sanitizers) that find many potentially exploitable flaws in software. It is obviously true that the more bugs found and removed before deployment, the more secure the deployed system. Sadly, it is also obviously true that systems that have been rigorously developed, tested, and analyzed nonetheless contain a seemingly unending supply of exploitable software errors. Hence, the ever-growing CVE database and constant stream of security patches.

Runtime monitoring

Approaches toward dynamic attack detection and prevention fall along several axes: (1) hardware vs software, (2) signature-based vs class-based, and (3) anomaly/probabilistic vs exact. 

Hardware- versus software-based attack detection and prevention 

Over time, hardware designers have added security-related features, including acceleration for virtual memory, RWX (read/write/execute) protection for pages in memory, cryptographic acceleration, and so forth. Hardware security features have two benefits: they impose little performance overhead, and they cannot be subverted over the network.

The downside of hardware-based security is that its features are one-trick ponies: they do not adapt to a changing threat landscape. Hardware features also take a long time to develop and deploy.

Software security solutions have the reverse profile. Software security products tend to be programmable and flexible, and so are able to adapt to changing threats. However, software security checks at runtime have a negative impact on performance, and because the checks are implemented in software, a skilled attacker can skip or modify them. There are numerous examples showing that security software itself, which tends to run at high privilege, is susceptible to exploitation, because it, too, contains software errors.

Signature- versus class-based attack detection and prevention

Traditional attack detection platforms such as virus scanners and firewalls are examples of signature-based checking. The issue with signature-based prevention is that zero-day attacks slip through, and with a seemingly endless supply of software bugs, there is an effectively infinite supply of zero-day attacks. Signature-based methods correspond to MITRE's CVE database: as soon as an attack has been detected and characterized in the wild, it can be signature-checked.

MITRE's CWE database, on the other hand, characterizes classes of attacks. Example classes include memory safety violations (such as buffer overflows), failures to check or sanitize inputs, and violations of confidentiality. If a tool can block an entire class of attack, it has effectively blocked an unbounded number of individual attacks.
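One way class-based enforcement can work is to attach base/bounds metadata to every pointer and check each dereference, which blocks every buffer overflow regardless of the specific exploit. The following is a toy model of that idea (invented names, flat toy memory; not any vendor's actual implementation):

```python
from dataclasses import dataclass

MEMORY = bytearray(64)  # toy flat memory

@dataclass
class TaggedPointer:
    """A pointer carrying base/bounds metadata, checked on every access."""
    addr: int
    base: int
    limit: int  # one past the last valid address

    def read(self) -> int:
        if not (self.base <= self.addr < self.limit):
            raise MemoryError("class-based check: out-of-bounds access blocked")
        return MEMORY[self.addr]

# An 8-byte allocation at address 16.
p = TaggedPointer(addr=16, base=16, limit=24)
assert p.read() == 0  # in-bounds read succeeds

p.addr = 24  # walk off the end, as any overflow exploit must
try:
    p.read()
    raise AssertionError("overflow was not blocked")
except MemoryError:
    pass  # every attack in the buffer-overflow class trips the same check
```

The check never needs to know which CVE it is stopping: anything in the memory-safety-violation class, known or zero-day, fails the same bounds test.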

Anomaly/probabilistic versus exact attack detection and prevention

While there are a number of solutions on the market that claim to block classes of attacks, those solutions often do so by detecting anomalous behavior. Many cybersecurity vendors apply standard statistical models during a "learning" phase and then flag deviations from the learned models. It is true that these tools will catch some attacks, but it is also true that they are fundamentally subject to both false positives (flagging correct but unusual behavior) and false negatives (letting actual attacks go undetected). Despite wildly exaggerated claims by some vendors, anomaly-based statistical methods, often marketed as "machine learning" and "AI", offer extremely weak protection.
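Both failure modes fall directly out of the statistics. Here is a deliberately simple learned model (a mean plus or minus three standard deviations over invented "normal" request sizes) showing a false positive and a false negative in a few lines:

```python
import statistics

# "Learning" phase: model normal request sizes as mean +/- 3 stddev.
normal_sizes = [100, 102, 98, 101, 99, 103, 97, 100, 102, 98]
mu = statistics.mean(normal_sizes)      # 100.0
sigma = statistics.stdev(normal_sizes)  # 2.0

def is_anomalous(size: float) -> bool:
    return abs(size - mu) > 3 * sigma

# False positive: a legitimate but unusually large request is flagged.
assert is_anomalous(150)

# False negative: an attack payload sized to look normal sails through.
assert not is_anomalous(101)
```

Real products use far richer models, but the tradeoff is structural: tighten the threshold and legitimate outliers get flagged; loosen it and attacks crafted to resemble the training data pass unchallenged.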

Dover’s CoreGuard technology addresses today’s cybersecurity challenges

Luckily, there is a solution on the market that addresses the shortcomings of existing security solutions, or as we like to say, fills the gap in the cybersecurity stack.

CoreGuard is a novel combination of hardware and software IP. The hardware component of CoreGuard provides the performance and resistance to subversion of hardware security solutions, with checks taking place in parallel with application execution. The software component, made up of metadata and micropolicies, provides a precise specification of allowed and disallowed behavior, and thus can adapt to a changing threat landscape.

With its base set of micropolicies, the CoreGuard technology blocks the most common and severe classes of attacks. Beyond the base set, additional micropolicies can be added to address the specific classes of attack that are of the greatest concern to a specific organization or industry.

CoreGuard supports the exact definition of allowed and disallowed runtime behavior. Furthermore, the behavior tracked at runtime can depend on arbitrary (software-defined) metadata maintained at runtime. Some examples of the kinds of metadata that CoreGuard runtime policies maintain and check:

  1. Is this value a pointer, and is it allowed to dereference this address in memory?
  2. Is this value an instruction, and is it allowed to execute?
  3. Is this data confidential, and is it allowed to be written to this output device?
  4. Did the data coming into this function come from the trusted key generator, and has it been modified since then?
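To give a feel for the confidentiality example, here is a toy model of metadata-based policy enforcement: values carry a software-defined tag, and a policy check runs on each output operation. All names are invented for the sketch, and this is a conceptual model of the idea, not CoreGuard's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Tagged:
    """A value paired with software-defined metadata
    (a toy model of a metadata-tagged architecture)."""
    value: int
    confidential: bool = False

def write_to_device(data: Tagged, device_trusted: bool) -> str:
    # Micropolicy (sketch): confidential data may only flow to trusted devices.
    if data.confidential and not device_trusted:
        raise PermissionError("policy violation: confidential data to untrusted device")
    return "written"

key = Tagged(0xDEADBEEF, confidential=True)
log = Tagged(42)

assert write_to_device(log, device_trusted=False) == "written"  # public data: allowed
try:
    write_to_device(key, device_trusted=False)
    raise AssertionError("leak was not blocked")
except PermissionError:
    pass  # the policy blocked the exfiltration
```

Because the tag travels with the data and the policy is just software, the same mechanism can express pointer checks, execute permissions, or provenance tracking by changing the metadata and the rule, not the hardware.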

The possibilities for what metadata can be maintained at runtime, and what conditions can be placed on execution with respect to that metadata, are endless.

Performant, programmable, and precise security using CoreGuard

Until today, no one thought you could have performant, programmable, precise dynamic enforcement of security policies. CoreGuard is the first product to provide such a powerful capability. 

If you would like to learn more about CoreGuard, request a demo today.
