Preventing cyberattacks has been a priority since before the first internet-borne cyberattack, the Morris Worm, was unleashed on the world in 1988. The worm exploited a buffer overflow, spread rapidly, and effectively became a viral denial-of-service attack.
Five years prior to this attack, the Department of Defense published the Trusted Computer System Evaluation Criteria (TCSEC), more commonly known as the Orange Book, with the goal of establishing the desired end-states for trusted computer systems. The DoD was sounding alarm bells that cyberattacks would soon be a reality, and it laid out an aspirational framework for trusted and secure computer systems. This framework included the reference monitor concept, which later came to be called an oversight system. Not only was the TCSEC prescient about cyberattacks; to this day, systems evaluated as highly secure are required to enforce the reference monitor (or oversight system) concept.
So, why did the DoD and the authors of the Orange Book land on the concept of an oversight system as a cybersecurity defense mechanism? And, if the Orange Book was largely hypothetical and aspirational, why are we talking about it now?
An Oversight System Addresses the Root Problems of Embedded Systems
Before we dive further into the concept of an oversight system, let’s take a look at two critical cybersecurity issues plaguing the industry today.
The first issue dates all the way back to 1945, when John von Neumann first called for a processor with a single memory in which both data and instructions reside. Data and instructions are mixed together, with special registers pointing to the location in memory from which to fetch the next instruction to execute. This architecture is called an execution system, and it is designed simply to execute each instruction as quickly as possible. The execution system has no way to determine whether an instruction is valid. Virtually all processors today are still based on this architecture.
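The shared code/data memory can be illustrated with a toy fetch-execute loop. This is only a sketch with invented opcodes, not any real instruction set: the point is that the machine fetches whatever word the program counter names and executes it, with no notion of validity.

```c
#include <stdio.h>

/* Toy Von Neumann machine: code and data share one memory array.
 * Opcodes are invented for this illustration. */
enum { OP_HALT = 0, OP_LOAD = 1, OP_ADD = 2, OP_STORE = 3 };

int run(int mem[], int size) {
    int pc = 0;   /* program counter: indexes the SAME memory as data */
    int acc = 0;  /* accumulator */
    while (pc < size) {
        int op  = mem[pc];      /* fetch: the instruction is just a memory word */
        int arg = mem[pc + 1];  /* so is its operand */
        pc += 2;
        switch (op) {
            case OP_HALT:  return acc;
            case OP_LOAD:  acc = mem[arg];  break;  /* data read  */
            case OP_ADD:   acc += mem[arg]; break;
            case OP_STORE: mem[arg] = acc;  break;  /* data write: could just
                                                       as easily overwrite code */
        }
        /* Note: nothing here checks whether the fetched word is a
           "valid" instruction -- the machine executes what it finds. */
    }
    return acc;
}
```

Because a store can target any address, including addresses holding instructions, nothing separates "program" from "data" except convention.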
The second problem was identified roughly fifty years later by Steve McConnell in his book Code Complete, and it certainly exacerbates the issue created by the von Neumann architecture. McConnell found that software running on an execution system inevitably contains bugs: in a survey of deployed software across every major industry, he observed an average of 15 to 50 bugs per 1,000 lines of source code.
Many of these bugs are exploitable across the network, and execution systems are highly vulnerable to such exploits. A successful attack could mean a system is completely taken over or its data stolen. And in today’s world of autonomous vehicles, smart cities, and connected critical infrastructure, it could mean a threat to health and human safety.
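A classic example of such a bug is an unchecked copy into a fixed-size buffer, as in the Morris Worm's overflow. The sketch below uses invented names, with a struct standing in for adjacent memory, to show how an over-long input can silently overwrite security-relevant state sitting next to the buffer:

```c
#include <string.h>

/* Hypothetical illustration of an unchecked copy. The struct layout
 * stands in for adjacent memory: a fixed-size buffer next to
 * security-relevant state an attacker would like to flip. */
struct session {
    char name[8];    /* fixed-size buffer (first member)   */
    char is_admin;   /* adjacent state right after it      */
};

/* Buggy "parser": trusts the sender's length field instead of
 * checking it against sizeof(s->name). */
void set_name(struct session *s, const char *input, size_t len) {
    /* copy begins at the name field (the struct's first member);
       there is no bounds check against sizeof(s->name) -- the bug */
    memcpy((char *)s, input, len);
}
```

An attacker who supplies nine bytes instead of eight overwrites `is_admin` with a byte of their choosing; the fix is a single length check before the copy.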
Historically, we’ve relied on reference monitors to protect these incredibly vulnerable execution systems.
What is a Reference Monitor?
First proposed by J.P. Anderson while he was working for the US Air Force, a reference monitor is the means by which an execution system validates whether any given instruction should be allowed. Reference monitors work by enforcing a system's security policies, which could include rules such as not allowing an unprivileged user to write to a restricted file. Anderson suggested that, to be truly effective, a reference monitor must operate in tandem with the execution system and exhibit the following properties, captured by the acronym NEAT:
- Non-bypassable: The reference validation mechanism must be non-bypassable.
- Evaluable: The reference validation mechanism must be amenable to analysis and testing, so that its completeness can be verified. Without this property, the mechanism might be flawed in such a way that a security policy is not enforced.
- Always invoked: The reference validation mechanism must be invoked on every relevant access, ensuring a check is never skipped.
- Tamper-proof: It must be tamper-proof, so an attacker cannot undermine the mechanism itself and violate a security policy.
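As a concrete sketch, a reference validation mechanism for the file-write policy mentioned above might look like the following. All types and names are invented for this illustration, assuming a simple two-level privilege model:

```c
#include <stdbool.h>

/* Minimal sketch of a reference validation mechanism. Every access
 * must be routed through check_access() (always invoked, non-bypassable
 * by convention here), and the function is small enough to analyze
 * exhaustively (evaluable). */
typedef enum { USER_UNPRIVILEGED, USER_PRIVILEGED } user_level;
typedef enum { ACCESS_READ, ACCESS_WRITE } access_kind;

struct file_object {
    bool restricted;  /* marked restricted by the security policy */
};

bool check_access(user_level user, access_kind kind,
                  const struct file_object *f) {
    /* policy: unprivileged users may not write restricted files */
    if (f->restricted && kind == ACCESS_WRITE && user == USER_UNPRIVILEGED)
        return false;
    return true;
}
```

The hard part is not the check itself but the NEAT guarantees around it: ensuring callers cannot skip it and attackers cannot modify it.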
Since Anderson first proposed the concept, numerous reference monitors have been designed and implemented in software, either by adding code to the application provided by the developer or by adding it to the operating system itself. Unfortunately, reference monitors have significant downsides.
If implemented at the application level, many lines of source code are added around a select few types of constructs, such as memory accesses, which can add thousands of cycles to the execution system's workload. If implemented in the OS, every check first requires a context switch from user code to system code; then the verification code runs, followed by a context switch back to user code. On top of the cost of the checking code itself, each context switch must save and restore all registers, adding dozens of instructions that the execution system must now complete.
The result is that software-based reference monitors can impose significant performance overhead, adding anywhere from 50% to 100% latency. Because of this cost, they are applied only to a select few types of actions the execution system might take (or they are used strictly for debugging and turned off in production). This means that even with reference monitors, execution systems remain vulnerable to attack.
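The application-level instrumentation described above can be sketched as follows. This is a hypothetical checked-load routine; in practice such calls are inserted by a compiler pass or binary rewriter, but the cost structure is the same: every plain load gains a comparison, a branch, and a call.

```c
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical software instrumentation: a rewriting pass replaces
 * each plain load like `x = buf[i]` with a call to checked_load(),
 * which validates the access before performing it. The extra work
 * on EVERY access is where the overhead comes from. */
uint8_t checked_load(const uint8_t *base, size_t len, size_t index) {
    if (index >= len)    /* added comparison + branch per access */
        abort();         /* policy violation: out-of-bounds read */
    return base[index];  /* the one instruction the program wanted */
}
```

Multiply that per-access cost across every memory access in a program and the 50% to 100% latency figures above become easy to believe.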
To take the concept a step further, a reference monitor would need to be implemented in hardware, solving the issue of system latency. This idea of a hardware-based reference monitor began to emerge in 1983, when a real-time oversight system was first described.
Arriving at the Concept of an Oversight System
Anderson was also the lead author of the DoD’s Orange Book, where the term oversight system was first introduced. In the book, he called for a defense-in-depth approach to cybersecurity, anticipating the day that cyberattacks would become the norm and would even have the potential to cause physical damage or threaten human life.
As part of a defense-in-depth strategy, the book called for a command-and-control system comprising two parts: (1) an execution system (e.g., the host processor), and (2) an oversight system with the ability to watch and record the execution system, ensuring that every instruction it carries out is valid.
Turning the Hypothetical into a Reality
For decades after The Orange Book was published, the concept of an oversight system remained aspirational. It wasn’t until DARPA funded the CRASH program in 2009 that a true oversight system was created.
The DARPA CRASH program was created in response to the infamous Stuxnet attack, with the goal of researching computer system designs that would be highly resistant to attacks like Stuxnet and able to guard against future zero-day attacks. The largest performer in the CRASH program, ultimately receiving $25M of the $100M in available funds, was the team behind Dover's CoreGuard technology.
CoreGuard is an oversight system for embedded systems that prevents the exploitation of software vulnerabilities and immunizes processors against entire classes of attack. CoreGuard silicon IP integrates with the execution system on the same SoC and monitors every instruction executed to ensure it complies with a set of security, safety, and privacy rules. These rules, called micropolicies, are written separately from the application they monitor and are informed by information mined from the application, called metadata.
Micropolicies act as an “allow list,” enabling CoreGuard to definitively determine the correctness of what the execution system is doing. CoreGuard validates each instruction as it is executed, without producing any false positives or negatives. Unlike reference monitors, which are often costly and cumbersome, Dover's CoreGuard technology is installed on embedded systems with minimal latency cost: less than 1% is added. Furthermore, the CoreGuard oversight system is both hardware and software. Because it is enforced in hardware, CoreGuard is unassailable over the network; because it is informed by software, it can be updated to adapt to an ever-evolving attack landscape.
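Dover has not published CoreGuard's internals in this post, but the general tagged allow-list idea can be illustrated schematically. All names below are invented for this sketch: metadata tags travel with each word, and a micropolicy rule allows an instruction only when its operands' tags match an explicitly permitted case.

```c
#include <stdbool.h>

/* Schematic illustration (invented names, not CoreGuard's actual
 * design) of an allow-list micropolicy: every word carries a metadata
 * tag, and a rule decides whether an instruction may proceed.
 * Anything not explicitly allowed is rejected. */
typedef enum { TAG_CODE, TAG_DATA, TAG_POINTER } tag_t;

struct tagged_word {
    unsigned value;
    tag_t    tag;   /* metadata mined from the application */
};

/* Micropolicy for a jump instruction: the target must be tagged as
 * code. Jumping into data (e.g., an injected payload) is disallowed. */
bool allow_jump(const struct tagged_word *target) {
    return target->tag == TAG_CODE;  /* the only case on the allow list */
}
```

The allow-list orientation is what avoids false negatives in principle: instead of enumerating known-bad behavior, the rule names the only behavior that is permitted.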
Unlike reference monitors, the CoreGuard oversight system is able to check and validate every instruction issued to the execution system, and to do so at speed. In other words, attempted attacks are detected and blocked in real time, before any damage can occur: CoreGuard notifies the execution system that it is about to execute an improper instruction so that it can take evasive or corrective action.
It took a few decades after the Orange Book was published, but with CoreGuard, Dover has finally made an oversight system efficient enough to be a commercial success.
To learn more about how Dover’s CoreGuard technology works, request a demo today.