In its white paper, Seven Properties of Highly Secure Devices, Microsoft outlines the seven “must-haves” for devices that claim security and safety. As the paper argues, all too often security is sacrificed in the name of cost savings. In a world where nearly everything is connected (you can buy anything from an internet-connected Rubik’s Cube to a self-driving car), this pervasive lack of security is no longer acceptable.
The Seven Properties paper focuses on devices that are powered by microcontrollers. The group notes that MCU-powered devices are “particularly ill-prepared” to pass the cybersecurity bar. This is due largely to a lack of investment in security as companies roll out new IoT devices every year. However, with cyberattacks on IoT devices surging 300% in 2019 alone, security is no longer optional.
Consumers Demand Better Security
As attacks on IoT devices have increased, so has consumer demand for better security on the devices they purchase. Cybersecurity is no longer solely the concern of the back-end developers creating a device, but top-of-mind for the end user. In a study conducted by PwC, 85% of surveyed consumers said they would not do business with a company if they had concerns about that company’s cybersecurity practices. 55% of those surveyed also responded that AI and IoT-enabled devices are the biggest threat to security and privacy. The bottom line is clear: today’s consumers are becoming more security savvy, and they are demanding a higher level of security in their connected devices.
Designing devices that meet Microsoft’s Seven Properties is a necessary first step toward delivering the level of security consumers are beginning to demand. But what if you could provide security that goes beyond the level that Microsoft proposes?
CoreGuard & The “Seven Properties”
That’s where Dover’s CoreGuard® technology comes in. Not only does CoreGuard provide the means to achieve each property, CoreGuard actually goes beyond Microsoft’s recommended security parameters.
It’s able to do so through a novel combination of both hardware and software that protects against 95% of all software vulnerabilities and immunizes processors against entire classes of network-based attacks.
Let's see how CoreGuard stacks up against each of the seven properties...
HARDWARE-BASED ROOT OF TRUST
The first property mentioned in Microsoft’s paper is a hardware-based Root of Trust (RoT).
To support cryptographic operations, the paper calls for “unforgeable cryptographic keys generated and protected by hardware.”
CoreGuard does not provide a hardware-based RoT. However, CoreGuard Information Flow Control (IFC) micropolicies can ensure the correct use of RoT cryptographic facilities in the following two ways. First, they can ensure that no confidential data (e.g., PII) leaves the system without first being encrypted by trusted encryption facilities. Second, they can ensure the confidentiality and integrity of cryptographic keys, if key material is ever processed by a CPU (e.g., using encryption functions defined in software).
The CoreGuard IFC Confidentiality micropolicy, which protects confidential data from leaving the system unencrypted, has the following six elements:
1. Labeling of Confidential (aka Private) Data
What data is confidential (versus public) is application dependent, but can be defined using the Dover Policy Language (DPL) at a high level. For example, all data coming from a database socket could be classified as confidential, or all data produced by a trusted key generator, or specific regions of memory, etc.
2. Labeling Output Locations "Public"
All output locations, such as memory-mapped IO regions of memory, are labeled as "public".
3. Trusted Encryption Function
A trusted encryption function is identified (its instructions are labeled as being within the trusted encryption function).
4. Declassification of Encrypted Data
If a value exits the trusted encryption function, any confidentiality tag is removed. That is, encrypted confidential information is no longer considered confidential.
5. Tag Propagation
As values flow through a system (by being loaded from or stored to memory, being copied, and being combined with other values), the IFC tags flow with the values. So, if a confidential value is loaded (copied) from a word in memory into a register, the register receives the confidentiality tag. If that value is then combined with another value, the output register's tag is updated to be confidential (confidential data combined with anything produces confidential results). And if a confidential value is stored (copied) from a register back out to memory, the confidentiality tag is also copied and labels the destination word in memory.
6. Checking Stores to Public Locations
When data is stored (copied) to a memory location labeled "public", there is a check whether the value being copied is labeled as confidential. If so, a policy violation is signaled and the store is prevented.
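The tag-flow and store-check rules above can be sketched in a few lines. This is a hypothetical simulation, not CoreGuard's actual Dover Policy Language: the names `CONFIDENTIAL`, `combine`, `encrypt`, and `check_store` are all illustrative.

```python
# Toy model of the IFC Confidentiality micropolicy: tags are Python sets
# that flow with values; a store to a "public" location is checked.

CONFIDENTIAL = "confidential"

def combine(tag_a, tag_b):
    # Confidential data combined with anything produces a confidential result.
    return tag_a | tag_b

def encrypt(value_tag):
    # A trusted encryption function removes the confidentiality tag:
    # encrypted confidential information is no longer confidential.
    return value_tag - {CONFIDENTIAL}

def check_store(value_tag, location_tag):
    # Storing to a location labeled "public" is blocked if the value
    # still carries the confidentiality tag.
    if "public" in location_tag and CONFIDENTIAL in value_tag:
        raise PermissionError("policy violation: confidential value to public output")

record = {CONFIDENTIAL}                    # e.g., data from a database socket
derived = combine(record, set())           # the tag flows through computation
check_store(encrypt(derived), {"public"})  # allowed: encrypted first
try:
    check_store(derived, {"public"})       # blocked: unencrypted exfiltration
except PermissionError as e:
    print(e)
```

In the real system these checks happen in hardware on every instruction; the sketch only illustrates the rule structure.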
The confidentiality of key material is ensured just as the confidentiality of private application data. That is, key material produced by the RoT is labeled as confidential and thus CoreGuard's Confidentiality micropolicy will prevent key material from being exfiltrated via public IO.
It is also important to ensure the integrity of key material. That is, upon entry to a cryptographic routine that takes a key as input, we want to ensure two properties: (1) that the key material was generated by a trusted key source (part of the HW RoT), and (2) that the key material has not been modified between the key source and the function entry.
CoreGuard's Integrity micropolicy can enforce the above two integrity properties. The Integrity micropolicy has elements similar to the Confidentiality micropolicy: high-integrity (aka trusted) data is labeled as it is produced by a trusted key source (the HW RoT), and consumers of the key data (arguments to crypto functions) are labeled as requiring high-integrity values. Integrity combination rules are the opposite of confidentiality's: if a high-integrity value is combined with any other value, the result is no longer considered high integrity. Lastly, at the entry to functions taking high-integrity arguments, the integrity tags of values are checked. If a value does not have a high-integrity tag (high integrity means it originated from a trusted source and has not been modified), then a micropolicy violation is signaled.
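The integrity rules are the dual of the confidentiality rules, which a small sketch makes concrete. Everything here is illustrative (the tag name, the functions, and the reading of "combined with any other value" as "combined with any untrusted value", i.e., the result stays high integrity only when both operands are).

```python
# Toy model of the Integrity micropolicy: high integrity is lost when
# a value mixes with untrusted data, and is checked at function entry.

HIGH_INTEGRITY = "high_integrity"

def combine(tag_a, tag_b):
    # The result is high integrity only if both operands are.
    if HIGH_INTEGRITY in tag_a and HIGH_INTEGRITY in tag_b:
        return {HIGH_INTEGRITY}
    return set()

def check_key_argument(tag):
    # At entry to a crypto function, the key argument must still carry the
    # high-integrity tag (trusted source, never modified in between).
    if HIGH_INTEGRITY not in tag:
        raise PermissionError("policy violation: untrusted key material")

key = {HIGH_INTEGRITY}             # produced by the trusted key source (HW RoT)
check_key_argument(key)            # allowed
tampered = combine(key, set())     # mixed with untrusted data: tag is dropped
try:
    check_key_argument(tampered)   # blocked
except PermissionError as e:
    print(e)
```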
SMALL TRUSTED COMPUTING BASE
The second property of highly secure devices is a small Trusted Computing Base (TCB).
The paper describes it this way: “the trusted computing base consists of all the software and hardware that are used to create a secure environment for an operation.”
Unfortunately, in common practice (including initial prototypes of Azure Sphere runtimes), the TCB has grown to include software that is trusted to perform privileged actions. There is a subtle but critical distinction between (1) software that is strictly responsible for enforcing security policies, and (2) software trusted to perform privileged actions.
In many commercial implementations, including Trusted Execution Environments (TEEs), there is a large codebase, amounting to a full operating system, responsible for filesystem, network, and cryptographic operations. Typically, the entire TCB/TEE software stack is deemed “trusted” and is not itself subject to security protections.
CoreGuard takes an approach much closer to the original definition of a computer system’s TCB. One such definition from the Orange Book describes a TCB as “the totality of protection mechanisms within it, including hardware, firmware, and software, the combination of which is responsible for enforcing a computer security policy.”
In a CoreGuard-protected system, all software, whether it is privileged OS code or application software, is subject to security micropolicies. As a result, CoreGuard is able to deliver on the venerable idea of the Principle of Least Privilege (PoLP). CoreGuard micropolicies can grant narrowly-defined privileges to specific functions (or parts of functions). CoreGuard can define fine-grained boundaries of privilege, ensuring that all software, including the software in the TCB, behaves only as intended with only the privileges the developer originally intended to grant.
DEFENSE-IN-DEPTH
The third property of highly secure devices is the longstanding, incontrovertible principle of defense-in-depth.
Everyone agrees with the general idea that more, rather than fewer, layers of protection should increase security. However, whether a system implements defense-in-depth is notoriously hard to define. The paper defines defense-in-depth as “devices with multiple mitigations applied to each threat.” This can be difficult to demonstrate given that the threats are, by definition, unknown.
CoreGuard directly provides defense-in-depth using micropolicy composition. Let’s take a look at an example to better understand how we compose multiple micropolicies together to block a cyberattack at multiple levels.
In the case of a classic data exfiltration attack, the attack may rely on modifying control flow to send data to a network connection rather than a filesystem. To accomplish this cyberattack, the attacker must do the following: (1) overflow a buffer to overwrite the address of a virtual function table, (2) write to the virtual function table, replacing a function (pointer) that uses the filesystem with one that sends to a network port, and (3) invoke the method, sending private data over the network.
CoreGuard micropolicies can block each of these steps in isolation. When the micropolicies are combined, a system achieves defense-in-depth. First, the Heap and Stack micropolicies block buffer overflows. Then, a Control Pointer Integrity (CPI) micropolicy can prevent overwrites of function pointers. Finally, an IFC micropolicy can prevent confidential data from being written to a network port via memory-mapped IO.
The ability to define many different micropolicies, and to compose micropolicies together, enables CoreGuard to implement true defense-in-depth—blocking the attacker at every turn.
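Composition itself is simple to state: an instruction proceeds only if every installed micropolicy allows it. The toy sketch below mirrors the three-step attack example; the policy functions and instruction fields are stand-ins, not CoreGuard internals.

```python
# Toy model of micropolicy composition: each policy independently vets an
# instruction, and the instruction is allowed only if no policy objects.

def heap_policy(instr):
    # Block stores past the end of the buffer's labeled region (overflow).
    return not (instr["op"] == "store" and instr["addr"] >= instr["buf_end"])

def cpi_policy(instr):
    # Block stores that would overwrite a word tagged as a function pointer.
    return not (instr["op"] == "store" and instr.get("target_is_func_ptr", False))

def ifc_policy(instr):
    # Block confidential data from reaching memory-mapped network IO.
    return not (instr.get("value_confidential") and instr.get("dest_is_network_io"))

POLICIES = [heap_policy, cpi_policy, ifc_policy]

def allowed(instr):
    # Composition: the instruction proceeds only if every policy passes.
    return all(policy(instr) for policy in POLICIES)

# Step 1 of the attack (buffer overflow) is already stopped by the heap policy.
overflow = {"op": "store", "addr": 0x1008, "buf_end": 0x1000}
print(allowed(overflow))  # False
```

Even if one policy were missing or bypassed, the later steps of the attack would still trip the remaining policies, which is the defense-in-depth point.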
COMPARTMENTALIZATION
The fourth property of highly secure devices is compartmentalization, which the paper describes as “compartments protected by hardware enforced boundaries to prevent a flaw or breach in one software compartment from propagating to other software compartments of the system … a common technique is to use operating systems processes or independent virtual machines as compartments.”
Compartmentalization does a good job of making sure applications in one compartment don’t interact with or compromise applications of a separate compartment, but doesn’t actually make these applications any less vulnerable to attack. And despite our best efforts, all software (especially low-level software written in C/C++) has exploitable errors in it. If you divide your large corpus of software into four compartments, you have only reduced the scope of impact—you have not eliminated the inherent vulnerabilities that exist in your code. Even code in a "trusted" compartment has bugs and can be compromised.
That said, compartmentalization can play an important role in an overall security approach. Compartmentalization can support the PoLP, mentioned earlier. For PoLP, one must narrowly circumscribe the code (and data maintained by that code) that is privileged in some way (for example, scheduler code that can switch threads, memory management functions, code loaders, etc.).
Therefore, to be truly useful toward the goals of security, compartmentalization approaches must (1) support many fine-grained compartments, and (2) support fine-grained assignment of privileges to compartments. CoreGuard supports both of these concepts.
Fine-Grained, Lightweight Compartmentalization with CoreGuard
Traditionally, compartmentalization has been coarse-grained. For example, Arm TrustZone places all code and data into two or four compartments, and MMU-based solutions put all of a process and its data in a single compartment.
There are strong arguments that an operating system, or an application, should be divided into many smaller, fine-grained compartments. However, the primary practical impediment to finer-grained compartmentalization has been context switch overhead, as well as the memory required to maintain data structures for each context (e.g. process).
A context switch is when control switches from one compartment to another; all of the machine’s registers, virtual memory metadata, etc. are saved to memory, the saved data for the new compartment is retrieved from memory, and finally control proceeds in the new compartment. Put another way, a context switch saves one world view (aka a context) and restores another world view before proceeding. SGX, TrustZone, and virtual memory systems all involve context switches to achieve compartmentalization.
CoreGuard maintains metadata for each word in memory, which includes both instructions and data. This means that each instruction and data word could, in theory, be its own compartment. While per-word compartments do not make practical sense, the point is that CoreGuard’s Compartmentalization micropolicy can group instructions and data together at arbitrary granularity.
Because CoreGuard can label instructions and data with the compartments they are in, and because CoreGuard already checks every instruction during execution, the confidentiality and integrity aspects of compartmentalization can be enforced by CoreGuard without heavyweight context switches. In a sense, CoreGuard has taken the original Software-Based Fault Isolation work and implemented it in hardware to improve both security and performance.
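The key mechanism, enforcing compartment boundaries inline on every access rather than via context switches, can be sketched as follows. The compartment names and the `check_access` helper are hypothetical, chosen to echo the PoLP examples above.

```python
# Toy model of tag-based compartmentalization: every word (instruction or
# data) carries a compartment label, and each access is checked inline,
# with no register save/restore or world switch.

compartment_of = {
    "sched_switch_thread": "scheduler",   # privileged scheduler code
    "thread_table":        "scheduler",   # data that code maintains
    "app_main":            "app",         # ordinary application code
    "app_buffer":          "app",
}

def check_access(instr_name, data_name):
    # Allow the access only when the executing instruction and the data
    # word carry the same compartment label.
    if compartment_of[instr_name] != compartment_of[data_name]:
        raise PermissionError(
            f"violation: {instr_name} touched {data_name} across compartments")

check_access("sched_switch_thread", "thread_table")  # allowed
try:
    check_access("app_main", "thread_table")         # blocked
except PermissionError as e:
    print(e)
```

Because the check rides along with the ordinary per-instruction policy evaluation, adding more (and smaller) compartments adds labels, not context-switch overhead.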
CERTIFICATE-BASED AUTHENTICATION
The fifth property of highly secure devices is certificate-based authentication.
Authentication via certificates involves several steps—steps that can be enforced in a specific order by a CoreGuard Finite State Machine (FSM) micropolicy. An FSM micropolicy encodes a finite state automaton, enforces that only legal transitions are taken, and is written specifically for the needs of the system it is protecting.
For example, a micropolicy can ensure a public key will not be used for encryption or decryption until the certificate from which the key came has been validated, possibly by a certificate authority. It can also track provenance of public and private keys and ensure that the correct keys are used for crypto operations that are part of certificate-based authentication. For example, the private device key must be used to decrypt an incoming signed certificate.
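An FSM micropolicy of this kind reduces to a transition table plus a check that rejects any event not in the table. The states and event names below are hypothetical, chosen to match the validate-before-use example.

```python
# Toy model of an FSM micropolicy for certificate-based authentication:
# only the transitions listed in the table are legal; anything else
# (e.g., using a key before its certificate is validated) is a violation.

TRANSITIONS = {
    ("received",  "validate_cert"): "validated",
    ("validated", "extract_key"):   "key_ready",
    ("key_ready", "use_key"):       "key_ready",
}

class CertAuthFSM:
    def __init__(self):
        self.state = "received"

    def step(self, event):
        nxt = TRANSITIONS.get((self.state, event))
        if nxt is None:
            raise PermissionError(f"violation: {event!r} in state {self.state!r}")
        self.state = nxt

fsm = CertAuthFSM()
try:
    fsm.step("use_key")        # blocked: certificate not validated yet
except PermissionError as e:
    print(e)
fsm.step("validate_cert")
fsm.step("extract_key")
fsm.step("use_key")            # now allowed
```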
In addition to enforcing elements of the certificate-based authentication protocol, CoreGuard IFC micropolicies can be used (as mentioned for Principle #1, hardware-based RoT) to prevent private keys from ever being copied to a public channel. Furthermore, when key material is placed in RAM, CoreGuard can enforce that the key material is read-only.
RENEWABLE SECURITY
The sixth property is renewable security.
The concept of renewable security traditionally implies security patches that fix discovered vulnerabilities in software. It is obviously a good idea to fix known vulnerabilities in deployed software, and we agree with the paper’s statement, “A device without renewable security is a crisis waiting to happen.” However, finding and then patching vulnerabilities in software, after the fact, is a reactive rather than a proactive approach to security.
CoreGuard is a proactive approach because our micropolicies are future-proof—they are designed to stop entire classes of attack, not just specific attacks. Thus with CoreGuard in place, one can drastically reduce the need for frequent security patches to deployed software.
For example, when a new buffer overflow vulnerability is discovered, CoreGuard’s Heap and Stack micropolicies (included in the base set) will already be equipped to protect against it, no patching required, because they are designed to stop all buffer overflows (both known and unknown).
However, if there is a new class of attack that is discovered or you want to add additional micropolicies to your system for defense-in-depth, CoreGuard supports updates to the micropolicies installed on deployed devices. CoreGuard micropolicy updates can piggyback on device firmware update functionality, if necessary.
FAILURE REPORTING
The seventh and final property of highly secure devices is failure reporting.
The paper cites the advantages of having many reliable cyberattack detectors. In fact, CoreGuard is able to detect attacks in real-time, at the byte-level.
If we assume the Azure Sphere vision of millions of IoT devices communicating with cloud-based analytics, we can view CoreGuard as an extremely accurate and reliable cyberattack detector. The fact that CoreGuard operates at per-instruction, per-word granularity means that cyberattacks are detected early and precisely. The fact that CoreGuard micropolicies are enforced by hardware interlocks means that cyberattack detection cannot be circumvented and that no damage is allowed to happen to data.
When CoreGuard detects a violation of any installed micropolicy, the default actions are to first discard any pending updates to memory, and then raise a Non-Maskable Interrupt (NMI) to the processor that attempted to execute the instruction.
The first step, discarding pending memory updates, prevents any observable effects of the violating instruction from influencing other computations. The second step, raising an NMI, allows the system to respond in a customizable manner. The default behavior is to kill the thread of the offending instruction, and jump to a customer-defined safe mode.
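The two-step default response can be modeled by buffering an instruction's writes and committing them only after the policy check passes. The `PolicyViolation` exception stands in for the NMI; the `execute` helper and instruction fields are illustrative, not a real CoreGuard interface.

```python
# Toy model of the default violation response: (1) discard pending memory
# updates so the violating instruction has no observable effect, then
# (2) raise an NMI-like exception so the system can enter safe mode.

class PolicyViolation(Exception):
    """Stands in for the Non-Maskable Interrupt raised on a violation."""

def execute(instr, memory, check):
    pending = dict(instr.get("writes", {}))  # buffer writes, don't commit yet
    if not check(instr):
        pending.clear()                      # step 1: discard pending updates
        raise PolicyViolation(instr)         # step 2: raise the "NMI"
    memory.update(pending)                   # only committed if the check passed

memory = {0x100: 0}
bad = {"writes": {0x100: 0xDEADBEEF}, "ok": False}
try:
    execute(bad, memory, lambda i: i["ok"])
except PolicyViolation:
    pass
print(memory[0x100])  # still 0: the violating write never landed
```

The NMI handler is where the customizable responses described below (kill the thread, enter safe mode, notify the cloud) would hang off.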
In the Azure Sphere context, a local NMI would result in a cyberattack detection message being sent to the Azure Cloud infrastructure. Additional customer-defined responses are also possible, such as enabling stricter firewall rules, adding more runtime tracing to provide more detail if attacked again, or enabling other defenses, such as ASLR.
Ensure the safety and security of IoT devices with CoreGuard
The properties outlined in Microsoft’s paper provide guidelines for an industry that all too often has overlooked security. While simply meeting the requirements of Microsoft’s highly secure devices is undoubtedly a good thing for IoT, it’s possible to provide even more security with CoreGuard.
CoreGuard can protect data and code at every level of the IoT ecosystem: from edge nodes, through gateways, to confidential cloud enclaves. To learn more about how CoreGuard can secure your IoT device, request a demo today.