Today, there is no shortage of cryptographic technology to choose from. Providers of popular crypto solutions include early pioneers like IBM, other major corporations like Microsoft, and startups like PQShield and AgilePQ.
Over time, cryptographic standards have evolved to keep up with the world's ever-advancing technology, as well as the growing sophistication of cyberattacks. To make things more complicated, the standards and guidelines for cryptographic algorithms began to vary from country to country and region to region. While some countries, including the United States, don't legislate which encryption methods may be used, other countries, like India, have passed legislation that allows their government to set nationally permitted modes and methods of encryption.
While cryptography is not new, crypto agility is a new concept that's making waves. Crypto agility is the practice of designing information security protocols and standards so they can support switching from one cryptographic algorithm to another. It's a must-have for systems for two reasons. First, it offers better security: if one algorithm is compromised, a different algorithm can take over. Second, it spares global chipmakers and device manufacturers from having to make multiple versions of the same chip or device to meet each country's crypto standards.
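To make the idea concrete, here is a minimal sketch of what crypto-agile code can look like: callers name an algorithm rather than hard-coding one, so a compromised or regionally disallowed algorithm can be swapped out in configuration instead of in every application. All of the names here (`register_cipher`, `encrypt`, and the toy "ciphers") are illustrative, not a real library API.

```python
# Hypothetical sketch of a crypto-agile design: the application asks for
# an algorithm by name, and the set of available algorithms is decided
# at configuration time, not compile time.

_CIPHERS = {}

def register_cipher(name, encrypt_fn):
    """Make an algorithm available under a stable name."""
    _CIPHERS[name] = encrypt_fn

def encrypt(data: bytes, algorithm: str) -> bytes:
    """Dispatch to whichever algorithm policy or configuration selects."""
    if algorithm not in _CIPHERS:
        raise ValueError(f"algorithm {algorithm!r} not available")
    return _CIPHERS[algorithm](data)

# Toy stand-ins for real ciphers. A regional build, or a post-compromise
# update, could register a different set without touching calling code.
register_cipher("xor-demo", lambda d: bytes(b ^ 0x5A for b in d))
register_cipher("reverse-demo", lambda d: d[::-1])
```

The key property is that retiring one algorithm and enabling another is a registry change, not an application rewrite.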
Quantum Computing is a Call to Arms to Adopt Crypto Agility
As the era of quantum computing approaches, the need for crypto agility grows dramatically. Quantum computing will make traditional cryptographic algorithms far easier to crack in a much shorter time frame. For instance, it would take a classical computer approximately 300 trillion years to crack an RSA-2048 encryption key by brute force. A quantum computer, by contrast, could break that same level of encryption in just eight hours.
Unfortunately, crypto agility has been difficult to achieve. Historically, a single host processor wasn't powerful enough to run a cryptographic algorithm, so a separate hardware crypto engine was required. Algorithms were embedded in the hardware and therefore could not be easily updated. Hard-wiring the algorithm also restricted the ability to switch it to comply with different cryptographic standards across countries and regions. As a result, chip vendors and device manufacturers were forced to produce and ship different SKUs for each region.
Moving Crypto from Hardware to Software has its Risks
Thanks to advancements, like multi-core clusters, enhanced ISAs, SIMD, and vector instructions, processor performance has improved to the point that software can do more of the heavy lifting when it comes to cryptographic algorithms. As a result, it’s now much easier to achieve crypto agility.
Historically, the crypto software would tell the hardware engine to encrypt (or decrypt) a piece of data at a certain address, and then the hardware would do all of the actual work of encryption (or decryption). To achieve crypto agility, this process is broken down into smaller steps or building blocks the software can piece together. These building blocks can take the form of smaller hardware accelerators that do smaller chunks of the work or they can be processor extensions, such as SIMD or vector processing instructions. In theory, you could also do this with a highly configurable hardware engine; however, that’s not generally practical in reality. The implementation naturally moves toward the software doing as much of the data processing as it can, using hardware accelerators only where necessary, and even then only for small steps in the overall algorithm.
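As a rough illustration of this building-block approach, with hypothetical names and the "accelerator" faked in software, the idea is that software orchestrates the algorithm step by step and calls dedicated hardware only for the hot inner operation:

```python
# Sketch of software piecing together crypto "building blocks".
# hw_block_mix stands in for a small hardware accelerator (or a SIMD/
# vector routine); everything else runs as ordinary software.

def hw_block_mix(block: bytes) -> bytes:
    """Stand-in for an accelerated inner step: rotate each byte left by 1."""
    return bytes(((b << 1) | (b >> 7)) & 0xFF for b in block)

def sw_pad(data: bytes, size: int = 8) -> bytes:
    """Padding handled entirely in software (PKCS#7-style)."""
    pad = size - (len(data) % size)
    return data + bytes([pad] * pad)

def process(data: bytes) -> bytes:
    """The software drives the overall algorithm, block by block.
    Swapping algorithms means re-plumbing these calls, not replacing
    a monolithic hardware engine."""
    out = b""
    padded = sw_pad(data)
    for i in range(0, len(padded), 8):
        out += hw_block_mix(padded[i:i + 8])
    return out
```

Because the control flow lives in software, changing the composition of steps is a software update rather than a silicon respin.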
The bad news is that relying more on software introduces additional security risks. All complex software contains bugs that, if exploited, can be used to expose or bypass cryptographic algorithms and keys. Not only will quantum computing make algorithms easier to crack, but it will also make it easier to find and exploit the inherent vulnerabilities in the software running on the system, including the cryptographic software itself.
Systems Need to be Immunized Against the Exploitation of Software Vulnerabilities
In a post-quantum world where it’s even easier to exploit software vulnerabilities, it’s imperative that our systems are protected against these types of threats.
That’s where Dover’s CoreGuard technology comes in.
CoreGuard is the only solution for embedded systems that prevents the exploitation of software vulnerabilities and immunizes processors against entire classes of network-based attacks. CoreGuard IP acts as a bodyguard, monitoring every instruction executed by the host processor to ensure it complies with a set of security, safety, and privacy rules, called micropolicies. If an instruction violates a micropolicy, CoreGuard stops it before any damage is done.
CoreGuard comes with a base set of micropolicies that are essential for any agile system. They protect against the most common and severe types of exploits that plague every system, regardless of industry or application. The base set is made up of three micropolicies: Heap, Stack, and Read-Write-Execute (RWX). Together, these micropolicies protect against 43% of all severe software vulnerabilities, including all buffer overflow, stack smashing, and code injection attacks.
To clarify, CoreGuard does not eliminate the underlying vulnerabilities in the code; rather, it stops attackers from exploiting those vulnerabilities to take over the system at run time, quantum computer or not.
Ensuring Defense-in-Depth for Agile Systems
A defense-in-depth strategy is widely regarded as best practice in cybersecurity. With CoreGuard, not only can you prevent the exploitation of common vulnerabilities, but you can also layer on additional micropolicies that further reinforce the encryption function on your system.
In particular, two micropolicies can ensure information flow control on your system: Confidentiality and Data Integrity.
CoreGuard’s Confidentiality micropolicy prevents data exfiltration attacks by labeling data either “private” or “public” and tracking the influence of that data as it flows through the system. Ultimately, this ensures private data never leaves the system without being encrypted first.
So, how does it work?
Embedded systems typically communicate with external devices through memory-mapped I/O. To prevent private data from being mistakenly written to a public I/O port, CoreGuard attaches metadata that labels each I/O port as public. CoreGuard also needs to know which data is private. That's specified in a configuration file, either by variable name or by memory address, so the application itself doesn't need to be modified.
The Confidentiality micropolicy simply states, if a piece of data is labeled as “private,” it’s not allowed to leave the system via a port that’s labeled “public.” It also designates that the encryption engine is a trusted function that can remove the “private” label. As such, only data that has been successfully encrypted can leave the system—shutting the door on an attacker’s ability to bypass encryption.
Furthermore, it's important to note that the "private" metadata tag is designed to spread, so anything it touches also gets marked as private. This prevents an attacker from using the private data in a computation and calculating the original value from the results. For example, in an employee's personnel file, a birthdate is likely considered a piece of private data.

An attacker could try to determine someone's age (and, from it, their birthdate) by subtracting the birthdate from today's date, which is, of course, a public piece of data. To prevent this type of inference, CoreGuard's Confidentiality micropolicy states that if you compute private data with public data, the result is also considered private.
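Here is a minimal sketch of how such label propagation behaves, under assumed semantics rather than CoreGuard's actual implementation: labels travel with values, computing with a private value yields a private result, and only a trusted encrypt step may clear the label before data reaches a public port.

```python
# Toy model of the Confidentiality rule. Names (Labeled, encrypt,
# write_public_port) are illustrative, not CoreGuard's API.

class Labeled:
    """A value paired with a confidentiality label."""
    def __init__(self, value, private: bool):
        self.value, self.private = value, private

    def __add__(self, other: "Labeled") -> "Labeled":
        # Private combined with public -> private: the label "spreads".
        return Labeled(self.value + other.value,
                       self.private or other.private)

def encrypt(x: Labeled) -> Labeled:
    """Trusted declassifier: ciphertext may leave the system as public."""
    return Labeled(f"enc({x.value})", private=False)

def write_public_port(x: Labeled):
    """Public I/O port: refuses anything still labeled private."""
    if x.private:
        raise PermissionError("private data blocked at public port")
    return x.value
```

In this model, the birthdate-inference attack fails: subtracting a private birthdate from a public date yields a private result, which the public port refuses to emit, while properly encrypted data passes through.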
Meanwhile, the Data Integrity micropolicy ensures that key material entering a cryptographic routine was generated by a trusted source and has not been modified. Similar to the Confidentiality micropolicy, it achieves this by labeling data as "trusted" as it's produced by a trusted key source, such as a hardware root-of-trust. Further, it designates that the consumers of the key data (arguments to crypto functions) can only use it if it's marked as "trusted." Unlike the Confidentiality micropolicy, if trusted data is combined with untrusted data, the result is considered untrusted. This ensures that an attacker cannot bypass the trusted key source or feed untrusted key data into the crypto engine.
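The Data Integrity rule can be sketched the same way, again with assumed semantics and hypothetical names: key material carries a "trusted" label from its source, mixing in any untrusted value drops the label, and the crypto engine refuses keys that are not trusted.

```python
# Toy model of the Data Integrity rule. Note the inverted propagation
# relative to Confidentiality: trust only survives if ALL inputs are
# trusted, whereas the "private" label survives if ANY input is private.

class Key:
    """Key material paired with an integrity label."""
    def __init__(self, material: bytes, trusted: bool):
        self.material, self.trusted = material, trusted

    def mix(self, other: "Key") -> "Key":
        # Trusted combined with untrusted -> untrusted.
        mixed = bytes(a ^ b for a, b in zip(self.material, other.material))
        return Key(mixed, self.trusted and other.trusted)

def hw_root_of_trust() -> Key:
    """Stand-in for a hardware key source; real material would be random."""
    return Key(b"\x01\x02\x03\x04", trusted=True)

def crypto_engine_load_key(key: Key) -> bool:
    """The crypto engine only accepts keys still labeled trusted."""
    if not key.trusted:
        raise PermissionError("untrusted key material rejected")
    return True
```

An attacker who tampers with, or substitutes, key material produces an untrusted key, which the engine rejects before any encryption happens with it.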
Preparing for the Post-quantum World with Secure & Agile Cryptography
The imminence of quantum computing, coupled with varying crypto standards, has made crypto agility a must-have. But the truth is, even without the threat of quantum computers breaking our algorithms, our systems are already vulnerable because of the inherently flawed software they run. As cryptographic algorithms move from hardware to software, that problem is only exacerbated. Our systems need a foundation that immunizes them against the exploitation of vulnerabilities. On top of that foundation, they need additional layers of defense that reinforce our cryptographic functions. Dover's CoreGuard technology is the only solution that can deliver both.
To learn more about CoreGuard and see how it works, request a demo today.