What You Need To Know About Building More Secure Processors


When it comes to building processors, embedded or otherwise, security is rarely top of mind for the hardware team leading the charge.

This is partially due to the fact that security is traditionally seen as something handled by the software team, and partially because manufacturers aren’t (yet) feeling market pressures around the subject.

Even so, the processor industry has reached an inflection point when it comes to security: either secure your hardware and gain an advantage, or fail to do so and watch your competition eat up your market share.

Why Processor Developers Need to Start Considering Security in Their Designs

There are a number of reasons why security will become the next make-or-break frontier for processor developers, but none as important as the natural evolution of the space.

A groundswell of market demand

Today, we live in a world where IoT devices play a greater role in our day-to-day lives than ever before. Users place an unwarranted level of trust in their devices, storing everything from personal information and passwords to credit cards and digital tickets on them.

And these devices don’t just act as data caddies. Rather, as the number of embedded devices continues to grow worldwide, so too does the value of the assets they’re protecting--namely, national infrastructure and human lives. As a result, their failures could be disastrous.

Finally, we must consider the rate at which these devices are being created and connected to the internet. There are already more internet-connected devices in existence than ever before, and current estimates predict that by 2025, that number will reach an astounding 75.44 billion.

So, when we think of the processor landscape, we need to consider the devices they control and the picture that paints.

The processor industry today is one that incentivizes bad actors, because these bad actors now have more valuable devices to target, can do more with those devices once they’ve taken control of them, and there is generally more valuable information on those devices to exfiltrate. Whether your goal is espionage or causing rolling blackouts in a major US city, today’s processor industry can facilitate such nefarious ends.

The government’s growing concern

And if that scenario isn’t enough to give security experts a headache, just consider the slew of regulations that are quickly bearing down on the industry.

Not only are there bills in both the US House and Senate--focusing on cybersecurity infrastructure and IoT device security, respectively--but there’s new evidence that suggests the FDA is crafting internal guidelines on medical device security, as well. All in all, it appears that the federal government and its agencies are beginning to mobilize, and that it’s no longer a matter of if, but when regulations will be put into effect.

If you’re a processor developer, both the market and government are clamoring for security solutions and it’s only a matter of time before these two forces reach a crescendo.

Understanding Security and Where It Fits Into Processor Design


As mentioned earlier, when it comes to developing processors, security is either not considered or left to the software team. This mindset has led us into a new era of processor development where hardware teams are only dimly aware of the need to build secure products, and haven’t the slightest idea of where to start.

Although we covered this topic extensively in a recent white paper, it’s still important to outline what we call the Trust Triad--the three aspects of security that need to be satisfied in order to deliver a secure product. The Trust Triad is made up of three parts:

  • Security: This might be a bit confusing, so stick with us here--security is just one aspect of a secure processor. When we talk about security, what we’re really talking about is usually some combination of computing security and communications security. Computing security is about protecting your device or product from the outside world (or the network it will be connecting to)--in other words, stopping network-based attacks. Communications security, on the other hand, is about ensuring encryption. More on that later.
  • Safety: We may think of safety as intrinsically tied to security, but it’s actually something separate. While security is focused on computing and communications, safety is focused on how a product works in the real, physical world. Take a medical device, for example: while computing security is concerned with stopping network-based attacks on the device, and while communications security is worried about encrypting the data this device is sending to a doctor, safety is worried about ensuring the device doesn’t malfunction and hurt the patient. Safety is all about ensuring that even if the device is hacked, it can’t be manipulated in a dangerous way.
  • Privacy: While closely tied to communications security, privacy is more about preventing data exfiltration than it is about encryption. Privacy helps ensure that your work content and data stay separate from your personal content and data, and that both are protected appropriately. While communications security works to make sure only the right person receives the right information, privacy works to make sure that the encrypted communications aren’t hijacked during the encryption or decryption stages.

Up until now, we’ve seen this Trust Triad as something that could only be accomplished through software. But we know now--and we’ve always somewhat known--that software alone can’t ensure security, safety, and privacy because software is inherently flawed.

As we mentioned in a recent blog, there are on average 15 bugs per 1,000 lines of software code, and, according to the FBI, about two percent of those bugs are considered exploitable vulnerabilities. When you consider that something like a US military drone is built on 3.5 million lines of code, you start to realize just how many attack vectors bad actors have to test.
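Run the numbers and the scale becomes clear. Here’s a back-of-the-envelope sketch using the averages cited above:

```python
LINES_OF_CODE = 3_500_000   # e.g., a US military drone's codebase
BUGS_PER_KLOC = 15          # average bugs per 1,000 lines of code
EXPLOITABLE_RATE = 0.02     # FBI estimate: ~2% of bugs are exploitable

total_bugs = LINES_OF_CODE // 1000 * BUGS_PER_KLOC
exploitable = int(total_bugs * EXPLOITABLE_RATE)

print(total_bugs)   # 52500 estimated bugs
print(exploitable)  # ~1050 potentially exploitable vulnerabilities
```

Over a thousand potential attack vectors, in a single codebase.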

This is exactly why processor design is so integral to achieving the goal of ensuring the security, safety, and privacy of a device.

Hardware is significantly harder to manipulate than software, as it requires physical contact with the device. By baking security features directly into the hardware, processor designers can help build a more secure future.

That said, there are a number of different ways you can build security solutions into your processor, and they’re readily available to developers today. In the next section we’ll look at some of the main components developers are working with as they try to build the new generation of secure processors.

Building Secure Processors: What Does It Take?


The problem with processors today is that, on their own, they’re not very smart.

The bare processor, with no security features added, is at the mercy of the software it’s running. Applications feed the processor instructions, and the processor executes those instructions whether it was supposed to or not.

It’s not the processor's fault, though. It simply doesn’t know any better.

We can start to fix this today by layering security solutions in and around the SoC that the processor belongs to, and we’ll take a look at what some of those solutions are in this section. To start, let’s go back to the drawing board … literally.

Security Threat Models

One of the most important security solutions a hardware team can implement technically never makes it into the SoC. At least not physically or digitally.

Security Threat Models are a type of risk analysis used to understand the attack vectors that might accidentally be created during the design process. Software teams are normally the ones leveraging these models, and they help them write more secure code by preemptively solving security vulnerabilities before they are created.

Hardware teams can implement the same type of modeling in their design processes. By taking the time to understand how the processor could be attacked and manipulated, developers can write the roadmap to a more secure product.

As mentioned, Security Threat Modeling is rarely seen as the hardware team’s responsibility. As such, hardware teams may not know where to start. If you find yourself in the same situation, consider this resource from Synopsys on the five pillars of successful threat modeling. While the article is clearly labeled for software integrity, hardware teams can easily adapt this for their purposes.

Hardware Root of Trust

While the general assumption is that hardware is inherently secure, the reality is that it needs to be protected just as much as software does. In fact, it’s not difficult to argue that hardware is a bigger security priority than software, because when a hardware exploit exists, it is largely considered “un-patchable.”

Hardware Root of Trust (HRoT) is really the first step in securing your processor. HRoT includes a number of features and handles key functions, such as:

  • Performing device authentication to ensure hardware hasn’t been tampered with
  • Verifying the integrity of software--particularly boot images--to ensure it hasn’t been tampered with
  • Providing One-Time Programmable (OTP) memory for secure key storage, facilitating encryption
  • Ensuring the system is able to be brought into a known and trusted state

There are other aspects of HRoT that you might see in different solutions, as well, for example a security perimeter, a true random number generator, and a crypto engine. These features and functions, layered on top of one another, are designed to ensure that when the processor starts, it is starting in a known state, accessing a known and secure set of instructions, and can run its start up sequence uninterrupted.

HRoT ensures that everything is running as it should from the start, and that none of the hardware security features you’ve baked in have been turned off, manipulated, or circumvented.
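To make that idea concrete, here’s a minimal sketch of the measurement chain an HRoT can maintain at startup. It’s modeled on the hash-extend operation used by trusted platform modules, not on any particular vendor’s implementation, and the component names are hypothetical:

```python
import hashlib

def extend(register: bytes, measurement: bytes) -> bytes:
    """Fold a new measurement into the running hash chain."""
    return hashlib.sha256(register + measurement).digest()

# Each boot component is measured in order; tampering with any one of
# them changes the final value of the chain.
state = b"\x00" * 32   # the register starts in a known state
for component in [b"mask-rom", b"bootloader", b"os-kernel"]:
    state = extend(state, hashlib.sha256(component).digest())

GOLDEN = state  # the expected value, recorded at provisioning time
# At boot, recomputing the chain and comparing against GOLDEN reveals
# whether anything in the startup sequence was modified.
```

Because each step folds in the previous register value, an attacker can’t swap out one component without invalidating every measurement after it.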

Secure Boot

Secure Boot is intrinsically tied to HRoT, in that one cannot exist without the other.

Essentially, once the SoC has been able to verify that all the hardware inside is the hardware it says it is, the secure boot can take place. Here, secure boot allows the processor to verify that the bootloader and operating system being run at startup are the expected ones, and that neither has been manipulated.

This is done by creating a trust relationship between BIOS and the software that the processor will eventually run--such as the aforementioned bootloader and operating system--as well as other firmware, drivers, and utilities.

Secure Boot is often unique to each SoC vendor, however there are open source solutions available as well, such as this one.
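As a rough illustration of the verification step, consider this sketch. Real secure-boot schemes verify a public-key signature (e.g., RSA or ECDSA) against a key anchored in the hardware root of trust; an HMAC stands in here only to keep the example dependency-free, and the key and image contents are hypothetical:

```python
import hashlib
import hmac

def verify_image(image: bytes, tag: bytes, root_key: bytes) -> bool:
    """Accept a boot image only if its authentication tag checks out."""
    expected = hmac.new(root_key, image, hashlib.sha256).digest()
    # Constant-time comparison avoids leaking where the tags differ.
    return hmac.compare_digest(expected, tag)

ROOT_KEY = b"burned-into-OTP-at-manufacture"   # stand-in key material
bootloader = b"stage1 bootloader code"
good_tag = hmac.new(ROOT_KEY, bootloader, hashlib.sha256).digest()

assert verify_image(bootloader, good_tag, ROOT_KEY)            # boots
assert not verify_image(b"tampered code", good_tag, ROOT_KEY)  # halts
```

If verification fails, the processor refuses to hand control to the untrusted image.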

Crypto Engines

While encryption can be done sufficiently through software alone, crypto engines accomplish the same ends through hardware acceleration. Crypto engines require OTP memory (as mentioned in the HRoT section above); if they’re being used for data transmission over a network, however, they will also require something along the lines of Secure Volatile Key Storage.
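To see what a crypto engine is accelerating, here’s a toy software equivalent: a stream cipher keyed from (stand-in) OTP-stored key material. This construction is illustrative only--a real crypto engine runs a vetted cipher such as AES in dedicated silicon:

```python
import hashlib
from itertools import count

def keystream(key: bytes, nonce: bytes):
    """Toy SHA-256-in-counter-mode keystream -- for illustration, not
    for production use."""
    for i in count():
        block = hashlib.sha256(key + nonce + i.to_bytes(8, "big")).digest()
        yield from block

def xor_crypt(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """XOR the data with the keystream; applying it twice decrypts."""
    ks = keystream(key, nonce)
    return bytes(b ^ next(ks) for b in data)

OTP_KEY = b"\x13" * 32   # stand-in for a key held in OTP memory
ciphertext = xor_crypt(OTP_KEY, b"nonce-00", b"telemetry packet")
assert xor_crypt(OTP_KEY, b"nonce-00", ciphertext) == b"telemetry packet"
```

The point of the hardware version is that this per-byte work happens at line rate, without burning host CPU cycles or exposing the key to software.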

Physically Unclonable Functions

A Physically Unclonable Function (PUF) is a crucial component of cryptography and allows for both encryption and attestation. The PUF serves as the processor’s digital fingerprint and arises from the tiny physical variations introduced during fabrication. PUFs help manufacturers differentiate between otherwise identical processors, and they provide device-unique private keys that can be used during both encryption and attestation.
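A simple model of the challenge-response behavior looks like this. Real PUFs are noisy and require fuzzy extractors and error correction, and the “silicon variation” strings below are stand-ins for per-die physical randomness:

```python
import hashlib

def puf_response(challenge: bytes, silicon_variation: bytes) -> bytes:
    """Model a challenge-response PUF: the answer depends on physical
    variation unique to each die (modeled here as a byte string)."""
    return hashlib.sha256(silicon_variation + challenge).digest()

# Two "identical" chips still produce different fingerprints.
chip_a = puf_response(b"challenge-1", b"die-a-randomness")
chip_b = puf_response(b"challenge-1", b"die-b-randomness")
assert chip_a != chip_b   # attestation can tell the chips apart
assert chip_a == puf_response(b"challenge-1", b"die-a-randomness")  # stable
```

Because the response is never stored anywhere, there’s no key at rest for an attacker to extract.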

OTA Firmware Updates and Rollback Prevention

One thing that makes embedded processors, in particular, vulnerable is that the firmware they ship with is often the firmware they’ll use throughout their lifecycle. Without over-the-air (OTA) firmware update capabilities, it’s very difficult for manufacturers to patch their devices when vulnerabilities are exposed after they’ve entered the field.

And, while OTA firmware updates might help keep deployed devices updated and protected against newly-found exploits, they won’t stop bad actors from simply loading up an old version of the firmware with a known vulnerability.

This practice, known as a rollback, can be stopped by implementing rollback protection. This feature tells the processor to invalidate old firmware versions when a new version is being installed, to avoid such an attack.
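In sketch form, rollback protection is a version counter that only ratchets forward. On real parts the counter typically lives in fuses or OTP memory, so it physically cannot be decremented; the class below is an illustrative model:

```python
class FirmwareSlot:
    """Toy anti-rollback check against a monotonic version counter."""

    def __init__(self) -> None:
        self.min_version = 0   # monotonic anti-rollback counter

    def install(self, version: int) -> bool:
        if version < self.min_version:
            return False           # older image: refuse to install
        self.min_version = version # ratchet forward, invalidating old fw
        return True

slot = FirmwareSlot()
assert slot.install(2)       # initial firmware
assert slot.install(3)       # OTA update succeeds
assert not slot.install(1)   # rollback to vulnerable v1 is rejected
```

Once version 3 is installed, every image older than it is permanently invalid on that device.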

Sentry Processors

Since the overwhelming majority of cyberattacks start through software exploitations, and since all software contains vulnerabilities, we need an answer that is based in hardware. Therefore, the only way to stop this vicious cycle is through the use of something built in silicon, like a sentry processor.

Sentry Processors, such as Dover’s CoreGuard™ silicon IP, are small and efficient secondary processors that act like bodyguards for the host processor. However, unlike the host processor, the sentry processor comes equipped with a set of micropolicies and metadata that help it distinguish good instructions from bad ones.

In this sense, the sentry processor functions as a check and balance against the software that the host processor is running. With the help of these micropolicies and metadata, a sentry processor can review each set of instructions an application sends to the host processor, verify its legitimacy, and determine whether it’s OK to run or should be flagged as a violation.

Since the sentry processor is a piece of hardware with no connection to the internet or outside world, it can’t be compromised over the network. In fact, CoreGuard’s technology--micropolicies and metadata included--is unable to be run or even seen by the host processor, making it essentially unassailable.

Further, sentry processors are highly customizable to better serve the industry or specific device function they are meant to protect. While the out-of-the-box micropolicies that are included with CoreGuard can stop 90 percent of network-based attacks, custom micropolicies can be layered on top to stop all attacks a device might receive.
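As a toy model of the idea--not CoreGuard’s actual engine or policy language--here is how metadata tags and a simple “don’t overwrite code” micropolicy might flag a violation before the host processor executes it:

```python
# Hypothetical metadata: memory words tagged EXEC hold code and may not
# be the target of a store instruction (a classic code-injection defense).
TAGS = {0x1000: "EXEC", 0x2000: "DATA"}   # address -> metadata tag

def check(instr: str, target_addr: int) -> bool:
    """Return True if the instruction is allowed, False for a violation."""
    if instr == "store" and TAGS.get(target_addr) == "EXEC":
        return False   # attempting to overwrite code: flag it
    return True

assert check("store", 0x2000)       # writing data is fine
assert not check("store", 0x1000)   # writing over code is a violation
assert check("load", 0x1000)        # reading code is fine
```

The real value of the approach is that the check happens in hardware, on every instruction, outside the reach of the software being checked.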

A sentry processor, layered together with the other security features mentioned in this blog, forms the foundation of any secure processor.

Building Secure Processors: Where to Get Started

A processor on its own is not secure.

In order to prevent our processors from running malicious code, we need to make them smarter and give them more resources to fend off attacks from every angle. All in all, this amounts to building SoCs from the ground up, with security in mind.

If, after reading this, you’re wondering where your next step should be, the answer is to head back to the drawing board. Only by developing a threat model and including it in the specs your hardware design team is working off of, can you start to secure your future product.

And, when building your threat model, it’s important to remember the inherent flaws of software, and how hardware like Dover’s CoreGuard silicon IP is essential to developing secure processors.

To learn more about CoreGuard, and how security, safety, and privacy can only be achieved through a combination of software and hardware, watch our introductory video here.
