
Barnaby Jack, laptop in hand, stood 50 feet away and hit return on his keyboard.

With a crisp, audible pop, he “killed” the mannequin across the stage from him by activating an 830-volt defibrillator, a key component of the human stand-in’s pacemaker. Just like the vice president portrayed on the TV show “Homeland,” the mannequin could have been a real person, and that person would have been dead.

Medical devices, like the one in Jack’s mannequin, are just one piece of an ever-expanding network of traditionally “unconnected” devices known as the Internet of Things (IoT). Through advancements in technology, these devices can now connect to the internet and other networks as a means of increasing their functionality.

Pacemakers, for example, now have the ability to connect to networks wirelessly to make it easy for doctors to adjust settings without requiring an invasive procedure or even a visit to the office. And, while this increased functionality makes the pacemaker a better medical device—at least, in theory—it also presents some grave vulnerabilities.

Pacemakers, in general, have two functions: trickle charge the heart muscle to ensure it is beating regularly, and fire an 830-volt defibrillator during emergency situations. Under no circumstances should the latter function happen if the user’s heart is beating regularly; such a shock would mean almost certain death.

What Jack’s demonstration showed us was that weak programming makes pacemakers too dangerous for the IoT, and instances such as Abbott’s recall of 350,000 devices show that the FDA agrees.

But pacemakers are just one piece of a much, much larger picture. And, to fully understand just how dangerous for your health an insecure IoT can be, we need to start by understanding the full scope of IoT and its current impact on the world around us.

Understanding the Full Scope of IoT and Its Impact on Our World

IoT, as mentioned above, is a rapidly growing system of connected physical devices. When I say “connected” here, what I mean is that each “thing” is able to interoperate within the existing internet infrastructure, which in turn means that each “thing” can communicate with every other internet-connected device, worldwide. To put that idea into perspective, consider that researchers estimate that, by 2025, some 75.44 billion devices will be part of the IoT.

This massive web of physical devices is made possible by a number of embedded electronics, software, sensors, and actuators that help IoT devices connect to each other and exchange data. Connecting these once disparate devices has led to a wide array of consumer applications such as, but not limited to:

    • Home automation

    • Digital assistants

    • Baby and home video monitoring

    • Connected entertainment systems

    • Wearable medical devices

    • Connected cars

    • Smart home appliances

This list, of course, is in addition to the plethora of new consumer IoT devices that seem to be announced almost daily.

And while the consumer IoT space is often the most talked about, it’s also important to note that a large segment of IoT falls within the industrial space and is known as the Industrial Internet of Things, or IIoT. It includes a number of unique applications as well, such as factory automation, infrastructure management, inventory and asset management, smart grids, electric metering, and energy management.

For both consumer and industrial markets, the IoT opportunities are endless, and greater adoption promises to make our day-to-day tasks easier, more efficient, and more enjoyable, both at work and at home.

However, as we connect more and more devices to personal and professional networks, we are also adding layers of software that must now be protected, lest they serve as openings for nefarious actors.

Here is where we start to see the unintended vulnerabilities that arise from a rapidly growing IoT.

Software Bugs: The “Open Windows” Cybercriminals Are Looking For


The dirty little secret about cybersecurity is that it’s done almost exclusively through software.

Why is this a dirty secret? Because it’s impossible to write perfect code, and flaws in a software’s code are seen as open windows into the device, network, or system a cybercriminal is targeting. The more complex the software and the more lines of code, the more chances a window is left wide open. And the kicker to all of this is that the cybersecurity software we rely on every day tends to be some of the most complex software out there.

So the dirty little secret about cybersecurity is that we’re trying to fix flaws in software with software that is inherently flawed—potentially more so than the original piece of software it purports to protect.

These flaws, or mistakes in the coding, are known as bugs, and they’re not always created by human error. Sometimes, developers write clean code, but that code isn’t designed to be as cautious and careful as it should be. Other times, developers are working with tools and programming languages that are not yet mature enough to ensure that the code written with them isn’t unintentionally producing bugs.

Regardless of the source of the bug, bugs exist and there’s no way to really snuff them all out.

In Code Complete, Steve McConnell explains that there are consistently 15 to 50 bugs per 1,000 lines of code. Further, nearly 10 percent of all bugs, according to the FBI, are exploitable by a determined attacker.

To put this into perspective, let’s take a quick look at some well-known sets of code. For reference, keep in mind that 1,000,000 lines of code is the equivalent of some 18,000 pages of printed text.

    • 400,000 lines of code = Space Shuttle

    • 3,250,000 lines of code = Large Hadron Collider

    • 14,000,000 lines of code = Boeing 787

    • 44,000,000 lines of code = Microsoft Office

    • 150,000,000 lines of code = Ford F-150

If we do the math here (conservatively assuming 15 bugs per 1,000 lines of code, and that 10 percent of those bugs are exploitable), we quickly realize that Ford’s popular pickup truck has about 2,250,000 bugs, or about 225,000 potential attack vectors. And that’s just for one truck.
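For anyone who wants to sanity-check that estimate, here is a minimal C sketch of the same back-of-the-envelope math; the constants are simply the assumptions stated above, not measured figures.

    #include <stdio.h>

    int main(void) {
        /* Assumptions from above: 15 bugs per 1,000 lines of code,     */
        /* and roughly 10 percent of those bugs are exploitable.        */
        long long lines_of_code  = 150000000LL;  /* Ford F-150 estimate */
        double bugs_per_kloc     = 15.0;
        double exploitable_share = 0.10;

        double bugs        = (lines_of_code / 1000.0) * bugs_per_kloc;
        double exploitable = bugs * exploitable_share;

        printf("Estimated bugs: %.0f\n", bugs);                  /* ~2,250,000 */
        printf("Potential attack vectors: %.0f\n", exploitable); /* ~225,000   */
        return 0;
    }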

Now think back on the estimated 75.44 billion IoT devices that will exist in less than 10 years, and the number of bugs and exploitable vulnerabilities feels almost infinite.

While the prospect of an attack surface this large would scare even the most seasoned security expert, what’s even more alarming is the fact that the processors in many of these IoT devices are defenseless against cyberattacks, mainly because they were designed that way.

Why the Processor Is the Cybercriminal’s Unwitting Accomplice

The architecture of our computer processors is conceptually the same as it was when designed by the mathematician and physicist John von Neumann in 1945. Technically, today’s processors are “stored-program digital computers,” which means they keep a program’s instructions, as well as its data, in a single, uniform, read-write, random-access memory. This processor design was an improvement over the program-controlled computers of the 1940s, which were programmed by setting switches and inserting patch cables to route data and to control signals between functional units.

The von Neumann processors use such a simple architecture that they led to an explosion of innovation within the industry, producing smaller, cheaper, and faster products. This tear of innovation led us to Moore’s Law, which, though not really a law, said that the number of transistors on a microchip would double every 18 months.

Moore’s Law made the security problem worse.

Sure, the machines got much smaller, cheaper, and faster. And, yes, this meant they could do more complex tasks more quickly. But, unfortunately for us, security was never part of the equation. We went from 2,300 transistors in the 1971 Intel 4004 processor to 19 billion transistors in a 2017 32-core AMD Epyc processor, yet scarcely any of those transistors are directed to security.

Since undifferentiated memory in our von Neumann processors contains instructions and data, the processor cannot tell if a particular piece of data is suitable to execute; it will simply process whatever instruction it is presented. In fact, the efficient performance of our processors can be a detriment when it comes to security.

If data from the outside world—including something from an attacker across the internet—is somehow loaded into memory, the processor can be tricked into executing that data even though it is not a part of the intended program. The attacker will have successfully commandeered the processor to run their malicious code.
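To make that concrete, here is a deliberately simplified C sketch; the scenario and names are invented for illustration, not taken from any real device. Because the processor draws no distinction between ordinary data and the addresses it will execute, a stray write into data can redirect execution to wherever an attacker chooses.

    #include <stdio.h>
    #include <string.h>

    struct session {
        char name[16];             /* ordinary data                          */
        void (*on_close)(void);    /* an address the processor will jump to  */
    };

    void goodbye(void) { puts("closing session"); }

    void handle_login(struct session *s, const char *input) {
        s->on_close = goodbye;
        strcpy(s->name, input);    /* no length check: a long 'input' spills */
                                   /* past 'name' and into 'on_close'        */
        s->on_close();             /* the processor executes whatever        */
                                   /* address is now stored there            */
    }

The processor has no way of knowing that the address it just loaded came from untrusted input; it simply executes the next instruction it is presented.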

We Can Fix Today’s Processors Without an Industry-Wide Do-Over


So the defenselessness of today’s processors makes for an extremely vulnerable Internet of Things. And a vulnerable IoT won’t be able to inspire the kind of consumer confidence that is needed for the industry to reach its full potential.

But does it have to be this way?

Surely, if IoT device makers could stop network-based attacks, guarantee the privacy of data, and prevent the malfunctions and bad behavior that cause physical harm, then they would be able to instill more confidence in their buyers and the industry as a whole. They could help IoT reach its full potential.

Accomplishing this won’t necessarily be easy, but it can be done. And, after more than seven years of research, we’ve been able to determine four goals that need to be accomplished in order to deliver on the promise of an IoT future:

  • First, we need to provide the processor with more information about the application. Let’s examine how some very critical information is lost when the application is compiled from its source form and transformed into a binary executable. One of the main jobs of the compiler, besides forming the executable, is to build the legal call graph for the application. Every application is made up of a collection of functions (also called routines), where each function typically does one relatively small task and then returns to the function that called it. The call graph is the exact hierarchy of which functions call which other functions and in what order.

    This is important because one favorite type of attack is to hijack the return from a function, so that it will instead go to the attacker’s code. When compilation is complete, the call graph is not preserved. It needs to be, and we do this by modifying the compiler.

    There is also some equally critical information generated while the application is running, but it too is thrown away and not made available to the processor. Both of these types of “extra” information about the application provide valuable insight into the programmer’s intent. We call this information “metadata.” Our research led us to the conclusion that creating metadata about every instruction and every location in memory enables special protection circuitry to help an application processor “do the right thing”—even in the face of bug-ridden software and cyberattacks.

  • Second, we need a way to describe the things we want checked and enforced, whether they are security, safety, or privacy related. We call these descriptions micropolicies. As the micro prefix implies, they are small—as in tens or dozens of lines of code rather than millions. Small means it is much more realistic to verify their correctness.

    Micropolicies are really just a set of rules that describe things you want to verify about the state of the system as each machine instruction is executed.

    Take a buffer overflow, for example. It is the single most common way attackers start an infiltration. In application development, a buffer is a fixed-size block of memory created by the application developer to hold some information for the application. Applications can easily have thousands of different buffers. An overflow occurs when the information stored in a buffer takes up more memory than the buffer was designed to hold. The overflowing data then overwrites data in memory that it shouldn’t touch.

    Buffer overflows are possible because of a common bug where a programmer forgets to check the size of input data before storing it in a buffer. Attackers find these bugs and exploit them to overwrite a buffer with instructions that will execute their malicious little programs. When the programmer wants to create a buffer, they call a function (frequently called a “malloc,” for memory allocation) asking for a buffer of a specific size. The malloc function returns a handle for this buffer that the programmer then uses for all access to that buffer. The malloc function knows the size of the buffer when it is called, but that information is completely lost after it returns the handle. (A minimal sketch of this kind of bug appears after this list.)

    By preserving the compiler information and this run-time information—that is, the metadata—we can enforce critical security rules like “do not ever allow a buffer to be overwritten” or “make sure every call and every return from a function only goes where the program intended it to go.”

  • The third thing we need is a way to instantly stop processing when a problem occurs—before any damage is done. We call the mechanism to do this a hardware interlock. It has to be hardware because, unlike software, hardware is unassailable over a network. No one can reach across the network with a little soldering iron to physically change hardware.

    The simple idea here is to watch as each and every instruction is executed, to use all appropriate metadata to apply the relevant micropolicies, and to identify any violations. If everything is OK, let execution proceed normally; if there is a violation, do not allow the instruction to complete, and handle the exception safely and appropriately. (A conceptual sketch of this check appears after this list.)

  • Fourth and finally, we need to accomplish the other three items using today’s processors. We can’t develop and implement a more secure processor architecture overnight. It will happen, but it will take decades, and we can’t afford to wait. In the interim we need to bolt something onto our existing processor technology to provide the security it is so sorely lacking.
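To ground the buffer overflow discussion from the second item above, here is a minimal C sketch of the bug as described: a fixed-size buffer is allocated, but the size of the incoming data is never checked before it is copied in. The function and variable names are invented for illustration.

    #include <stdlib.h>
    #include <string.h>

    void store_username(const char *input) {
        char *buf = malloc(32);    /* malloc knows the buffer is 32 bytes...  */
        if (buf == NULL)
            return;

        strcpy(buf, input);        /* ...but nothing here checks that 'input' */
                                   /* fits, so anything longer than 31 bytes  */
                                   /* plus the terminator overwrites memory   */
                                   /* beyond the end of the buffer            */
        /* ... use buf ... */
        free(buf);
    }

With the allocation size preserved as metadata, a rule like “never allow a write past the end of a buffer” can be checked on every store, and the copy blocked the instant it strays out of bounds.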
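And here is a purely conceptual sketch of the hardware interlock described in the third item. This is not Dover’s actual policy language or architecture; the tags, structures, and rule names are invented to illustrate the idea of consulting metadata before each instruction is allowed to commit.

    #include <stdbool.h>

    typedef enum { TAG_CODE, TAG_DATA, TAG_RETURN_ADDR } tag_t;

    typedef struct {
        tag_t insn_tag;      /* metadata on the word being executed       */
        bool  is_write;      /* does this instruction write to memory?    */
        tag_t target_tag;    /* metadata on the memory location written   */
    } insn_event_t;

    /* Micropolicy 1: only words tagged as code may ever be executed.     */
    static bool only_execute_code(const insn_event_t *e) {
        return e->insn_tag == TAG_CODE;
    }

    /* Micropolicy 2: ordinary writes may never clobber a return address, */
    /* which captures the call-and-return guarantee from the first item.  */
    static bool protect_return_addresses(const insn_event_t *e) {
        return !(e->is_write && e->target_tag == TAG_RETURN_ADDR);
    }

    /* The interlock: if any active rule is violated, the instruction is  */
    /* blocked before it completes and the violation is handled safely.   */
    bool interlock_allows(const insn_event_t *e) {
        return only_execute_code(e) && protect_return_addresses(e);
    }

Each rule is only a few lines long, which is the point of micropolicies: they are small enough to verify, yet, backed by hardware, they are enforced on every single instruction.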

Accomplishing these four key goals can effectively protect a standard (read: defenseless) processor from cyberattacks over the network, and it’s being commercialized today by Dover as CoreGuard®.

The good thing about hardware is its unassailability. The bad thing about hardware is its inflexibility. The beauty of CoreGuard is that it is a simple yet powerful, unassailable hardware mechanism directed by programmable micropolicies.

These micropolicies are a set of predetermined security, safety, and privacy rules that govern what the host processor is and isn’t allowed to do. The micropolicies included with CoreGuard out of the box can stop 90 percent of network-based attacks, and custom policies can be written to make embedded systems even more secure.

What this all means is that what CoreGuard enforces is determined by the set of micropolicies that are currently active.

If, for example, a unique safety enforcement micropolicy custom-tuned to a specific application on a specific device is needed, it can easily be written and added to CoreGuard. And if a new micropolicy is needed for a newly discovered type of attack, one can be added—no hardware modifications required. What this all boils down to is a unique combination of a powerful hardware interlock and highly adaptable micropolicies that can guard against entire classes of attack, including zero-day exploits.

CoreGuard, in essence, is a bodyguard for embedded systems, making them secure, safe, and private. With technology like this, we can help mitigate the dangers of the IoT and help it reach its fullest potential.

Learn more about CoreGuard and how it can help bring security, safety, and privacy to your embedded systems by requesting a demo today.
