With the rise of sophisticated cyberattacks, we need new solutions to combat them.
On May 11, 2017, a global outbreak of ransom-demanding computer software (“ransomware”) crippled hospitals in Britain that serve 50 million people.
Known as “WannaCry,” the attack also hit telecommunications companies in Spain, FedEx in the United States, the Russian Interior Ministry, and many other institutions around the world. More victims are still likely to emerge. Fingers are starting to point to North Korea as the source. We know that knowledge of the hacking technique came from leaked documents stolen from the NSA. We also know the sad truth: attacks like this are pretty easy for attackers to carry out.
You are probably wondering why we haven’t been able to stop these types of cyberattacks by now. Haven’t lots of smart people been working on this? Well, yes, but it’s harder than it seems.
The scope of the problem is vast.
Bugs in software are the door into our systems and the key to taking control of them. In any reasonably large software product, such as an operating system, bugs are inevitable, and attackers are extremely adept at finding them. Once a bug is found, attackers can exploit it to overwrite legitimate code in the target system with machine code they want the processor to execute. When this happens, it’s “game over.” The attacker can take over the machine and do pretty much anything: encrypt data (making it useless until someone pays a ransom to have it decrypted), steal data, or use the machine to spread the infection to other machines. Industry research consistently finds on the order of 15 to 50 bugs per thousand lines of delivered code, and many of these bugs are exploitable vulnerabilities.
For reference, Windows 7 is 40 million lines of code, and a Ford F-150 pickup truck has 150 million lines of code in it.
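To make the exploit mechanics above concrete, here is a toy Python model of a classic buffer overflow. The memory layout, function names, and values are all invented for illustration; real exploits operate on machine-code stack frames, not Python lists, but the failure mode is the same: an unchecked copy lets attacker-supplied input overwrite the data that decides where execution goes next.

```python
# Toy model: a fixed-size buffer sits in "memory" right next to a
# return-address slot, and an unchecked copy of input can overwrite it.

BUFFER_SIZE = 8

def run_with_input(data):
    # Simulated memory: 8 buffer slots followed by one return-address slot.
    memory = [0] * BUFFER_SIZE + ["legit_function"]

    # The bug: copy the input without checking that it fits in the buffer.
    for i, value in enumerate(data):
        memory[i] = value         # no bounds check!

    return memory[BUFFER_SIZE]    # where execution "returns" to

print(run_with_input([1, 2, 3]))                       # legit_function
print(run_with_input(list(range(8)) + ["evil_code"]))  # evil_code
```

With well-behaved input the program “returns” where the programmer intended; with nine values instead of eight, the attacker chooses the destination.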
A favorite tactic of the bad guys is to attack Windows, especially older versions that are notoriously insecure and, as is the case with the British National Health Service’s systems, so old that Microsoft no longer supports them or provides security patches. In fact, Windows XP, which Microsoft retired in 2014, still runs on 11% of desktops (including at Britain’s hospital system), making it the third-most-popular desktop OS.
Security hasn’t been a focus until very recently.
For many decades we have been on an unbelievable tear of silicon advancement, to the point that an iPhone 6 has more compute power than all of NASA’s computers combined at the time of the Apollo 11 moon landing. The first commercial microprocessor, the Intel 4004 of 1971, had 2,300 transistors; Oracle’s SPARC M7 of today has about 10 billion. The focus has been on getting smaller, faster, cheaper. Security, however, was not part of the equation until fairly recently. The Web did not go mainstream until about 1995, and the connectedness of billions of devices, which exposes every device we touch to every hacker around the globe, is less than a ten-year-old phenomenon.
When security did start to become an issue, firewalls and virus scanning were initially enough to do the trick. It was the infamous Stuxnet attack, discovered in 2010, that finally showed us it was possible to destroy physical equipment half a world away: it destroyed roughly 1,000 Iranian nuclear centrifuges.
Security at the software level is no longer enough.
For over ten years, the cyber-defense focus has been on adding layers of defensive software around our networks and the computers on them. We have firewalls designed to allow only certain types of traffic into a network. We have intrusion detection systems that look for the “signatures” of known bad programs (malware) and send alerts about them or try to eliminate them automatically. We have virus scanners that look for infections brought into a network as email attachments or on thumb drives.
We have lots of methods of defense, but each one just adds another layer of software, with bugs of its own.
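The signature matching described above can be sketched in a few lines of Python. The signature names and byte patterns here are made up; real scanners use large signature databases, hashes, and heuristics, but the core idea is the same: look for byte patterns already known to be bad.

```python
# Toy signature scanner: flag data containing a known malware byte
# pattern. Signatures and patterns are invented for illustration.

SIGNATURES = {
    "evil_worm": b"\xde\xad\xbe\xef",
    "crypto_locker": b"PAY_OR_LOSE",
}

def scan(data: bytes):
    """Return the names of any known signatures found in the data."""
    return [name for name, pattern in SIGNATURES.items() if pattern in data]

print(scan(b"hello world"))                         # []
print(scan(b"...PAY_OR_LOSE...\xde\xad\xbe\xef"))   # both signatures match
```

Note the built-in weakness: a scanner like this only catches patterns it has already seen, which is exactly why novel attacks slip straight through it.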
We cannot eliminate bugs from software: writing it is a human-driven process, and human perfection is not attainable. So why haven’t we stopped these attacks yet? Because we keep trying to protect our systems with more and more software, all of which has bugs. Attackers know this, and often set out to attack the defense software itself! Meanwhile, the hacking business is booming. Attackers see billions being made through cyberattacks (especially ransomware), so more and more of them are getting into the game. We are losing the battle.
We have to do something that recognizes that there will be bugs and there will be attacks, yet still protects our computers from being overtaken and subverted.
So, what do we do?
We must build in security at the processor level.
We know attackers are “having their way” by finding and exploiting a bug in a program and then tricking the processor into executing their injected instructions, or into jumping to a new location in the program to execute a function the programmer never intended. Since the processor has no way to distinguish the instructions the programmer intended from the ones the attacker injected, it cannot enforce even a simple security rule like “don’t execute instructions that came in from the Internet.”
This leads to a key insight: a computer processor cannot enforce rules it does not know about.
Today’s processors cannot enforce security rules. What is needed, then, is a way to give the processor more knowledge about what is going on and what the programmer intended. If we provide the right information so that the processor knows the rules, it can enforce those rules and thwart the bad guy.
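One way to picture this is a processor whose every word of memory carries a metadata tag, with a policy check run before each instruction executes. The sketch below is a toy Python simulation under invented tags and a single invented rule; real tagged-metadata designs are far richer, but the principle is the same: once provenance information exists, the rule “don’t execute instructions that came in from the Internet” becomes enforceable.

```python
# Toy sketch of metadata-aware execution: each memory word carries a
# tag describing where it came from, checked before execution.

class PolicyViolation(Exception):
    pass

memory = {}   # address -> (value, tag)

def store(addr, value, tag):
    memory[addr] = (value, tag)

def execute(addr):
    value, tag = memory[addr]
    # The rule an untagged processor could not enforce:
    if tag == "from_network":
        raise PolicyViolation(f"blocked execution of {value!r} at {hex(addr)}")
    return f"ran {value}"

store(0x100, "add r1, r2", tag="from_compiler")
store(0x200, "attacker shellcode", tag="from_network")

print(execute(0x100))      # runs normally
try:
    execute(0x200)
except PolicyViolation as e:
    print(e)               # blocked before any damage is done
```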
So everyone needs new processors?
Well, no. Changing mass-market processors is a non-starter. Can you imagine the herculean effort it would take to replace all the world’s processors? Impossible.
At Dover Microsystems, we’ve invented technology that addresses this exact problem.
Here’s how the technology actually works.
We’ve created a co-processor, which we call CoreGuard™, that shadows the main workhorse processor and keeps track of a wealth of extra information the main processor does not have. This additional information drives CoreGuard’s interlock mechanism, which stops an attack before it can ever take hold.
For example, CoreGuard knows when the main processor is about to corrupt memory it should not even have access to, and it knows when the destination the main processor is being told to jump to is not a legal one.
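The jump-target check can be illustrated with a crude form of control-flow integrity: the checker knows the set of legal function entry points in the compiled program and vetoes any jump that lands elsewhere. This is a toy Python sketch with invented addresses, not a description of CoreGuard's actual mechanism.

```python
# Toy jump-target check: only jumps to known entry points are allowed.

LEGAL_ENTRY_POINTS = {0x0400, 0x0480, 0x0520}   # from the compiled program

def check_jump(target):
    """Allow the jump only if it lands on a known entry point."""
    if target not in LEGAL_ENTRY_POINTS:
        raise RuntimeError(f"interlock: illegal jump to {hex(target)}")
    return target

print(check_jump(0x0480))   # a call the programmer intended: allowed
try:
    check_jump(0x0481)      # attacker-chosen address: vetoed
except RuntimeError as e:
    print(e)
```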
So instead of expecting the entire industry to uproot itself and switch every processor to something new, we let processor makers keep their existing designs and add this co-processor. It gives their processors knowledge of the programmer’s intent, plus an interlock (like the ones in a commercial airliner that prevent a stall or a rollover) that stops an instruction that is about to do something bad before it does the slightest bit of damage.
The tricks that the WannaCry ransomware plays on processors would fail, disks full of precious data would not get locked up, and the attackers would not make billions of dollars.
The future is bleak if we don’t change our fundamental approach.
WannaCry certainly shone a bright light on how vulnerable systems all around the globe are. And affecting 50 million people’s access to health care in the UK is no laughing matter. But ransomware is not what to be most worried about. What if attackers shut down our entire electric grid? What if they caused nuclear reactors to melt down? What if they siphoned off billions from our stock markets? You get the idea. Until the embedded-systems and computer industries fix the root cause of our vulnerability, a lack of security at the hardware level, by applying solutions like CoreGuard, we’re all susceptible to being held hostage by hackers. Or worse.
Be sure to subscribe to Dover’s blog so you never miss a beat!