von Neumann's Impact On Modern Computing, Cybersecurity

A 70-Year-Old Computer Architecture May Be the Root of Our Modern Cybersecurity Problems

The von Neumann computer architecture, also known as a stored-program computer, was first described by mathematician John von Neumann in 1945.

The design describes a simple computer architecture with just a handful of distinct components: a processing unit, a memory unit to house both data and program instructions, an input device, and an output device. More than 70 years later, this simple architecture is still the basis for the majority of today’s computer systems.

A fundamental power of the von Neumann design is the concept of the stored program: a sequence of machine instructions strung together to create the program that will be executed. The ability to store either program instructions or data in the memory unit allows arbitrary programs to be loaded into memory and executed, a feature that architectures prior to von Neumann (known as fixed-program computers) did not offer. Initially, this ability to manipulate instructions and data interchangeably enabled the creation of tools like linkers, assemblers, and compilers. It also enabled the existence of self-modifying code. These valuable capabilities were leveraged by the nascent computer industry and were crucial to its future growth, but they also brought with them the insecurity of being able to execute data as if it were code.
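To make the stored-program idea concrete, here is a minimal sketch of a von Neumann-style machine in Python. This is a toy, not any real instruction set: the opcodes and memory layout are invented for illustration. One array serves as both program and data memory, and an instruction stored as data can be copied into the instruction stream, which is exactly the self-modifying code the architecture makes possible.

```python
def run(memory, pc=0):
    """A fetch-decode-execute loop over a single shared memory."""
    while True:
        instr = memory[pc]
        if instr[0] == "HALT":
            return memory
        op, dst, src = instr
        if op == "ADD":            # memory[dst] += memory[src]
            memory[dst] = memory[dst] + memory[src]
        elif op == "COPY":         # memory[dst] = memory[src]
            memory[dst] = memory[src]
        pc += 1

# Program and data share one memory. Cell 4 holds an instruction
# *as data*; the COPY at cell 1 writes it over cell 2, so the
# program rewrites itself before execution reaches that cell.
mem = [
    ("ADD", 5, 6),    # 0: mem[5] += mem[6]
    ("COPY", 2, 4),   # 1: overwrite the next instruction with cell 4
    ("HALT",),        # 2: replaced at run time by ("ADD", 5, 6)
    ("HALT",),        # 3: actual end of the program
    ("ADD", 5, 6),    # 4: an instruction sitting in memory as data
    10,               # 5: data
    32,               # 6: data
]
run(mem)
print(mem[5])  # 74: the ADD ran twice, once via self-modification
```

The same uniformity that makes this trick possible is what lets a compiler emit bytes into memory and then jump to them, and it is also what an attacker relies on when data is coaxed into being executed.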

Von Neumann machines are great at doing precisely what they’re told, and nothing more

They have no intuition and make no assumptions, and they have no idea what the programmer intended when the program was written. Programmers are human, and they make mistakes. The computer will execute the program exactly as written, so through either user input or a programming error, it can be tricked into doing something the programmer never intended.

There are many known bugs in shipping software. These vulnerabilities are typically programmer errors that can be exploited in a way that allows malicious actors to take control of the computer for their own purposes. Attackers craft targeted exploits to take advantage of vulnerabilities, with the aim of getting the computer to behave in some new, unintended manner. The ultimate goal is either to run a new program of the attacker's choosing or to steal or modify data on the computer system.
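The classic example of such an exploit is a stack buffer overflow. The sketch below is a toy model, not any real machine's memory layout: a fixed-size buffer sits next to a saved return address in the same flat memory, and a copy routine that never checks bounds lets attacker-supplied data overwrite the value that decides what code runs next.

```python
# A toy model of one stack frame laid out in flat memory.
STACK = [0] * 8
BUF_START, BUF_LEN = 0, 4   # a 4-cell input buffer
RET_ADDR_SLOT = 4           # saved return address, right after the buffer

def vulnerable_copy(user_input):
    # BUG: copies every cell of user_input into the 4-cell buffer
    # without ever comparing len(user_input) against BUF_LEN.
    for i, value in enumerate(user_input):
        STACK[BUF_START + i] = value

STACK[RET_ADDR_SLOT] = 0x4000      # where the function should return
vulnerable_copy([0x41] * 5)        # one cell too many
print(hex(STACK[RET_ADDR_SLOT]))   # 0x41: control flow is now attacker-chosen
```

The machine did exactly what it was told at every step; it simply has no notion that cell 4 was "supposed" to stay a return address. That gap between the programmer's intent and the hardware's obedience is the exploit's whole foothold.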

The software landscape in the computer industry is moving in one direction only — towards a greater number of ever-larger programs

Millions of lines of new code are written every day, and the size of software products released to the public keeps growing rapidly. The software running on just one modern automobile, for example, can approach 150 million lines of code, and shipping software typically exhibits on the order of 15 bugs for every 1,000 lines. That's a lot of software, and a lot of bugs: bugs that hackers can exploit in order to take over computer systems.
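The back-of-the-envelope arithmetic behind that claim, using the figures quoted above, looks like this:

```python
lines_of_code = 150_000_000  # software in one modern automobile
bugs_per_kloc = 15           # rough defect density for shipping software

expected_bugs = lines_of_code // 1000 * bugs_per_kloc
print(f"{expected_bugs:,}")  # 2,250,000 latent bugs in a single car
```

Even if only a small fraction of those defects are exploitable, the attack surface of one vehicle is measured in the thousands of candidate vulnerabilities.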

Fundamentally, the only way to prevent cyber attacks and protect our systems from all of this exploitable code running on our easy-to-subvert von Neumann machines is to modify the basic microarchitecture at its lowest levels. We need to use the hardware to ensure that the software does only what the programmer intended. We'll talk much more about this in future posts; for now, read more about Dover's solutions and be sure to check back regularly for more content.

Craving further insights into processor design and security? Subscribe to Dover's blog!


Published August 30, 2021
