Nicole explains how Minimum Viable Product is often misunderstood and how this can impact cybersecurity.
I’m not fond of the term Minimum Viable Product. It’s a buzz phrase in tech these days—right up there with thought leader, deep dive, and gamification—and I cringe whenever I hear it. Now don’t get me wrong: while I don’t like the term, I understand the philosophy behind it. Introducing a new product with a minimal feature set can be an effective way to gauge customer response, identify strengths and weaknesses, and make the next release that much better. Releasing a Minimum Viable Product, or MVP, is a particularly useful strategy for startups and early-stage companies that need to optimize limited resources and scale judiciously.
So I support the approach, but not always the way it is implemented. I think part of the problem stems from the term itself. I’m a strong believer in the power of words, and “minimum viable” does not, in my opinion, inspire teams to build high-quality products that attract and delight customers. While a Minimum Viable Product is not intended to be half-baked, those words literally translate to “least possible workable product.” The words set a tone, and often end up as an excuse for cutting corners throughout the development and testing cycle. User interface not as polished as we would like? It’s MVP; we’ll clean it up later. Accrued some technical debt in the server code? Don’t worry; we’ll add a task for that in the next sprint. “Later” never comes, and “next sprint” is jam-packed with new priorities.
I usually get on my soapbox to complain about how this MVP state of mind impacts the user experience. I mean, would you go to a restaurant if you knew the chef was asked to cook “minimally edible” food? Would you buy a couch from a company that strove for “minimally comfortable” furniture? Of course not. But UX is not what I want to talk about here. I want to talk about security, and how it is impacted by two unintended side effects of MVP: errors and omissions.
First: omissions. When Eric Ries coined the term Minimum Viable Product in his book The Lean Startup, he described it as “…a new product which allows a team to collect the maximum amount of validated learning about customers with the least effort.” With this in mind, teams want to deliver the most bang for the buck, and tend to focus first on user-facing functionality that will optimize the build-measure-learn feedback loop. Proper security is regularly omitted as something to “fix later” (and as mentioned before, that “later” may never come). Attackers recognize this pattern of neglect, and often go after smaller partners of larger organizations—seeing them as an easier gateway into bigger and more secure systems.
Next: errors. Industry studies suggest that delivered software averages on the order of 15 bugs per 1,000 lines of code. Even well-tested code has bugs, so what about the code we write when we know we are cutting corners? It will inevitably average a higher number of defects. And these bugs, no matter how small, are like leaving a window in your house cracked open. Cyber attackers find them and pry them open to break into a system. There is a special name for serious bugs unknown to the software maker or user: “zero-days.” The name refers to the fact that by the time the defect is discovered and exploited, the software maker has had zero days to fix it. In other words, the damage is already done.

Earlier this year, RAND Corporation published a comprehensive study on zero-day vulnerabilities. The researchers found that zero-days have an average life expectancy of seven years, and that the median time to create an exploit for a known vulnerability is only 22 days. Finding zero-days is big business. Cyber criminals, as well as intelligence agencies, pay handsomely to learn about zero-day bugs and to develop or purchase exploits. There is even an international black market for buying and selling zero-day vulnerabilities. It’s no wonder we hear about a new attack nearly every day, with no sign of the trend slowing down.
Security vulnerabilities and software bugs put us at risk for cyber attacks, and some software development philosophies, like MVP, unintentionally perpetuate the problem. Security software, like firewalls and antivirus applications, serves a purpose, but it also adds more layers of vulnerability. Fortunately, Dover Microsystems has a new approach that blocks cyber attacks in hardware. Dover CoreGuard is the only product that can stop zero-day attacks, and is designed to secure computing against all forms of attack that can come over a network. Read more about Dover’s solution for secure computing and providing cybersecurity in hardware.