Modern user-centric design has its roots in the security space: The worst commercial nuclear power plant accident on U.S. soil was a direct consequence of a poor control panel design.
In 1979, reactor number 2 at Three Mile Island Nuclear Generating Station in Pennsylvania partially melted down. Don Norman, now generally accepted as the father of UX design, was one of the experts brought in to investigate the accident. He and his colleagues concluded that the catastrophe could be attributed to the confusing array of buttons, lights and switches on the control panel. When a pressure relief valve became stuck open, large amounts of nuclear reactor coolant escaped; power plant operators did not close the valve, however, because an indicator light (one of about 1,200 lights on the panel) misleadingly seemed to indicate that the valve was shut.
The initial finding attributed the accident to “human error,” but Norman showed that the system that made sense to the engineers who created it was alarmingly complex to the power plant operators who used it. It’s easy to blame the user for “doing it wrong,” but shouldn’t we design products and experiences for humans to use correctly?
Today, 41 years later, the same pattern persists in cybersecurity, and the challenge is even greater. Hackers and their techniques are becoming both simpler and more sophisticated. For example, by leveraging the cloud, a simple bot (a small, automated program) can continuously scan or "crawl" a company's public infrastructure looking for vulnerabilities to exploit, like an open port, a public S3 bucket, or an unsecured API endpoint. And while a hacker only has to find one access point to be successful, software engineers have to account for all vulnerabilities. It's easier than ever to open up vulnerabilities while building a product, but it's a lot harder to have the discipline to close them. This keeps product leaders like me up at night.
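To make the bot concrete, here is a minimal sketch in Python of the kind of port scan described above. The host and port list are illustrative assumptions, not anything from a real attack tool, and you should only scan hosts you own or are authorized to test.

```python
# Minimal sketch of a scanning bot: attempt a TCP connection to a
# handful of common service ports on a host and report which ones
# accept connections. Host and port list are illustrative assumptions.
import socket

# SSH, HTTP, HTTPS, MySQL, Redis, Elasticsearch
COMMON_PORTS = [22, 80, 443, 3306, 6379, 9200]

def scan(host: str, ports: list, timeout: float = 0.5) -> list:
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            # connect_ex returns 0 when the connection succeeds
            if sock.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

if __name__ == "__main__":
    print(scan("127.0.0.1", COMMON_PORTS))
```

A real crawler simply runs a loop like this continuously against huge ranges of public IP addresses, which is why a single forgotten open port is enough for an attacker to find.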
Cybersecurity is typically framed as too technical for user experience. It is, after all, commonly designed by engineers who know what they are doing, for engineers who know what they are doing. But isn't that exactly the thinking that led to the accident at Three Mile Island?
When it comes to cybersecurity, you can have the best product on the market, but if it's not easy to use in the way the user is trying to use it, you are more likely to confuse and frustrate your target users and ultimately create a false sense of security. Here are three questions any company should consider when developing a software product:
Is your design consistent?
User stress is higher in a security context, and designers must present information so the human brain can interpret it easily. The "action" buttons, for example, should have the same color and position on each screen and each product. This is an underappreciated principle in cybersecurity. Consistency eases cognitive load and allows users to reallocate that load to figuring out what to do about the data they are seeing.
Most security cannot be fully automated. Instead, most security products are alert-focused and probabilistic, which means that a human must determine the probability that a breach has occurred or that an action needs to be taken. Humans have to be able to understand the data and move very quickly from input to judgment to action. At the end of the day, we must remember that human error or misunderstanding can undermine even the best software capabilities. Intuitive user design minimizes human error, and this starts with consistent design.
What do you know about your users?
This takes #1 to the next level. What are your users’ skill levels or knowledge of the problem you are trying to solve for them? What is the likely pretext for them to turn to your product in the moment? Are they coming with a lot of context or trying to figure out what happened? What actions are they likely to take from your product? If your product is used for security hygiene, are you including education so they understand why they are doing what they are doing?
This aspect of user experience design is called mental mapping and usually requires a technique called "contextual inquiry." These are fancy words for "you aren't your user" and you have to get out of the building. If you're good at cybersecurity, you have built up a stable of people who really get security. Your users are not likely to resemble the people inside your company. You need to get out and meet your users, see their environment and understand how they see your product in their workday and lives. Remember, failure lies with the product, not the human.
Where might your product fit into the bigger landscape?
This is close to #2, but instead of focusing on user psychology, this focuses on the security landscape of the customer. Most cyber-product development companies focus on one particular capability that sits within an ecosystem of capabilities. You won’t always be designing the dashboard; it’s more likely that you are responsible for a function within a broader security ecosystem.
It’s difficult, as a designer, to get a unified picture of what is happening within the larger system, but that is exactly what we need to be doing. Tools need to be able to function together. While successfully executing a complicated part of the whole is important, it’s also important to know how that part will function within the whole. Your customer cares about the big picture — about BEING secure. If they have nine different systems doing nine different things around security, it’s hard to argue they have achieved more security despite an increasing security spend.
My favorite example of a tool that takes a wide-open view of how it will be used is Google Maps. It has the website, an embeddable UI widget for your own site, and an expanding set of rich APIs that you can use in a variety of ways inside your application. Most cybersecurity products want to be the hub and everyone else to be the spokes. A more evolved strategy is to stay truly committed to the security of your clients and eager to play either position, hub or spoke. Perhaps through contextual inquiry, you will learn that your users consider your product to be a spoke, and a darn good one, but they really need it to play nicely with another infrastructure monitoring dashboard like NewRelic. Having an integration-first approach would then make the most sense, adding your own user interface minimally and slowly to serve other user bases as you expand your offering.
I’m often asked by companies, “How much should we spend on our security?” The answer is that it will always be “too much” when you don’t have a breach and “not enough” when you do. Security breaches today are a matter of when, not if. Preventing them, or recovering from them, requires a combination of smart and usable products.
This guest post is a part of Design Month of Technical.ly's editorial calendar.