Answering “yeah, but is the solution secure?”
Secure from what? From whom? Where? And for how long?
Moving from dev and ops to solution architecture has been an eye-opening experience. The first thing you notice is that prospective clients rarely know what they want, and those who do may be confused, have conflicting requirements, or be acting under dangerous misconceptions. I’m sure everyone from business analysts to support engineers knows exactly what I’m talking about.
The challenge of being the interface between sales and engineering is speaking to both groups. The former are motivated by KPIs and balance sheets to say “yes!” to everything; the latter need to build something to a spec. But a salesperson who commits to something infeasible is as useful as an engineer who implements an unworkable solution from bad data.
Security is a perfect example of this struggle in practice. Nobody wants insecure systems, save for pen testers and bounty hunters! Yet ask a businessperson to quantify what they mean when they say a system “has to be secure”, and most can’t. You may get some vague references to encryption, firewalls, VPNs, keys, securing data in flight and at rest, and maybe a tender for flavour, but nothing about how it all fits together, or what problem each component is meant to solve, individually and in aggregate.
It’s why I’m troubled by those ubiquitous YouTube VPN ads. Their sweeping security claims don’t pass muster, and give viewers completely the wrong idea. It’s one thing to say they bypass georestrictions (at least for now), but the claims about protecting you from viruses, tracking, and fraud are dangerous nonsense. But I digress!
It’s an architect’s job (and that of others in pre-sales engineering) to walk people through their threat domain: the stakeholders, what they’re trying to protect, from whom, and within what financial, time, business, and legal constraints. Only then can you truthfully propose a solution that addresses their concerns.
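To make that concrete, here’s a rough sketch of the questions such a walkthrough has to answer, expressed as a data structure. It’s illustrative only; the field names and example values are hypothetical, not any real framework or client.

```python
from dataclasses import dataclass, field

@dataclass
class ThreatModel:
    """The questions a security conversation has to answer
    before anyone can honestly propose a solution."""
    assets: list[str]       # what are we protecting?
    adversaries: list[str]  # from whom?
    locations: list[str]    # where does the data live and travel?
    retention: str          # for how long must it stay protected?
    # financial, time, business, and legal constraints
    constraints: dict[str, str] = field(default_factory=dict)

# A made-up engagement, filled in during a discovery session:
model = ThreatModel(
    assets=["customer PII", "payment records"],
    adversaries=["opportunistic attackers", "disgruntled insiders"],
    locations=["on-prem database", "in flight to a SaaS analytics tool"],
    retention="7 years (statutory)",
    constraints={"budget": "$50k/yr", "legal": "local data residency"},
)

print(model)
```

The code itself is beside the point; what matters is that every field has to be filled in before “secure” means anything.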
It’s jarring when you realise security in the real world isn’t binary, both for salespeople used to thinking of it as a checkbox alongside “speed” and “easy to use”, and for engineers and technical writers like me who are used to dealing in mathematical certainty. And there are serious consequences for getting this wrong:
- It might not address their security concerns, or not in the way they expected or required. Worse, they may erroneously think it protects them from things it doesn’t.
- It might be too expensive or complicated to deploy or maintain in the first place.
- It might be technically sound, but so impractical that people either don’t use it, or route around it to do their jobs.
That last one is a regular blind spot in technical forums, Q&A sites, and the orange peanut gallery. Show me a bulletproof system with perfect security, integrity checks, and high availability, and I’ll point you to someone uploading confidential documents to Dropbox instead. It’s expedient to blame this on PEBKAC, but in reality it’s evidence of a problem much earlier in the process.
There are security best practices that anyone working in the industry would be negligent not to follow. It’s also incumbent upon people in architecture to make the case for secure systems, and to ensure security is prioritised appropriately. But this is why it’s critical to reframe questions about security up front, or the entire exercise is moot.