We’ve been living with the fallout of Heartbleed for a few weeks now, and if anything there’s more to be concerned about.
Very, very brief
Earlier this month, the now infamous Heartbleed bug was discovered in OpenSSL. In an extremely abridged nutshell, the library’s TLS heartbeat implementation allowed malicious users to read private data from a server: by over-reporting the payload length in a heartbeat request, memory adjacent to that request could be leaked back to the attacker.
As with memory disclosure bugs on our desktop OSs, the implication is applications, and nefarious users, gaining access to memory they’re not supposed to see. In the case of servers, this could include anything from private keys, to session data, to login credentials for users, databases, accounts, the works.
It’s terrifying, not only because traffic to so many unpatched sites could be compromised, but because they could have been compromised for so long in the past, and retroactively so. With so many servers rattling along without carers, embedded systems, and others that will take time to patch, it’s feasible this attack could render some traffic insecure for the foreseeable future.
In his classic, no-nonsense style, OpenBSD and OpenSSH’s Theo de Raadt took issue with the OpenSSL team’s use of a wrapper around C’s malloc and free calls, which sidesteps any exploit mitigations the system allocator provides, now or in the future. The burden of maintaining secure memory was drawn away from the OS, which is rarely a good idea.
Regardless of whether the age old issue of performance and security played a part, the issue has raised several key questions.
Should we be writing such critical security code in languages that have arbitrary access to process memory at all?
Is the Torvaldian “given enough eyeballs, all bugs are shallow” meme dead and buried now?
Will this finally provide the impetus for widespread deployment of perfect forward secrecy, given its ephemeral session keys could help protect archived encrypted communications against attacks like this being applied retroactively?
For such critical software, should the overworked OpenSSL team be given more resources? Should we all be contributing to software that we all depend on?
Should organisations, such as the NSA, be obligated to share bugs of this nature that potentially affect everyone? Or more bluntly, should we be surprised that they exploited such a bug?
I’ve yet to develop my thoughts on these sufficiently, so they’ll have to be for future posts. Well, other than the NSA one, I think we all know the answer to that already.
If you see a sysadmin around, or know one, give them a hug and maybe some chocolates if that’s her/his thing. We’ll need all of it we can get.