This is one of the best ideas in computer security that I’ve heard in ages. It’s true that continuously patching your programs gives you the latest technology, but running older versions gives you better stability. Patching is an expensive and painful process. It’s also one of the reasons why computers are slipping out of programmers’ control. We don’t know our own systems anymore, at least not the way our predecessors did. What we really need is not what the market claims to be the latest and, I strongly doubt, greatest software available. Patches are just proof of imperfection, which, thinking about it, is brought on by the patch-and-release mindset in the first place. If we knew our systems very well, we would be able to write better code.
There is a reason why MJR is one of the top specialists in this field: he knows that when developing software you just have to “write it and forget it”.
Test-Driven Development?
No, I don’t think that’s such a good idea. Not when it comes to security, at least.
That’s best explained by this BASIC code from MJR:
10 GOSUB LOOK_FOR_HOLES
20 IF HOLE_FOUND = FALSE THEN GOTO 50
30 GOSUB FIX_HOLE
40 GOTO 10
50 GOSUB CONGRATULATE_SELF
60 GOSUB GET_HACKED_EVENTUALLY_ANYWAY
70 GOTO 10
The code/system/implementation is not better by design but merely toughened by trial and error. This is doomed to failure. I mean, if “penetrate and patch” were so effective, we would have run out of security bugs years ago. 😀
Test-driven development is always a good idea, but its effectiveness depends on the kind of tests you write. If you test for a system’s security and integrity, then chances are your final output will reach a higher level of security.
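As a minimal sketch of what “testing for security” might look like in practice (the `sanitize_filename` function and its validation rules are hypothetical, invented purely for illustration), the idea is to write tests from the attacker’s point of view, not just the happy path:

```python
import re
import unittest

def sanitize_filename(name):
    # Hypothetical validator: reject path traversal and anything
    # outside a conservative whitelist of characters.
    if ".." in name or name.startswith("/"):
        raise ValueError("path traversal attempt")
    if not re.fullmatch(r"[A-Za-z0-9._-]+", name):
        raise ValueError("illegal characters in filename")
    return name

class SecurityTests(unittest.TestCase):
    # Each test asserts that a hostile input is refused, not that
    # a friendly input is accepted.
    def test_rejects_traversal(self):
        with self.assertRaises(ValueError):
            sanitize_filename("../../etc/passwd")

    def test_rejects_absolute_path(self):
        with self.assertRaises(ValueError):
            sanitize_filename("/etc/shadow")

    def test_accepts_plain_name(self):
        self.assertEqual(sanitize_filename("report.txt"), "report.txt")
```

The point is that the test suite encodes the assumptions you make about hostile input, so a later change that weakens the validator fails the build instead of shipping.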
The thing is, a fully secure system is impossible. There will always be flaws, since systems are created by human beings, and last I checked, human beings are not flawless.
Ideally, you would replace a defective system entirely. Patches make a system harder to manage, but they are often the most cost-effective option. Imagine having to release a full new Windows XP just to get one or two bugs fixed. Not effective.
As long as software engineers remain imperfect, your systems will continue to be imperfect as well.
I agree that systems will never be perfect, but if we design a system that is secure by design and built with flaw-handling in mind, then the problems should be minimal.
Most patching in the software industry just adds new code to mask the flaw instead of removing the flaw itself.
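Here is a hypothetical illustration of that difference (the table, the query, and the attack string are all invented for the example): the “mask” blacklists one known exploit while keeping the flawed code, whereas the real fix, a parameterized query, removes the whole class of flaw.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

def lookup_masked(name):
    # The "patch" approach: blacklist one known exploit string but keep
    # building SQL from raw input -- every variant injection still works.
    if "' OR '1'='1" in name:
        raise ValueError("known attack blocked")
    return conn.execute(
        "SELECT name FROM users WHERE name = '%s'" % name).fetchall()

def lookup_fixed(name):
    # The root-cause fix: a parameterized query, so no input can change
    # the structure of the SQL statement.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)).fetchall()
```

With the fixed version, even a malicious string like `alice' OR '1'='1` is treated as plain data and simply matches no row; there is nothing left for the next advisory to patch.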
There are a handful, like Postfix, qmail, etc., that were engineered to be compartmented against themselves, with modularized permissions and processing, and, not surprisingly, they have histories of amazingly few bugs. What I’m saying is, we have to be very vigilant when developing software. We don’t need to know about every single buffer overrun that is being exploited, but we have to always assume they’re there. I’ve even encountered a taunt in some source code that went something like:
/* Trying to outsmart me? Yes, I thought you might do that. -bob */
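The compartmented design mentioned above can be sketched capability-style (this is my own illustration, not actual Postfix or qmail code): the trusted component holds the permission to open files, and the untrusted parser receives only an already-open handle, so a compromise there is limited to that one resource.

```python
import tempfile

def privileged_open(path):
    # The only component allowed to touch the filesystem; everything
    # else receives an already-open handle and nothing more.
    return open(path, "r")

def untrusted_parse(handle):
    # Has no way to name or open files on its own -- if this code is
    # compromised, the damage is confined to the handle it was given.
    return [line.strip() for line in handle if line.strip()]

# Wiring: the trusted side opens, the untrusted side only parses.
with tempfile.NamedTemporaryFile("w+", delete=False) as tmp:
    tmp.write("alpha\nbeta\n")
    path = tmp.name

with privileged_open(path) as handle:
    records = untrusted_parse(handle)
```

The same idea scales up to separate processes running under different UIDs, which is how the modularized-permissions design works in real mail servers.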