There doesn't seem to be a decisive answer in principle. The experts all have opinions, precedents and theories.
- Everyone understands that open source gives friends and foes alike equal opportunity to examine the code. Some argue that a determined attacker will reverse engineer closed code anyway, but why make the attacker's job easier?
- Sure, having the source means you can look for trap doors and detect them more easily. On the other hand, when release 1.1 of an open source project ships with a lot of publicity that it is an IMPORTANT SECURITY FIX, everyone still running version 1 (for whatever reason) has crosshairs on them.
- It seems pretty clear that public cryptographic algorithms get a lot of scrutiny and benefit from it, but they are extremely high value targets, and experience suggests that a lot of open source and proprietary code alike goes unexamined because auditing other people's code is, well, a boring job.
- The processor that executes the code doesn't know whether it's open or not; what matters is the quality of the code, not its provenance.
I couldn't find any controlled experiments out there, but a few observations stand out:
- Both open and proprietary code have plenty of security bugs.
- It's more fun to add features than to run security audits.
- Open source may well have greater longevity than proprietary code. Of course, since closed source is usually developed with a commercial motive, end-of-life announcements are often a way to open new revenue streams rather than to purge old code.
I use Firefox because it's supposed to be more secure than Internet Explorer.
While Jon's article provoked a lot of backlash, there's little glory in BIND's security record compared with other DNS code, particularly Nominum's. Of course, BIND tries to do more and has been a target for a decade or more longer than anything else.
So I think it's silly to claim there's a general rule here; I suspect that in some cases the advantage goes to open source and in others to proprietary code.