Sometimes we are asked about the failures of rating agencies and whether a rating system could be a good approach for security evaluations. We posted about this some time ago, but we think it is worth commenting on the article titled "How Certification Systems Fail: Lessons from the Ware Report" (pdf), in which Steven J. Murdoch, Mike Bond, and Ross Anderson give a fantastic overview of the reasons certification systems fail.
This article, based on the report "Security Controls for Computer Systems" (pdf) (commonly known as the Ware Report, after the chair of the task force, Willis H. Ware), summarizes the findings of that 1970 (!!!!) report that explain the failures of certification systems.
Basically, there are three main reasons:
- Conflict of interest - Testing laboratories are selected and paid by the vendor of the product, which creates clear pressure to lower evaluation standards.
- Certification performed on only part of the system - Since complying with standards is onerous, economic pressures exist that lead to certifying what is most expedient, not the components for which assurance is most needed.
- Design certification - Normally there are no installation certifications or recertifications, and without them "it is difficult to say the system does actually fulfill the security properties in the real world".
So we agree with the authors of this article that "we should not expect certification to be a silver bullet" and that it should be used together with other security assessment systems, in this case, rating.
You can follow us on twitter.com/leet_security.