I originally wrote this back in 2009. I was having a lot of discussions around maturity and the role of maturity in security programs. Interestingly, this is a topic that continues to resurface to this day. With that in mind, I figured I’d publish it again for posterity.
There are a lot of security standards and practices defined within the industry. Moreover, there are enough regulatory demands facing a broad range of companies and organizations to fill the ocean. Yet what is almost always missing, or at least rarely discussed, is the maturity of the security program itself. I think companies are missing out on something that could be of enormous value to the business and the security group.
Capability maturity models have a long history. One of the first to address systems engineering was CMU/SEI-95-MM-003, published in late 1995 by Carnegie Mellon University. It provided the foundation for later models and prompted the development of a security-specific model, the Systems Security Engineering CMM (SSE-CMM), published in 1999 and managed by the International Systems Security Engineering Association (ISSEA). In 2002, the SSE-CMM was adopted by the International Organization for Standardization (ISO) as ISO/IEC 21827, which was later updated in late 2008 as ISO/IEC 21827:2008.
In late 2002, as a result of the attacks of September 11, the United States formed the Department of Homeland Security. Part of its role was to serve as the federal center for cyber security and to act as a focal point for collaboration between local, state, federal, government, and non-government entities in the protection of national assets. This included establishing standards for how information security should be interpreted in the context of evaluation. To that end, the National Security Agency (NSA) established the INFOSEC Assurance Training and Rating Program (IATRP) to build capabilities in the assessment of security functions stretching across multiple areas and standards, and created the INFOSEC Assessment Capability Maturity Model (IA-CMM). The IA-CMM, which is based on the SSE-CMM, provides a maturity-based framework for assessing security and focuses on the ability to establish assurance in the management of processes.
The combination of the SSE-CMM and IA-CMM should be applied throughout a security management model to establish expectations for how the program, and the processes within each domain, are managed. In this context, the capability maturity model is focused on the consistent execution of the program. Although the IA-CMM defines nine practice areas and ISO/IEC 21827:2008 (aka the SSE-CMM) defines as many as 22, these can be clearly aligned to the domains of ISO 27001/27002. Moreover, because the general practices define the elements of maturity, practice areas could easily be made to reflect PCI, HIPAA, GLBA, EU privacy laws, FFIEC, and many others. Both capability maturity models define five levels of maturity, with the IA-CMM adding a level 0 to represent that nothing is being performed in a given practice area.
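To make the rating structure concrete, here is a minimal sketch of how per-practice-area ratings on the 0–5 scale might roll up into an overall program rating. The level names are the SSE-CMM capability levels; the example practice-area names are drawn from the SSE-CMM, but the min() roll-up rule is an illustrative assumption of mine, not the official IA-CMM rating algorithm.

```python
# Illustrative only: the weakest-link (min) roll-up is an assumption,
# not the official IA-CMM rating method.
LEVELS = {
    0: "Not Performed",
    1: "Performed Informally",
    2: "Planned and Tracked",
    3: "Well Defined",
    4: "Quantitatively Controlled",
    5: "Continuously Improving",
}

def overall_maturity(ratings: dict) -> int:
    """A program is treated as only as mature as its weakest practice area."""
    for area, level in ratings.items():
        if level not in LEVELS:
            raise ValueError(f"{area}: level must be 0-5, got {level}")
    return min(ratings.values())

# Example practice areas (names taken from the SSE-CMM) with hypothetical ratings.
ratings = {
    "Assess Threat": 3,
    "Assess Vulnerability": 4,
    "Provide Security Input": 2,
}
level = overall_maturity(ratings)
print(f"Overall maturity: level {level} ({LEVELS[level]})")
# → Overall maturity: level 2 (Planned and Tracked)
```

The weakest-link rule captures the point made throughout this piece: one poorly performed practice area undermines confidence in the program as a whole.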
Interestingly, with all the focus on BS 7799 in the late 90’s, Part 2 defining certification requirements shortly thereafter, and its migration to ISO as 17799 and ultimately the ISO 27000 series, you actually hear very little about the evolution of ISO 21827. I find this astonishing because maturity reflects confidence, and confidence is founded on the ability to ensure meaningful repeatability. Once you have confidence in a process, you “trust” it. If we connect these attributes together, there is obvious value to security groups and the organizations they support. What use is defining an Information Security Management Framework or System (ISMF/S) based on ISO 27002 if it isn’t used effectively? And why is certification based only on the existence of said program, and not equally on its execution?
The higher the capability maturity level, the greater the confidence that a process is well established throughout the organization and the more likely the processes are applied consistently. Confidence and consistency are the attributes of maturity that make it essential as the underlying framework of security management. Without an overarching method that peers into the security program to ensure all the parts are meshing and operating at peak performance (which is a relative term), it is exceedingly likely that the company will 1) not get as much from the program as desired, 2) not have a program resilient to reductions in workforce, budget, scope, and the like, and 3) not adapt easily to changes in business demands, risk, and compliance.
Confidence in the security program is critical to the business. Given the deep interrelations with the business concerning operations and the application of security in a complex framework, the potential for problems is substantial. This potential is founded on a common theme: people are prone to error. Moreover, the potential for human error is infinite if people are not trained and educated on the processes. Therefore, a significant part of ensuring meaningful capability maturity is institutional knowledge and intimacy with the security program. For example, there can be little confidence in the consistency of the security program if someone does not know of the existence of a tool, procedure, or process within it. In-depth knowledge of the program elements is paramount to the success of the program and its ability to achieve a meaningful level of maturity. In short, what use is a process or tool if people don’t know it exists, when to employ it, or how? You may have the best defined and documented program, but without people understanding it there is little hope of being consistent and effectual.
Capability maturity is a shared responsibility across all the domains (groups) of security and is a result of collaboration. As this may imply, the assessment and management of capability maturity is outside of these domains. It could be argued that compliance management or governance can act as the lead on assessing and managing capability maturity within the overall program. However, there is tangible value in separating these responsibilities to ensure clarity in results and to act as the ultimate indication of program stability.
A maturity management program needs to exist to ensure each of the security program’s elements performs consistently and meets the mission and charter of the program. Capability maturity assessment and management bonds the program together and offers visibility into the overall “trustworthiness” and performance of the program itself. Without this form of oversight, there can be little confidence in the program on the part of the business, much less within the various domains. Security management is broad and deep, requiring diverse resources, and it can become complex. These two attributes can conspire against the overall success of the program if not managed.
This is a very simple concept. Today, security groups define standards, practices, policies, and procedures and perform against a set of expectations. These expectations come in many forms, such as playbooks, performance measurements, and other management-based programs for monitoring the execution of security activities. But very rarely do you see a specific method for determining the “competence” of people, processes, and technology being measured and managed.
Several years ago I led a team of exceedingly smart people on a project to become the NSA’s highest-rated organization (formerly INS, now BT). As introduced earlier, the NSA’s IATRP is a comprehensive collection of materials: the INFOSEC Assessment Methodology (IAM) and INFOSEC Evaluation Methodology (IEM), which define specific processes for the assessment of risk and vulnerabilities; the IA-CMM, which defines the practice areas (analogous to a standard) and the general practices that define what is required for a given level of maturity, reflecting the SSE-CMM; and the assessment rating process, essentially how maturity will be measured. It was a fantastic process that resulted in a level 4 (of a possible 5) rating for the company, demonstrating the maturity of our ability to perform security assessments (i.e., IAM and IEM, or in our vernacular, “security assessments” and “Ethical Hacking”).
The comprehensiveness of our NSA program was, and still is, quite impressive: vast amounts of documentation; tools, procedures, and management infrastructure; practice management; training and education with our own internal certification and validation processes; measurement and management against performance goals; quality control both internal and external; and a consistent collection of people, processes, and tools that allowed us to monitor internal and external forces for the evolution of the program and its quality. It was a huge undertaking. However, the results and value from the project greatly exceeded the time and investment made by the company, well ahead of forecast. In short, the rating demonstrated to those around us – customers, partners, general management, and executive management – that there was no doubt we knew what we were doing, and it made certain that all security activities were in alignment with mission, charter, and quality expectations. This eventually led to more success and investment. Why? Simply because the executive community knew there would be value and returns: they “trusted” the security program; they had confidence in our execution.
That, my friends, is the Achilles’ heel of today’s security programs in the eyes of the business: a lack of trust and confidence in the ability to execute in an efficient and effective manner that resonates with the business.