Introduction:
Welcome back, friends, to the ongoing series titled “Concepts of CISSP.” Today, we’re diving into Domain 3, which focuses on Security Architecture and Engineering. Before we explore this domain, let’s recap the foundational concepts covered in Domains 1 and 2.
Recap of Domain 1 and 2:
In Domain 1, we laid the groundwork by discussing the principles of information security, including confidentiality, integrity, availability, non-repudiation, and authenticity. These principles are fundamental in shaping a security framework, which organizations use to design effective security policies. We also examined various governance strategies to ensure that security policies align with organizational goals.
Moving on to Domain 2, we delved into asset security, focusing on the lifecycle of data within an organization. We explored the security controls necessary to maintain the desired level of confidentiality, integrity, and availability (CIA).
Security Architecture and Engineering:
Domain 3 takes us deeper into the realm of security by exploring the architecture and engineering aspects. These concepts might seem straightforward, but within the context of CISSP, they carry significant weight.
What is Security Architecture?
Security architecture is essentially the design and organization of components, processes, and services that form the backbone of a secure system. Think of it as creating a high-level blueprint or structural organization that outlines how security measures are integrated into a system.
What is Security Engineering?
While architecture involves the design phase, engineering is about implementation. It’s the process of putting the architectural blueprint into action using standard methodologies to achieve the desired security outcomes.
Key Principles in Security Architecture and Engineering:
Understanding the principles of security architecture and engineering is crucial. Much like the principles of information security, these principles guide the design and implementation of secure systems.
Architectural Principles
Two major bodies of knowledge provide the foundation for security architecture principles (a short code sketch after the two lists illustrates a few of them):
- Saltzer and Schroeder’s Principles:
- Economy of Mechanism: Simplify design to reduce the likelihood of errors.
- Fail-Safe Defaults: Default settings should deny access unless explicitly granted.
- Complete Mediation: Ensure every access to every resource is checked.
- Open Design: The security of a system should not depend on secrecy of design.
- Separation of Privilege: Multiple conditions should be required for access.
- Least Privilege: Grant the minimal level of access necessary for tasks.
- Least Common Mechanism: Minimize the sharing of mechanisms between users.
- Psychological Acceptability: Security mechanisms should be easy to use so that users apply them correctly rather than work around them.
- ISO/IEC 19249:2017 Principles:
- Domain Separation: Separate different areas of functionality.
- Layering: Structure the system in layers to mitigate threats.
- Encapsulation: Hide a component's internal workings and allow access only through well-defined interfaces.
- Redundancy: Implement backup components to ensure reliability.
- Virtualization: Create virtual versions of physical resources so that workloads can be isolated from one another.
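As promised above, here is a minimal sketch illustrating a few of these principles in Python. The roles and permission strings are invented for illustration; the point is the shape of the check, not the specific policy.

```python
# Least privilege: each role is granted only the permissions it needs.
ROLE_PERMISSIONS = {
    "auditor":  {"logs:read"},
    "operator": {"logs:read", "service:restart"},
}

def is_allowed(role: str, permission: str) -> bool:
    """Complete mediation: every request is checked through this one function.
    Fail-safe defaults: unknown roles or permissions are denied."""
    return permission in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("auditor", "logs:read"))        # True
print(is_allowed("auditor", "service:restart"))  # False: least privilege
print(is_allowed("guest", "logs:read"))          # False: fail-safe default
```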
Trusted Systems and Reference Monitors
A trusted system is a computer system that can enforce a specified security policy to a defined extent. This system includes a crucial component called a Reference Monitor—a logical part of the system responsible for making access control decisions.
For a system to be trusted, its reference monitor must meet three criteria:
- Tamper-Proof: It must resist unauthorized alteration.
- Always Invoked: It must mediate every access attempt and cannot be bypassed.
- Testable: It must be small enough to be analyzed and verified independently.
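A reference monitor that satisfies these criteria can be sketched, very roughly, as a single object through which every access request must pass. This is a conceptual illustration with invented subjects and objects, not how a real trusted system is implemented.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)  # frozen hints at "tamper-proof" at the object level
class ReferenceMonitor:
    # Policy maps (subject, object) pairs to the set of permitted actions.
    policy: dict = field(default_factory=dict)

    def check(self, subject: str, obj: str, action: str) -> bool:
        # Always invoked: callers must route every access request through here.
        # Anything not explicitly granted is denied.
        return action in self.policy.get((subject, obj), set())

rm = ReferenceMonitor(policy={("alice", "payroll.db"): {"read"}})
print(rm.check("alice", "payroll.db", "read"))   # True
print(rm.check("alice", "payroll.db", "write"))  # False: denied by default
```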
Conclusion:
In Domain 3, we focus on dissecting and understanding security architectures rather than creating them from scratch. This approach allows CISSP professionals to evaluate and enhance existing systems, ensuring they meet the highest security standards. By understanding the principles of security architecture and engineering, you can design and implement robust security measures that align with organizational goals.
References:
- Saltzer, Jerome H., and Michael D. Schroeder. “The Protection of Information in Computer Systems.” Proceedings of the IEEE, vol. 63, no. 9, 1975, pp. 1278-1308.
- ISO/IEC 19249:2017. Information technology – Security techniques – Catalogue of architectural and design principles for secure products, systems and applications. International Organization for Standardization, 2017.
- U.S. Department of Defense. Trusted Computer System Evaluation Criteria (Orange Book). CSC-STD-001-83, DoD Computer Security Center, 1983.
This foundational knowledge will prepare you for the upcoming discussions on the principles of security engineering and how to apply them effectively in real-world scenarios. Stay tuned for more in-depth exploration!
Detailed Video discussion:
Hello friends, welcome back. Welcome to this series, which I have named Concepts of CISSP. This is Domain 3, and in Domain 3 we will be dealing with security architecture and engineering. Architecture and engineering sound interesting, but before we dive into Domain 3, I will give you a very quick, high-level recap of Domain 1 and Domain 2.
What we studied in Domain 1 was the foundation that the rest of the domains build on. We discussed the principles of information security, how these principles take shape in a security framework, and how the framework can be used to design the security policy of a specific company or organization. With that in mind, we then looked into different governance strategies and how these security policies can be put into action to achieve organizational business goals. That was the crux of Domain 1.
There are different security principles like confidentiality, integrity, availability, non-repudiation, and authenticity—these are what we studied in Domain 1. In Domain 2, we looked into asset security. In asset security, we specifically examined the lifecycle of data or information, how it flows in an organization, and the different security controls we put in place to ensure that we achieve the organization’s desired CIA levels.
Now, in Domain 3, we are going to study the different architectures, frameworks, and security models we use to achieve an organization's desired security outcomes. We'll be dealing with two key terms here: architecture and engineering. We all have a rough idea of what architecture and engineering are, but from the CISSP perspective, security architecture is basically the design and organization of components, processes, and services. We arrange these into some sort of structural organization, a high-level block diagram, and that gives rise to the security architecture. So, when we talk about security architecture, we will be talking about components, processes, and services.
What is engineering? Engineering is basically the implementation part of security architecture. Implementation does not happen in the architecture phase; it is the next phase of the overall security solution design. So first we design, making a blueprint, which is the architecture: the design and organization of components, processes, and services. Then we implement that blueprint using some standard methodology, which is the engineering methodology. This is what we are going to do in the coming discussions in Domain 3, and there are more interesting things to come: we'll be discussing the principles of engineering and architecture.
As we've seen with the principles of information security and how they give rise to a security framework or policy, we likewise have to look into the different principles of security architecture and engineering and how they give rise to a secure system. The terms architecture and engineering might give the impression that we are going to design some product, but when it comes to CISSP, and the CISSP exam specifically, we are not designing a security product. Our approach is somewhat the reverse: we dissect a product or service to see how its security is engineered and implemented.
We should not go in with the idea that we are going to design a secure product. Designing a secure product also requires knowledge that is part of the CISSP curriculum, but in the world where CISSP professionals operate, the majority of the work across the domains is implementation. When we talk about architecture here, we are not architecting a semiconductor chip or a computer. That too requires a foundational understanding of how to architect or implement something securely, but here we are using those building blocks, those components, to achieve an organization's security objectives.
Our understanding of architecture and implementation is like the way we architect a cloud service in Azure or AWS: we take different services, assemble them in a Lego-like manner on Visio or a drawing board, and then see what security objectives we can achieve. This is how we will approach it: we'll discuss the principles, then how these principles are modeled using industry models, and then how they are implemented.
If we go to my drawing board now: I have explained that security architecture is basically the design and organization of components, processes, and services. That is a definition you should keep in mind. Engineering is the implementation of that design and organization. Any creation we conceive and produce is a two-step process: first we think it through and make some sort of blueprint, which is the architecture, and then we implement it. There's a famous saying, "measure twice, cut once." A great deal of attention has to be given to the architecture phase of the process, and then we implement it. If we have given enough consideration to security while architecting a service, our implementation will be easy, with little rework. But if the architecture is rushed to achieve business objectives and security is sidelined, there will be many problems.
The process of security architecture in an organization or company follows three steps: first we perform a risk assessment, then we agree on the identified risks, and then we address those risks through secure design. We apply the standard risk treatment options: accepting the risk, avoiding it, mitigating it, or transferring it. All of these are addressed within the secure design, which determines how we actually deal with the identified risks of a system or organization.
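To make the three-step flow concrete, here is a minimal sketch in Python. The risks, ratings, and treatment choices are invented for illustration; they are not prescribed by the CBK.

```python
from enum import Enum

class Treatment(Enum):
    ACCEPT = "accept"
    AVOID = "avoid"
    MITIGATE = "mitigate"
    TRANSFER = "transfer"

# Steps 1-2: a made-up register of identified and agreed risks with ratings.
risk_register = {
    "unpatched web server": "high",
    "laptop theft": "medium",
}

# Step 3: the secure design records a treatment decision for each risk.
decisions = {
    "unpatched web server": Treatment.MITIGATE,  # e.g. patching plus a WAF
    "laptop theft": Treatment.TRANSFER,          # e.g. cyber insurance
}

for risk, rating in risk_register.items():
    print(f"{risk} ({rating}) -> {decisions[risk].value}")
```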
Now, secure design principles, as I already explained, go hand in hand with what we studied in Domain 1, where information security principles take the form of a framework and give rise to a policy that is used to govern the organization. Similarly, we have design principles here. When we talk about design principles, there are two major bodies of knowledge that produce them, and we should be aware of both: one is Saltzer and Schroeder's principles, and the other is the set of design principles in ISO/IEC 19249:2017. We will look briefly into these principles and what they entail.
When it comes to Saltzer and Schroeder's principles, there are eight architectural principles plus two more borrowed from physical security. The eight architectural principles are: economy of mechanism, fail-safe defaults, complete mediation, open design, separation of privilege, least privilege, least common mechanism, and psychological acceptability. The two additional principles, work factor and compromise recording, come from traditional physical security.
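As a quick illustration of work factor, the sketch below estimates how much effort a brute-force attack on a password would take. The character set, length, and guessing rate are assumptions chosen only for the example.

```python
import math

charset = 26 + 26 + 10        # lower case + upper case + digits
length = 10                   # assumed password length
keyspace = charset ** length  # total candidate passwords

guesses_per_second = 1e9      # assumed attacker capability
avg_seconds = keyspace / (2 * guesses_per_second)  # expected (average) case

print(f"~{math.log2(keyspace):.1f} bits of work factor")
print(f"~{avg_seconds / (3600 * 24 * 365):.1f} years at 1e9 guesses per second")
```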
When it comes to ISO/IEC 19249 design principles, they differentiate between architectural principles and design principles. In architectural principles, they have five distinct principles: domain separation, layering, encapsulation, redundancy, and virtualization. For design principles, they have least privilege, attack surface minimization, centralized parameter validation, centralized general security services, and preparation for error and exception handling.
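Of these, centralized parameter validation lends itself to a short sketch. The snippet below is a hypothetical example, not taken from ISO/IEC 19249 itself: every caller funnels input through one validation routine instead of re-implementing its own checks.

```python
import re

USERNAME_RE = re.compile(r"^[a-z0-9_]{3,32}$")

def validate_username(value: str) -> str:
    """The single, central place where the username rule is defined."""
    if not USERNAME_RE.fullmatch(value):
        raise ValueError(f"invalid username: {value!r}")
    return value

def create_account(username: str) -> None:
    username = validate_username(username)  # every caller funnels through here
    print(f"account created for {username}")

def reset_password(username: str) -> None:
    username = validate_username(username)  # no duplicated, divergent checks
    print(f"password reset requested for {username}")

create_account("alice_01")
# reset_password("Robert'); DROP TABLE users;--")  # would raise ValueError
```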
I explained that there are two major bodies of knowledge: ISO/IEC 19249 and Saltzer and Schroeder’s principles. You can refer to the official CBK book for more details on this, and we will be going into each principle to better understand how CISSP questions are framed around these principles.
Another major topic related to understanding design principles and design models is the trusted system. So, what is a trusted system? A trusted system is a computer system that can be trusted to a specified extent to enforce a specified security policy. It's a theoretical concept: we can never have 100% trust or 0% trust, so we agree on a baseline, and that baseline tells us what the specified security policy is. The level of trust we can place in the system is an attribute of the trusted system.
Now, the trusted system makes use of a term called the reference monitor, which we should also know. A reference monitor is an entity or component of a trusted system: the logical part of the computer system responsible for all decisions related to access control. So, whenever you hear the term reference monitor, you should know that it is the component that decides who can access which resource, for how long, and with what privilege or authorization level.
Now, a trusted system has a reference monitor, and with that come certain expectations: it should be tamper-proof, it should always be invoked (which we will discuss further under Saltzer and Schroeder's principle of complete mediation), and it should be small enough to be tested independently. If it is too large to be verified independently, it defeats its purpose.
In 1983, the United States Department of Defense published the Orange Book, also called TCSEC (Trusted Computer System Evaluation Criteria). It describes the features and assurances that users can expect from a trusted system. It gives a sort of scale or benchmark to measure how trusted a system is or to what level a user can trust a system.
So far we have covered the concept of a trusted system, the reference monitor, and the expectations placed on a trusted system. With TCSEC, a new term was introduced: the trusted computing base (TCB). A trusted computing base is the combination of hardware, software, and firmware responsible for enforcing the security policy of an information system. You may have a system with functional parts, input/output, memory, CPU, and everything else, but a portion of that system is responsible for its security, and that portion is called the trusted computing base. The trusted computing base is a logical structure, and it spans hardware, software, and firmware.
We need to know that any system can be divided into functional blocks and security blocks. The trusted computing base deals with the security block of the system. It enforces the security policy, and we can trust it to a certain level.
Now, as we saw in Domain 1, security controls can be administrative, physical, or technical. Administrative controls come from the administrative side of the organization, while the trusted computing base is where the technical controls reside. These technical controls take the form of access controls, encryption, and so on, and they are found in the trusted computing base, which is logically part of the system.
The trusted computing base contains the reference monitor, which we discussed earlier. At the core of the reference monitor is the security kernel, which is responsible for enforcing the security policy and must meet three essential conditions: isolation, verifiability, and mediation. Isolation means the security kernel is protected from interference by the rest of the system, verifiability means it can be verified through independent testing, and mediation means it controls every access to resources.
The security kernel is at the heart of the reference monitor, and the reference monitor is at the heart of the trusted computing base. This gives rise to a secure system, which is a combination of the trusted computing base, the security kernel, and the reference monitor. We need to understand this because questions in CISSP might test our understanding of how the trusted computing base, security kernel, and reference monitor work together.
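The sketch below is a purely conceptual illustration of that nesting, assuming invented class and resource names; real security kernels are built into operating systems and hardware, not application code.

```python
class SecurityKernel:
    """Core that enforces the policy: isolated, verifiable, and mediating."""

    def __init__(self, policy):
        self._policy = policy  # isolation: state kept private to the kernel

    def mediate(self, subject, obj, action):
        # Mediation: a single choke point for every access decision.
        return action in self._policy.get((subject, obj), set())

class TrustedComputingBase:
    """Hardware, software, and firmware responsible for the security policy."""

    def __init__(self, kernel: SecurityKernel):
        # The reference monitor concept is realized here by the security kernel.
        self.kernel = kernel

    def access(self, subject, obj, action):
        return self.kernel.mediate(subject, obj, action)

tcb = TrustedComputingBase(SecurityKernel({("app", "config.yml"): {"read"}}))
print(tcb.access("app", "config.yml", "read"))   # True
print(tcb.access("app", "config.yml", "write"))  # False
```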
One final thing we need to touch on is the different security models we use in security architecture and engineering. There are several models, but the main ones are the Bell-LaPadula model, the Biba model, the Clark-Wilson model, the Brewer-Nash model, and the Harrison-Ruzzo-Ullman model.
The Bell-LaPadula model focuses on maintaining data confidentiality and controls access to information based on security classifications. The Biba model is concerned with data integrity and prevents unauthorized users from modifying data. The Clark-Wilson model ensures that transactions are performed correctly, enforcing integrity through well-formed transactions and separation of duties. The Brewer-Nash model, also known as the Chinese Wall model, prevents conflicts of interest by restricting access to information based on the user’s previous interactions. The Harrison-Ruzzo-Ullman model focuses on access control and the management of user permissions.
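The read/write rules of the first two models are easy to show in code. The sketch below uses simple integer levels and invented labels; it is a teaching aid, not a complete implementation of either model (categories, tranquility, and other refinements are omitted).

```python
LEVELS = {"public": 0, "confidential": 1, "secret": 2}

def blp_can_read(subject, obj):
    # Bell-LaPadula simple security property: no read up (confidentiality).
    return LEVELS[subject] >= LEVELS[obj]

def blp_can_write(subject, obj):
    # Bell-LaPadula *-property: no write down (no leaking to lower levels).
    return LEVELS[subject] <= LEVELS[obj]

def biba_can_read(subject, obj):
    # Biba inverts the direction for integrity: no read down.
    return LEVELS[subject] <= LEVELS[obj]

def biba_can_write(subject, obj):
    # ...and no write up: low-integrity subjects cannot taint trusted data.
    return LEVELS[subject] >= LEVELS[obj]

print(blp_can_read("secret", "public"))    # True: reading down is allowed
print(blp_can_write("secret", "public"))   # False: no write down
print(biba_can_write("public", "secret"))  # False: no write up
```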
We’ll discuss these models in more detail in future sessions, but it’s important to understand the basics of each model and how they contribute to security architecture and engineering. Each model has its strengths and weaknesses, and they are used in different contexts to achieve specific security objectives.
That concludes our overview of security architecture and engineering. In the next session, we’ll dive deeper into the principles of design and architecture, and we’ll explore how these principles are applied in real-world scenarios. Thank you for watching, and I look forward to continuing our journey through Domain 3 of the CISSP curriculum.