
Tuesday, August 25, 2015

ISM unit 3 question bank answers 91-95

QUESTION NUMBER 91-95

91. Write a short note on Key Management Policy.




92. Explain any six server security principles.

When addressing server security issues, it is an excellent idea to keep in mind the following general information security principles:


Simplicity—
Security mechanisms (and information systems in general) should be as simple as possible. Complexity is at the root of many security issues.

Fail-Safe—
If a failure occurs, the system should fail in a secure manner, i.e., security controls and settings remain in effect and are enforced. It is usually better to lose functionality rather than security.

Complete Mediation—
Rather than providing direct access to information, mediators that enforce access policy should be employed. Common examples of mediators include file system permissions, proxies, firewalls, and mail gateways.

Open Design—
System security should not depend on the secrecy of the implementation or its components.

Separation of Privilege—
Functions, to the degree possible, should be separate and provide as much granularity as possible. The concept can apply both to systems and to operators and users. In the case of systems, functions such as read, edit, write, and execute should be separate. In the case of system operators and users, roles should be as separate as possible. For example, if resources allow, the role of system administrator should be separate from that of the database administrator.

Least Privilege—
This principle dictates that each task, process, or user is granted the minimum rights required to perform its job. By applying this principle consistently, if a task, process, or user is compromised, the scope of damage is constrained to the limited resources available to the compromised entity.
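
As a minimal sketch of this principle on a Unix-like host (the account names below are assumptions for illustration), a service that must start as root can drop its privileges as soon as they are no longer needed:

import os
import pwd
import grp

def drop_privileges(user="svc_app", group="svc_app"):
    # Hypothetical service account; the process must start as root for this to work.
    if os.getuid() != 0:
        return  # already running unprivileged
    uid = pwd.getpwnam(user).pw_uid
    gid = grp.getgrnam(group).gr_gid
    os.setgroups([])   # clear supplementary groups first
    os.setgid(gid)     # drop the group before the user
    os.setuid(uid)     # irrevocably give up root
    os.umask(0o077)    # new files readable only by the service account

If the service is later compromised, the attacker holds only the rights of the unprivileged account rather than those of root.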

Psychological Acceptability—
Users should understand the necessity of security. This can be provided through training and education. In addition, the security mechanisms in place should present users with sensible options that give them the usability they require on a daily basis. If users find the security mechanisms too cumbersome, they may devise ways to work around or compromise them. The objective is not to weaken security so it is understandable and acceptable, but to train and educate users and to design security mechanisms and policies that are usable and effective.

Least Common Mechanism—
When providing a feature for the system, it is best to have a single process or service gain some function without granting that same function to other parts of the system. The ability for the Web server process to access a back-end database, for instance, should not also enable other applications on the system to access the back-end database.

Defense-in-Depth—
Organizations should understand that a single security mechanism is generally insufficient. Security mechanisms (defenses) need to be layered so that compromise of a single security mechanism is insufficient to compromise a host or network. No “silver bullet” exists for information system security.

Work Factor—
Organizations should understand what it would take to break the system or network’s security features. The amount of work necessary for an attacker to break the system or network should exceed the value that the attacker would gain from a successful compromise.

Compromise Recording—
Records and logs should be maintained so that if a compromise does occur, evidence of the attack is available to the organization. This information can assist in securing the network and host after the compromise and aid in identifying the methods and exploits used by the attacker. This information can be used to better secure the host or network in the future. In addition, these records and logs can assist organizations in identifying and prosecuting attackers.


93. How is server security planned?

Installation and Deployment Planning 
Security should be considered from the initial planning stage at the beginning of the systems development life cycle to maximize security and minimize costs. It is much more difficult and expensive to address security after deployment and implementation. Organizations are more likely to make decisions about configuring hosts appropriately and consistently if they begin by developing and using a detailed, well-designed deployment plan. Developing such a plan enables organizations to make informed trade-off decisions among usability, performance, and risk. A deployment plan allows organizations to maintain secure configurations and aids in identifying security vulnerabilities, which often manifest themselves as deviations from the plan.

In the planning stages of a server, the following items should be considered:

Identify the purpose(s) of the server.
– What information categories will be stored on the server?
– What information categories will be processed on or transmitted through the server? 
– What are the security requirements for this information? 
– Will any information be retrieved from or stored on another host (e.g., database server, directory server, Web server, Network Attached Storage (NAS) server, Storage Area Network (SAN) server)?
– What are the security requirements for any other hosts involved?

Identify the network services that will be provided on the server, such as Hypertext Transfer Protocol (HTTP), File Transfer Protocol (FTP), Simple Mail Transfer Protocol (SMTP), Network File System (NFS), or database services (e.g., Open Database Connectivity [ODBC]). The network protocols to be used for each service (e.g., IPv4, IPv6) should also be identified.

Identify any network service software, both client and server, to be installed on the server and any other support servers.
Identify the users or categories of users of the server and any support hosts.

Determine the privileges that each category of user will have on the server and support hosts. 

Determine how the server will be managed (e.g., locally, remotely from the internal network, remotely from external networks).

Decide if and how users will be authenticated and how authentication data will be protected. 

Determine how appropriate access to information resources will be enforced. 

Determine which server applications meet the organization’s requirements. Consider servers that may offer greater security, albeit with less functionality in some instances. Some issues to consider include—
– Cost 
– Compatibility with existing infrastructure
– Knowledge of existing employees
– Existing manufacturer relationship
– Past vulnerability history
– Functionality. 
Security Management Staff 
Because server security is tightly intertwined with the organization’s general information system security posture, a number of IT and system security staff may be involved in server planning, implementation, and administration. This section provides a list of generic roles and identifies their responsibilities as they relate to server security. These roles are for the purpose of discussion and may vary by organization. 
Chief Information Officer
The Chief Information Officer (CIO) ensures that the organization’s security posture is adequate. The CIO provides direction and advisory services for the protection of information systems for the entire organization. The CIO is responsible for the following activities associated with servers:

  • Coordinating the development and maintenance of the organization’s information security policies, standards, and procedures  
  • Coordinating the development and maintenance of the organization’s change control and management procedures  
  • Ensuring the establishment of, and compliance with, consistent IT security policies for departments throughout the organization 

Information Systems Security Program Managers 
The Information Systems Security Program Managers (ISSPM) oversee the implementation of and compliance with the standards, rules, and regulations specified in the organization’s security policy. The ISSPMs are responsible for the following activities associated with servers: 
  • Ensuring that security procedures are developed and implemented  
  • Ensuring that security policies, standards, and requirements are followed  
  • Ensuring that all critical systems are identified and that contingency planning, disaster recovery plans, and continuity of operations plans exist for these critical systems  
  • Ensuring that critical systems are identified and scheduled for periodic security testing according to the security policy requirements of each respective system. 
Information Systems Security Officers 
Information Systems Security Officers (ISSO) are responsible for overseeing all aspects of information security within a specific organizational entity. They ensure that the organization’s information security practices comply with organizational and departmental policies, standards, and procedures. ISSOs are responsible for the following activities associated with servers: 
  • Developing internal security standards and procedures for the servers and supporting network infrastructure  
  • Cooperating in the development and implementation of security tools, mechanisms, and mitigation techniques  
  • Maintaining standard configuration profiles for the servers and supporting network infrastructure controlled by the organization, including, but not limited to, OSs, firewalls, routers, and server applications  
  • Maintaining operational integrity of systems by conducting security tests and ensuring that designated IT professionals are conducting scheduled testing on critical systems. 
Server, Network, and Security Administrators 
Server administrators are system architects responsible for the overall design, implementation, and maintenance of a server. Network administrators are responsible for the overall design, implementation, and maintenance of a network. Security administrators are dedicated to performing information security functions for servers and other hosts, as well as networks. Organizations that have a dedicated information security team usually have security administrators. On a daily basis, server, network, and security administrators contend with the security requirements of the specific systems for which they are responsible. Security issues and solutions can originate from either outside (e.g., security patches and fixes from the manufacturer or computer security incident response teams) or within the organization (e.g., the security office). The administrators are responsible for the following activities associated with servers:
  • Installing and configuring systems in compliance with the organizational security policies and standard system and network configurations  
  • Maintaining systems in a secure manner, including frequent backups and timely application of patches  
  • Monitoring system integrity, protection levels, and security-related events  
  • Following up on detected security anomalies associated with their information system resources  
  • Conducting security tests as required.  

Management Practices
Appropriate management practices are critical to operating and maintaining a secure server. Security practices entail the identification of an organization’s information system assets and the development, documentation, and implementation of policies, standards, procedures, and guidelines that ensure confidentiality, integrity, and availability of information system resources. 
To ensure the security of a server and the supporting network infrastructure, organizations should implement the following practices: 
Organizational Information System Security Policy—A security policy should specify the basic information system security tenets and rules, and their intended internal purpose. The policy should also outline who in the organization is responsible for particular areas of information security (e.g., implementation, enforcement, audit, review). The policy must be enforced consistently throughout the organization to be effective. Generally, the CIO is responsible for drafting the organization’s security policy. 
Configuration/Change Control and Management—The process of controlling modification to a system’s design, hardware, firmware, and software provides sufficient assurance that the system is protected against the introduction of an improper modification before, during, and after system implementation. Configuration control leads to consistency with the organization’s information system security policy. Configuration control is traditionally overseen by a configuration control board that is the final authority on all proposed changes to an information system. If resources allow, consider the use of development, quality assurance, and/or test environments so that changes can be vetted and tested before deployment in production.  
Risk Assessment and Management—Risk assessment is the process of analyzing and interpreting risk. It involves determining an assessment’s scope and methodology, collecting and analyzing risk-related data, and interpreting the risk analysis results. Collecting and analyzing risk data requires identifying assets, threats, vulnerabilities, safeguards, consequences, and the probability of a successful attack. Risk management is the process of selecting and implementing controls to reduce risk to a level acceptable to the organization. 
Standardized Configurations—Organizations should develop standardized secure configurations for widely used OSs and server software. This will provide recommendations to server and network administrators on how to configure their systems securely and ensure consistency and compliance with the organizational security policy. Because it only takes one insecurely configured host to compromise a network, organizations with a significant number of hosts are especially encouraged to apply this recommendation. 
Secure Programming Practices—Organizations should adopt secure application development guidelines to ensure that they develop their applications for servers in a sufficiently secure manner. 
Security Awareness and Training—A security training program is critical to the overall security posture of an organization. Making users and administrators aware of their security responsibilities and teaching the correct practices helps them change their behavior to conform to security best practices. Training also supports individual accountability, which is an important method for improving information system security. If the user community includes members of the general public, providing security awareness specifically targeting them might also be appropriate.
Contingency, Continuity of Operations, and Disaster Recovery Planning—Contingency plans, continuity of operations plans, and disaster recovery plans are established in advance to allow an organization or facility to maintain operations in the event of a disruption.
Certification and Accreditation—Certification in the context of information system security means that a system has been analyzed to determine how well it meets all of the security requirements of the organization. Accreditation occurs when the organization’s management accepts that the system meets the organization’s security requirements.
System Security Plan 
The objective of system security planning is to improve protection of information system resources. Plans that adequately protect information assets require managers and information owners—directly affected by and interested in the information and/or processing capabilities—to be convinced that their information assets are adequately protected from loss, misuse, unauthorized access or modification, unavailability, and undetected activities.
The purpose of the system security plan is to provide an overview of the security and privacy requirements of the system and describe the controls in place or planned for meeting those requirements. The system security plan also delineates responsibilities and expected behavior of all individuals who access the system. The system security plan should be viewed as documentation of the structured process of planning adequate, cost-effective security protection for a system. It should reflect input from various managers with responsibilities concerning the system, including information owners, the system owner, and the ISSPM.
For Federal agencies, all information systems must be covered by a system security plan. Other organizations should strongly consider the completion of a system security plan for each of their systems as well. The information system owner is generally the party responsible for ensuring that the security plan is developed and maintained and that the system is deployed and operated according to the agreed-upon security requirements. 
In general, an effective system security plan should include the following: 
System Identification—The first sections of the system security plan provide basic identifying information about the system. They contain general information such as the key points of contact for the system, the purpose of the system, the sensitivity level of the system, and the environment in which the system is deployed, including the network environment, the system’s placement on the network, and the system’s relationships with other systems. 
Controls—This section of the plan describes the control measures (in place or planned) that are intended to meet the protection requirements of the information system. Controls fall into three general categories: 
– Management controls, which focus on the management of the computer security system and the management of risk for a system. 
– Operational controls, which are primarily implemented and executed by people (rather than systems). They often require technical or specialized expertise, and often rely upon management activities as well as technical controls. 
– Technical controls, which focus on security controls that the computer system executes. These controls can provide automated protection from unauthorized access or misuse, facilitate detection of security violations, and support security requirements for applications and data. 
Human Resources Requirements 
The greatest challenge and expense in developing and securely maintaining a server is providing the necessary human resources to adequately perform the required functions. Many organizations fail to fully recognize the amount of expense and skills required to field a secure server. This failure often results in overworked employees and insecure systems. From the initial planning stages, organizations need to determine the necessary human resource requirements. Appropriate and sufficient human resources are the single most important aspect of effective server security. Organizations should also consider the fact that, in general, technical solutions are not a substitute for skilled and experienced personnel.  
When considering the human resource implications of developing and deploying a server, organizations should consider the following: 
Required Personnel—What types of personnel are required? Examples of possible positions are system administrators, server administrators, network administrators, and ISSOs. 
Required Skills—What are the required skills to adequately plan, develop, and maintain the server in a secure manner? Examples include OS administration, network administration, and programming. 
Available Personnel—What are the available human resources within the organization? In addition, what are their current skill sets and are they sufficient for supporting the server? Often, an organization discovers that its existing human resources are not sufficient and needs to consider the following options: 
– Train Current Staff—If personnel are available but they do not have the requisite skills, the organization may choose to train the existing staff in the skills required. Although this is an excellent option, the organization should ensure that employees meet all prerequisites for training. 
– Acquire Additional Staff—If not enough staff members are available or they do not have the requisite skills, it may be necessary to hire additional personnel or use external resources. 
Once the organization has staffed the project and the server is active, it will be necessary to ensure the number and skills of the personnel are still adequate. The threat and vulnerability levels of IT systems, including servers, are constantly changing, as is the technology. This means that what is adequate today may not be tomorrow, so staffing needs should be reassessed periodically and additional training and other skills-building activities conducted as needed. 



94. How is server security maintained?

After initially deploying a server, administrators need to maintain its security continuously. This section provides general recommendations for securely administering servers. Vital activities include handling and analyzing log files, performing regular server backups, recovering from server compromises, testing server security regularly, and performing remote administration securely. As discussed in Section 4, security configuration guides and checklists are publicly available for many OSs and server software; many of these documents contain OS and server-specific recommendations for security maintenance. Other maintenance activities discussed in earlier sections, and thus not duplicated here, include testing and deploying OS and server patches and updates, maintaining the secure configuration of the OS and server software, and maintaining additional security controls used for the server.

Logging 
Logging is a cornerstone of a sound security posture. Capturing the correct data in the logs and then monitoring those logs closely is vital. Network and system logs are important, especially system logs in the case of encrypted communications, where network monitoring is less effective. Server software can provide additional log data relevant to server-specific events.

Reviewing logs is mundane and reactive, and many server administrators devote their time to performing duties that they consider more important or urgent. However, log files are often the only record of suspicious behavior. Enabling the mechanisms to log information allows the logs to be used to detect failed and successful intrusion attempts and to initiate alert mechanisms when further investigation is needed. Procedures and tools need to be in place to process and analyze the log files and to review alert notifications.

Server logs provide—

  • Alerts to suspicious activities that require further investigation  
  • Tracking of an attacker’s activities  
  • Assistance in the recovery of the server  
  • Assistance in post-event investigation  
  • Required information for legal proceedings. 

Identifying Logging Capabilities and Requirements 
Each type of server software supports different logging capabilities. Some server software may use a single log, while other server software may use multiple logs (each for different types of records). Some server software permits administrators to select from multiple log formats, such as proprietary, database, and delimiter-separated.
If a server supports the execution of programs, scripts, or plug-ins, it may be necessary for the programs, scripts, or plug-ins to perform additional logging. Often, critical events take place within the application code itself and will not be logged by the server. If server administrators develop or acquire application programs, scripts, or plug-ins, it is strongly recommended that they define and implement a comprehensive and easy-to-understand logging approach based on the logging mechanisms provided by the server host OS. Log information associated with programs, scripts, and plug-ins can add significantly to the typical information logged by the server and may prove invaluable when investigating events.
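
As a sketch of this recommendation, an application program or script running behind the server can send its own events to the logging mechanism provided by the host OS; the example below uses Python's standard logging module with a local syslog socket, which is an assumption about the host's configuration:

import logging
import logging.handlers

# Send application events to the host OS syslog facility so they are kept
# alongside the server's own logs (assumes a local /dev/log syslog socket).
handler = logging.handlers.SysLogHandler(address="/dev/log")
handler.setFormatter(logging.Formatter("webapp[%(process)d]: %(levelname)s %(message)s"))

log = logging.getLogger("webapp")
log.setLevel(logging.INFO)
log.addHandler(handler)

# Events that occur inside application code and would otherwise never
# appear in the server's logs:
log.info("user %s authenticated from %s", "alice", "192.0.2.10")
log.warning("input validation failure on parameter %r", "order_id")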

Reviewing and Retaining Log Files 
Reviewing log files is a tedious and time-consuming task that informs administrators of events that have already occurred. Accordingly, log files are often useful for corroborating other evidence, such as a CPU utilization spike or anomalous network traffic reported by an IDPS. When a log is used to corroborate other evidence, a focused review is in order. For example, if an IDPS reported a suspicious outbound FTP connection from a Web server at 8:17 a.m., then a review of the logs generated around 8:17 a.m. is appropriate. Server logs should also be reviewed for indications of attacks. The frequency of the reviews depends on the following factors:

  • Amount of traffic the server receives  
  • General threat level (certain servers receive many more attacks than other servers and thus should have their logs reviewed more frequently)  
  • Specific threats (at certain times, specific threats arise that may require more frequent log file analysis)  
  • Vulnerability of the server  
  • Value of data and services provided by the server. 
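
For the focused review described above (for example, examining entries generated around 8:17 a.m.), a short script can extract the relevant window from a large log file; the log path and timestamp format are assumptions for illustration:

from datetime import datetime, timedelta

# Hypothetical log file with entries such as:
# 2015-08-25 08:17:03 outbound ftp connection to 203.0.113.5
LOG_FILE = "/var/log/server/events.log"
CENTER = datetime(2015, 8, 25, 8, 17)   # time reported by the IDPS
WINDOW = timedelta(minutes=10)          # review 10 minutes either side of it

with open(LOG_FILE) as fh:
    for line in fh:
        try:
            stamp = datetime.strptime(line[:19], "%Y-%m-%d %H:%M:%S")
        except ValueError:
            continue  # skip lines without a parseable timestamp
        if abs(stamp - CENTER) <= WINDOW:
            print(line.rstrip())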

Automated Log File Analysis Tools
Many servers receive significant amounts of traffic, and the log files quickly become voluminous. Automated log analysis tools should be installed to ease the burden on server administrators. These tools analyze the entries in the server log files and identify suspicious and unusual activity. Some organizations use security information and event management (SIEM) software for centralized logging, which can also perform automated log file analysis. Many commercial and public domain tools are also available to support regular analysis of particular types of server logs.

The automated log analyzer should forward any suspicious events to the responsible server administrator or security incident response team as soon as possible for follow-up investigation. Some organizations may wish to use two or more log analyzers, which will reduce the risk of missing an attacker or other significant events in the log files.
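
A very small example of the kind of rule such a tool applies is sketched below: it counts failed logins per source address in an assumed log format and flags any address that exceeds a threshold for follow-up by the administrator or incident response team:

import re
from collections import Counter

# Assumed entry format: "... Failed password for admin from 198.51.100.7 ..."
FAILED_LOGIN = re.compile(r"Failed password for \S+ from (\d+\.\d+\.\d+\.\d+)")
THRESHOLD = 20  # counts above this value are reported as suspicious

failures = Counter()
with open("/var/log/auth.log") as fh:  # log path is an assumption
    for line in fh:
        match = FAILED_LOGIN.search(line)
        if match:
            failures[match.group(1)] += 1

for source, count in failures.most_common():
    if count >= THRESHOLD:
        print(f"ALERT: {count} failed logins from {source} - investigate")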


Server Backup Procedures
One of the most important functions of a server administrator is to maintain the integrity of the data on the server. This is important because servers are often some of the most exposed and vital hosts on an organization’s network. The server administrator needs to perform backups of the server on a regular basis for several reasons. A server could fail as a result of a malicious or unintentional act or a hardware or software failure. In addition, Federal agencies and many other organizations are governed by regulations on the backup and archiving of server data. Server data should also be backed up regularly for legal and financial reasons.

Server Data Backup Policies 
All organizations need to create a server data backup policy. Three main factors influence the contents of this policy:

 Legal requirements
– Applicable laws and regulations (Federal, state, and international)
– Litigation requirements

 Mission requirements
– Contractual
– Accepted practices
– Criticality of data to organization

 Organizational guidelines and policies.

Three primary types of backups exist: full, incremental, and differential. Full backups include the OS, applications, and data stored on the server (i.e., an image of every piece of data stored on the server hard drives). The advantage of a full backup is that it is easy to restore the entire server to the state (e.g., configuration, patch level, data) it was in when the backup was performed. The disadvantage of full backups is that they take considerable time and resources to perform. Incremental backups reduce the impact of backups by backing up only data that has changed since the previous backup (either full or incremental).

Differential backups reduce the number of backup sets that must be accessed to restore a configuration by backing up all changed data since the last full backup. However, each differential backup increases as time lapses from the last full backup, requiring more processing time and storage than would an incremental backup. Generally, full backups are performed less frequently (weekly to monthly or when a significant change occurs), and incremental or differential backups are performed more frequently (daily to weekly).
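
To make the distinction concrete, the sketch below selects files for an incremental backup by comparing modification times against the time of the previous backup (full or incremental); a differential backup would compare against the last full backup instead. The paths and timestamp file are assumptions for illustration:

import os
import time

DATA_ROOT = "/srv/data"                     # assumed data directory
STAMP_FILE = "/var/backups/last_backup_ts"  # records the time of the previous backup

def modified_since(root, since):
    # Yield files changed after 'since' (the incremental backup set).
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.getmtime(path) > since:
                yield path

with open(STAMP_FILE) as fh:
    last_backup = float(fh.read().strip())

to_back_up = list(modified_since(DATA_ROOT, last_backup))
print(f"{len(to_back_up)} files changed since the last backup")

# After the backup completes successfully, record the new reference time.
with open(STAMP_FILE, "w") as fh:
    fh.write(str(time.time()))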

Maintain a Test Server
Most organizations will probably wish to maintain a test or development server for their most important servers, at a minimum. Ideally, this server should have hardware and software identical to the production or live server and be located on an internal network segment (intranet) where it can be fully protected by the organization’s perimeter network defenses. Although the cost of maintaining an additional server is not inconsequential, having a test server offers numerous advantages:

  • It provides a platform to test new patches and service packs before application on the production server.  
  • It provides a development platform for the server administrator to develop and test new content and applications.  
  • It provides a platform to test configuration settings before applying them to production servers. 
  • Software critical for development and testing but that might represent an unacceptable security risk on the production server can be installed on the development server (e.g., software compilers, administrative tool kits, remote access software).


Recovering From a Security Compromise 
Most organizations eventually face a successful compromise of one or more hosts on their network. Organizations should create and document the required policies and procedures for responding to successful intrusions. The response procedures should outline the actions that are required to respond to a successful compromise of the server and the appropriate sequence of these actions (sequence can be critical). Most organizations already have a dedicated incident response team in place, which should be contacted immediately when there is suspicion or confirmation of a compromise. In addition, the organization may wish to ensure that some of its staff are knowledgeable in the fields of computer and network forensics.


Security Testing Servers 
Periodic security testing of servers is critical. Without periodic testing, there is no assurance that current protective measures are working or that the security patch applied by the server administrator is functioning as advertised. Although a variety of security testing techniques exists, vulnerability scanning is the most common. Vulnerability scanning assists a server administrator in identifying vulnerabilities and verifying whether the existing security measures are effective. Penetration testing is also used, but it is used less frequently and usually only as part of an overall penetration test of the organization’s network.

Vulnerability Scanning 
Vulnerability scanners are automated tools that are used to identify vulnerabilities and misconfigurations in the hosts they scan. Many vulnerability scanners also provide information about mitigating discovered vulnerabilities. Vulnerability scanners can help identify out-of-date software versions, missing patches, or system upgrades, and they can validate compliance with or deviations from the organization’s security policy. To accomplish this, vulnerability scanners identify the OSs, server software, and other major software applications running on hosts and match them with known vulnerabilities in their vulnerability databases.
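
Dedicated scanners are far more capable, but the basic step of identifying the software and versions running on a host can be illustrated with a simple banner grab; the target host and ports below are placeholders:

import socket

TARGET = "server.example.org"   # placeholder host
PORTS = [21, 22, 25, 80]        # common services to identify

for port in PORTS:
    try:
        with socket.create_connection((TARGET, port), timeout=3) as sock:
            if port == 80:
                sock.sendall(b"HEAD / HTTP/1.0\r\n\r\n")  # prompt the Web server to respond
            banner = sock.recv(256).decode(errors="replace").strip()
            first = banner.splitlines()[0] if banner else "(no banner)"
            # A real scanner matches this information against a database of
            # known vulnerabilities; here it is only reported.
            print(f"{TARGET}:{port} -> {first}")
    except OSError as exc:
        print(f"{TARGET}:{port} -> no response ({exc})")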

Penetration Testing 
Penetration testing is “security testing in which evaluators attempt to circumvent the security features of a system based on their understanding of the system design and implementation”. The purpose of penetration testing is to exercise system protections (particularly human response to attack indications) by using common tools and techniques developed by attackers. This testing is highly recommended for complex or critical servers.


Remotely Administering a Server
Remote administration of a server should be allowed only after careful consideration of the risks. The risk of enabling remote administration varies considerably depending on the location of the server on the network. For a server that is located behind a firewall, remote administration can be implemented relatively securely from the internal network, but not without added risk. Remote administration should generally not be allowed from a host located outside the organization’s network unless it is performed from an organization-controlled computer through the organization’s remote access solution, such as a VPN.

If an organization determines that it is necessary to remotely administer a server, following these steps should ensure that remote administration is implemented in as secure a manner as possible:

  • Use a strong authentication mechanism (e.g., public/private key pair, two-factor authentication).
  • Restrict which hosts can be used to remotely administer the server. 
  • Use secure protocols that can provide encryption of both passwords and data (e.g., SSH, HTTPS); do not use less secure protocols (e.g., telnet, FTP, NFS, HTTP) unless absolutely required and tunneled over an encrypted protocol, such as SSH, SSL, or IPsec. 
  • Enforce the concept of least privilege on remote administration (e.g., attempt to minimize the access rights for the remote administration accounts).
  • Do not allow remote administration from the Internet through the firewall unless accomplished via strong mechanisms, such as VPNs. 
  • Use remote administration protocols that support server authentication to prevent man-in-the-middle attacks.
  • Change any default accounts or passwords for the remote administration utility or application.  
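
Several of these points (public-key authentication, an encrypted protocol, and server authentication that rejects unknown host keys) are sketched below using the third-party paramiko library; the host name, account, and key path are assumptions for illustration:

import paramiko

ADMIN_HOST = "server.internal.example"    # assumed management address on the internal network
ADMIN_USER = "svc_admin"                  # dedicated, least-privileged administration account
KEY_FILE = "/home/admin/.ssh/id_ed25519"  # private key for public-key authentication

client = paramiko.SSHClient()
client.load_system_host_keys()                               # known_hosts provides server authentication
client.set_missing_host_key_policy(paramiko.RejectPolicy())  # refuse unknown hosts (counters man-in-the-middle)
client.connect(ADMIN_HOST, username=ADMIN_USER, key_filename=KEY_FILE)

stdin, stdout, stderr = client.exec_command("uptime")        # command and output travel over the encrypted channel
print(stdout.read().decode().strip())
client.close()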


95. List various PKI data structures. Explain in short.

Two basic data structures are used in PKIs. These are the public key certificate and the certificate revocation list. A third data structure, the attribute certificate, may be used as an addendum.

X.509 Public Key Certificates
The X.509 public key certificate format [IETF 01] has evolved into a flexible and powerful mechanism. It may be used to convey a wide variety of information. Much of that information is optional, and the contents of mandatory fields may vary as well. It is important for PKI implementers to understand the choices they face, and their consequences. Unwise choices may hinder interoperability or prevent support for critical applications.

The X.509 public key certificate is protected by a digital signature of the issuer. If the signature can be verified, certificate users know that the contents have not been tampered with since the signature was generated. Certificates contain a set of common fields and may also include an optional set of extensions.

There are ten common fields: six mandatory and four optional. The mandatory fields are the serial number, the certificate signature algorithm identifier, the certificate issuer name, the certificate validity period, the public key, and the subject name. The subject is the party that controls the corresponding private key. There are four optional fields: the version number, two unique identifiers, and the extensions. These optional fields appear only in version 2 and 3 certificates.

Version. The version field describes the syntax of the certificate. When the version field is omitted, the certificate is encoded in the original, version 1, syntax. Version 1 certificates do not include the unique identifiers or extensions. When the certificate includes unique identifiers but not extensions, the version field indicates version 2. When the certificate includes extensions, as almost all modern certificates do, the version field indicates version 3.

Serial number. The serial number is an integer assigned by the certificate issuer to each certificate. The serial number must be unique for each certificate generated by a particular issuer. The combination of the issuer name and serial number uniquely identifies any certificate.

Signature. The signature field indicates which digital signature algorithm (e.g., DSA with SHA-1 or RSA with MD5) was used to protect the certificate.

Issuer. The issuer field contains the X.500 distinguished name of the TTP that generated the certificate.

Validity. The validity field indicates the date on which the certificate becomes valid and the date on which the certificate expires.

Subject. The subject field contains the distinguished name of the holder of the private key corresponding to the public key in this certificate. The subject may be a CA, an RA, or an end entity. End entities can be human users, hardware devices, or anything else that might make use of the private key.

Subject public key information. The subject public key information field contains the subject’s public key, optional parameters, and algorithm identifier. The public key in this field, along with the optional algorithm parameters, is used to verify digital signatures or perform key management. If the certificate subject is a CA, then the public key is used to verify the digital signature on a certificate.

Issuer unique ID and subject unique ID. These fields contain identifiers, and they appear only in version 2 or version 3 certificates. The subject and issuer unique identifiers are intended to handle the reuse of subject names or issuer names over time. However, this mechanism has proven to be an unsatisfactory solution. The Internet Certificate and CRL profile [HOUS99] does not recommend inclusion of these fields.

Extensions. This optional field only appears in version 3 certificates. If present, this field contains one or more certificate extensions. Each extension includes an extension identifier, a criticality flag, and an extension value. Common certificate extensions have been defined by ISO and ANSI to answer questions that are not satisfied by the common fields.

Subject type. This field indicates whether a subject is a CA or an end entity.

Names and identity information. This field aids in resolving questions about a user’s identity, e.g., are “alice@gsa.gov” and “c=US; o=U.S. Government; ou=GSA; cn=Alice Adams” the same person?

Key attributes. This field specifies relevant attributes of public keys, e.g., whether it can be used for key transport, or be used to verify a digital signature.

Policy information. This field helps users determine if another user’s certificate can be trusted, whether it is appropriate for large transactions, and other conditions that vary with organizational policies.

Certificate extensions allow the CA to include information not supported by the basic certificate content. Any organization may define a private extension to meet its particular business requirements. However, most requirements can be satisfied using standard extensions. Standard extensions are widely supported by commercial products; they offer improved interoperability and are more cost-effective than private extensions.
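
The common fields and extensions described above can be inspected programmatically; the sketch below uses the third-party Python cryptography package and assumes a PEM-encoded certificate file named server_cert.pem:

from cryptography import x509

with open("server_cert.pem", "rb") as fh:   # assumed certificate path
    cert = x509.load_pem_x509_certificate(fh.read())

print("Version:        ", cert.version)                 # v1 or v3 in practice
print("Serial number:  ", cert.serial_number)
print("Signature alg:  ", cert.signature_algorithm_oid)
print("Issuer:         ", cert.issuer.rfc4514_string())
print("Valid from/to:  ", cert.not_valid_before, "/", cert.not_valid_after)
print("Subject:        ", cert.subject.rfc4514_string())
print("Public key type:", type(cert.public_key()).__name__)

# Each extension carries an identifier, a criticality flag, and a value.
for ext in cert.extensions:
    print(f"Extension {ext.oid.dotted_string}: critical={ext.critical}")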


Certificate Revocation Lists (CRLs)
Certificates contain an expiration date. Unfortunately, the data in a certificate may become unreliable before the expiration date arrives. Certificate issuers need a mechanism to provide a status update for the certificates they have issued. One such mechanism is the X.509 certificate revocation list (CRL).

CRLs are the PKI analog of the credit card hot list that store clerks review before accepting large credit card transactions. The CRL is protected by a digital signature of the CRL issuer. If the signature can be verified, CRL users know the contents have not been tampered with since the signature was generated. CRLs contain a set of common fields, and may also include an optional set of extensions.

The CRL contains the following fields:

Version. The optional version field describes the syntax of the CRL. (In general, the version will be two.)

Signature. The signature field contains the algorithm identifier for the digital signature algorithm used by the CRL issuer to sign the CRL.

Issuer. The issuer field contains the X.500 distinguished name of the CRL issuer.

This update. The this-update field indicates the issue date of this CRL.

Next update. The next-update field indicates the date by which the next CRL will be issued.

Revoked certificates. The revoked certificates structure lists the revoked certificates. The entry for each revoked certificate contains the certificate serial number, time of revocation, and optional CRL entry extensions.

The CRL entry extensions field is used to provide additional information about this particular revoked certificate. This field may only appear if the version is v2.

CRL Extensions. The CRL extensions field is used to provide additional information about the whole CRL. Again, this field may only appear if the version is v2.
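
A CRL can be examined the same way; the sketch below, again using the cryptography package with an assumed file path, prints the common fields and each revoked-certificate entry:

from cryptography import x509

with open("issuer_crl.pem", "rb") as fh:   # assumed CRL path
    crl = x509.load_pem_x509_crl(fh.read())

print("Issuer:     ", crl.issuer.rfc4514_string())
print("This update:", crl.last_update)     # the "this update" field
print("Next update:", crl.next_update)

# Each entry: certificate serial number, time of revocation, optional entry extensions.
for revoked in crl:
    print(f"Revoked serial {revoked.serial_number} at {revoked.revocation_date}")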


Attribute Certificates 
Public key certificates are focused on the binding between the subject and the public key. This relationship is expected to be long-lived; most end entity certificates include a validity period of a year or two.

Organizations seek improved access control. Public key certificates can be used to authenticate the identity of a user, and this identity can be used as an input to access control decision functions. However, in many contexts, the identity is not the criterion used for access control decisions. The access control decision may depend upon role, security clearance, group membership, or ability to pay.

Authorization information, such as membership in a group, often has a shorter lifetime than the binding of the identity and the public key. Authorization information could be placed in a public key certificate extension. However, this is not a good strategy for two reasons. First, the certificate is likely to be revoked because the authorization information needs to be updated. Revoking and reissuing the public key certificate with updated authorization information is quite expensive. Second, the CA that issues public key certificates is not likely to be authoritative for the authorization information. This results in additional steps for the CA to contact the authoritative authorization information source.

The X.509 attribute certificate (AC) binds attributes to an AC holder [X509 97]. This definition is being profiled for use in Internet applications. Since the AC does not contain a public key, the AC is used in conjunction with a public key certificate. An access control function may make use of the attributes in an AC, but the AC is not a replacement for authentication. The public key certificate must first be used to perform authentication; then the AC is used to associate attributes with the authenticated identity.

ACs may also be used in the context of a data origin authentication service and a non-repudiation service. In these contexts, the attributes contained in the AC provide additional information about the signing entity. This information can be used to make sure that the entity is authorized to sign the data. This kind of checking depends either on the context in which the data is exchanged or on the data that has been digitally signed.

An X.509 AC resembles the X.509 public key certificate. The AC is an ASN.1 DER encoded object, and is signed by the issuer. An AC contains nine fields: version, holder, issuer, signature algorithm identifier, serial number, validity period, attributes, issuer unique identifier, and extensions. The AC holder is similar to the public key certificate subject, but the holder may be specified with a name, the issuer and serial number of a public key certificate, or the one-way hash of a certificate or public key. The attributes describe the authorization information associated with the AC holder. The extensions describe additional information about the certificate and how it may be used.
