
Monday, August 24, 2015

ISM unit 3 question bank answers 81-85


81. What is the Patch and Vulnerability Group (PVG), and what are its duties?

The Patch and Vulnerability Group (PVG)

The PVG should be a formal group that incorporates representatives from information security and operations. These representatives should include individuals with knowledge of vulnerability and patch management, as well as system administration, intrusion detection, and firewall management. In addition, it is helpful to have specialists in the operating systems and applications most used within the organization. Personnel who already provide system or network administration functions, perform vulnerability scanning, or operate intrusion detection systems are also likely candidates for the PVG.

The size of the group and the amount of time devoted to PVG duties will vary widely from one organization to another. Much depends on the size and complexity of the organization, the size and complexity of its network, and its budget. The PVG of a smaller organization may comprise only one or two members, focused on critical vulnerabilities and systems. Regardless of the organization's size or resources, patch and vulnerability management can be accomplished with proper planning and process.

The duties of the PVG are outlined below.  Subsequent sections discuss certain duties in more detail.

1. Create a System Inventory.
The PVG should use existing inventories of the organization’s IT resources to determine which hardware equipment, operating systems, and software applications are used within the organization. The PVG should also maintain a manual inventory of IT resources not captured in the existing inventories.
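A system inventory of this kind can be represented as simple structured records. The sketch below is illustrative only (hostnames, versions, and fields are assumptions, not from the source); in practice the PVG would populate such records from existing asset-management systems plus a manual survey of uncaptured resources.

```python
from dataclasses import dataclass, field

@dataclass
class Asset:
    """One entry in a PVG system inventory (illustrative fields only)."""
    hostname: str
    os: str
    os_version: str
    software: list = field(default_factory=list)  # installed applications

# A minimal hypothetical inventory.
inventory = [
    Asset("web01", "Ubuntu", "20.04", ["nginx 1.18", "openssl 1.1.1"]),
    Asset("db01", "Windows Server", "2019", ["SQL Server 2019"]),
]

def assets_running(package_prefix, inv):
    """Return hosts whose software list matches a package-name prefix."""
    return [a.hostname for a in inv
            if any(s.startswith(package_prefix) for s in a.software)]
```

When a vulnerability announcement names an affected package, a lookup such as `assets_running("openssl", inventory)` tells the PVG which hosts are in scope.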

2. Monitor for Vulnerabilities, Remediations, and Threats.  
The PVG is responsible for monitoring security sources for vulnerability announcements, patch and non-patch remediations, and emerging threats that correspond to the software within the PVG’s system inventory.

3. Prioritize Vulnerability Remediation.
The PVG should prioritize the order in which the organization addresses vulnerability remediation.
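One common way to prioritize, sketched below under assumed inputs (the CVE identifiers, CVSS scores, and the 1-to-5 asset-criticality scale are invented for illustration), is to rank by vulnerability severity first and then by how critical the affected system is:

```python
# Hypothetical vulnerability records: (CVE id, CVSS base score, asset criticality 1-5).
vulns = [
    ("CVE-A", 9.8, 5),   # critical flaw on a critical system
    ("CVE-B", 9.8, 2),   # same flaw on a low-value system
    ("CVE-C", 5.3, 5),   # moderate flaw on a critical system
]

def remediation_order(records):
    """Sort by severity first, then by criticality of the affected asset."""
    return sorted(records, key=lambda r: (r[1], r[2]), reverse=True)
```

This is only one scheme; an organization might also weigh exploit availability or exposure of the host when ordering remediation work.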

4. Create an Organization-Specific Remediation Database.  
The PVG should create a database of remediations that need to be applied to the organization.

5. Conduct Generic Testing of Remediations.  
The PVG should be able to test patches and non-patch remediations on IT devices that use standardized configurations.  This will avoid the need for local administrators to perform redundant testing.  The PVG should also work closely with local administrators to test patches and configuration changes on important systems.

6. Deploy Vulnerability Remediations.
The PVG should oversee vulnerability remediation.

7. Distribute Vulnerability and Remediation Information to Local Administrators.  
The PVG is responsible for informing local administrators about vulnerabilities and remediations that correspond to software packages included within the PVG scope and that are in the organizational software inventory.

8. Perform Automated Deployment of Patches.
The PVG should deploy patches automatically to IT devices using enterprise patch management tools. Alternatively, the PVG could work closely with the group that actually runs the patch management tools. Automated patching tools allow an administrator to update hundreds or even thousands of systems from a single console. Deployment is fairly simple when the computing platforms are homogeneous, with standardized desktop systems and similarly configured servers. Multiplatform environments, nonstandard desktop systems, legacy computers, and computers with unusual configurations may also be integrated.
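The point about homogeneous versus nonstandard platforms can be sketched as a simple batching step: hosts on standardized platforms are grouped for one-pass automated deployment, while unusual configurations fall into a manual-handling bucket. All names and platform labels below are hypothetical.

```python
from collections import defaultdict

# Hypothetical host records: (hostname, platform). A real enterprise patch
# tool would learn these from its agent check-ins.
hosts = [
    ("web01", "ubuntu-20.04"),
    ("web02", "ubuntu-20.04"),
    ("db01", "windows-2019"),
    ("legacy7", "custom"),      # nonstandard box: handled manually
]

def deployment_groups(host_list, standard_platforms):
    """Split hosts into per-platform batches plus a manual-handling bucket."""
    batches, manual = defaultdict(list), []
    for name, platform in host_list:
        if platform in standard_platforms:
            batches[platform].append(name)
        else:
            manual.append(name)
    return dict(batches), manual
```

Each batch can then be patched from a single console run, while the manual bucket gets individual attention.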

9. Configure Automatic Update of Applications Whenever Possible and Appropriate.  
Many newer applications provide a feature that checks the vendor’s Web site for updates.  This feature can be very useful in minimizing the level of effort required to identify, distribute, and install patches.  However, some organizations may not wish to implement this feature because it might interfere with their configuration management process.  A recommended option would be a locally distributed automated update process, where the patches are made available from the organization’s network.  Applications can then be updated from the local network instead of from the Internet.
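The locally distributed update option amounts to a source-selection policy: clients prefer the organization's internal mirror and only fall back to the vendor's site when policy permits. Both URLs below are invented placeholders, not real endpoints.

```python
# Hypothetical update-source selection implementing a local-mirror policy.
INTERNAL_MIRROR = "https://updates.corp.example/patches"   # assumed internal mirror URL
VENDOR_SITE = "https://vendor.example/downloads"           # assumed vendor URL

def update_source(mirror_reachable):
    """Pick where a client pulls patches from: internal mirror first."""
    return INTERNAL_MIRROR if mirror_reachable else VENDOR_SITE
```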

10. Verify Vulnerability Remediation Through Network and Host Vulnerability Scanning.  
The PVG should verify that vulnerabilities have been successfully remediated.
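A very small piece of such verification, sketched here with the standard library, is confirming that a remediated (e.g., disabled or removed) network service no longer answers on its port. Real verification would use full network and host vulnerability scanners; this check is only one ingredient.

```python
import socket

def port_open(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

# After disabling a vulnerable service that listened on, say, port 8080,
# the PVG would confirm the port no longer answers:
#   port_open("web01.example.org", 8080)  ->  should now be False
# (hostname and port above are hypothetical)
```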

11. Vulnerability Remediation Training.  
The PVG should train administrators on how to apply vulnerability remediations.  In organizations that rely on end users to patch computers, the PVG must also train users on this function.


82. What are the primary methods of remediation that can be applied to an affected system?

There are three primary methods of remediation that can be applied to an affected system: the installation of a software patch, the adjustment of a configuration setting, and the removal of the affected software.

+ Security Patch Installation.  
Applying a security patch (also called a “fix” or “hotfix”) repairs the vulnerability, since patches contain code that modifies the software application to address and eliminate the problem.  Patches downloaded from vendor Web sites are typically the most up-to-date and are likely free of malicious code.

+ Configuration Adjustment.  
Adjusting how an application or security control is configured can effectively block attack vectors and reduce the threat of exploitation.  Common configuration adjustments include disabling services and modifying privileges, as well as changing firewall rules and modifying router access controls. Settings of vulnerable software applications can be modified by adjusting file attributes or registry settings.

+ Software Removal.
Removing or uninstalling the affected software or vulnerable service eliminates the vulnerability and any associated threat.  This is a practical solution when an application is not needed on a system. Determining how the system is used, removing unnecessary software and services, and running only what is essential for the system’s purpose is a recommended security practice.
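The choice among the three methods can be summarized as a small decision rule. This is an illustrative policy sketch, not wording from the source: remove software that is not needed, patch when a patch exists, and otherwise fall back to a configuration adjustment.

```python
def choose_remediation(software_needed, patch_available):
    """Pick one of the three primary remediation methods (illustrative rule)."""
    if not software_needed:
        return "software removal"
    if patch_available:
        return "security patch installation"
    return "configuration adjustment"
```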


83. Who is involved in log management planning? Explain their responsibilities.

As part of the log management planning process, an organization should define the roles and responsibilities of individuals and teams who are expected to be involved in log management.  Teams and individual roles often involved in log management include the following:

System and network administrators, 
who are usually responsible for configuring logging on individual systems and network devices, analyzing those logs periodically, reporting on the results of log management activities, and performing regular maintenance of the logs and logging software

Security administrators, 
who are usually responsible for managing and monitoring the log management infrastructures, configuring logging on security devices (e.g., firewalls, network-based intrusion detection systems, antivirus servers), reporting on the results of log management activities, and assisting others with configuring logging and performing log analysis

Computer security incident response teams,
who use log data when handling some incidents

Application developers, 
who may need to design or customize applications so that they perform logging in accordance with the logging requirements and recommendations

Information security officers,
who may oversee the log management infrastructures

Chief information officers (CIO), 
who oversee the IT resources that generate, transmit, and store the logs

Auditors, 
who may use log data when performing audits

Individuals involved in the procurement of software 
that should or can generate computer security log data.

Organizations need to give particular consideration to the assignment of operational log management duties.  Some organizations, especially those with highly managed environments, may choose to perform all log management centrally instead of at the individual system level.  However, in most organizations, log management is not so centralized.  Typically, system, network, and security administrators are responsible for managing logging on their systems, performing regular analysis of their log data, documenting and reporting the results of their log management activities, and ensuring that log data is provided to the log management infrastructure in accordance with the organization’s policies.


84. What are the steps included in developing logging policies?

Organizations should develop policies that clearly define mandatory requirements and suggested recommendations for several aspects of log management, including the following:

Log generation

– Which types of hosts must or should perform logging
– Which host components must or should perform logging (e.g., OS, service, application)
– Which types of events each component must or should log (e.g., security events, network connections, authentication attempts)
– Which data characteristics must or should be logged for each type of event (e.g., username and source IP address for authentication attempts)
– How frequently each type of event must or should be logged (e.g., every occurrence, once for all instances in x minutes, once for every x instances, every instance after x instances)

Log transmission

– Which types of hosts must or should transfer logs to a log management infrastructure
– Which types of entries and data characteristics must or should be transferred from individual hosts to a log management infrastructure
– How log data must or should be transferred (e.g., which protocols are permissible), including out-of-band methods where appropriate (e.g., for standalone systems)
– How frequently log data should be transferred from individual hosts to a log management infrastructure (e.g., real-time, every 5 minutes, every hour)
– How the confidentiality, integrity, and availability of each type of log data must or should be protected while in transit, including whether a separate logging network should be used

Log storage and disposal

– How often logs should be rotated
– How the confidentiality, integrity, and availability of each type of log data must or should be protected while in storage (at both the system level and the infrastructure level)
– How long each type of log data must or should be preserved (at both the system level and the infrastructure level)
– How unneeded log data must or should be disposed of (at both the system level and the infrastructure level)
– How much log storage space must or should be available (at both the system level and the infrastructure level)
– How log preservation requests, such as a legal requirement to prevent the alteration and destruction of particular log records, must be handled (e.g., how the impacted logs must be marked, stored, and protected)

Log analysis

– How often each type of log data must or should be analyzed (at both the system level and the infrastructure level)
– Who must or should be able to access the log data (at both the system level and the infrastructure level), and how such accesses should be logged
– What must or should be done when suspicious activity or an anomaly is identified
– How the confidentiality, integrity, and availability of the results of log analysis (e.g., alerts, reports) must or should be protected while in storage (at both the system level and the infrastructure level) and in transit
– How inadvertent disclosures of sensitive information recorded in logs, such as passwords or the contents of e-mails, should be handled.
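Two of the policy areas above, generation (which events and data characteristics to record) and storage (bounding retained log volume through rotation), can be illustrated with Python's standard `logging` module. The size limits, retention counts, and record contents below are hypothetical policy values, not recommendations from the source.

```python
import logging
import logging.handlers
import os
import tempfile

# Hypothetical storage policy: at most 3 rotated files of 1 MB each,
# bounding total log storage for this component.
log_dir = tempfile.mkdtemp()
handler = logging.handlers.RotatingFileHandler(
    os.path.join(log_dir, "auth.log"), maxBytes=1_000_000, backupCount=3)
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))

auth_log = logging.getLogger("auth")
auth_log.setLevel(logging.INFO)
auth_log.addHandler(handler)

# Hypothetical generation policy: every authentication attempt is logged
# with the username and source IP address.
auth_log.info("login failure user=%s src=%s", "alice", "203.0.113.7")
```

Transmission to a central log management infrastructure could similarly be configured (for example with a syslog handler), subject to the protocol and confidentiality requirements the policy defines.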

An organization’s policies should also address who within an organization can establish and manage log management infrastructures.

Organizations should also ensure that other policies, guidelines, and procedures that have some relationship to logging incorporate and support these log management requirements and recommendations, and also comply with functional and operational requirements.  An example is ensuring that software procurement and custom application development activities take log management requirements into consideration.


85. List and explain the components of key management infrastructure.


Central Oversight Authority 

The KMI’s central oversight authority is the entity that provides overall KMI data synchronization and system security oversight for an organization or set of organizations. The central oversight authority 1) coordinates protection policy and practices (procedures) documentation, 2) may function as a holder of data provided by service agents, and 3) serves as the source for common and system-level information required by service agents (e.g., keying material and registration information, directory data, system policy specifications, and systemwide key compromise and certificate revocation information). As required by survivability or continuity of operations policies, central oversight facilities may be replicated at an appropriate remote site to function as a system backup.

Key Processing Facility(ies) 

Key processing services typically include one or more of the following:

• Acquisition or generation of public key certificates (where applicable),
• Initial generation and distribution of keying material,
• Maintenance of a database that maps user entities to an organization’s certificate/key structure,
• Maintenance and distribution of compromise key lists (CKLs) and/or certificate revocation lists (CRLs), and
• Generation of audit requests and the processing of audit responses as necessary for the prevention of undetected compromises.

An organization may use more than one key processing facility to provide these services (e.g., for purposes of inter-organizational interoperation). Key processing facilities can be added to meet new requirements or deleted when no longer needed and may support both public key and symmetric key establishment techniques.
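The CKL/CRL maintenance-and-distribution service listed above boils down to relying nodes consulting a published revocation list before trusting a certificate. A toy sketch (the serial numbers are made up, and a real implementation would verify the list's signature and freshness):

```python
# A key processing facility publishes the CRL; relying nodes check serials
# against it. Serial numbers below are invented for illustration.
crl = {"4F:2A:11", "90:0C:7D"}

def is_revoked(serial, revocation_list):
    """True if the certificate serial appears on the current CRL."""
    return serial in revocation_list
```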

Where public key cryptography is employed, the organization operating the key processing facility will generally perform most PKI registration authority, repository, and archive functions. The organization also performs at least some PKI certification authority functions. Actual X.509 public key certificates may be obtained from a government source (certification authorities generating identification, attribute, or encryption certificates) or a commercial external certification authority (usually a commercial infrastructure/CA that supplies/sells X.509 certificates). Commercial external certification authority certificates should be cross-certified by a government root CA.

A key processing facility may be distributed such that intermediary redistribution facilities maintain stores of keying material that exist in physical form (e.g., magnetic media, smart cards) and may also serve as a source for non-cryptographic products and services (e.g., software downloads for KMI-reliant users, usage documents, or policy authority).

All keys and non-cryptographic products that are electronically distributed to end users shall be encrypted for the end user or for intermediary redistribution services before transmission. Some key processing facilities may generate and produce human-readable key information and other key-related information that require physical distribution. Keys that are manually distributed shall either be encrypted or receive physical protection and be subject to controlled distribution (e.g., registered mail) between the key processing facility and the user. Part 1, Section 2.3.1 provides general guidance for key distribution. Newly deployed key processing facilities should be designed to support legacy and existing system requirements and should be designed to support future network services as they become available.

Service Agents 

Service agents support organizations’ KMIs as single points of access for other KMI nodes. All transactions initiated by client nodes are either processed by a service agent or forwarded to other nodes for processing. Service agents direct service requests from client nodes to key processing facilities, and when services are required from multiple processing facilities, coordinate services among the processing facilities to which they are connected. Service agents are employed by users to order keying material and services, retrieve keying material and services, and manage cryptographic material and public key certificates. A service agent may provide cryptographic material and/or certificates by utilizing specific key processing facilities for key and/or certificate generation. A service agent that supports a major organizational unit or geographic region may either access a central or inter-organizational key processing facility or employ local, dedicated processing facilities as required to support survivability, performance, or availability requirements (e.g., a commercial external certification authority).

Service agents may provide registration, directory, and data recovery services (i.e., key recovery), as well as access to relevant documentation, such as policy statements and infrastructure devices. Service agents may also process requests for keying material (e.g., user identification credentials), and assign and manage KMI user roles and privileges. A service agent may also provide interactive help desk services as required.

Client Nodes 

Client nodes are interfaces for managers, devices, and applications to access KMI functions, including the requesting of certificates and other keying material. They may include cryptographic modules, software, and procedures necessary to provide user access to the KMI. Client nodes interact with service agents to obtain cryptographic key services. Client nodes provide interfaces to end-user entities (e.g., encryption devices) for the distribution of keying material, for the generation of requests for keying material, for the receipt and forwarding (as appropriate) of compromised key lists (CKLs) and/or certificate revocation lists (CRLs), for the receipt of audit requests, and for the delivery of audit responses. Client nodes typically initiate requests for keying material in order to synchronize new or existing user entities with the current key structure, and receive encrypted keying material for distribution to end-user cryptographic devices (the content, i.e., the unencrypted keying material, is not usually accessible to human users or user-node interface processes). A client node can be a FIPS 140-2 compliant workstation executing KMI security software or a FIPS 140-2 compliant special-purpose device. Actual interactions between a client node and a service agent depend on whether the client node is a device, a manager, or a functional security application.

