
Sunday, August 16, 2015

ISM unit 2 question bank answers 55-60

QUESTION NUMBER 55-60

55. Recovering From a Security Compromise

Most organizations eventually face a successful compromise of one or more hosts on their network.
The first step in recovering from a compromise is to create and document the required policies and procedures for responding to successful intrusions before an intrusion occurs. The response procedures should outline the actions required to respond to a successful compromise of the Web server and the appropriate sequence of those actions (sequence can be critical). Most organizations already have a dedicated incident response team in place, which should be contacted immediately when there is suspicion or confirmation of a compromise. In addition, the organization may wish to ensure that some of its staff are knowledgeable in the fields of computer and network forensics.

A Web server administrator should follow the organization’s policies and procedures for incident handling, and the incident response team should be contacted for guidance before the organization takes any action after a suspected or confirmed security compromise. Examples of steps commonly performed after discovering a successful compromise are as follows:
• Report the incident to the organization’s computer incident response capability.
• Isolate the compromised systems or take other steps to contain the attack so that additional information can be collected.
• Consult expeditiously, as appropriate, with management, legal counsel, and law enforcement.
• Investigate similar hosts to determine if the attacker also has compromised other systems.
• Analyze the intrusion, including—
      – The current state of the server, starting with the most ephemeral data (e.g., current network connections, memory dump, file timestamps, logged-in users)
      – Modifications made to the server’s software and configuration
      – Modifications made to the data
      – Tools or data left behind by the attacker
      – System, intrusion detection, and firewall log files.
• Restore the server before redeploying it.
      – Either install a clean version of the OS, applications, necessary patches, and server content; or restore the server from backups (this option can be more risky because the backups may have been made after the compromise, and restoring from a compromised backup may still allow the attacker access to the server).
      – Disable unnecessary services.
      – Apply all patches.
      – Change all passwords (including on uncompromised hosts, if their passwords are believed to have been seen by the compromised server, or if the same passwords are used on other hosts).
      – Reconfigure network security elements (e.g., firewall, router, IDPS) to provide additional protection and notification.
• Test the server to ensure security.
• Reconnect the server to the network.
• Monitor the server and network for signs that the attacker is attempting to access the server or network again.
• Document lessons learned.
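
The analysis ordering above ("starting with the most ephemeral data") can be sketched as a small helper. This is a minimal Python sketch; the artifact names and volatility ranks are illustrative assumptions, not a standard list:

```python
# Collect the most ephemeral evidence first, as the analysis step recommends.
# Ranks are illustrative: lower rank = more volatile = collect sooner.
VOLATILITY_RANK = {
    "network_connections": 0,  # lost as soon as connections close
    "memory_dump": 1,          # lost on reboot
    "logged_in_users": 2,
    "file_timestamps": 3,      # can be overwritten by later activity
    "log_files": 4,            # persisted to disk, least volatile
}

def collection_order(artifacts):
    """Sort artifacts so the most ephemeral are collected first."""
    return sorted(artifacts, key=lambda a: VOLATILITY_RANK.get(a, 99))
```
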

Based on the organization’s policy and procedures, system administrators should decide whether to reinstall the OS of a compromised system or restore it from a backup. Factors that are often considered include the following:
• Level of access that the attacker gained (e.g., root, user, guest, system)
• Type of attacker (internal or external)
• Purpose of compromise (e.g., Web page defacement, illegal software repository, platform for other attacks, data exfiltration)
• Method used for the server compromise
• Actions of the attacker during and after the compromise (e.g., log files, intrusion detection reports)
• Duration of the compromise
• Extent of the compromise on the network (e.g., the number of hosts compromised)
• Results of consultation with management and legal counsel.

The lower the level of access gained by the intruder and the more the server administrator understands about the attacker’s actions, the less risk there is in restoring from a backup and patching the vulnerability. For incidents in which there is less known about the attacker’s actions and/or in which the attacker gains high-level access, it is recommended that the OS, server software, and other applications be reinstalled from the manufacturer’s original distribution media and that the server data be restored only from a known good backup.
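
That guidance can be condensed into a toy decision helper. This is only an illustration of the heuristic in the paragraph above; a real decision should follow the organization's policy and consultation with management and legal counsel:

```python
def recovery_strategy(access_level, actions_well_understood):
    """Toy heuristic: low-level access plus a well-understood intrusion
    favors restore-and-patch; anything else favors a full reinstall.
    access_level is one of "guest", "user", "root" (illustrative labels)."""
    if access_level in ("guest", "user") and actions_well_understood:
        return "restore from backup and patch the vulnerability"
    return ("reinstall OS and applications from original media; "
            "restore data from a known good backup")
```
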
If legal action is pursued, server administrators need to be aware of the guidelines for handling a host after a compromise. Consult legal counsel and relevant law enforcement authorities as appropriate.


56. Security Testing Servers

Periodic security testing of servers is critical. Without periodic testing, there is no assurance that current protective measures are working or that the security patch applied by the server administrator is functioning as advertised. Although a variety of security testing techniques exists, vulnerability scanning is the most common. Vulnerability scanning assists a server administrator in identifying vulnerabilities and verifying whether the existing security measures are effective. Penetration testing is also used, but it is used less frequently and usually only as part of an overall penetration test of the organization’s network.

Vulnerability Scanning
Vulnerability scanners are automated tools that are used to identify vulnerabilities and misconfigurations of hosts. Many vulnerability scanners also provide information about mitigating discovered vulnerabilities.

Vulnerability scanners attempt to identify vulnerabilities in the hosts scanned. Vulnerability scanners can help identify out-of-date software versions, missing patches, or system upgrades, and they can validate compliance with or deviations from the organization’s security policy. To accomplish this, vulnerability scanners identify OSs and major software applications running on hosts and match them with known vulnerabilities in their vulnerability databases.

However, vulnerability scanners have some significant weaknesses. Generally, they identify only surface vulnerabilities and are unable to address the overall risk level of a scanned Web server. Although the scan process itself is highly automated, vulnerability scanners can have a high false positive error rate (reporting vulnerabilities when none exist). This means an individual with expertise in Web server security and administration must interpret the results. Furthermore, vulnerability scanners cannot generally identify vulnerabilities in custom code or applications.

Vulnerability scanners rely on periodic updating of the vulnerability database to recognize the latest vulnerabilities. Before running any scanner, Web server administrators should install the latest updates to its vulnerability database. Some databases are updated more regularly than others (the frequency of updates should be a major consideration when choosing a vulnerability scanner).

Vulnerability scanners are often better at detecting well-known vulnerabilities than more esoteric ones because it is impossible for any one scanning product to incorporate all known vulnerabilities in a timely manner. In addition, manufacturers want to keep the speed of their scanners high (the more vulnerabilities detected, the more tests required, which slows the overall scanning process). Therefore, vulnerability scanners may be less useful to Web server administrators operating less popular Web servers, OSs, or custom-coded applications.

Vulnerability scanners provide the following capabilities:

• Identifying active hosts on a network
• Identifying active services (ports) on hosts and which of these are vulnerable
• Identifying applications and banner grabbing
• Identifying OSs
• Identifying vulnerabilities associated with discovered OSs, server software, and other applications
• Testing compliance with host application usage/security policies.

Organizations should conduct vulnerability scanning to validate that OSs and Web server applications are up-to-date on security patches and software versions. Vulnerability scanning is a labor-intensive activity that requires a high degree of human involvement to interpret the results. It may also be disruptive to operations by taking up network bandwidth, slowing network response times, and potentially affecting the availability of the scanned server or its applications. However, vulnerability scanning is extremely important for ensuring that vulnerabilities are mitigated as soon as possible, before they are discovered and exploited by adversaries. Vulnerability scanning should be conducted on a weekly to monthly basis. Many organizations also run a vulnerability scan whenever a new vulnerability database is released for the organization’s scanner application. Vulnerability scanning results should be documented and discovered deficiencies should be corrected.

Organizations should also consider running more than one vulnerability scanner. As previously discussed, no scanner is able to detect all known vulnerabilities; however, using two scanners generally increases the number of vulnerabilities detected. A common practice is to use one commercial and one freeware scanner. Network-based and host-based vulnerability scanners are available for free or for a fee.
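
Combining the output of two scanners is essentially a set union keyed by host and vulnerability identifier. A sketch (the finding-dictionary shape is an assumption for illustration):

```python
def merge_findings(scan_a, scan_b):
    """Union of findings from two scanners; duplicates (the same host and
    vulnerability ID reported by both tools) are counted once."""
    merged = {}
    for finding in list(scan_a) + list(scan_b):
        merged[(finding["host"], finding["vuln_id"])] = finding
    return list(merged.values())
```
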

Penetration Testing
“Penetration testing is security testing in which evaluators attempt to circumvent the security features of a system based on their understanding of the system design and implementation” [NISS99]. The purpose of penetration testing is to exercise system protections (particularly human response to attack indications) by using common tools and techniques developed by attackers. This testing is highly recommended for complex or critical systems.

Penetration testing can be an invaluable technique to any organization's information security program. However, it is a very labor-intensive activity and requires great expertise to minimize the risk to targeted systems. At a minimum, it may slow the organization's network response time because of network mapping and vulnerability scanning. Furthermore, the possibility exists that systems may be damaged or rendered inoperable in the course of penetration testing. Although this risk is mitigated by the use of experienced penetration testers, it can never be fully eliminated.

Penetration testing does offer the following benefits [NIST02b]:

• Tests the network using the same methodologies and tools employed by attackers
• Verifies whether vulnerabilities exist
• Goes beyond surface vulnerabilities and demonstrates how these vulnerabilities can be exploited iteratively to gain greater access
• Demonstrates that vulnerabilities are not purely theoretical
• Provides the “realism” necessary to address security issues
• Allows for testing of procedures and susceptibility of the human element to social engineering.


57. What is penetration testing?

Penetration Testing
“Penetration testing is security testing in which evaluators attempt to circumvent the security features of a system based on their understanding of the system design and implementation” [NISS99]. The purpose of penetration testing is to exercise system protections (particularly human response to attack indications) by using common tools and techniques developed by attackers. This testing is highly recommended for complex or critical systems.

Penetration testing can be an invaluable technique to any organization's information security program. However, it is a very labor-intensive activity and requires great expertise to minimize the risk to targeted systems. At a minimum, it may slow the organization's network response time because of network mapping and vulnerability scanning. Furthermore, the possibility exists that systems may be damaged or rendered inoperable in the course of penetration testing. Although this risk is mitigated by the use of experienced penetration testers, it can never be fully eliminated.

Penetration testing does offer the following benefits [NIST02b]:

• Tests the network using the same methodologies and tools employed by attackers
• Verifies whether vulnerabilities exist
• Goes beyond surface vulnerabilities and demonstrates how these vulnerabilities can be exploited iteratively to gain greater access
• Demonstrates that vulnerabilities are not purely theoretical
• Provides the “realism” necessary to address security issues
• Allows for testing of procedures and susceptibility of the human element to social engineering.


58. Write a note on Identification & Authentication Technologies.

For most systems, identification and authentication (I&A) is the first line of defense. I&A is a technical measure that prevents unauthorized people (or unauthorized processes) from entering a computer system.

I&A is a critical building block of computer security since it is the basis for most types of access control and for establishing user accountability. Access control often requires that the system be able to identify and differentiate among users. For example, access control is often based on least privilege, which refers to granting users only those accesses required to perform their duties. User accountability requires the linking of activities on a computer system to specific individuals and, therefore, requires the system to identify users.

Identification is the means by which a user provides a claimed identity to the system. Authentication is the means of establishing the validity of this claim.

Computer systems recognize people based on the authentication data the systems receive. Authentication presents several challenges: collecting authentication data, transmitting the data securely, and knowing whether the person who was originally authenticated is still the person using the computer system. For example, a user may walk away from a terminal while still logged on, and another person may start using it.

There are three means of authenticating a user's identity which can be used alone or in combination:
• something the individual knows (a secret- e.g., a password, Personal Identification Number (PIN), or cryptographic key);
• something the individual possesses (a token - e.g., an ATM card or a smart card);
• something the individual is (a biometric - e.g., such characteristics as a voice pattern, handwriting dynamics, or a fingerprint).

While it may appear that any of these means could provide strong authentication, there are problems associated with each. If people want to pretend to be someone else on a computer system, they can guess or learn that individual's password, or steal or fabricate tokens. Each method also has drawbacks for legitimate users and system administrators: users forget passwords and may lose tokens, and the administrative overhead of keeping track of I&A data and tokens can be substantial. Biometric systems have significant technical, user-acceptance, and cost problems as well.

I&A Based on Something the User Knows
The most common form of I&A is a user ID coupled with a password. This technique is based solely on something the user knows. There are other techniques besides conventional passwords that are based on knowledge, such as knowledge of a cryptographic key.
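
A common way to store such knowledge-based authentication data is a salted, iterated hash rather than the plaintext password. A minimal sketch using Python's standard library (the iteration count and salt size are illustrative choices):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, iterations=200_000):
    """Return (salt, digest) suitable for storage; the plaintext is never stored."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password, salt, stored_digest, iterations=200_000):
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, stored_digest)
```
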

I&A Based on Something the User Possesses
Although some techniques are based solely on something the user possesses, most of the techniques described in this section are combined with something the user knows. This combination can provide significantly stronger security than either something the user knows or possesses alone.
Objects that a user possesses for the purpose of I&A are called tokens. This section divides tokens into two categories: memory tokens and smart tokens.

I&A Based on Something the User Is
Biometric authentication technologies use the unique characteristics (or attributes) of an individual to authenticate that person's identity. These include physiological attributes (such as fingerprints, hand geometry, or retina patterns) or behavioral attributes (such as voice patterns and hand-written signatures). Biometric authentication technologies based upon these attributes have been developed for computer log-in applications.


59. List and explain the important implementation issues for I&A systems.

Some of the important implementation issues for I&A systems include administration, maintaining authentication, and single log-in.

Administration
Administration of authentication data is a critical element for all types of authentication systems. The administrative overhead associated with I&A can be significant. I&A systems need to create, distribute, and store authentication data. For passwords, this includes creating passwords, issuing them to users, and maintaining a password file. Token systems involve the creation and distribution of tokens/PINs and data that tell the computer how to recognize valid tokens/PINs. For biometric systems, this includes creating and storing profiles.

The administrative tasks of creating and distributing authentication data and tokens can be substantial. Identification data has to be kept current by adding new users and deleting former users. If the distribution of passwords or tokens is not controlled, system administrators will not know if they have been given to someone other than the legitimate user. It is critical that the distribution system ensure that authentication data is firmly linked with a given individual. Some of these issues are discussed in Chapter 10 under User Administration.

In addition, I&A administrative tasks should address lost or stolen passwords or tokens. It is often necessary to monitor systems to look for stolen or shared accounts.

Authentication data needs to be stored securely, as discussed with regard to accessing password files. The value of authentication data lies in the data's confidentiality, integrity, and availability. If confidentiality is compromised, someone may be able to use the information to masquerade as a legitimate user. If system administrators can read the authentication file, they can masquerade as another user. Many systems use encryption to hide the authentication data from the system administrators. If integrity is compromised, authentication data can be added or the system can be disrupted. If availability is compromised, the system cannot authenticate users, and the users may not be able to work.

Maintaining Authentication
So far, this chapter has discussed initial authentication only. It is also possible for someone to use a legitimate user's account after log-in. Many computer systems handle this problem by logging a user out or locking their display or session after a certain period of inactivity. However, these methods can affect productivity and can make the computer less user-friendly.
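
The inactivity timeout described above can be sketched as follows; the 15-minute default and the injectable clock (which keeps the behavior testable) are assumptions of the sketch:

```python
import time

class Session:
    """Lock the session when the gap since the last activity exceeds the idle limit."""
    def __init__(self, idle_limit_seconds=900, clock=time.monotonic):
        self.idle_limit = idle_limit_seconds
        self.clock = clock
        self.last_activity = clock()
        self.locked = False

    def touch(self):
        """Record user activity; returns False once the session is locked
        (re-authentication would then be required)."""
        now = self.clock()
        if now - self.last_activity > self.idle_limit:
            self.locked = True
        self.last_activity = now
        return not self.locked
```
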

Single Log-in
From an efficiency viewpoint, it is desirable for users to authenticate themselves only once and then to be able to access a wide variety of applications and data available on local and remote systems, even if those systems require users to authenticate themselves. This is known as single log-in. If the access is within the same host computer, then the use of a modern access control system (such as an access control list) should allow for a single log-in. If the access is across multiple platforms, then the issue is more complicated, as discussed below. There are three main techniques that can provide single log-in across multiple computers: host-to-host authentication, authentication servers, and user-to-host authentication.

Host-to-Host Authentication. Under a host-to-host authentication approach, users authenticate themselves once to a host computer. That computer then authenticates itself to other computers and vouches for the specific user. Host-to-host authentication can be done by passing an identification and password, by a challenge-response mechanism, or by another one-time password scheme. Under this approach, it is necessary for the computers to recognize each other and to trust each other.

Authentication Servers. When using an authentication server, users authenticate themselves to a special host computer (the authentication server). This computer then authenticates the user to other host computers the user wants to access. Under this approach, it is necessary for the computers to trust the authentication server. (The authentication server need not be a separate computer, although in some environments this may be a cost-effective way to increase the security of the server.) Authentication servers can be distributed geographically or logically, as needed, to reduce workload.

User-to-Host. A user-to-host authentication approach requires the user to log in to each host computer. However, a smart token (such as a smart card) can contain all authentication data and perform that service for the user. To users, it looks as though they were authenticated only once.
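
The authentication-server approach can be illustrated with a signed ticket: the server vouches for the user, and any host trusting the server's key can verify the claim without re-prompting the user. This is a bare sketch using an HMAC over a shared secret; real systems (e.g., Kerberos) add per-host session keys, expiry times, and replay protection:

```python
import base64
import hashlib
import hmac
import json

SERVER_KEY = b"secret-shared-with-trusting-hosts"  # illustrative key for the sketch

def issue_ticket(user):
    """Authentication server: sign a ticket vouching for the user."""
    payload = json.dumps({"user": user}).encode()
    sig = hmac.new(SERVER_KEY, payload, hashlib.sha256).hexdigest()
    return base64.b64encode(payload).decode() + "." + sig

def verify_ticket(ticket):
    """Trusting host: return the vouched-for user, or None if the ticket is invalid."""
    body, sig = ticket.rsplit(".", 1)
    payload = base64.b64decode(body)
    expected = hmac.new(SERVER_KEY, payload, hashlib.sha256).hexdigest()
    if hmac.compare_digest(sig, expected):
        return json.loads(payload)["user"]
    return None
```
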

Interdependencies
There are many interdependencies among I&A and other controls. Several of them have been discussed in the chapter.

Logical Access Controls. Access controls are needed to protect the authentication database. I&A is often the basis for access controls.

Audit. I&A is necessary if an audit log is going to be used for individual accountability.

Cryptography. Cryptography provides two basic services to I&A: it protects the confidentiality of authentication data, and it provides protocols for proving knowledge and/or possession of a token without having to transmit data that could be replayed to gain access to a computer system.

Cost Considerations
In general, passwords are the least expensive authentication technique and generally the least secure. They are already embedded in many systems. Memory tokens are less expensive than smart tokens, but have less functionality. Smart tokens with a human interface do not require readers, but are more inconvenient to use. Biometrics tends to be the most expensive.

For I&A systems, the cost of administration is often underestimated. Just because a system comes with a password system does not mean that using it is free. For example, there is significant overhead to administering the I&A system.


60. What are various criteria used by the system to determine if a request for access will be granted?

In deciding whether to permit someone to use a system resource, logical access controls examine whether the user is authorized for the type of access requested. (Note that this inquiry is usually distinct from the question of whether the user is authorized to use the system at all, which is usually addressed in an identification and authentication process.)

The system uses various criteria to determine if a request for access will be granted. They are typically used in some combination. Many of the advantages and complexities involved in implementing and managing access control are related to the different kinds of user accesses supported.

1) Identity
It is probably fair to say that the majority of access controls are based upon the identity of the user (either human or process), which is usually obtained through identification and authentication (I&A). The identity is usually unique, to support individual accountability, but can be a group identification or can even be anonymous. For example, public information dissemination systems may serve a large group called "researchers" in which the individual researchers are not known.

2) Roles
Access to information may also be controlled by the job assignment or function (i.e., the role) of the user who is seeking access. Examples of roles include data entry clerk, purchase officer, project leader, programmer, and technical editor. Access rights are grouped by role name, and the use of resources is restricted to individuals authorized to assume the associated role. An individual may be authorized for more than one role, but may be required to act in only a single role at a time. Changing roles may require logging out and then in again, or entering a role-changing command. Note that use of roles is not the same as shared-use accounts. An individual may be assigned a standard set of rights of a shipping department data entry clerk, for example, but the account would still be tied to that individual's identity to allow for auditing.

The use of roles can be a very effective way of providing access control. The process of defining roles should be based on a thorough analysis of how an organization operates and should include input from a wide spectrum of users in an organization.
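
A role-based check, including the "act in only a single role at a time" rule described above, can be sketched in a few lines; the role and permission names are invented for illustration:

```python
ROLE_PERMISSIONS = {
    "data_entry_clerk": {"read_orders", "create_orders"},
    "purchase_officer": {"read_orders", "approve_purchase"},
}
USER_ROLES = {"alice": {"data_entry_clerk", "purchase_officer"}}
ACTIVE_ROLE = {"alice": "data_entry_clerk"}  # acting in a single role at a time

def can(user, permission):
    """Permit an action only via the user's currently active, authorized role."""
    role = ACTIVE_ROLE.get(user)
    if role not in USER_ROLES.get(user, set()):
        return False
    return permission in ROLE_PERMISSIONS.get(role, set())
```
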

3) Location
Access to particular system resources may also be based upon physical or logical location. For example, in a prison, all users in areas to which prisoners are physically permitted may be limited to read-only access. Changing or deleting is limited to areas to which prisoners are denied physical access. The same authorized users (e.g., prison guards) would operate under significantly different logical access controls, depending upon their physical location. Similarly, users can be restricted based upon network addresses (e.g., users from sites within a given organization may be permitted greater access than those from outside).
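
The network-address form of location-based control maps naturally onto address ranges. A sketch with Python's `ipaddress` module (the internal range is an example):

```python
import ipaddress

INTERNAL = ipaddress.ip_network("10.0.0.0/8")  # example internal address range

def access_level(client_ip):
    """Grant broader access to clients on the internal network."""
    if ipaddress.ip_address(client_ip) in INTERNAL:
        return "read-write"
    return "read-only"
```
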

4) Time
Time-of-day or day-of-week restrictions are common limitations on access. For example, use of confidential personnel files may be allowed only during normal working hours and denied before 8:00 a.m., after 6:00 p.m., and all day on weekends and holidays.
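
The working-hours rule above translates directly into a predicate; the injectable timestamp keeps it testable:

```python
from datetime import datetime

def allowed_now(when=None):
    """Permit access only Monday-Friday, from 8:00 a.m. up to (but not including) 6:00 p.m."""
    when = when if when is not None else datetime.now()
    if when.weekday() >= 5:  # 5 = Saturday, 6 = Sunday
        return False
    return 8 <= when.hour < 18
```
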

5) Transaction
Another approach to access control can be used by organizations handling transactions (e.g., account inquiries). Phone calls may first be answered by a computer that requests that callers key in their account number and perhaps a PIN. Some routine transactions can then be made directly, but more complex ones may require human intervention. In such cases, the computer, which already knows the account number, can grant a clerk, for example, access to a particular account for the duration of the transaction. When completed, the access authorization is terminated. This means that users have no choice in which accounts they have access to, and can reduce the potential for mischief. It also eliminates employee browsing of accounts (e.g., those of celebrities or their neighbors) and can thereby heighten privacy.

6) Service Constraints
Service constraints refer to restrictions that depend upon parameters that may arise during use of the application or that are preestablished by the resource owner/manager. For example, a particular software package may only be licensed by the organization for five users at a time. Access would be denied to a sixth user, even if that user were otherwise authorized to use the application. Another type of service constraint is based upon application content or numerical thresholds. For example, an ATM may restrict transfers of money between accounts to certain dollar limits or may limit maximum withdrawals to $500 per day. Access may also be selectively permitted based on the type of service requested. For example, users of computers on a network may be permitted to exchange electronic mail but may not be allowed to log in to each other's computers.
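
The five-users-at-a-time license example is a simple counting constraint; a sketch:

```python
class LicensePool:
    """Service constraint: at most `limit` concurrent users of an application."""
    def __init__(self, limit=5):
        self.limit = limit
        self.active = set()

    def acquire(self, user):
        """Grant a seat, or refuse when the pool is exhausted."""
        if user in self.active:
            return True
        if len(self.active) >= self.limit:
            return False  # denied even if the user is otherwise authorized
        self.active.add(user)
        return True

    def release(self, user):
        self.active.discard(user)
```
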

7) Common Access Modes
In addition to considering criteria for when access should occur, it is also necessary to consider the types of access, or access modes. The concept of access modes is fundamental to access control. Common access modes, which can be used in both operating systems and applications, include the following:

• Read access provides users with the capability to view information in a system resource (such as a file, certain records, certain fields, or some combination thereof), but not to alter it (e.g., delete from, add to, or modify it in any way). One must assume that information can be copied and printed if it can be read (although perhaps only manually, such as by using a print-screen function and retyping the information into another file).
• Write access allows users to add to, modify, or delete information in system resources (e.g., files, records, programs). Normally, users have read access to anything to which they have write access.
• Execute privilege allows users to run programs.
• Delete access allows users to erase system resources (e.g., files, records, fields, programs). Note that if users have write access but not delete access, they could overwrite the field or file with gibberish or otherwise inaccurate information and, in effect, delete the information.

Other specialized access modes (more often found in applications) include:
• Create access allows users to create new files, records, or fields.
• Search access allows users to list the files in a directory.
Of course, these criteria can be used in conjunction with one another. For example, an organization may give authorized individuals write access to an application at any time from within the office but only read access during normal working hours if they dial in.
Depending upon the technical mechanisms available to implement logical access control, a wide variety of access permissions and restrictions are possible. No discussion can present all possibilities.
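
The dial-in example above combines location, time, and access mode; a sketch using simple mode flags (the flag values and location labels are invented for illustration):

```python
READ, WRITE = 1, 2  # access-mode flags, combined with bitwise OR

def permitted_modes(location, during_working_hours):
    """In-office users: read and write at any time.
    Dial-in users: read-only, and only during normal working hours."""
    if location == "office":
        return READ | WRITE
    if location == "dial-in" and during_working_hours:
        return READ
    return 0
```
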
