
Tuesday, August 25, 2015

ISM unit 3 question bank answers 96-101

QUESTION NUMBER 96-101

96. What is the need for log management? 




97. What are the challenges in log management?




98. Explain the tiers used in a log management infrastructure.

A log management infrastructure typically comprises the following three tiers:
Read more ...

ISM unit 3 question bank answers 91-95

QUESTION NUMBER 91-95

91. Write a short note on Key Management Policy.




92. Explain any six server security principles.

When addressing server security issues, it is an excellent idea to keep in mind the following general information security principles:
Read more ...

Monday, August 24, 2015

ISM unit 3 question bank answers 86-90

QUESTION NUMBER 86-90

86. Write a short note on key management policy. 

Each U.S. Government organization that manages cryptographic systems that are intended to protect sensitive information should base the management of those systems on an organizational policy
Read more ...

ISM unit 3 question bank answers 81-85

QUESTION NUMBER 81-85

81. What is The Patch and Vulnerability Group & what are their duties? 

The Patch and Vulnerability Group (PVG)

The PVG should be a formal group that incorporates representatives from information security and
Read more ...

Saturday, August 22, 2015

ISM unit 3 question bank answers 76-80

QUESTION NUMBER 76-80

76. What are the recommended capabilities of antivirus software?

Antivirus software is the most commonly used technical control for malware threat mitigation. There are many brands of antivirus software, with most providing similar protection through the following recommended capabilities:
Read more ...

ISM unit 3 question bank answers 71-75

QUESTION NUMBER 71-75

71. What are the various functions of log management infrastructure? 




72. Write a short note on Syslog Security.
Read more ...

ISM unit 3 question bank answers 66-70

QUESTION NUMBER 66-70

66. List the most commonly logged types of information and their potential benefits.

The following lists some of the most commonly logged types of information and the potential benefits of each:

Client requests and server responses,
Read more ...

Thursday, August 20, 2015

ISM unit 3 question bank answers 61-65

QUESTION NUMBER 61-65

61. What are the various components of PKI?

Functional elements of a public key infrastructure include certification authorities, registration authorities, repositories, and archives. The users of the PKI come in two flavors: certificate holders
Read more ...

Sunday, August 16, 2015

ISM unit 2 question bank answers 55-60

QUESTION NUMBER 55-60

55. Recovering From a Security Compromise

Most organizations eventually face a successful compromise of one or more hosts on their network.
Read more ...

ISM unit 2 question bank answers 50-54

QUESTION NUMBER 50-54

50. State IEEE 802.11 Network Components and explain its Architectural Models.

IEEE 802.11 has two fundamental architectural components, as follows:

• Station (STA). A STA is a wireless endpoint device. Typical examples of STAs are laptop
Read more ...

ISM unit 2 question bank answers 45-49

QUESTION NUMBER 45-49

45. What are the various policies based on applications, user identity and network activity?

Policies Based on Applications
Most early firewall work involved simply blocking unwanted or suspicious traffic at the network
Read more ...

Saturday, August 15, 2015

ISM unit 2 question bank answers 40-44

QUESTION NUMBER 40-44
Read more ...

ISM unit 2 question bank answers 35-39

QUESTION NUMBER 35-39

35. Explain how firewalls act as network address translators.

Most firewalls can perform NAT, which is sometimes called port address translation (PAT) or
Read more ...

Friday, August 14, 2015

ISM unit 2 question bank answers 30-34

QUESTION NUMBER 30-34

30. What are the typical components of a network-based IDPS?

A typical network-based IDPS is composed of sensors, one or more management servers, multiple
Read more ...

Thursday, August 13, 2015

ISM unit 2 question bank answers 25-29

QUESTION NUMBER 25-29

25. What are the various uses of IDPS technologies?

IDPSs are primarily focused on identifying possible incidents. For example, an IDPS could detect
Read more ...

Wednesday, August 12, 2015

ISM unit 5 question bank answers 132-136

QUESTION NUMBER 132-136

132. How is the collection of files done in forensic science?

Data Collection

The first step in the forensic process is to identify potential sources of data and acquire data from them.


Identifying Possible Sources of Data

The increasingly widespread use of digital technology for both professional and personal purposes has led to an abundance of data sources. The most obvious and common sources of data are desktop computers, servers, network storage devices, and laptops. These systems typically have internal drives that accept media, such as CDs and DVDs, and also have several types of ports (e.g., Universal Serial Bus [USB], Firewire, Personal Computer Memory Card International Association [PCMCIA]) to which external data storage media and devices can be attached. Examples of external storage forms that might be sources of data are thumb drives, memory and flash cards, optical discs, and magnetic disks. Standard computer systems also contain volatile data that is available temporarily (i.e., until the system is shut down or rebooted). In addition to computer-related devices, many types of portable digital devices (e.g., PDAs, cell phones, digital cameras, digital recorders, audio players) may also contain data. Analysts should be able to survey a physical area, such as an office, and recognize the possible sources of data.
Analysts should also think of possible data sources located in other places. For example, there are usually many sources of information within an organization regarding network activity and application usage. Information may also be recorded by other organizations, such as logs of network activity for an Internet service provider (ISP). Analysts should be mindful of the owner of each data source and the effect that this might have on collecting data. For example, getting copies of ISP records typically requires a court order. Analysts should also be aware of the organization’s policies, as well as legal considerations, regarding externally owned property at the organization’s facilities (for example, an employee’s personal laptop or a contractor’s laptop). The situation can become even more complicated if locations outside the organization’s control are involved, such as an incident involving a computer at a telecommuter’s home office. Sometimes it is simply not feasible to collect data from a primary data source; therefore, analysts should be aware of alternate data sources that might contain some or all of the same data, and should use those sources instead of the unattainable source.
Organizations can take ongoing proactive measures to collect data that may be useful for forensic purposes. For example, most OSs can be configured to audit and record certain types of events, such as authentication attempts and security policy changes, as part of normal operations. Audit records can provide valuable information, including the time that an event occurred and the origin of the event. Another helpful action is to implement centralized logging, which means that certain systems and applications forward copies of their logs to secure central log servers. Centralized logging prevents unauthorized users from tampering with logs and employing anti-forensic techniques to impede analysis. Performing regular backups of systems allows analysts to view the contents of the system as they were at a particular time. In addition, security monitoring controls such as intrusion detection software, antivirus software, and spyware detection and removal utilities can generate logs that show when and how an attack or intrusion took place.
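As a toy illustration of the centralized-logging idea, the Python sketch below collects records from several named loggers into one in-memory store. A real deployment would forward logs over the network to a hardened log server (e.g., via syslog); the system names here are invented for the example:

```python
import logging

class CentralLogStore(logging.Handler):
    """Stands in for a secure central log server: keeps one copy of every record."""
    def __init__(self):
        super().__init__()
        self.records = []

    def emit(self, record):
        self.records.append(self.format(record))

central = CentralLogStore()
central.setFormatter(logging.Formatter("%(name)s %(levelname)s %(message)s"))

# Two hypothetical "systems" forward copies of their log entries to the store
for system in ("webserver", "dbserver"):
    logger = logging.getLogger(system)
    logger.setLevel(logging.INFO)
    logger.addHandler(central)

logging.getLogger("webserver").info("authentication attempt for user alice")
logging.getLogger("dbserver").warning("security policy changed")
# central.records now holds both entries, even if a source host is later compromised
```

Because the central copy is written at the moment each event occurs, an attacker who later tampers with a source host's local logs cannot retroactively alter the centralized record.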
Another proactive data collecting measure is the monitoring of user behaviour, such as keystroke monitoring, which records the keyboard usage of a particular system. Although this measure can provide a valuable record of activity, it can also be a violation of privacy unless users are advised through organizational policy and login banners that such monitoring may be performed. Most organizations do not employ techniques such as keystroke monitoring except when gathering additional information on a suspected incident. Authority for performing such monitoring should be discussed with legal advisors and documented clearly in the organization’s policy. 

Acquiring the Data 

After identifying potential data sources, the analyst needs to acquire the data from the sources. Data acquisition should be performed using a three-step process: developing a plan to acquire the data, acquiring the data, and verifying the integrity of the acquired data. Although the following items provide an overview of these three steps, the specific details behind steps 2 and 3 vary based on the type of data being acquired.
1. Develop a plan to acquire the data. 
Developing a plan is an important first step in most cases because there are multiple potential data sources. The analyst should create a plan that prioritizes the sources, establishing the order in which the data should be acquired. Important factors for prioritization include the following:
Likely Value. Based on the analyst's understanding of the situation and previous experience in similar situations, the analyst should be able to estimate the relative likely value of each potential data source.
Volatility. Volatile data refers to data on a live system that is lost after a computer is powered down or due to the passage of time. Volatile data may also be lost as a result of other actions performed on the system. In many cases, acquiring volatile data should be given priority over non-volatile data. However, non-volatile data may also be somewhat dynamic in nature (e.g., log files that are overwritten as new events occur).
Amount of Effort Required. The amount of effort required to acquire different data sources may vary widely. The effort involves not only the time spent by analysts and others within the organization (including legal advisors) but also the cost of equipment and services (e.g., outside experts). For example, acquiring data from a network router would probably require much less effort than acquiring data from an ISP.


2. Acquire the data. 
If the data has not already been acquired by security tools, analysis tools, or other means, the general process for acquiring data involves using forensic tools to collect volatile data, duplicating non-volatile data sources to collect their data, and securing the original non-volatile data sources. Data acquisition can be performed either locally or over a network. Although it is generally preferable to acquire data locally because there is greater control over the system and data, local data collection is not always feasible (e.g., system in locked room, system in another location). When acquiring data over a network, decisions should be made regarding the type of data to be collected and the amount of effort to use. For instance, it might be necessary to acquire data from several systems through different network connections, or it might be sufficient to copy a logical volume from just one system.


3. Verify the integrity of the data. 
After the data has been acquired, its integrity should be verified. It is particularly important for an analyst to prove that the data has not been tampered with if it might be needed for legal reasons. Data integrity verification typically consists of using tools to compute the message digest of the original and copied data, then comparing the digests to make sure that they are the same.
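A minimal Python sketch of this verification step, using SHA-256 from the standard library (the file names and contents are invented for the example):

```python
import hashlib

def file_digest(path, algorithm="sha256", chunk_size=65536):
    """Compute a file's message digest in chunks, so large images fit in memory."""
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Make a working copy of an "original" source, then verify it is bit-identical
with open("original.img", "wb") as f:
    f.write(b"example disk image contents")
with open("original.img", "rb") as src, open("copy.img", "wb") as dst:
    dst.write(src.read())

digests_match = file_digest("original.img") == file_digest("copy.img")
```

In practice the analyst records the digest of the original media before imaging and recomputes it on the copy; any mismatch means the copy is not a faithful duplicate.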


Incident Response Considerations

When performing forensics during incident response, an important consideration is how and when the incident should be contained. Isolating the pertinent systems from external influences may be necessary to prevent further damage to the system and its data or to preserve evidence. In many cases, the analyst should work with the incident response team to make a containment decision (e.g., disconnecting network cables, unplugging power, increasing physical security measures, gracefully shutting down a host). This decision should be based on existing policies and procedures regarding incident containment, as well as the team’s assessment of the risk posed by the incident, so that the chosen containment strategy or combination of strategies sufficiently mitigates risk while maintaining the integrity of potential evidence whenever possible.
The organization should also consider in advance the impact that various containment strategies may have on the ability of the organization to operate effectively. For example, taking a critical system offline for several hours to acquire disk images and other data might adversely affect the ability of the organization to perform its necessary operations. Significant downtime could result in substantial monetary losses to the organization. Therefore, care should be taken to minimize disruptions to an organization’s operations.


133. What is the need for forensics?

The Need for Forensics
Over the last decade, the number of crimes that involve computers has grown, spurring an increase in companies and products that aim to assist law enforcement in using computer-based evidence to determine the who, what, where, when, and how for crimes. As a result, computer and network forensics has evolved to assure proper presentation of computer crime evidentiary data into court. Forensic tools and techniques are most often thought of in the context of criminal investigations and computer security incident handling used to respond to an event by investigating suspect systems, gathering and preserving evidence, reconstructing events, and assessing the current state of an event.
However, forensic tools and techniques are also useful for many other types of tasks, such as the following:

Operational Troubleshooting.
Many forensic tools and techniques can be applied to troubleshooting operational issues, such as finding the virtual and physical location of a host with an incorrect network configuration, resolving a functional problem with an application, and recording and reviewing the current OS and application configuration settings for a host.

Log Monitoring.
Various tools and techniques can assist in log monitoring, such as analyzing log entries and correlating log entries across multiple systems. This can assist in incident handling, identifying policy violations, auditing, and other efforts.

Data Recovery.
There are dozens of tools that can recover lost data from systems, including data that has been accidentally or purposely deleted or otherwise modified. The amount of data that can be recovered varies on a case-by-case basis.

Data Acquisition.
Some organizations use forensics tools to acquire data from hosts that are being redeployed or retired. For example, when a user leaves an organization, the data from the user's workstation can be acquired and stored in case it is needed in the future. The workstation's media can then be sanitized to remove all of the original user's data.

Due Diligence/Regulatory Compliance.
Existing and emerging regulations require many organizations to protect sensitive information and maintain certain records for audit purposes. Also, when protected information is exposed to other parties, organizations may be required to notify other agencies or impacted individuals. Forensics can help organizations exercise due diligence and comply with such requirements.

Regardless of the situation, the forensic process comprises the following basic phases:

Collection.
The first phase in the process is to identify, label, record, and acquire data from the possible sources of relevant data, while following guidelines and procedures that preserve the integrity of the data. Collection is typically performed in a timely manner because of the likelihood of losing dynamic data such as current network connections, as well as losing data from battery-powered devices (e.g., cell phones, PDAs).

Examination.
Examinations involve forensically processing large amounts of collected data using a combination of automated and manual methods to assess and extract data of particular interest, while preserving the integrity of the data.

Analysis.
The next phase of the process is to analyze the results of the examination, using legally justifiable methods and techniques, to derive useful information that addresses the questions that were the impetus for performing the collection and examination.

Reporting.
The final phase is reporting the results of the analysis, which may include describing the actions used, explaining how tools and procedures were selected, determining what other actions need to be performed (e.g., forensic examination of additional data sources, securing identified vulnerabilities, improving existing security controls), and providing recommendations for improvement to policies, guidelines, procedures, tools, and other aspects of the forensic process. The formality of the reporting step varies greatly depending on the situation.


134. What are the key recommendations on establishing and organizing a forensic capability?

The key recommendations on establishing and organizing a forensic capability are as follows:

Organizations should have a capability to perform computer and network forensics.
Forensics is needed for various tasks within an organization, including investigating crimes and inappropriate behavior, reconstructing computer security incidents, troubleshooting operational problems, supporting due diligence for audit record maintenance, and recovering from accidental system damage. Without such a capability, an organization will have difficulty determining what events have occurred within its systems and networks, such as exposures of protected, sensitive data. Also, handling evidence in a forensically sound manner puts decision makers in a position where they can confidently take the necessary actions.

Organizations should determine which parties should handle each aspect of forensics.
Most organizations rely on a combination of their own staff and external parties to perform forensic tasks. Organizations should decide which parties should take care of which tasks based on skills and abilities, cost, response time, and data sensitivity.

Incident handling teams should have robust forensic capabilities.
More than one team member should be able to perform each typical forensic activity. Hands-on exercises and IT and forensic training courses can be helpful in building and maintaining skills, as can demonstrations of new tools and technologies.

Many teams within an organization should participate in forensics.
Individuals performing forensic actions should be able to reach out to other teams and individuals within an organization, as needed, for additional assistance. Examples of teams that may provide assistance in these efforts include IT professionals, management, legal advisors, human resources personnel, auditors, and physical security staff. Members of these teams should understand their roles and responsibilities in forensics, receive training and education on forensic-related policies, guidelines, and procedures, and be prepared to cooperate with and assist others on forensic actions.

Forensic considerations should be clearly addressed in policies.
At a high level, policies should allow authorized personnel to monitor systems and networks and perform investigations for legitimate reasons under appropriate circumstances. Organizations may also have a separate forensic policy for incident handlers and others with forensic roles that provides more detailed rules for appropriate behavior. Everyone who may be called upon to assist with any forensic efforts should be familiar with and understand the forensic policy. Additional policy considerations are as follows:

• Forensic policy should clearly define the roles and responsibilities of all people performing or assisting with the organization's forensic activities. The policy should include all internal and external parties that may be involved and should clearly indicate who should contact which parties under different circumstances.
• The organization's policies, guidelines, and procedures should clearly explain what forensic actions should and should not be performed under normal and special circumstances and should address the use of anti-forensic tools and techniques. Policies, guidelines, and procedures should also address the handling of inadvertent exposures of sensitive information.
• Incorporating forensic considerations into the information system life cycle can lead to more efficient and effective handling of many incidents. Examples include performing auditing on hosts and establishing data retention policies that support performing historical reviews of system and network activity.

Organizations should create and maintain guidelines and procedures for performing forensic tasks.
The guidelines should include general methodologies for investigating an incident using forensic techniques, and step-by-step procedures should explain how to perform routine tasks. The guidelines and procedures should support the admissibility of evidence into legal proceedings. Because electronic logs and other records can be altered or otherwise manipulated, organizations should be prepared, through their policies, guidelines, and procedures, to demonstrate the reliability and integrity of such records. The guidelines and procedures should also be reviewed regularly and maintained so that they are accurate.


135. List various phases in forensics process. Explain in short.

Refer to question numbers 130 and 121.



136. Explain the two techniques used to copy files from media.

Refer to question number 124.

Read more ...

ISM unit 5 question bank answers 127-131

QUESTION NUMBER 127-131

127. What are the control objectives of ISO 17799 standard?

ISO 17799 is an information security code of practice. It includes a number of sections, covering a wide range of security issues. Very broadly, the objectives of these are as follows:

1. Risk Assessment and Treatment
This section was an addition to the latest version, and deals with the fundamentals of security risk analysis.

2. Security Policy

Objective: To provide management direction and support for information security

3. Organizing Information Security
Objectives:
a) To manage information security within the organization
b) Maintain the security of information and processing facilities with respect to external parties.

4. Asset Management
Objectives:
a) Achieve and maintain appropriate protection of organizational assets.
b) Ensure that information receives an appropriate level of protection.

5. Human Resources Security
Objectives:
a) Ensure that employees, contractors and third parties are suitable for the jobs they are considered for and understand their responsibilities, and reduce the risk of abuse (theft, misuse, etc.).
b) Ensure that the above are aware of IS threats and their responsibilities, and able to support the organization's security policies
c) Ensure that the above exit the organization in an orderly and controlled manner.

6. Physical and Environmental Security
Objectives:
a) Prevent unauthorized physical access, interference and damage to the organization's information and premises.
b) Prevent loss, theft and damage of assets
c) Prevent interruption to the organization's activities.

7. Communications and Operations Management
Objectives:
a) Ensure the secure operation of information processing facilities
b) Maintain the appropriate level of information security and service delivery, aligned with 3rd party agreements
c) Minimize the risk of systems failures
d) Protect the integrity of information and software
e) Maintain the availability and integrity of information and processing facilities
f) Ensure the protection of information in networks and of the supporting infrastructure
g) Prevent unauthorized disclosure, modification, removal or destruction of assets.
h) Prevent unauthorized disruption of business activities.
i) Maintain the security of information and/or software exchanged internally and externally.
j) Ensure the security of e-commerce services
k) Detect unauthorized information processing activities

8. Access Control
Objectives:
a) Control access to information
b) Ensure authorized user access
c) Prevent unauthorized access to information systems
d) Prevent unauthorized user access and compromise of information and processing facilities
e) Prevent unauthorized access to networked services
f) Prevent unauthorized access to operating systems
g) Prevent unauthorized access to information within application systems
h) Ensure information security with respect to mobile computing and teleworking facilities

9. Information Systems Acquisition, Development and Maintenance
Objectives:
a) Ensure that security is an integral part of information systems
b) Prevent loss, errors or unauthorized modification/use of information within applications
c) Protect the confidentiality, integrity or authenticity of information via cryptography
d) Ensure the security of system files
e) Maintain the security of application system information and software
f) Reduce/manage risks resulting from exploitation of published vulnerabilities

10. Information Security Incident Management
Objectives:
a) Ensure that information security events and weaknesses are communicated in a manner allowing timely corrective action to be taken
b) Ensure a consistent and effective approach is applied to the management of IS issues

11. Business Continuity Management
Objectives:
a) Counteract interruptions to business activities and protect critical processes from the effects of major failures/disasters
b) Ensure timely resumption of the above

12. Compliance
Objectives:
a) Avoid the breach of any law, regulatory or contractual obligation and of any security requirement.
b) Ensure systems comply with internal security policies/standards
c) Maximize the effectiveness of the systems audit process and minimize interference to and from it


128. What is the functionality of NMAP tool?

Nmap (Network Mapper) is a security scanner originally written by Gordon Lyon (also known by his pseudonym Fyodor Vaskovich) used to discover hosts and services on a computer network, thus creating a "map" of the network. To accomplish its goal, Nmap sends specially crafted packets to the target host and then analyzes the responses.

The software provides a number of features for probing computer networks, including host discovery and service and operating system detection. These features are extensible by scripts that provide more advanced service detection, vulnerability detection, and other features. Nmap is also capable of adapting to network conditions including latency and congestion during a scan. Nmap is under development and refinement by its user community.

Nmap was originally a Linux-only utility, but it was ported to Microsoft Windows, Solaris, HP-UX, BSD variants (including Mac OS X), AmigaOS, and SGI IRIX. Linux is the most popular platform, followed closely by Windows.

Provide nmap with a TCP/IP address, and it will identify any open "doors" or ports that might be available on that remote TCP/IP device. The real power behind nmap is the large number of scanning techniques and options available: each scan can be customized to be as blatantly obvious or as invisible as possible, and some scans can forge the scanner's identity to make it appear that a separate computer is scanning the network, or simulate multiple scanning decoys on the network.

Nmap is a very powerful utility that can be used to:

  • Detect the live hosts on the network (host discovery)
  • Detect the open ports on the hosts (port discovery or enumeration)
  • Detect the software and version running on each open port (service discovery)
  • Detect the operating system, hardware address, and the software version
  • Detect vulnerabilities and security holes (Nmap scripts)
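To illustrate the basic principle behind the simplest of these techniques, a TCP connect scan, here is a minimal Python sketch. It opens a throwaway local listener so the scan has a known open port to find; this shows the idea only and is not a substitute for Nmap's far more sophisticated scan types:

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Try a full TCP connection to each port; return those that accept (open)."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the handshake succeeded
                open_ports.append(port)
    return open_ports

# Start a throwaway listener so the scan has a known open port to discover
server = socket.socket()
server.bind(("127.0.0.1", 0))          # port 0: the OS assigns a free port
server.listen(1)
listening_port = server.getsockname()[1]

found = scan_ports("127.0.0.1", [listening_port])
server.close()
```

A connect scan like this completes the full TCP handshake and is therefore easy for the target to log; Nmap's stealthier options (e.g., SYN scanning) exist precisely to avoid that visibility.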

129. State the features of NMAP.

Nmap features include :

Host discovery
– Identifying hosts on a network. For example, listing the hosts that respond to TCP and/or ICMP requests or have a particular port open.

Port scanning
– Enumerating the open ports on target hosts.

Version detection
– Interrogating network services on remote devices to determine application name and version number.

OS detection
– Determining the operating system and hardware characteristics of network devices.

Scriptable interaction with the target
– Using the Nmap Scripting Engine (NSE) and the Lua programming language.

Nmap can provide further information on targets, including reverse DNS names, device types, and MAC addresses.

Typical uses of Nmap:

• Auditing the security of a device or firewall by identifying the network connections which can be made to, or through it.
• Identifying open ports on a target host in preparation for auditing.
• Network inventory, network mapping, maintenance and asset management.
• Auditing the security of a network by identifying new servers.
• Generating traffic to hosts on a network.


130. What are the basic phases of forensic process? Give a brief overview of it.

The most common goal of performing forensics is to gain a better understanding of an event of interest by finding and analyzing the facts related to that event.

Forensics may be needed in many different situations, such as evidence collection for legal proceedings and internal disciplinary actions, and the handling of malware incidents and unusual operational problems. Regardless of the need, forensics should be performed using the four-phase process described below.


The exact details of these steps may vary based on the specific need for forensics; the organization’s policies, guidelines, and procedures should indicate any variations from the standard procedure.

During collection, data related to a specific event is identified, labeled, recorded, and collected, and its integrity is preserved.

In the second phase, examination, forensic tools and techniques appropriate to the types of data that were collected are executed to identify and extract the relevant information from the collected data while protecting its integrity. Examination may use a combination of automated tools and manual processes.

The next phase, analysis, involves analyzing the results of the examination to derive useful information that addresses the questions that were the impetus for performing the collection and examination.

The final phase involves reporting the results of the analysis, which may include describing the actions performed, determining what other actions need to be performed, and recommending improvements to policies, guidelines, procedures, tools, and other aspects of the forensic process.


Data Collection
The first step in the forensic process is to identify potential sources of data and acquire data from them. Identifying Possible Sources of Data describes the variety of data sources available and discusses actions that organizations can take to support the ongoing collection of data for forensic purposes. Section Acquiring the Data describes the recommended steps for collecting data, including additional actions necessary to support legal or internal disciplinary proceedings. Incident Response Considerations discusses incident response considerations, emphasizing the need to weigh the value of collected data against the costs and impact to the organization of the collection process.

Examination
After data has been collected, the next phase is to examine the data, which involves assessing and extracting the relevant pieces of information from the collected data. This phase may also involve bypassing or mitigating OS or application features that obscure data and code, such as data compression, encryption, and access control mechanisms. An acquired hard drive may contain hundreds of thousands of data files; identifying the data files that contain information of interest, including information concealed through file compression and access control, can be a daunting task. In addition, data files of interest may contain extraneous information that should be filtered. For example, yesterday’s firewall log might hold millions of records, but only five of the records might be related to the event of interest.

Analysis
Once the relevant information has been extracted, the analyst should study and analyze the data to draw conclusions from it. The foundation of forensics is using a methodical approach to reach appropriate conclusions based on the available data or determine that no conclusion can yet be drawn. The analysis should include identifying people, places, items, and events, and determining how these elements are related so that a conclusion can be reached. Often, this effort will include correlating data among multiple sources. For instance, a network intrusion detection system (IDS) log may link an event to a host, the host audit logs may link the event to a specific user account, and the host IDS log may indicate what actions that user performed. Tools such as centralized logging and security event management software can facilitate this process by automatically gathering and correlating the data. Comparing system characteristics to known baselines can identify various types of changes made to the system.

Reporting
The final phase is reporting, which is the process of preparing and presenting the information resulting from the analysis phase. Many factors affect reporting, including the following:
Alternative Explanations.
When the information regarding an event is incomplete, it may not be possible to arrive at a definitive explanation of what happened. When an event has two or more plausible explanations, each should be given due consideration in the reporting process. Analysts should use a methodical approach to attempt to prove or disprove each possible explanation that is proposed.

Audience Consideration.
Knowing the audience to which the data or information will be shown is important. An incident requiring law enforcement involvement requires highly detailed reports of all information gathered, and may also require copies of all evidentiary data obtained. A system administrator might want to see network traffic and related statistics in great detail. Senior management might simply want a high-level overview of what happened, such as a simplified visual representation of how the attack occurred, and what should be done to prevent similar incidents.

Actionable Information.
Reporting also includes identifying actionable information gained from data that may allow an analyst to collect new sources of information. For example, a list of contacts may be developed from the data that might lead to additional information about an incident or crime. Also, information might be obtained that could prevent future events, such as a backdoor on a system that could be used for future attacks, a crime that is being planned, a worm scheduled to start spreading at a certain time, or a vulnerability that could be exploited.


131. Write a short note on File Systems.

Before media can be used to store files, the media must usually be partitioned and formatted into logical volumes. Partitioning is the act of logically dividing a media into portions that function as physically separate units. A logical volume is a partition or a collection of partitions acting as a single entity that has been formatted with a filesystem. Some media types, such as floppy disks, can contain at most one partition (and consequently, one logical volume). The format of the logical volumes is determined by the selected filesystem.

A filesystem defines the way that files are named, stored, organized, and accessed on logical volumes. Many different filesystems exist, each providing unique features and data structures. However, all filesystems share some common traits. First, they use the concepts of directories and files to organize and store data. Directories are organizational structures that are used to group files together. In addition to files, directories may contain other directories called subdirectories. Second, filesystems use some data structure to point to the location of files on media. In addition, they store each data file written to media in one or more file allocation units. These are referred to as clusters by some filesystems (e.g., File Allocation Table [FAT], NT File System [NTFS]) and as blocks by other filesystems (e.g., UNIX and Linux). A file allocation unit is simply a group of sectors, which are the smallest units that can be accessed on media.
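The relationship between file size, file allocation units, and slack space can be sketched in a few lines; the 4 KB cluster size below is just an illustrative value, since actual allocation unit sizes depend on the filesystem and volume.

```python
# Sketch of how file allocation units introduce slack space.
# The 4096-byte cluster size is an example value, not a fixed standard.
def allocation_info(file_size: int, cluster_size: int = 4096) -> dict:
    """Return the clusters used and slack bytes for a file of file_size bytes."""
    clusters = max(1, -(-file_size // cluster_size))  # ceiling division
    allocated = clusters * cluster_size
    return {
        "clusters": clusters,
        "allocated_bytes": allocated,
        "slack_bytes": allocated - file_size,
    }

info = allocation_info(10_000)  # a 10,000-byte file on 4 KB clusters
print(info)  # 3 clusters, 12,288 bytes allocated, 2,288 bytes of slack
```

Slack space matters forensically because the unused tail of a file's last allocation unit can retain remnants of previously deleted files.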

Some commonly used filesystems are as follows:

FAT12.
FAT12 is used only on floppy disks and FAT volumes smaller than 16 MB. FAT12 uses a 12-bit file allocation table entry to address an entry in the filesystem.

FAT16.
MS-DOS, Windows 95/98/NT/2000/XP, Windows Server 2003, and some UNIX OSs support FAT16 natively. FAT16 is also commonly used for multimedia devices such as digital cameras and audio players. FAT16 uses a 16-bit file allocation table entry to address an entry in the filesystem. FAT16 volumes are limited to a maximum size of 2 GB in MS-DOS and Windows 95/98. Windows NT and newer OSs increase the maximum volume size for FAT16 to 4 GB.

FAT32.
Windows 95 Original Equipment Manufacturer (OEM) Service Release 2 (OSR2), Windows 98/2000/XP, and Windows Server 2003 support FAT32 natively, as do some multimedia devices. FAT32 uses a 32-bit file allocation table entry to address an entry in the filesystem. The maximum FAT32 volume size is 2 terabytes (TB).

NTFS.
Windows NT/2000/XP and Windows Server 2003 support NTFS natively. NTFS is a recoverable filesystem, which means that it can automatically restore the consistency of the filesystem when errors occur. In addition, NTFS supports data compression and encryption, and allows user and group-level access permissions to be defined for data files and directories.The maximum NTFS volume size is 2 TB.

High-Performance File System (HPFS).
HPFS is supported natively by OS/2 and can be read by Windows NT 3.1, 3.5, and 3.51. HPFS builds on the directory organization of FAT by providing automatic sorting of directories. In addition, HPFS reduces the amount of lost disk space by utilizing smaller units of allocation. The maximum HPFS volume size is 64 GB.

Second Extended Filesystem (ext2fs).
ext2fs is supported natively by Linux. It supports standard UNIX file types and filesystem checks to ensure filesystem consistency. The maximum ext2fs volume size is 4 TB.

Third Extended Filesystem (ext3fs).
ext3fs is supported natively by Linux. It is based on the ext2fs filesystem and provides journaling capabilities that allow consistency checks of the filesystem to be performed quickly on large amounts of data. The maximum ext3fs volume size is 4 TB.

ReiserFS.
ReiserFS is supported by Linux and is the default filesystem for several common versions of Linux. It offers journaling capabilities and is significantly faster than the ext2fs and ext3fs filesystems. The maximum volume size is 16 TB.

Hierarchical File System (HFS).
HFS is supported natively by Mac OS. HFS is mainly used in older versions of Mac OS but is still supported in newer versions. The maximum HFS volume size under Mac OS 6 and 7 is 2 GB. The maximum HFS volume size in Mac OS 7.5 is 4 GB. Mac OS 7.5.2 and newer Mac OSs increase the maximum HFS volume size to 2 TB.

HFS Plus.
HFS Plus is supported natively by Mac OS 8.1 and later and is a journaling filesystem under Mac OS X. It is the successor to HFS and provides numerous enhancements, such as long filename support and Unicode filename support for international filenames. The maximum HFS Plus volume size is 2 TB.

UNIX File System (UFS).
UFS is supported natively by several types of UNIX OSs, including Solaris, FreeBSD, OpenBSD, and Mac OS X. However, most OSs have added proprietary features, so the details of UFS differ among implementations.

Compact Disk File System (CDFS).
As the name indicates, the CDFS filesystem is used for CDs.

International Organization for Standardization (ISO) 9660 and Joliet.
The ISO 9660 filesystem is commonly used on CD-ROMs. Another popular CD-ROM filesystem, Joliet, is a variant of ISO 9660. ISO 9660 supports filename lengths of up to 32 characters, whereas Joliet supports up to 64 characters. Joliet also supports Unicode characters within filenames.

Universal Disk Format (UDF).
UDF is the filesystem used for DVDs and is also used for some CDs.



Tuesday, August 11, 2015

ISM unit 5 question bank answers 122-126

QUESTION NUMBER 122-126

122. Write a note on forensic toolkit.

Analysts should have access to various tools that enable them to perform examinations and analysis of data, as well as some collection activities. Many forensic products allow the analyst to perform a wide range of processes to analyze files and applications, as well as collecting files, reading disk images, and extracting data from files. Most analysis products also offer the ability to generate reports and to log all errors that occurred during the analysis. Although these products are invaluable in performing analysis, it is critical to understand which processes should be run to answer particular questions about the data. An analyst may need to provide a quick response or just answer a simple question about the collected data. In these cases, a complete forensic evaluation may not be necessary or even feasible. The forensic toolkit should contain applications that can accomplish data examination and analysis in many ways and can be run quickly and efficiently from floppy disks, CDs, or a forensic workstation. The following processes are among those that an analyst should be able to perform with a variety of tools:

Using File Viewers.
Using viewers instead of the original source applications to display the contents of certain types of files is an important technique for scanning or previewing data, and is more efficient because the analyst does not need native applications for viewing each type of file. Various tools are available for viewing common types of files, and there are also specialized tools solely for viewing graphics. If available file viewers do not support a particular file format, the original source application should be used; if this is not available, then it may be necessary to research the file’s format and manually extract the data from the file.

Uncompressing Files.
Compressed files may contain files with useful information, as well as other compressed files. Therefore, it is important that the analyst locate and extract compressed files. Uncompressing files should be performed early in the forensic process to ensure that the contents of compressed files are included in searches and other actions. However, analysts should keep in mind that compressed files might contain malicious content, such as compression bombs, which are files that have been repeatedly compressed, typically dozens or hundreds of times. Compression bombs can cause examination tools to fail or consume considerable resources; they might also contain malware and other malicious payloads. Although there is no definite way to detect compression bombs before uncompressing a file, there are ways to minimize their impact. For instance, the examination system should use up-to-date antivirus software and should be standalone to limit the effects to just that system. In addition, an image of the examination system should be created so that, if needed, the system can be restored.
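One way to reduce that impact, sketched below for ZIP archives using Python's standard library, is to compare an archive's claimed uncompressed size against its actual size before extracting anything; the 100:1 ratio threshold is an arbitrary example cutoff, not a standard value.

```python
# Sketch: flag archives whose claimed uncompressed size vastly exceeds
# their on-disk size, before extracting them. The 100:1 threshold is an
# arbitrary example, not a standard value.
import io
import zipfile

def looks_like_compression_bomb(zip_bytes: bytes, max_ratio: int = 100) -> bool:
    """True if the archive claims far more data than its own size suggests."""
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        claimed = sum(info.file_size for info in zf.infolist())
    return claimed > max_ratio * len(zip_bytes)

# Demonstration: 10 MB of zeros compresses down to a few kilobytes.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
    zf.writestr("zeros.bin", b"\x00" * 10_000_000)
print(looks_like_compression_bomb(buf.getvalue()))  # True
```

This check inspects only the archive's directory entries, so it costs almost nothing compared to uncompressing the contents.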

Graphically Displaying Directory Structures.
This practice makes it easier and faster for analysts to gather general information about the contents of media, such as the type of software installed and the likely technical aptitude of the user(s) who created the data. Most products can display Windows, Linux, and UNIX directory structures, whereas other products are specific to Macintosh directory structures.

Identifying Known Files.
The benefit of finding files of interest is obvious, but it is also often beneficial to eliminate unimportant files, such as known good OS and application files, from consideration. Analysts should use validated hash sets, such as those created by NIST's National Software Reference Library (NSRL) project or personally created hash sets that have been validated, as a basis for identifying known benign and malicious files. Hash sets typically use the SHA-1 and MD5 algorithms to establish message digest values for each known file.
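A minimal sketch of hash-set matching with Python's hashlib; the "known" sets here are built inline from made-up byte strings for illustration, whereas a real deployment would load validated sets such as the NSRL's.

```python
# Sketch: classifying files by comparing their digests against hash sets.
# The sets below are derived from made-up contents purely for illustration.
import hashlib

def file_digest(data: bytes, algorithm: str = "sha1") -> str:
    """Hex digest of the given bytes using the named algorithm."""
    return hashlib.new(algorithm, data).hexdigest()

KNOWN_BENIGN = {file_digest(b"standard OS library contents")}
KNOWN_MALICIOUS = {file_digest(b"worm payload bytes")}

def classify(data: bytes) -> str:
    digest = file_digest(data)
    if digest in KNOWN_MALICIOUS:
        return "known-malicious"
    if digest in KNOWN_BENIGN:
        return "known-benign"
    return "unknown"

print(classify(b"standard OS library contents"))  # known-benign
print(classify(b"some user document"))            # unknown
```

Filtering out known-benign files this way can shrink an examination from hundreds of thousands of files down to the small set that actually needs review.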

Performing String Searches and Pattern Matches.
String searches aid in perusing large amounts of data to find key words or strings. Various searching tools are available that can use Boolean, fuzzy logic, synonyms and concepts, stemming, and other search methods. Examples of common searches include searching for multiple words in a single file and searching for misspelled versions of certain words. Developing concise sets of search terms for common situations can help the analyst reduce the volume of information to review. Some considerations or possible difficulties in performing string searches are as follows:

• Some proprietary file formats cannot be string searched without additional tools. In addition, compressed, encrypted, and password-protected files require additional pre-processing before a string search.

• The use of multi-character data sets that include foreign or Unicode characters can cause problems with string searches; some searching tools attempt to overcome this by providing language translation functions.

• Another possible issue is the inherent limitations of the search tool or algorithm. For example, a match might not be found for a search string if part of the string resided in one cluster and the rest of the string resided in a nonadjacent cluster. Similarly, some search tools might report a false match if part of a search string resided in one cluster and the remainder of the string resided in another cluster that was not part of the same file that contained the first cluster.
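The boundary problem in the last bullet can be handled by overlapping reads: the sketch below searches raw bytes in fixed-size chunks while carrying over a tail of len(needle) - 1 bytes between reads, so matches that straddle a chunk boundary are still found. This is a simplified illustration, not a real search tool.

```python
# Sketch: searching raw media in fixed-size chunks while catching strings
# that straddle a chunk (or cluster) boundary, by carrying over a tail of
# len(needle) - 1 bytes between reads.
def chunked_search(data: bytes, needle: bytes, chunk_size: int = 4096) -> list:
    """Return absolute offsets of every occurrence of needle in data."""
    hits, tail, base = [], b"", 0
    for start in range(0, len(data), chunk_size):
        window = tail + data[start:start + chunk_size]
        offset = window.find(needle)
        while offset != -1:
            hits.append(base + offset)
            offset = window.find(needle, offset + 1)
        keep = min(len(window), len(needle) - 1)  # bytes that could start a straddling match
        tail = window[len(window) - keep:] if keep else b""
        base = start + chunk_size - keep
    return hits

media = b"AAAB" + b"Cpassword" + b"ZZZ"  # "BC" straddles the first 4-byte chunk
print(chunked_search(media, b"BC", chunk_size=4))        # [3]
print(chunked_search(media, b"password", chunk_size=4))  # [5]
```

Note that this only addresses contiguous data read in order; a string split across nonadjacent clusters still requires reassembling the file from its allocation chain first.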

Accessing File Metadata.
File metadata provides details about any given file. For example, collecting the metadata on a graphic file might provide the graphic's creation date, copyright information, description, and the creator's identity. Metadata for graphics generated by a digital camera might include the make and model of the digital camera used to take the image, as well as F-stop, flash, and aperture settings. For word processing files, metadata could specify the author, the organization that licensed the software, when and by whom edits were last performed, and user-defined comments. Special utilities can extract metadata from files.
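Filesystem-level metadata (as opposed to embedded metadata such as EXIF fields or document properties, which need format-specific parsers) can be read with Python's standard library; this sketch creates a temporary file just to have something to inspect.

```python
# Sketch: reading filesystem-level metadata for a file. Embedded metadata
# (EXIF fields, document properties) requires format-specific parsers.
import os
import tempfile
import time

with tempfile.NamedTemporaryFile(delete=False, suffix=".txt") as f:
    f.write(b"sample evidence file")
    path = f.name

st = os.stat(path)
meta = {
    "size_bytes": st.st_size,
    "modified": time.strftime("%Y-%m-%d %H:%M:%S", time.localtime(st.st_mtime)),
    "accessed": time.strftime("%Y-%m-%d %H:%M:%S", time.localtime(st.st_atime)),
}
print(meta)
os.unlink(path)
```

Note that simply opening a file through normal means can update its access timestamp, which is one reason examinations work from read-only copies.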



123. Write a note on Examining data files.

After a logical backup or bit stream imaging has been performed, the backup or image may have to be restored to another medium before the data can be examined. This depends on the forensic tools that will be used to perform the analysis. Some tools can analyze data directly from an image file, whereas others require that the backup or image be restored to a medium first. Regardless of whether an image file or a restored image is used in the examination, the data should be accessed only as read-only to ensure that the data being examined is not modified and that it will provide consistent results on successive runs. Write-blockers can be used during this process to prevent writes from occurring to the restored image. After restoring the backup (if needed), the analyst begins to examine the collected data and performs an assessment of the relevant files and data by locating all files, including deleted files, remnants of files in slack and free space, and hidden files. Next, the analyst may need to extract the data from some or all of the files, which may be complicated by such measures as encryption and password protection.

Locating the Files
The first step in the examination is to locate the files. A disk image can capture many gigabytes of slack space and free space, which could contain thousands of files and file fragments. Manually extracting data from unused space can be a time-consuming and difficult process, because it requires knowledge of the underlying filesystem format. Fortunately, several tools are available that can automate the process of extracting data from unused space and saving it to data files, as well as recovering deleted files and files within a recycling bin. Analysts can also display the contents of slack space with hex editors or special slack recovery tools.

Extracting the Data
The rest of the examination process involves extracting data from some or all of the files. To make sense of the contents of a file, an analyst needs to know what type of data the file contains. The intended purpose of file extensions is to denote the nature of the file’s contents; for example, a jpg extension indicates a graphic file, and an mp3 extension indicates a music file. However, users can assign any file extension to any type of file, such as naming a text file mysong.mp3 or omitting a file extension. In addition, some file extensions might be hidden or unsupported on other OSs. Therefore, analysts should not assume that file extensions are accurate. Analysts can more accurately identify the type of data stored in many files by looking at their file headers. A file header contains identifying information about a file and possibly metadata that provides information about the file’s contents. Other patterns are indicative of files that are encrypted or that were modified through steganography.
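A minimal sketch of header-based type identification; the signatures shown are the well-known magic numbers for each format, and the small signature table is illustrative rather than exhaustive.

```python
# Sketch: identifying file types from header bytes ("magic numbers")
# instead of trusting the file extension. The table is illustrative,
# not exhaustive; the signatures themselves are the well-known values.
SIGNATURES = {
    b"\xff\xd8\xff": "jpeg",
    b"\x89PNG\r\n\x1a\n": "png",
    b"%PDF-": "pdf",
    b"PK\x03\x04": "zip",
}

def identify(header: bytes) -> str:
    """Return the format whose magic number the header starts with."""
    for magic, kind in SIGNATURES.items():
        if header.startswith(magic):
            return kind
    return "unknown"

# A file named mysong.mp3 that actually begins with a JPEG header:
print(identify(b"\xff\xd8\xff\xe0" + b"\x00" * 12))  # jpeg
print(identify(b"plain text data"))                  # unknown
```

Comparing the type implied by the header against the type implied by the extension is a quick way to spot files a user may have deliberately mislabeled.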

Using a Forensic Toolkit
Analysts should have access to various tools that enable them to perform examinations and analysis of data, as well as some collection activities. Many forensic products allow the analyst to perform a wide range of processes to analyze files and applications, as well as collecting files, reading disk images, and extracting data from files. Most analysis products also offer the ability to generate reports and to log all errors that occurred during the analysis. Although these products are invaluable in performing analysis, it is critical to understand which processes should be run to answer particular questions about the data. An analyst may need to provide a quick response or just answer a simple question about the collected data. In these cases, a complete forensic evaluation may not be necessary or even feasible. The forensic toolkit should contain applications that can accomplish data examination and analysis in many ways and can be run quickly and efficiently from floppy disks, CDs, or a forensic workstation.



124. Explain the two different techniques used for copying files from media.

Copying Files from Media
Files can be copied from media using two different techniques:

Logical Backup.
A logical backup copies the directories and files of a logical volume. It does not capture other data that may be present on the media, such as deleted files or residual data stored in slack space.

Bit Stream Imaging.
Also known as disk imaging, bit stream imaging generates a bit-for-bit copy of the original media, including free space and slack space. Bit stream images require more storage space and take longer to perform than logical backups.
If evidence may be needed for prosecution or disciplinary actions, the analyst should get a bit stream image of the original media, label the original media, and store it securely as evidence. All subsequent analysis should be performed using the copied media to ensure that the original media is not modified and that a copy of the original media can always be recreated if necessary. All steps that were taken to create the image copy should be documented. Doing so should allow any analyst to produce an exact duplicate of the original media using the same procedures. In addition, proper documentation can be used to demonstrate that evidence was not mishandled during the collection process. Besides the steps that were taken to record the image, the analyst should document supplementary information such as the hard drive model and serial number, media storage capacity, and information about the imaging software or hardware that was used (e.g., name, version number, licensing information). All of these actions support the maintenance of the chain of custody.
When a bit stream image is executed, either a disk-to-disk or a disk-to-file copy can be performed. A disk-to-disk copy, as its name suggests, copies the contents of the media directly to another media. A disk-to-file copy copies the contents of the media to a single logical data file. A disk-to-disk copy is useful since the copied media can be connected directly to a computer and its contents readily viewed. However, a disk-to-disk copy requires a second media similar to the original media. A disk-to-file copy allows the data file image to be moved and backed up easily. However, to view the logical contents of an image file, the analyst has to restore the image to media or open or read it from an application capable of displaying the logical contents of bit stream images.
Numerous hardware and software tools can perform bit stream imaging and logical backups. Hardware tools are generally portable, provide bit-by-bit images, connect directly to the drive or computer to be imaged, and have built-in hash functions. Hardware tools can acquire data from drives that use common types of controllers, such as Integrated Drive Electronics (IDE) and Small Computer System Interface (SCSI). Software solutions generally consist of a startup diskette, CD, or installed programs that run on a workstation to which the media to be imaged is attached. Some software solutions create logical copies of files or partitions and may ignore free or unallocated drive space, whereas others create a bit-by-bit image copy of the media.
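The disk-to-file approach with built-in integrity hashing can be sketched as follows; an ordinary file stands in for the source device here, whereas real imaging reads a raw device node through a write-blocker and must also handle read errors.

```python
# Sketch: a disk-to-file copy with built-in integrity hashing. An ordinary
# file stands in for the source device; real imaging reads a raw device
# node through a write-blocker and must also handle read errors.
import hashlib
import os
import tempfile

def image_media(src_path: str, dst_path: str, chunk_size: int = 1 << 20) -> str:
    """Copy src to dst bit for bit and return the SHA-256 digest of the data."""
    digest = hashlib.sha256()
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        while True:
            chunk = src.read(chunk_size)
            if not chunk:
                break
            digest.update(chunk)
            dst.write(chunk)
    return digest.hexdigest()

workdir = tempfile.mkdtemp()
source = os.path.join(workdir, "source.bin")
with open(source, "wb") as f:
    f.write(os.urandom(4096))  # stand-in for the original media

original_hash = image_media(source, os.path.join(workdir, "image.dd"))
verify_hash = image_media(os.path.join(workdir, "image.dd"),
                          os.path.join(workdir, "verify.dd"))
print(original_hash == verify_hash)  # matching digests show a faithful copy
```

Recording the digest alongside the documented imaging steps supports the chain of custody, since anyone can later rehash the image and confirm it has not changed.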

Organizations should have policy, guidelines, and procedures that indicate the circumstances under which bit stream images and logical backups (including those from live systems) may be performed for forensic purposes and which personnel may perform them. It is typically most effective to establish policy, guidelines, and procedures based on categories of systems (i.e., low, moderate, or high impact) and the nature of the event of interest; some organizations also choose to create separate policy statements, guidelines, and procedures for particularly important systems. The policy, guidelines, or procedures should identify the individuals or groups with authority to make decisions regarding backups and images; these people should be capable of weighing the risks and making sound decisions. The policy, guidelines, or procedures should also identify which individuals or groups have the authority to perform the backup or imaging for each type of system. Access to some systems might be restricted because of the sensitivity of the operations or data in the system.


125. What is NESSUS? Why is it considered as the most popular vulnerability scanner?

• Nessus is a network security scanner. It utilizes plug-ins, which are separate files, to handle the vulnerability checks.
• This makes it easy to install plug-ins and to see which plug-ins are installed, to make sure that you are current. Nessus uses a server-client architecture.
• The main server will need to be built on a supported Unix-like operating system.
• The client is available for Unix, Linux, and Windows. The server is not optional, because it is the component that performs the security checks.
• The administrator of the server sets up user accounts for other team members and issues rights to those accounts.
• The clients must log in to the server to be able to run their scans.

Why Nessus is popular?
If you are familiar with other network vulnerability scanners, you might be wondering what advantages Nessus has over them. Key points include:

- Unlike other scanners, Nessus does not make assumptions about your server configuration (such as assuming that the only web server must run on port 80) that can cause other scanners to miss real vulnerabilities.

- Nessus is very extensible, providing a scripting language for you to write tests specific to your system once you become more familiar with the tool. It also provides a plug-in interface, and many free plug-ins are available from the Nessus plug-in site. These plug-ins are often specific to detecting a common virus or vulnerability.

- Up-to-date information about new vulnerabilities and attacks. The Nessus team updates the list of vulnerabilities to check for on a daily basis, minimizing the window between an exploit appearing in the wild and your being able to detect it with Nessus.

- Open-source. Nessus is open source, meaning it costs nothing, and you are free to see and modify the source as you wish.

- Patching Assistance: When Nessus detects a vulnerability, it is also most often able to suggest the best way you can mitigate the vulnerability.


126. What types of vulnerabilities are scanned by NESSUS?

Nessus allows scans for the following types of vulnerabilities:
• Vulnerabilities that allow a remote hacker to control or access sensitive data on a system.
• Misconfiguration (e.g. open mail relay, missing patches, etc.).
• Default passwords, a few common passwords, and blank/absent passwords on some system accounts.
• Nessus can also call Hydra (an external tool) to launch a dictionary attack.
• Denial of service against the TCP/IP stack by using mangled packets.
• Preparation for PCI DSS audits
On UNIX (including Mac OS X), it consists of nessusd, the Nessus daemon, which does the scanning, and nessus, the client, which controls scans and presents the vulnerability results to the user. In typical operation, Nessus begins by doing a port scan with one of its four internal portscanners (or it can optionally use Amap [1] or Nmap [2]) to determine which ports are open on the target and then tries various exploits on the open ports. The vulnerability tests, available as subscriptions, are written in NASL (Nessus Attack Scripting Language), a scripting language optimized for custom network interaction.
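The initial port-scan step can be illustrated with a plain TCP connect scan; Nessus itself uses its own portscanners (or external tools such as Nmap or Amap) and then runs NASL plugins against the open ports, so this is only a simplified sketch, exercised here against a listener we open ourselves.

```python
# Sketch: a plain TCP connect scan, the kind of step that precedes the
# actual vulnerability checks. Nessus itself uses its own portscanners
# (or Nmap/Amap) rather than this simplified loop.
import socket

def scan_ports(host: str, ports) -> list:
    """Return the subset of ports accepting TCP connections on host."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.5)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                open_ports.append(port)
    return open_ports

# Demonstration against a listener we open ourselves on an ephemeral port.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
port = listener.getsockname()[1]
found = scan_ports("127.0.0.1", [port])
print(found == [port])  # the listening port is reported open
listener.close()
```

Only after the open ports are known does a scanner try service-specific checks against each one, which is why the port scan comes first.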

Tenable Network Security produces several dozen new vulnerability checks (called plugins) each week, usually on a daily basis. These checks are available for free to the general public; commercial customers are no longer allowed to use this Home Feed. The Professional Feed (which is not free) also gives access to support and additional scripts (e.g., audit files, compliance tests, additional vulnerability detection plugins).

Optionally, the results of the scan can be reported in various formats, such as plain text, XML, HTML and LaTeX. The results can also be saved in a knowledge base for debugging. On UNIX, scanning can be automated through the use of a command-line client. There exist many different commercial, free and open source tools for both UNIX and Windows to manage individual or distributed Nessus scanners.

If the user chooses to do so (by disabling the option 'safe checks'), some of Nessus' vulnerability tests may try to cause vulnerable services or operating systems to crash. This lets a user test the resistance of a device before putting it in production.

Nessus provides additional functionality beyond testing for known network vulnerabilities. For instance, it can use Windows credentials to examine patch levels on computers running the Windows operating system, and can perform password auditing using dictionary and brute force methods. Nessus 3 and later can also audit systems to make sure they have been configured per a specific policy, such as the NSA's guide for hardening Windows servers.

Monday, August 10, 2015

ISM unit 5 question bank answers 117-121

QUESTION NUMBER 117-121

117. What is forensic science? What is the need of it?

The techniques and processes presented in this guide are based on principles of digital forensics. Forensic science is generally defined as the application of science to the law. Digital forensics, also known as computer and network forensics, has many definitions. Generally, it is considered the application of science to the identification, collection, examination, and analysis of data while preserving the integrity of the information and maintaining a strict chain of custody for the data. Because different organizations are subject to different laws and regulations, this publication should not be used as a guide for executing a digital forensic investigation, construed as legal advice, or used as the basis for investigations of criminal activity. Instead, organizations should use this guide as a starting point for developing a forensic capability in conjunction with extensive guidance provided by legal advisors, law enforcement officials, and management.

The Need for Forensics
Over the last decade, the number of crimes that involve computers has grown, spurring an increase in companies and products that aim to assist law enforcement in using computer-based evidence to determine the who, what, where, when, and how for crimes. As a result, computer and network forensics has evolved to assure proper presentation of computer crime evidentiary data in court. Forensic tools and techniques are most often thought of in the context of criminal investigations and computer security incident handling, where they are used to respond to an event by investigating suspect systems, gathering and preserving evidence, reconstructing events, and assessing the current state of an event.
However, forensic tools and techniques are also useful for many other types of tasks, such as the following:

Operational Troubleshooting.
Many forensic tools and techniques can be applied to troubleshooting operational issues, such as finding the virtual and physical location of a host with an incorrect network configuration, resolving a functional problem with an application, and recording and reviewing the current OS and application configuration settings for a host.

Log Monitoring.
Various tools and techniques can assist in log monitoring, such as analyzing log entries and correlating log entries across multiple systems. This can assist in incident handling, identifying policy violations, auditing, and other efforts.

Data Recovery.
There are dozens of tools that can recover lost data from systems, including data that has been accidentally or purposely deleted or otherwise modified. The amount of data that can be recovered varies on a case-by-case basis.

Data Acquisition.
Some organizations use forensics tools to acquire data from hosts that are being redeployed or retired. For example, when a user leaves an organization, the data from the user’s workstation can be acquired and stored in case it is needed in the future. The workstation’s media can then be sanitized to remove all of the original user’s data.

Due Diligence/Regulatory Compliance.
Existing and emerging regulations require many organizations to protect sensitive information and maintain certain records for audit purposes. Also, when protected information is exposed to other parties, organizations may be required to notify other agencies or impacted individuals. Forensics can help organizations exercise due diligence and comply with such requirements.

Regardless of the situation, the forensic process comprises the following basic phases:

Collection.
The first phase in the process is to identify, label, record, and acquire data from the possible sources of relevant data, while following guidelines and procedures that preserve the integrity of the data. Collection is typically performed in a timely manner because of the likelihood of losing dynamic data such as current network connections, as well as losing data from battery-powered devices (e.g., cell phones, PDAs).
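One common way collection procedures preserve integrity is to hash data at acquisition time so that any later change can be detected. A minimal Python sketch, with hypothetical file contents:

```python
import hashlib

def acquire(data: bytes):
    """Record a SHA-256 digest alongside the collected data."""
    return {"data": data, "sha256": hashlib.sha256(data).hexdigest()}

def verify(record):
    """Re-hash the data and compare with the digest taken at collection."""
    return hashlib.sha256(record["data"]).hexdigest() == record["sha256"]

evidence = acquire(b"contents of a collected log file")
print(verify(evidence))           # True: data unchanged since collection
evidence["data"] += b" tampered"
print(verify(evidence))           # False: integrity check now fails
```

In practice the digest would be recorded in the chain-of-custody documentation so that examiners can demonstrate the data was not altered after collection.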

Examination.
Examinations involve forensically processing large amounts of collected data using a combination of automated and manual methods to assess and extract data of particular interest, while preserving the integrity of the data.
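The examination step above can be sketched as a simple filter that extracts entries of interest from the collected set while leaving the originals intact. The entries and keywords here are hypothetical.

```python
# Examination sketch: extract entries of interest from collected data
# without modifying the originals.

COLLECTED = [
    "kernel: device eth0 entered promiscuous mode",
    "cron: job completed",
    "sshd: Failed password for root",
    "sshd: Failed password for admin",
]

def extract(entries, keywords):
    """Return entries matching any keyword; the source list is left intact."""
    return [e for e in entries if any(k in e for k in keywords)]

hits = extract(COLLECTED, ["Failed password", "promiscuous"])
print(len(hits))       # 3 entries of interest
print(len(COLLECTED))  # 4 -- the collected originals are preserved
```

Working on a copy of the relevant entries, rather than the collected data itself, is a simple stand-in for the integrity-preserving practices real examination tools use.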

Analysis.
The next phase of the process is to analyze the results of the examination, using legally justifiable methods and techniques, to derive useful information that addresses the questions that were the impetus for performing the collection and examination.

Reporting.
The final phase is reporting the results of the analysis, which may include describing the actions used, explaining how tools and procedures were selected, determining what other actions need to be performed (e.g., forensic examination of additional data sources, securing identified vulnerabilities, improving existing security controls), and providing recommendations for improvement to policies, guidelines, procedures, tools, and other aspects of the forensic process. The formality of the reporting step varies greatly depending on the situation.


118. Who are the primary users of forensic tools and techniques? Also, state the factors to be considered when selecting an internal or external party.
Or
119. Into what groups can the primary users of forensic tools and techniques within an organization usually be divided?

Practically every organization needs to have some capability to perform computer and network forensics. Without such a capability, an organization will have difficulty determining what events have occurred within its systems and networks, such as exposures of protected, sensitive data. Although the extent of this need varies, the primary users of forensic tools and techniques within an organization usually can be divided into the following three groups:

Investigators.
Investigators within an organization are most often from the Office of Inspector General (OIG), and they are responsible for investigating allegations of misconduct. For some organizations, the OIG immediately takes over the investigation of any event that is suspected to involve criminal activity. The OIG typically uses many forensic techniques and tools. Other investigators within an organization might include legal advisors and members of the human resources department. Law enforcement officials and others outside the organization that might perform criminal investigations are not considered part of an organization’s internal group of investigators.

IT Professionals.
This group includes technical support staff and system, network, and security administrators. They use a small number of forensic techniques and tools specific to their area of expertise during their routine work (e.g., monitoring, troubleshooting, data recovery).

Incident Handlers.
This group responds to a variety of computer security incidents, such as unauthorized data access, inappropriate system usage, malicious code infections, and denial of service attacks. Incident handlers typically use a wide variety of forensic techniques and tools during their investigations.

Many organizations rely on a combination of their own staff and external parties to perform forensic tasks. For example, some organizations perform standard tasks themselves and use outside parties only when specialized assistance is needed. Even organizations that want to perform all forensic tasks themselves usually outsource the most demanding ones, such as sending physically damaged media to a data recovery firm for reconstruction, or having specially trained law enforcement personnel or consultants collect data from an unusual source (e.g., cell phone). Such tasks typically require specialized software, equipment, facilities, and technical expertise that most organizations cannot justify the high expense of acquiring and maintaining.

When deciding which internal or external parties should handle each aspect of forensics, organizations should keep the following factors in mind:

Cost.
There are many potential costs. Software, hardware, and equipment used to collect and examine data may carry significant costs (e.g., purchase price, software updates and upgrades, maintenance), and may also require additional physical security measures to safeguard them from tampering. Other significant expenses involve staff training and labor costs, which are particularly significant for dedicated forensic specialists. In general, forensic actions that are needed rarely might be more cost-effectively performed by an external party, whereas actions that are needed frequently might be more cost-effectively performed internally.

Response Time.
Personnel located on-site might be able to initiate computer forensic activity more quickly than could off-site personnel. For organizations with geographically dispersed physical locations, off-site outsourcers located near distant facilities might be able to respond more quickly than personnel located at the organization’s headquarters.

Data Sensitivity.
Because of data sensitivity and privacy concerns, some organizations might be reluctant to allow external parties to image hard drives and perform other actions that provide access to data. For example, a system that contains traces of an incident might also contain health care information, financial records, or other sensitive data; an organization might prefer to keep that system under its own control to safeguard the privacy of the data. On the other hand, if there is a privacy concern within the team, for example, if an incident is suspected to involve a member of the incident handling team, use of an independent third party to perform forensic actions would be preferable.


120. What are the key recommendations of establishing and organizing a forensic capability?

The key recommendations on establishing and organizing a forensic capability are as follows:

Organizations should have a capability to perform computer and network forensics.
Forensics is needed for various tasks within an organization, including investigating crimes and inappropriate behavior, reconstructing computer security incidents, troubleshooting operational problems, supporting due diligence for audit record maintenance, and recovering from accidental system damage. Without such a capability, an organization will have difficulty determining what events have occurred within its systems and networks, such as exposures of protected, sensitive data. Also, handling evidence in a forensically sound manner puts decision makers in a position where they can confidently take the necessary actions.

Organizations should determine which parties should handle each aspect of forensics.
Most organizations rely on a combination of their own staff and external parties to perform forensic tasks. Organizations should decide which parties should take care of which tasks based on skills and abilities, cost, response time, and data sensitivity.

Incident handling teams should have robust forensic capabilities.
More than one team member should be able to perform each typical forensic activity. Hands-on exercises and IT and forensic training courses can be helpful in building and maintaining skills, as can demonstrations of new tools and technologies.

Many teams within an organization should participate in forensics.
Individuals performing forensic actions should be able to reach out to other teams and individuals within an organization, as needed, for additional assistance. Examples of teams that may provide assistance in these efforts include IT professionals, management, legal advisors, human resources personnel, auditors, and physical security staff. Members of these teams should understand their roles and responsibilities in forensics, receive training and education on forensic-related policies, guidelines, and procedures, and be prepared to cooperate with and assist others on forensic actions.

Forensic considerations should be clearly addressed in policies.
At a high level, policies should allow authorized personnel to monitor systems and networks and perform investigations for legitimate reasons under appropriate circumstances. Organizations may also have a separate forensic policy for incident handlers and others with forensic roles that provides more detailed rules for appropriate behavior. Everyone who may be called upon to assist with any forensic efforts should be familiar with and understand the forensic policy. Additional policy considerations are as follows:

• Forensic policy should clearly define the roles and responsibilities of all people performing or assisting with the organization’s forensic activities. The policy should include all internal and external parties that may be involved and should clearly indicate who should contact which parties under different circumstances.
• The organization’s policies, guidelines, and procedures should clearly explain what forensic actions should and should not be performed under normal and special circumstances and should address the use of anti-forensic tools and techniques. Policies, guidelines, and procedures should also address the handling of inadvertent exposures of sensitive information.
• Incorporating forensic considerations into the information system life cycle can lead to more efficient and effective handling of many incidents. Examples include performing auditing on hosts and establishing data retention policies that support performing historical reviews of system and network activity.

Organizations should create and maintain guidelines and procedures for performing forensic tasks.
The guidelines should include general methodologies for investigating an incident using forensic techniques, and step-by-step procedures should explain how to perform routine tasks. The guidelines and procedures should support the admissibility of evidence into legal proceedings. Because electronic logs and other records can be altered or otherwise manipulated, organizations should be prepared, through their policies, guidelines, and procedures, to demonstrate the reliability and integrity of such records. The guidelines and procedures should also be reviewed regularly and maintained so that they are accurate.


121. Write a note on forensic process.

The most common goal of performing forensics is to gain a better understanding of an event of interest by finding and analyzing the facts related to that event. Forensics may be needed in many different situations, such as evidence collection for legal proceedings and internal disciplinary actions, and handling of malware incidents and unusual operational problems. Regardless of the need, forensics should be performed using the four-phase process shown in Figure 3-1.
This section describes the basic phases of the forensic process: collection, examination, analysis, and reporting.
During collection, data related to a specific event is identified, labeled, recorded, and collected, and its integrity is preserved. 
In the second phase, examination, forensic tools and techniques appropriate to the types of data that were collected are executed to identify and extract the relevant information from the collected data while protecting its integrity. Examination may use a combination of automated tools and manual processes. 
The next phase, analysis, involves analyzing the results of the examination to derive useful information that addresses the questions that were the impetus for performing the collection and examination. 
The final phase involves reporting the results of the analysis, which may include describing the actions performed, determining what other actions need to be performed, and recommending improvements to policies, guidelines, procedures, tools, and other aspects of the forensic process.

As shown at the bottom of Figure 3-1, the forensic process transforms media into evidence, whether evidence is needed for law enforcement or for an organization’s internal usage. Specifically, the first transformation occurs when collected data is examined, which extracts data from media and transforms it into a format that can be processed by forensic tools. Second, data is transformed into information through analysis. Finally, transforming information into evidence is analogous to transferring knowledge into action—using the information produced by the analysis in one or more ways during the reporting phase. For example, it could be used as evidence to help prosecute a specific individual, as actionable information to help stop or mitigate some activity, or as knowledge in the generation of new leads for a case.
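The media-to-data-to-information-to-evidence transformation described above can be sketched as a chain of the four phases. The phase functions here are simplified, hypothetical stand-ins, not a real forensic implementation.

```python
# Illustrative pipeline for the media -> data -> information -> evidence
# transformation: each phase's output feeds the next phase.

def collect(media):
    return [item for item in media]           # acquire data from sources

def examine(data):
    return [d for d in data if "error" in d]  # extract data of interest

def analyze(info):
    return {"error_count": len(info)}         # derive useful information

def report(findings):
    return f"Findings: {findings['error_count']} error events"

media = ["boot ok", "error: disk", "error: auth", "shutdown"]
print(report(analyze(examine(collect(media)))))  # Findings: 2 error events
```

The point of the sketch is the shape of the process: each phase consumes the previous phase's output, ending with a report that turns information into something actionable.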







Quiz Handling System Project in Visual Basic



This Quiz Handling System Project in Visual Basic is mainly useful for conducting quizzes. The project is written in Visual Basic because it is user friendly and makes it easy to conduct a quiz in a quick format. The project includes various categories, so the user can participate in all categories or in any particular category. The time duration is important: the user has to complete a category within the allotted time, otherwise the program exits that category. When participation in a category is completed, the report is shown immediately. The project can also show a particular participant's performance at the press of a button. After completion of all the categories, the result is available quickly. The project aims to implement quiz handling using Visual Basic in a Windows environment.


The process of computerization involves a front end that eliminates manual procedures. At the back end, an online system enables the various activities to be performed on a day-to-day basis.
The system is flexible: because it is implemented in Visual Basic, it provides user-friendly, menu-driven software with online help and validation features for accurate data capture, storage, and retrieval, and any changes that arise can be made without affecting the design specification.
EXISTING SYSTEM:
The main menu is designed using the multiple document interface (MDI) technology of Visual Basic. It consists of three main modules:
1.    registration form
2.    categories
3.    reports
Registration form:-
In this form the user registers his or her biodata, and a unique code is given to each user. Once registered, the user can participate in all the categories.
Categories:-
There are four quiz categories: sports, general, computers, and history. The user can participate in all the categories and must finish attempting the questions within the time duration.
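The per-category time limit described above can be sketched as follows. The original project is in Visual Basic; this Python fragment, with hypothetical question data, only illustrates the timing behavior.

```python
import time

def ask_timed(question, answer, reply, started, limit_seconds=30):
    """Reject an answer that arrives after the category's time limit."""
    if time.monotonic() - started > limit_seconds:
        return "time expired -- exiting category"
    return "correct" if reply == answer else "wrong"

start = time.monotonic()
print(ask_timed("Capital of France?", "Paris", "Paris", start))  # correct
```

Checking elapsed time before accepting an answer mirrors the project's behavior of exiting a category when its time duration runs out.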
Reports:-
Reports play an important role in showing the user's performance in each category; they also display the final result of the quiz program.
The software “Quiz Handling” has been developed in a Windows environment using Visual Basic as the front end and MS Access as the back end. Time consumption is reduced to a great extent, and the user faces less complexity in handling the database.
Thus the Quiz Handling project was completed successfully. The software is fully functional and operational. Since the project is implemented in VB, it provides a more direct and easier way to conduct examinations. The project is flexible, and many provisions have been made for its enhancement.
The project is full-fledged and user friendly. End users will find the software easy to use, since most entries are selected from combo boxes, which reduces the effort required from users while still testing student skill and accuracy.
SCOPE FOR ENHANCEMENT
This application is designed to be generic. As the application is developed further, there are several areas in which the general knowledge content and accuracy of the quiz can be improved.