
Saturday, August 22, 2015

ISM unit 3 question bank answers 76-80

QUESTION NUMBER 76-80

76. What are the recommended capabilities of antivirus software?

Antivirus software is the most commonly used technical control for malware threat mitigation. There are many brands of antivirus software, with most providing similar protection through the following recommended capabilities:

  • Scanning critical host components such as startup files and boot records. 
  • Watching real-time activities on hosts to check for suspicious activity; a common example is scanning all email attachments for known malware as emails are sent and received. Antivirus software should be configured to perform real-time scans of each file as it is downloaded, opened, or executed, which is known as on-access scanning.
  • Monitoring the behavior of common applications, such as email clients, web browsers, and instant messaging software. Antivirus software should monitor activity involving the applications most likely to be used to infect hosts or spread malware to other hosts. 
  • Scanning files for known malware. Antivirus software on hosts should be configured to scan all hard drives regularly to identify any file system infections and, optionally, depending on organization security needs, to scan removable media inserted into the host before allowing its use. Users should also be able to launch a scan manually as needed, which is known as on-demand scanning (a minimal sketch of signature-based on-demand scanning follows this list).
  • Identifying common types of malware as well as attacker tools.
  • Disinfecting files, which refers to removing malware from within a file, and quarantining files, which means that files containing malware are stored in isolation for future disinfection or examination. Disinfecting a file is generally preferable to quarantining it because the malware is removed and the original file restored; however, many infected files cannot be disinfected. Accordingly, antivirus software should be configured to attempt to disinfect infected files and to either quarantine or delete files that cannot be disinfected. 
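To make the scanning and quarantine capabilities above concrete, here is a minimal sketch of signature-based, on-demand scanning in Python. The hash set, directory path, and quarantine location are hypothetical, and real antivirus engines also rely on heuristics and behavioral analysis rather than file hashes alone.

import hashlib
import shutil
from pathlib import Path

# Hypothetical signature database: SHA-256 hashes of known-malicious files.
KNOWN_BAD_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

QUARANTINE_DIR = Path("/var/quarantine")  # isolated storage for infected files


def sha256_of(path: Path) -> str:
    """Hash a file in chunks so large files do not exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()


def on_demand_scan(root: Path) -> list:
    """Scan every regular file under `root` and quarantine any signature matches."""
    QUARANTINE_DIR.mkdir(parents=True, exist_ok=True)
    infected = []
    for path in root.rglob("*"):
        if path.is_file() and sha256_of(path) in KNOWN_BAD_HASHES:
            infected.append(path)
            # Quarantine rather than delete, so the file can be disinfected or examined later.
            shutil.move(str(path), QUARANTINE_DIR / path.name)
    return infected


if __name__ == "__main__":
    print(on_demand_scan(Path("/home/user/Downloads")))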
Organizations should use both host-based and network-based antivirus scanning. Organizations should deploy antivirus software on all hosts for which satisfactory antivirus software is available. Antivirus software should be installed as soon after OS installation as possible and then updated with the latest signatures and antivirus software patches (to eliminate any known vulnerabilities in the antivirus software itself). The antivirus software should then perform a complete scan of the host to identify any potential infections. To support the security of the host, the antivirus software should be configured and maintained properly so that it continues to be effective at detecting and stopping malware. Antivirus software is most effective when its signatures are fully up-to-date. Accordingly, antivirus software should be kept current with the latest signature and software updates to improve malware detection.

Organizations should use centrally managed antivirus software that is controlled and monitored regularly by antivirus administrators, who are also typically responsible for acquiring, testing, approving, and delivering antivirus signature and software updates throughout the organization. Users should not be able to disable or delete antivirus software from their hosts, nor should they be able to alter critical settings. Antivirus administrators should perform continuous monitoring to confirm that hosts are using current antivirus software and that the software is configured properly. Implementing these recommendations helps an organization maintain a strong and consistent antivirus deployment across the organization.

Although antivirus software has become a necessity for malware incident prevention, it is not possible for antivirus software to stop all malware incidents. As discussed previously in this section, antivirus software does not excel at stopping previously unknown threats. Antivirus software products detect malware primarily by looking for certain characteristics of known instances of malware. This is highly effective for identifying known malware, but is not so effective at detecting the highly customized, tailored malware increasingly being used.


77. Write a note on sandboxing.

Sandboxing refers to a security model where applications are run within a sandbox—a controlled environment that restricts what operations the applications can perform and that isolates them from other applications running on the same host. In a sandbox security model, typically only authorized “safe” operations may be performed within the sandbox; the sandbox prohibits applications within the sandbox from performing any other operations. The sandbox also restricts access to system resources, such as memory and the file system, to keep the sandbox’s applications isolated from the host’s other applications.

Sandboxing provides several benefits in terms of malware incident prevention and handling. By limiting the operations available, it can prevent malware from performing some or all of the malicious actions it is attempting to execute; this could prevent the malware from succeeding or reduce the damage it causes. The isolation provided by the sandbox can further reduce the impact of the malware by restricting what information and functions the malware can access. Another benefit of sandboxing is that the sandbox itself can be reset to a known good state every time it is initialized.
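As a rough illustration of the sandbox model, the sketch below runs an untrusted program in a child process with a temporary working directory, a CPU-time limit, and a cap on the size of files it may create. This is only a minimal, POSIX-oriented approximation under assumed limits; production sandboxes (browser sandboxes, seccomp, containers, virtual machines) enforce far stronger isolation.

import resource
import subprocess
import tempfile


def run_sandboxed(cmd, cpu_seconds=5, max_file_bytes=1_000_000):
    """Run `cmd` with limited CPU time, limited file size, and a throwaway working directory."""

    def apply_limits():
        # Runs in the child process just before exec (POSIX only).
        resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
        resource.setrlimit(resource.RLIMIT_FSIZE, (max_file_bytes, max_file_bytes))

    with tempfile.TemporaryDirectory() as sandbox_dir:
        # The temporary directory is the sandbox's "known good state": it is discarded afterwards.
        return subprocess.run(
            cmd,
            cwd=sandbox_dir,
            preexec_fn=apply_limits,
            capture_output=True,
            timeout=cpu_seconds + 5,
        )


if __name__ == "__main__":
    result = run_sandboxed(["/bin/echo", "hello from the sandbox"])
    print(result.stdout.decode())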


78. Explain the malware incident response life cycle in detail.

As defined in NIST SP 800-61, Computer Security Incident Handling Guide, the incident response process has four major phases: preparation, detection and analysis, containment/eradication/recovery, and post-incident activity. The discussion below builds on the concepts of SP 800-61 by providing additional details about responding to malware incidents.

The initial phase of malware incident response involves performing preparatory activities, such as developing malware-specific incident handling procedures and training programs for incident response teams. The preparation phase also involves using policy, awareness activities, vulnerability mitigation, and security tools to reduce the number of malware incidents. Despite these measures, residual risk will inevitably persist, and no solution is foolproof. Detection of malware infections is thus necessary to alert the organization whenever incidents occur. Early detection is particularly important for malware incidents because they are more likely than other types of incidents to increase their impact over time, so faster detection and handling can help reduce the number of infected hosts and the damage done.

For each incident, the organization should act appropriately, based on the severity of the incident, to mitigate its impact by containing it, eradicating infections, and ultimately recovering from the incident. The organization may need to return to the detection and analysis phase during containment, eradication, and recovery, for example to check for additional infections that have occurred since the original detection. After an incident has been handled, the organization should issue a report that details the cause and cost of the incident and the steps the organization should take to prevent future incidents and to prepare more effectively for those that do occur.

Preparation
Preparation is necessary to mitigate the risk of an attack or incident before issues arise within the organization. This includes ensuring the security of the organization's networks, systems, and applications. The Incident Response Team should have all the tools and resources necessary to perform their job duties when incidents do occur. Contact information, including on-call and escalation contacts, should be disseminated to the team and to management. Best security practices should be implemented and continually refined in the areas of risk assessment, network perimeter security, malware prevention, and employee security awareness.

Detection & Analysis
Accurately detecting incidents has become the greatest challenge for organizations. This can be attributed to many factors, including detection through different means, the volume of network traffic throughout an organization, and the fact that most attacks do not have any detectable precursors. Detection can occur through alerts in various network security tools (IDS/IPS, AV, WAFs), logs (OS, application, network devices), people within the organization, and publicly available information about new vulnerabilities and exploits. In general, incident handlers should assume that an incident has occurred until determining otherwise.

The Incident Response Team should work quickly to analyze and validate each incident while documenting all steps taken. The initial analysis should determine the scope, which systems and applications are affected, how the incident occurred, and the attack vectors. To be more effective, an incident handler should understand normal network behavior within the organization, keep a knowledge base of information, and research the latest vulnerabilities and exploits. Records should be maintained for every incident to ensure that incidents are tracked and resolved in a timely manner. Each incident record should include the current status, an incident summary, indicators, all actions taken, contact information for involved parties, all evidence gathered, and next steps. After the initial analysis, a severity rating must be assigned to each incident in order to prioritize incidents according to their possible impact on the organization. The team must then notify all individuals who need to be involved in the case and provide management with status updates throughout the analysis.
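The incident record described above can be captured in a simple data structure. The sketch below is a hypothetical Python representation whose field names merely mirror the items listed in the answer.

from dataclasses import dataclass, field
from typing import List


@dataclass
class IncidentRecord:
    """One tracked incident; fields mirror the items listed above."""
    incident_id: str
    status: str                     # e.g. "new", "contained", "resolved"
    summary: str
    severity: str                   # assigned after initial analysis, drives prioritization
    indicators: List[str] = field(default_factory=list)    # IPs, hashes, filenames, etc.
    actions_taken: List[str] = field(default_factory=list)
    contacts: List[str] = field(default_factory=list)       # involved parties
    evidence: List[str] = field(default_factory=list)       # references to gathered evidence
    next_steps: List[str] = field(default_factory=list)


# Example usage with made-up values.
record = IncidentRecord(
    incident_id="IR-2015-042",
    status="under analysis",
    summary="Suspected malware on workstation WS-117",
    severity="high",
    indicators=["badfile.exe", "203.0.113.45"],
)
print(record)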

Containment, Eradication, and Recovery
Containment is necessary before an incident spreads throughout the organization's network. An important part of containment is the decision-making process of the incident handler. Predetermined strategies and procedures on containment, based on the type of incident, should be established to make decisions easier. Gathering and preserving evidence is also important if legal proceedings result from the incident. Eradication is necessary to eliminate components of the incident, including removing malware from systems and disabling compromised user accounts. In recovery, system administrators restore systems to normal operation and remediate vulnerabilities to prevent similar attacks from occurring again. This may include rebuilding systems, installing patches, and implementing tighter network perimeter security controls.

Post-Incident Activity
After the incident has been recovered from, it is important that the team conduct a “lessons learned” review to improve the security posture of the organization. A “lessons learned” meeting should include all involved personnel, including management, and should provide closure to the incident by reviewing what occurred, what was done, what did and did not work well, and a plan of action to prevent a similar attack from occurring.

Furthermore, a follow-up report should be created to provide a reference that can be used for handling similar incidents in the future.

Incident data collected across all incidents should be gathered in order to assess costs to the organization, identify trends, justify resource needs, and provide performance metrics to senior management for judging the success of the Incident Response Plan and team. An Incident Response Plan is important to all organizations. A successful Incident Response Plan proactively mitigates incidents before they occur and also allows the organization to react quickly and effectively when incidents do occur. Not establishing an adequate Incident Response Plan could leave an organization susceptible to cyber security attacks without methods to mitigate, contain, eradicate, and remediate incidents.



79. List and explain the major components of containment of malware.

Containment of malware has two major components: stopping the spread of the malware and preventing further damage to hosts. Nearly every malware incident requires containment actions. In addressing an incident, it is important for an organization to decide which methods of containment to employ initially, early in the response. Containment of isolated incidents and incidents involving noninfectious forms of malware is generally straightforward, involving such actions as disconnecting the affected hosts from networks or shutting down the hosts.

Containment methods can be divided into four basic categories: relying on user participation, performing automated detection, temporarily halting services, and blocking certain types of network connectivity.

Containment Through User Participation
At one time, user participation was a valuable part of containment efforts, particularly during large-scale incidents in non-managed environments. Users were provided with instructions on how to identify infections and what measures to take if a host was infected, such as calling the help desk, disconnecting the host from the network, or powering off the host. The instructions might also cover malware eradication, such as updating antivirus signatures and performing a host scan, or obtaining and running a specialized malware eradication utility. As hosts have increasingly become managed, user participation in containment has sharply decreased. However, having users perform containment actions is still helpful in non-managed environments and other situations in which fully automated containment methods cannot be used.

Effectively communicating helpful information to users in a timely manner is challenging. Although email is typically the most efficient communication mechanism, it might be unavailable during certain incidents, or users might not read the email until it is too late. Therefore, organizations should have several alternate mechanisms in place for distributing information to users, such as sending messages to all voice mailboxes within the organization, posting signs in work areas, and handing out instructions at building and office entrances. Organizations with significant numbers of users in alternate locations, such as home offices and small branch offices, should ensure that the communication mechanisms reach these users. Another important consideration is that users might need to be provided with software, such as cleanup utilities, and software updates, such as patches and updated antivirus signatures. Organizations should identify and implement multiple methods for delivering software utilities and updates to users who are expected to assist with containment.

Although user participation can be very helpful for containment, organizations should not rely on this means for containing malware incidents unless absolutely necessary. No matter how containment guidance is communicated, it is unlikely that all users will receive it and realize that it might pertain to them. In addition, some users who receive containment instructions are unlikely to follow the directions successfully because of a lack of understanding, a mistake in following the directions, or host-specific characteristics or variations in the malware that make the directions incorrect for that host. Some users also might be focused on performing their regular tasks and be unconcerned about the possible effects of malware on their hosts. Nevertheless, for large-scale incidents involving a sizable percentage of the organization’s hosts in non-managed environments, user involvement in containment can significantly reduce the burden on incident handlers and technical support staff in responding to the incident.

Containment Through Automated Detection
Many malware incidents can be contained primarily through the use of the automated technologies described in Section 3.4 for preventing and detecting infections. These technologies include antivirus software, content filtering, and intrusion prevention software. Because antivirus software on hosts can detect and remove infections, it is often the preferred automated detection method for assisting in containment. However, as previously discussed, many of today’s malware threats are novel, so antivirus software and other technologies often fail to recognize them as being malicious. Also, malware that compromises the OS may disable security controls such as antivirus software, particularly in unmanaged environments where users have greater control over their hosts. Containment through antivirus software is not as robust and effective as it used to be.

Examples of automated detection methods other than antivirus software are as follows:


  • Content Filtering. For example, email servers and clients, as well as anti-spam software, can be configured to block emails or email attachments that have certain characteristics, such as a known bad subject, sender, message text, or attachment name or type. This is only helpful when the malware has static characteristics; highly customized malware usually cannot be blocked effectively using content filtering. Web content filtering and other content filtering technologies may also be of use for static malware.
  • Network-Based IPS Software. Most IPS products allow their prevention capabilities to be enabled for specific signatures. If a network-based IPS device is inline, meaning that it is an active part of the network, and it has a signature for the malware, it should be able to identify the malware and stop it from reaching its targets. If the IPS device does not have its prevention capabilities enabled, it may be prudent during a severe incident to reconfigure or redeploy one or more IPS sensors and enable IPS so they can stop the activity. IPS technologies should be able to stop both incoming and outgoing infection attempts. Of course, the value of IPSs in malware containment depends on the availability and accuracy of a signature to identify the malware. Several IPS products allow administrators to write custom signatures based on some of the known characteristics of the malware, or to customize existing signatures. For example, an IPS may allow administrators to specify known bad email attachment names or subjects, or to specify known bad destination port numbers. In many cases, IPS administrators can have their own accurate signature in place hours before antivirus vendors have signatures available. In addition, because the IPS signature affects only network-based IPS sensors, whereas antivirus signatures generally affect all workstations and servers, it is generally less risky to rapidly deploy a new IPS signature than new antivirus signatures. 
  • Executable Blacklisting. Some operating systems, host-based IPS products, and other technologies can restrict certain executables from being run. For example, administrators can enter the names of files that should not be executed. If antivirus signatures are not yet available for a new threat, it might be possible to configure a blacklisting technology to block the execution of the files that are part of the new threat (a minimal blocklist and content-filtering sketch follows this list).
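The sketch below roughly illustrates executable blacklisting and simple static content filtering. The blocked names, subjects, and extensions are hypothetical examples only; real products apply these checks at the OS, mail server, or host-IPS layer rather than in application code like this.

from pathlib import Path

# Hypothetical static indicators for a new threat (no antivirus signature available yet).
BLOCKED_EXECUTABLES = {"invoice_viewer.exe", "update_helper.scr"}
BLOCKED_SUBJECTS = {"your invoice is attached"}
BLOCKED_ATTACHMENT_TYPES = {".exe", ".scr", ".vbs"}


def allow_execution(path: str) -> bool:
    """Deny execution of files whose names appear on the blocklist."""
    return Path(path).name.lower() not in BLOCKED_EXECUTABLES


def allow_email(subject: str, attachment_names) -> bool:
    """Block mail with a known-bad subject or a blocked attachment type."""
    if subject.strip().lower() in BLOCKED_SUBJECTS:
        return False
    return not any(Path(name).suffix.lower() in BLOCKED_ATTACHMENT_TYPES
                   for name in attachment_names)


print(allow_execution(r"C:\Temp\invoice_viewer.exe"))           # False: blocked executable
print(allow_email("Your invoice is attached", ["report.pdf"]))  # False: blocked subject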

Containment Through Disabling Services 
Some malware incidents necessitate more drastic and potentially disruptive measures for containment. These incidents make extensive use of a particular service. Containing such an incident quickly and effectively might be accomplished through a loss of services, such as shutting down a service used by malware, blocking a certain service at the network perimeter, or disabling portions of a service (e.g., large mailing lists). Also, a service might provide a channel for infection or for transferring data from infected hosts—for example, a botnet command and control channel using Internet Relay Chat (IRC). In either case, shutting down the affected services might be the best way to contain the infection without losing all services. This action is typically performed at the application level (e.g., disabling a service on servers) or at the network level (e.g., configuring firewalls to block IP addresses or ports associated with a service). The goal is to disable as little functionality as possible while containing the incident effectively. To support the disabling of network services, organizations should maintain lists of the services they use and the TCP and UDP ports used by each service.
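To support this kind of decision, the service-to-port inventory mentioned above might be kept in a simple machine-readable form. The sketch below generates illustrative iptables-style block rules for one service; the service names and ports are assumptions, and the generated rules are a starting point rather than a drop-in firewall configuration.

# Hypothetical inventory of services the organization relies on.
SERVICE_PORTS = {
    "smtp": [("tcp", 25)],
    "irc": [("tcp", 6667)],          # example: botnet C2 channel over IRC
    "dns": [("udp", 53), ("tcp", 53)],
}


def block_rules(service: str):
    """Return iptables-style commands that would block a service at the perimeter."""
    rules = []
    for proto, port in SERVICE_PORTS.get(service, []):
        rules.append(f"iptables -A FORWARD -p {proto} --dport {port} -j DROP")
    return rules


for rule in block_rules("irc"):
    print(rule)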

From a technology standpoint, disabling a service is generally a simple process; understanding the consequences of doing so tends to be more challenging. Disabling a service that the organization relies on has an obvious negative impact on the organization’s functions. Also, disabling a service might inadvertently disrupt other services that depend on it. For example, disabling email services could impair directory services that replicate information through email. Organizations should maintain a list of dependencies between major services so that incident handlers are aware of them when making containment decisions. Also, organizations might find it helpful to provide alternative services with similar functionality. For example, in a highly managed environment, if a vulnerability in an email client were being exploited by a new virus, users could be blocked temporarily from using that email client and instead directed to use a web-based email client that did not have the vulnerability. This step would help contain the incident while providing users with email access. The same strategy could be used for cases involving exploitation of vulnerabilities in web browsers and other common client applications.


Containment Through Disabling Connectivity
Containing incidents by placing temporary restrictions on network connectivity can be very effective. For example, if infected hosts attempt to establish connections with an external host to download rootkits, handlers should consider blocking all access to the external host (by IP address or domain name, as appropriate). Similarly, if infected hosts within the organization attempt to spread their malware, the organization might block network traffic from the hosts’ IP addresses to control the situation while the infected hosts are physically located and disinfected. An alternative to blocking network access for particular IP addresses is to disconnect the infected hosts from the network, which could be accomplished by reconfiguring network devices to deny network access or physically disconnecting network cables from infected hosts.

The most drastic containment step is purposely breaking needed network connectivity for uninfected hosts. This could eliminate network access for groups of hosts, such as remote VPN users. In worst-case scenarios, isolating subnets from the primary network or the Internet might be necessary to stop the spread of malware, halt damage to hosts, and provide an opportunity to mitigate vulnerabilities. Implementing a widespread loss of connectivity to achieve containment is most likely to be acceptable to an organization in cases in which malware activity is already causing severe network disruptions or infected hosts are performing an attack against other organizations. Because a major loss of connectivity almost always affects many organizational functions, connectivity usually must be restored as soon as possible.

Organizations can design and implement their networks to make containment through loss of connectivity easier to do and less disruptive. For example, some organizations place their servers and workstations on separate subnets; during a malware incident targeting workstations, the infected workstation subnets can be isolated from the main network, and the server subnets can continue to provide functionality to external customers and internal workstation subnets that are not infected. Another network design strategy related to malware containment is the use of separate virtual local area networks (VLAN) for infected hosts. With this design, a host’s security posture is checked when it wants to join the network, and also may be checked periodically while connected. The security checking is often done through network access control software by placing on each host an agent that monitors various characteristics of the host, such as OS patches and antivirus updates. When the host attempts to connect to the network, a network device such as a router requests information from the host’s agent. If the host does not respond to the request or the response indicates that the host is insecure, the network device causes the host to be placed onto a separate VLAN. The same technique can be used with hosts that are already on the organization’s regular networks, allowing infected hosts to be moved automatically to a separate VLAN.

Having a separate VLAN for infected hosts also helps organizations to provide antivirus signature updates and OS and application patches to the hosts while severely restricting what they can do. Without a separate VLAN, the organization might need to remove infected hosts’ network access entirely, which necessitates transferring and applying updates manually to each host to contain and eradicate the malware and mitigate vulnerabilities. A variant of the separate VLAN strategy that can be effective in some situations is to place all hosts on a particular network segment in a VLAN and then move hosts to the production network as each is deemed to be clean and remediated.
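The posture check behind the VLAN strategy described above can be sketched roughly as follows: a hypothetical agent report is examined and the host is assigned either to the production VLAN or to a quarantine VLAN. The thresholds, field names, and VLAN IDs are illustrative assumptions, not the behavior of any particular NAC product.

from datetime import date, timedelta
from typing import Optional

PRODUCTION_VLAN = 10        # hypothetical production VLAN ID
QUARANTINE_VLAN = 666       # hypothetical isolated VLAN for non-compliant or infected hosts
MAX_SIGNATURE_AGE = timedelta(days=3)


def assign_vlan(posture: Optional[dict], today: date) -> int:
    """Quarantine hosts that do not respond or whose posture report shows they are insecure."""
    if posture is None:                                  # agent did not answer the request
        return QUARANTINE_VLAN
    if not posture.get("os_patched", False):             # missing OS patches
        return QUARANTINE_VLAN
    signature_date = posture.get("av_signature_date", date.min)
    if today - signature_date > MAX_SIGNATURE_AGE:       # stale antivirus signatures
        return QUARANTINE_VLAN
    return PRODUCTION_VLAN


report = {"os_patched": True, "av_signature_date": date(2015, 8, 20)}
print(assign_vlan(report, date(2015, 8, 22)))   # 10 -> production VLAN
print(assign_vlan(None, date(2015, 8, 22)))     # 666 -> quarantine VLAN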


Containment Recommendations
Containment can be performed through many methods in the four categories described above (users, automated detection, loss of services, and loss of connectivity). Because no single malware containment category or individual method is appropriate or effective in every situation, incident handlers should select a combination of containment methods that is likely to be effective in containing the current incident while limiting damage to hosts and reducing the impact that containment methods might have on other hosts. For example, shutting down all network access might be very effective at stopping the spread of malware, but it would also allow infections on hosts to continue damaging files and would disrupt many important functions of the organization.

The most drastic containment methods can be tolerated by most organizations for only a brief period of time. Accordingly, organizations should support sound containment decisions by having policies that clearly state who has authority to make major containment decisions and under what circumstances various actions (e.g., disconnecting subnets from the Internet) are appropriate.


80. Explain the three main categories of patch and vulnerability metrics.

There are three main categories of patch and vulnerability metrics: susceptibility to attack, mitigation response time, and cost.

Measuring a System’s Susceptibility to Attack
An organization’s susceptibility to attack can be approximated by several measurements.  An organization can measure the number of patches needed, the number of vulnerabilities, and the number of network services running on a per system basis.  These measurements should be taken individually for each computer within the system, and the results then aggregated to determine the system-wide result.

Both raw results and ratios (e.g., number of vulnerabilities per computer) are important.  The raw results help reveal the overall risk a system faces because the more vulnerabilities, unapplied patches, and exposed network services that exist, the greater the chance that the system will be penetrated.  Large systems consisting of many computers are thus inherently less secure than smaller similarly configured systems.  This does not mean that the large systems are necessarily secured with less rigor than the smaller systems.  To avoid such implications, ratios should be used when comparing the effectiveness of the security programs of multiple systems.  Ratios (e.g., number of unapplied patches per computer) allow effective comparison between systems.  Both raw results and ratios should be measured and published for each system, as appropriate, since they are both useful and serve different purposes.
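A small illustration of the raw-count versus ratio point, with made-up numbers: the larger system has more total vulnerabilities (higher overall risk) even though its per-computer ratio shows its security program is performing better.

# Hypothetical measurements for two systems.
systems = {
    "large_system": {"computers": 500, "vulnerabilities": 750},
    "small_system": {"computers": 20,  "vulnerabilities": 60},
}

for name, m in systems.items():
    ratio = m["vulnerabilities"] / m["computers"]
    print(f"{name}: {m['vulnerabilities']} total vulnerabilities, {ratio:.1f} per computer")

# large_system: 750 total vulnerabilities, 1.5 per computer
# small_system: 60 total vulnerabilities, 3.0 per computer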

The initial measurement approach should not take into account system security perimeter architectures (e.g., firewalls) that would prevent an attacker from directly accessing vulnerabilities on system computers.  This is because the default position should be to secure all computers within a system even if the system is protected by a strong security perimeter.  Doing so will help prevent insider attacks and help prevent successful external attackers from spreading their influence to all computers within a system.

While the initial measurement of a system’s susceptibility to attack should not take into account the system security perimeter architecture, it may be desirable to take into account an individual computer’s security architecture.  For example, vulnerabilities exploitable by network connections might not be counted if a computer’s personal firewall would prevent such exploit attempts.  This should be done cautiously because a change in a computer's security architecture could expose vulnerabilities to exploitation.

Number of Vulnerabilities
Measuring the number of vulnerabilities that exist per system is a better measure of an organization's susceptibility to attack, but it is still far from perfect. Organizations that employ vulnerability scanning tools are most likely to employ this metric, since such tools usually output the needed statistics. As with measuring patches, organizations should take into account the severity ratings of the vulnerabilities, and the measurement should output the number of vulnerabilities at each severity level (or range of severity levels). Vulnerability databases (such as the National Vulnerability Database, http://nvd.nist.gov/), vulnerability scanning tools, and the patch vendors themselves usually provide rating systems for vulnerabilities; however, there is currently no standardized rating system. Such rating systems only approximate the impact of a vulnerability on a stereotypical generic organization. The true impact of a vulnerability can only be determined by looking at each vulnerability in the context of an organization's unique security infrastructure and architecture. In addition, the impact of a vulnerability on a system depends on the network location of the system (i.e., when the system is accessible from the Internet, vulnerabilities are usually more serious).

Number of Network Services 
The last example of an attack susceptibility metric is measuring the number of network services running per system.  The concept behind this metric is that each network service represents a potential set of vulnerabilities, and thus there is an enhanced security risk when systems run additional network services.  When taken on a large system, the measurement can indicate a system’s susceptibility to network attacks (both current and future).  It is also useful to compare the number of network services running between multiple systems to identify systems that are doing a better job at minimizing their network services.  Having a large number of network services active is not necessarily indicative of system administrator mismanagement.  However, such results should be scrutinized carefully to make sure that all unneeded network services have been turned off.


Mitigation Response Time
It is also important to measure how quickly an organization can identify, classify, and respond to a new vulnerability and mitigate the potential impact within the organization.  Response time has become increasingly important, because the average time between a vulnerability announcement and an exploit being released has decreased dramatically in the last few years.  There are three primary response time measurements that can be taken: vulnerability and patch identification, patch application, and emergency security configuration changes.

Response Time for Vulnerability and Patch Identification 
This metric measures how long it takes the patch and vulnerability group (PVG) to learn about a new vulnerability or patch. Timing should begin from the moment the vulnerability or patch is publicly announced. This measurement should be taken on a sampling of different patches and vulnerabilities and should include all of the different resources the PVG uses to gather information.

Response Time for Patch Application 
This metric measures how long it takes to apply a patch to all relevant IT devices within the system. Timing should begin from the moment the PVG becomes aware of a patch. This measurement should be taken on patches where it is relatively easy for the PVG to verify patch installation. This measurement should include the individual and aggregate time spent on the following activities:
+ PVG analysis of patch
+ Patch testing
+ Configuration management process
+ Patch deployment effort

Verification can be done through the use of enterprise patch management tools or through vulnerability scanning (both host and network-based).

It may be useful to take this measurement on both critical and non-critical security patches, since a different process is usually used by organizations in both cases, and the timing will likely be different.
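A minimal sketch of the patch-application response-time measurement, recording the component times listed above for a single patch; the dates and stage names below are hypothetical.

from datetime import date

# Hypothetical timeline for one patch, from PVG awareness to verified deployment.
patch_timeline = {
    "pvg_aware": date(2015, 8, 1),
    "analysis_done": date(2015, 8, 2),
    "testing_done": date(2015, 8, 5),
    "change_approved": date(2015, 8, 6),        # configuration management process
    "deployment_verified": date(2015, 8, 10),
}


def stage_durations(timeline: dict) -> dict:
    """Elapsed days per stage plus the total response time for patch application."""
    order = list(timeline)
    stages = {f"{a} -> {b}": (timeline[b] - timeline[a]).days
              for a, b in zip(order, order[1:])}
    stages["total"] = (timeline[order[-1]] - timeline[order[0]]).days
    return stages


print(stage_durations(patch_timeline))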

Response Time for Emergency Configuration Changes 
This metric applies in situations where a vulnerability exists that must be mitigated but where there is no patch.  In such cases the organization is forced to make emergency configuration changes that may reduce functionality to protect the organization from exploitation of the vulnerability.  Such changes are often done at the firewall, e-mail server, Web server, central file server, or servers in the DMZ. The changes may include turning off or filtering certain e-mail attachments, e-mail subjects, network ports, and server applications.  The metric should measure the time it takes from the moment the PVG learns about the vulnerability to the moment that an acceptable workaround has been applied and verified.  Because many vulnerabilities will not warrant emergency configuration changes, this metric will be for a subset of the total number of vulnerabilities for any system.

These activities are normally done on an emergency basis, so obtaining a reasonable measurement sample size may be difficult.  However, given the importance of these activities, these emergency processes should be tested, and the timing metric can be taken on these test cases.  The following list contains examples of emergency processes that can be timed:
+ Firewall or router configuration change
+ Network disconnection
+ Intrusion prevention device activation or reconfiguration
+ E-mail filtering rules addition
+ Computer isolation
+ Emergency notification of staff.


Cost 
Measuring the cost of patch and vulnerability management is difficult because the actions are often split between many different personnel and groups.  In the simplest case, there will be a dedicated centralized PVG that deploys patches and security configurations directly.  However, most organizations will have the patch and vulnerability functions split between multiple groups and allocated to a variety of full-time and part-time personnel.  There are four main cost measurements that should be taken: the PVG, system administrator support, enterprise patch and vulnerability management tools, and incidents that occurred due to failures in the patch and vulnerability management program.

Cost of the Patch and Vulnerability Group 
This measurement is fairly easy to obtain since the PVG personnel are easily identifiable and the percentage of each person’s time dedicated to PVG support should be well-documented.  When justifying the cost of the PVG to management, it will be useful to estimate the amount of system administrator labor that has been saved by centralizing certain functions within the PVG.  Some organizations outsource significant parts of their PVG, and the cost of this outsourcing should be included within the metric.

Cost of System Administrator Support
This measurement is always difficult to take with accuracy but is important nonetheless.  The main problem is that, historically, system administrators have not been asked to calculate the amount of time they spend on security, much less on security patch and vulnerability management.  As organizations improve in their overall efforts to measure the real cost of IT security, measuring the cost of patch and vulnerability management with respect to system administrator time will become easier.

Cost of Enterprise Patch and Vulnerability Management Tools 
This measurement includes patching tools, vulnerability scanning tools, vulnerability Web portals, vulnerability databases, and log analysis tools (used for verifying patches).  It should not include intrusion detection, intrusion prevention, and log analysis tools (used for intrusion detection).  Organizations should first calculate the purchase price and annual maintenance cost for each software package.  Organizations should then calculate an estimated annual cost that includes software purchases and annual maintenance.  To create this metric, the organization should add the annual maintenance cost to the purchase price of each software package divided by the life expectancy (in years) of that software.  If the software will be regularly upgraded, the upgrade price should be used instead of the purchase price.

Estimated annual cost = Sum of annual maintenance for each product + Sum of (purchase price or upgrade price / life expectancy in years) for each product
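A worked example of the formula above with made-up figures; the product names, prices, and life expectancies are illustrative only.

# Hypothetical tool inventory: (purchase or upgrade price, life expectancy in years, annual maintenance)
tools = {
    "patch management suite":            (50_000, 5, 10_000),
    "vulnerability scanner":             (20_000, 4,  4_000),
    "log analysis (patch verification)": ( 8_000, 4,  1_500),
}

estimated_annual_cost = sum(
    maintenance + price / life_years
    for price, life_years, maintenance in tools.values()
)
print(f"Estimated annual cost: ${estimated_annual_cost:,.0f}")

# patch management suite:   10_000 + 50_000/5 = 20_000
# vulnerability scanner:     4_000 + 20_000/4 =  9_000
# log analysis:              1_500 +  8_000/4 =  3_500
# Total estimated annual cost:               32_500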

