Sunday, August 28, 2011

Complex Structures of an Argument

Argumentation: The Study of Effective Reasoning

Commentary: This is a series on effective reasoning as it applies to project management. Using proper argumentation in a project while vetting risks, options, objectives, strategies, and workaround solutions can strengthen a project's performance, improve communications, and develop a sense of unity. Effective argumentation comes down to building the strongest case for a claim. In this series I will be summarizing points made by David Zarefsky in his Teaching Company coursework as well as drawing on other resources. This series of posts may be reviewed at the Argumentation Series Posts link.

The model explored in the last post was a simple argument with a simple claim. Simple arguments develop sequentially through conversation. Most arguments, in contrast, are complex and are developed based on the premises that the audience accepts or rejects. In this post we will explore more complex structures and how to formulate stronger arguments. I have also provided a review of Cognitive Biases.

Complex Structures of an Argument

Complex arguments begin with a resolution, which is a single declarative statement or sentence that captures the substance of the controversy being explored. The resolution may be implicit or explicit and is the ultimate claim upon which judgment is made. There are different types of resolutions, such as fact, definition, value, and policy, all of which require proof accepted by the audience.

Issues, a term used loosely in everyday language, are also part of a complex argument; they are implicit in resolutions and are the questions inherent to the controversy. Thus, they are vital to the success of the resolution. One may identify issues by examining the text of the resolution, the underlying context, or the pattern of claims and responses (Figure 1). The actual issues are the potential issues less the uncontested issues within a complex argument.
Figure 1: General Complex Argument Structure
Source: The Teaching Company, Argumentation Course

Figure 2 illustrates the three major patterns for constructing an argument: series, parallel, and convergent. A series argument builds on earlier arguments, and all the arguments must win free assent in order for the resolution to be accepted. In a parallel argument all the arguments are independent; however, any one of them is sufficient for acceptance of the resolution. In a convergent argument, each argument is independent but converges on the resolution and, again, all must win free assent in order for the resolution to be accepted.
Figure 2: Complex Argument Structures
Source: The Teaching Company, Argumentation Course
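The three patterns can be reduced to simple acceptance rules over the audience's responses. The sketch below is illustrative and not from the course; each supporting argument is collapsed to a boolean indicating whether it won the audience's free assent.

```python
# Sketch of the three complex-argument patterns as acceptance rules.
# Each supporting argument is reduced to a boolean: did it win free assent?

def series_accepted(assents):
    """Series: arguments build on one another, so every link must win assent."""
    return all(assents)

def parallel_accepted(assents):
    """Parallel: arguments are independent, and any one suffices."""
    return any(assents)

def convergent_accepted(assents):
    """Convergent: independent arguments that must all hold to support
    the resolution. The boolean rule matches series; the difference lies
    in the structure (a chain vs. independent lines of support)."""
    return all(assents)

# Hypothetical audience responses to three supporting arguments:
assents = [True, True, False]
print(series_accepted(assents))    # False: one broken link defeats the chain
print(parallel_accepted(assents))  # True: one accepted argument is enough
```

This makes the practical difference concrete: a parallel case survives a lost argument, while series and convergent cases do not.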

The use of argument models has been criticized as useful for analyzing and identifying argument structure but not for constructing arguments. The criticism is that models "abstract out" subtle features of language, emphasis, and presentation integral to the argument. Additionally, some argue that models incorrectly impose a linearity on the thread line of an argument, suggesting an evidence-to-claim relationship that ignores the inferential leap found in informal argumentation. The benefits of using models to structure an argument include:
  1. Identifying the components of an argument
  2. Alerting to the internal dynamics of the argument
  3. Allowing a transition to a common form for comparisons.
Commentary: The use of a model to structure an argument before offering the argument to the audience is left up to the arguer. Mapping the argument's components may yield insight that strengthens the resolution and realizes free assent more easily. For example, a project manager may be confronted with principals and stakeholders who are polarized and hold strong opinions. The project manager must broker a resolution that makes sense to all of them. Most likely there will need to be some compromise and some gains for each. By carefully structuring the model, the project manager may seek either unilateral or popular support for his resolution. Unilateral support will be difficult to achieve, as everyone must agree with the claims and resolution, whereas with popular support the project manager only needs a majority in agreement.

Once the project manager decides on the level of support required, he will select a model. In the case of popular support the parallel model works because not all the arguments need to win free assent in order for the resolution to be accepted. So the project manager may focus energy on strengthening arguments for primary stakeholders or issues that, once won over, will result in acceptance of the resolution. The issues of lesser importance may be resolved too, but if they are not, the resolution still moves forward. There is potential risk in this approach, and those risks may be dealt with in a timely manner through an operational risk management approach rather than the more formal, deliberate, and lengthy method PMI suggests.

Next week's post will cover case construction. If you have any comments or ideas please feel free to forward them to me at james.bogden@gmail.com.

References:

Zarefsky, D. (2005). Argumentation: The Study of Effective Reasoning, 2nd ed. The Teaching Company, Chantilly, VA.

Thursday, August 25, 2011

Cognitive Bias Brief

Commentary: This is a discussion of cognitive bias, which is important to understanding decision making. Understanding biases and how they influence decisions and thought will aid in managing audiences and individuals alike. There are many biases and types; unfortunately, not all of them can be included. I will highlight a few to demonstrate their role in project decision making and argument formulation.

Cognitive Bias Brief

Cognitive bias is a tendency to acquire and process information by filtering it through one's own likes, dislikes, and experiences, and it can result in poor decision making. In general, the term is used to describe effects within the human mind, some of which can lead to perceptual distortion, inaccurate judgment, or illogical interpretation. Biases fall into the generalized categories of decision-making and behavioral biases, biases in probability and belief, social biases, and memory errors. In all, there are over seventy-eight different biases identified.

The purpose of this posting is to offer some background regarding cognitive biases. All too often analysis paralysis can set in as the project manager attempts to sort through the real issues, when the focus should be effectiveness at moving projects forward. I'll discuss a few of the prevalent biases and put the focus on techniques used to move things forward.

Cognitive Bias Types

As noted earlier, there are over seventy-eight different biases identified that fall into four generalized categories: decision-making and behavioral biases, biases in probability and belief, social biases, and memory errors. For a comprehensive list of biases please refer to Wikipedia: List of Cognitive Biases. I'll look at three biases and discuss approaches to handling them.

Confirmation Bias

The first bias I would like to highlight is the confirmation bias, defined as the tendency to search for or interpret information in a way that confirms one's preconceptions. This is of particular interest when stakeholders and principals come to the project with an agenda. The options they desire and the choices they push for are justified by the agenda and selective information. This can be a particular challenge during a project, especially if they are ardently pressing an agenda that conflicts with or constrains the project by driving changes to the scope or delaying schedules through change orders that may be whimsically directed via political connections without properly vetting the options.

Hyperbolic Discounting Bias

Another bias deserving attention is hyperbolic discounting. This is the tendency of people to have a stronger preference for more immediate payoffs relative to later payoffs, and the tendency increases the closer to the present the payoff occurs. Humans are said to discount the value of the later reward by a factor that increases with the length of the reward delay. This process was originally modeled in the form of exponential discounting and has been refined over the years into a hyperbolic discounting model. In project management, there are organizations with a technology project in which the catch phrases "go live" or "we've got to show results" become pervasive. This is reflective of a short technological attention span within an organization and generally originates out of a desire for a return on the investment driven by accountants.
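The two discounting models can be compared numerically. This is a minimal sketch under assumed values: a reward of 100, an illustrative discount parameter k, and the standard model forms V = A·e^(−kD) (exponential) and V = A/(1 + kD) (hyperbolic); none of these numbers come from the post.

```python
# Compare exponential vs. hyperbolic discounting of a delayed reward.
# The reward amount and k below are illustrative assumptions.
import math

def exponential_value(amount, delay, k=0.1):
    """Exponential discounting: V = A * exp(-k * D)."""
    return amount * math.exp(-k * delay)

def hyperbolic_value(amount, delay, k=0.1):
    """Hyperbolic discounting: V = A / (1 + k * D)."""
    return amount / (1 + k * delay)

# Delays in months; 18 mirrors the average project term cited later in the post.
for delay in (0, 6, 18):
    print(delay,
          round(exponential_value(100, delay), 1),
          round(hyperbolic_value(100, delay), 1))
```

Note that the hyperbolic curve falls steeply near the present but flattens for long delays, which is exactly the "stronger preference for immediate payoffs" described above.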

Planning Fallacy Bias

Planning fallacy bias is the tendency to be optimistic about task-completion times, setting unrealistically early dates for one's own tasks, while independent third-party estimates for the same task are typically pessimistic or overstated. This is a common problem in information technology projects: optimistic and unrealistic dates are set, resulting in schedules that are constantly being pushed out and delivery dates that are rarely achieved.

There are many more biases that apply but limited time and space to post. Given these three biases, let's look a little further into them and into how project managers can minimize their effects, and those of others.

A Deeper Look at What is Going On

Given all the different biases and the combinations in which they may occur, project managers may feel overwhelmed trying to learn the biases and the responses to them. Fortunately, there are methods available that preclude learning individual responses to each bias, although it could be of benefit to learn how to handle a few of the more prevalent ones with more direct methods. In this post I'll simply defer to generalized methods of handling biases when the symptoms show up.

In projects, many symptomatic traits surface that indicate the presence of a bias. People have been shown to have very brief technological attention spans, as short as 3 seconds. This may be observed as glassy-eyed looks when discussing technical issues or frustration when waiting on a system to respond. In the case of technology projects, studies by the Gartner Group have shown the attention span is about 90 days, while the average technology project term is 18 months. This differential can feed the planning fallacy bias, contributing to overly optimistic delivery dates. Additionally, catch phrases such as "we've got to show results" or "go live" will begin to surface at about the 90-day mark.

Another symptomatic trait is the natural tendency to seek an ROI, or return on investment. Financial and accounting staff have professionally formalized this bias into their financial decision making: the sooner the return, the better the option. In fact, in accounting and financial management practice there is a discount rate that shows a lower value for lengthier capital investment periods. This bias, hyperbolic discounting, can be detrimental to the success of technology and strategic projects, which often seek long-term results or outcomes. For example, some companies seek to leverage a technological advantage in their operations. This advantage may improve responsiveness to volatile markets or expose emerging niche markets that operate on a time-based profit model in which the money to be made is measured in very brief periods before the market evaporates. However, developing the systems and processes may take a long time before the benefits are realized after delivery.

In short, nearly all information technology projects are strategic in nature. Most technology projects support operations by improving immediate concerns, such as compressing information or decision-processing time, as a collateral benefit of a larger picture. The organization pursuing information technology projects is usually seeking to reduce latencies, exploit patterns and trends for profit, improve quality-cost relationships, or strengthen measurable organization value (MOV).

Handling Biases Means Managing Personalities

Personality management is a term coined to describe the processes and means of rallying people to a common cause and focus. The term has been widely used in the military, where a wild child, a maverick, shows character and needs to be focused. Often the need for expediency relies on compromise, guidance, and leadership. More important is the ability of the project manager to develop a sense of community and unity of purpose.

For the project manager this process begins by developing a strong stakeholder register that includes project principals: people who are not stakeholders yet affect the project, such as artisans, technicians, administrative staff, and so on. Project principals can have strong influence on the project and often bring significant experience into it. Principals may see poor paths the project is taking or know that certain approaches have failed in the past. Principals and stakeholders also have career aspirations and the desire to see their ideas instituted over others. Rather than assessing which cognitive bias is in play, project managers can use simple and common methods to move things forward.

All too often people pay too much attention to prepackaged programs such as Win-Win or Highly Effective Habits. These have their merits and should be implemented with earnest interest, not like some thespian running through a script. Dale Carnegie offers basic guidance in his principles. The core principles of Carnegie's approach involve sincere appreciation, not condemning or complaining, arousing an eager want, smiling, and being genuinely interested in other people. Practicing these five core principles can win others over during argumentation as well as move projects forward. When people have career aspirations such that their drive influences decision making, project managers can listen and give people a good name to live up to, which are other Carnegie principles.

Nonetheless, project managers should not yield fully to other people's whims but instead genuinely listen and incorporate thoughts of value that do not delay the project or increase cost. If a principal or stakeholder insists on an idea that impacts costs or schedules, then formal change management processes should be invoked. Having structured arguments and processes provides a baseline from which to deviate and against which to measure ideas. Informal argumentation offers more flexibility to maneuver and reason constructively. Occasionally, leadership or others want to work outside the system of processes in place, and there are circumstances in which this is productive. The project manager at this point may choose to accept the decision or encourage a parallel process to corroborate the decision made outside the system. There are a myriad of exceptions. Project managers fall back on their personal and political capital during these times and rely on guiding principles to move things forward.

Principles set moral and ethical boundaries. Acting and managing in a principled manner alleviates the project manager from having to memorize biases and the scripted approaches used to correct them. Using principles as guideposts will establish the project manager as aspiring to greater heights and sincerely trying to achieve results within a set of parameters. People will understand that principles are to be upheld and that decisions are to be made against them as well as against the systems or structures in place to manage change. In the end, the project manager will have led people toward the vision and objectives, extracted value from the corporate knowledge within the project, and made people feel a part of the system.

Sunday, August 21, 2011

The Non-Dimensionalized Methodology Brief

In business there is an almost absolute single-mindedness about financials. However, a financial bias does not account for the entire circumstance, to the chagrin of accountants and financial analysts who often attempt fudge factors and assign financial values to other qualities. Other non-financial approaches include measurable organizational value (MOV) and non-dimensionalization methods. This post explores non-dimensionalization.

The Non-Dimensionalized Methodology
by
JT Bogden, PMP

Non-dimensionalization is the removal of dimensions such as dollars, square feet, or other units of measure in order to observe the characteristic performance of a system in terms of a ratio or coefficient. In statistics the process is known as normalization. The technique is common in aerospace engineering, where coefficients of lift, drag, and stability are used for specific aerodynamic bodies. The performance graphs can be easily redimensionalized for the differing flight environments found off planet. For example, Venus's atmosphere is largely carbon dioxide and nitrogen with sulfuric acid clouds, as opposed to Earth's atmosphere of nitrogen and oxygen, but the flight performance of an aerodynamic body is basically the same in any fluid. Project managers can take advantage of the same methodology in order to improve decision making.

Often project managers are confronted with complex decisions involving multiple options and interrelated characteristics, such as rent and square footage or purchase price and yield rates. Many may look at these relationships in terms of ratios such as dollars per square foot or dollars per unit yield. This may suffice for simple decisions, but what if there were a dozen of these characteristics? How should the highest-performing option be reasonably vetted?

Non-dimensionalization is a method that reduces dimensioned characteristics to a coefficient of performance for comparison purposes. The process makes use of a spreadsheet: options are listed across the columns and established performance characteristics are shown in the rows. The coefficients are computed by row, and the option columns are averaged. An organization may use project portfolio performance characteristics or formulate them for the specific decision, as long as they are consistent across and supported by all the options. The organization will also need to establish the vetting criteria, such as maximizing benefit and minimizing cost. In this discussion maximization will always be 1 and minimization will always be 0. This results in the use of the formulas shown in Figure 1: if costs are at their maximum, CPmin will be used; if benefits are to be maximized, CPmax will be used.
Figure 1: Non-Dimensionalizing Formulas
Please refer to Figure 2 as we discuss the process of non-dimensional analysis. When costs are at a maximum value, we desire to minimize the impact; therefore, CPmin was used, with the ideal value being 0. When the benefit is at a maximum, we desire to maximize the impact, with the ideal value being 1; thus, CPmax was employed. The decision-making scheme is such that the closer an option's average CP value is to 1, the more desirable that coefficient of performance, or CPn. In Figure 2, the most desirable option is Option 2, with an average CP2 value of 0.58, since it is the closest to 1. The CProw value is the average value for the characteristic, which gives the decision maker a gauge of the degree of impact of the CP for the characteristic being reviewed.
Figure 2:  The Non-Dimensionalized Decision Matrix
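The matrix computation can be sketched in a few lines of code. Since Figure 1's exact CPmin/CPmax formulas are not reproduced in the text, the sketch below assumes one common pair of normalizations (best value in each row maps to 1, so the option whose average CP is closest to 1 wins); the options and characteristic values are entirely hypothetical.

```python
# Non-dimensionalized decision matrix sketch.
# ASSUMPTION: Figure 1's formulas are not shown in the post, so we use a
# common normalization where the best value per row scores 1.

def cp_max(row):
    """Characteristic to maximize (e.g., benefit): CP = x / max(row)."""
    m = max(row)
    return [x / m for x in row]

def cp_min(row):
    """Characteristic to minimize (e.g., cost): CP = min(row) / x."""
    m = min(row)
    return [m / x for x in row]

# Hypothetical matrix: three options (columns), three characteristics (rows).
cost   = cp_min([120_000, 90_000, 150_000])  # dollars, minimize
yields = cp_max([0.92, 0.88, 0.95])          # unit yield rate, maximize
sqft   = cp_max([4000, 5200, 4800])          # square feet, maximize

matrix = [cost, yields, sqft]
averages = [sum(col) / len(col) for col in zip(*matrix)]  # average CP per option
best = averages.index(max(averages))
print([round(a, 2) for a in averages], "-> best option:", best + 1)
```

The dimensions (dollars, rates, square feet) drop out entirely, leaving comparable coefficients, which is the point of the method.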
As an example of this approach in use, when selecting drives for my personal data storage unit I identified the factors, collected the information, and purchased the drive meeting the performance criteria. Overall, this is one tool the project manager may employ when making similar decisions and arguing the reasons behind his selection. By removing the financial bias of purchase cost and return on investment (ROI), better-performing options become apparent. I hope this clarifies the use of the method.

Saturday, August 20, 2011

Argument Analysis

Argumentation: The Study of Effective Reasoning

Commentary: This is a series on effective reasoning as it applies to project management. Using proper argumentation in a project while vetting risks, options, objectives, strategies, and workaround solutions can strengthen a project's performance, improve communications, and develop a sense of unity. Effective argumentation comes down to building the strongest case for a claim. In this series I will be summarizing points made by David Zarefsky in his Teaching Company coursework as well as drawing on other resources. This series of posts may be reviewed at the Argumentation Series Posts link.

In the last posting, we discussed the need for a transition from formal toward informal argumentation. I had several readers comment on the series. One reader desires a discussion of cognitive bias as it applies to argumentation in project management. Cognitive bias is a tendency to acquire and process information by filtering it through one's own likes, dislikes, and experiences. In general, the term is used to describe effects within the human mind, some of which can lead to perceptual distortion, inaccurate judgment, or illogical interpretation. I have provided a review on Cognitive Biases. In the next few posts, we will consider the argument's structure and process.

Argument Analysis and Structure

Arguments begin with a controversy or disagreement that is nontrivial. There is no easier means of resolving the disagreement, such as empirical methods or recognition of some authority, and the outcome cannot be deduced from existing knowledge. New information must be introduced and supported. Assent of the other party is desired or required to settle the dispute; thus, the situation cannot be abandoned. Recall that respect for the other party and confidence in the outcome make free assent essential. The argument, then, is over significant controversies between the parties. The controversy arises from several possible sources. For example:
  1. A statement that was made during a conversation, presentation, or meeting. Sometimes people present conflicting information, do not make sense, or offer an opinion, and the other party seeks clarification in order to accept the statement. Commentary: For example, "blue is a better color than red." The challenge to this opinion is: why?
  2. Given multiple choices, options, solutions, or outcomes, only one selection can be made. The result is sometimes difficult to discern and requires discussion. The selection of a particular option or solution may be the consensus or assent of the group. Commentary: There are often tools to aid project managers in making these selections. One methodology is to use a non-dimensionalized performance matrix. In this approach a series of characteristics common to all the options is gathered, then compared using spreadsheet formulas that reduce the dissimilar dimensions to performance indexes in order to compare apples to oranges. The formulas seek to optimize toward the maximum or minimum values as determined in the formulation of the selection criteria. This kind of argument can be managed more discretely and formally, reducing the conflict and disagreement in the decision making.
  3. Cognitive bias affecting knowledge, methods, or judgments. Often people commingle their personal views with a claim. This is especially true when politics, money, religion, or power is involved. They tend to align the view to their understanding rather than forming the view from the information available or presented. Commentary: For example, many Young Earth Creationists hold to a 24-hour solar creation day, claiming a day is a day, not a month or a year. However, empirically a solar day varies on the planet Earth and is defined as an evening-to-evening event in the immediate scriptural references. An evening-to-evening event at the Equator is 24 hours, while at the poles it is one year. Hence, based on natural observation, the support for a definitive 24-hour solar creation day is not present; what is supported instead is an evening-to-evening event, which demonstrates temporal variance in the present.
  4. Defense of a position taken. Often the challenge arrives as "How do you know?" or "What do you mean?" as a means of further evaluation. The arguer must present valid reasons, and free assent must accept those reasons in order to settle the position or claim. Commentary: For example, the claim may be that global warming effects are impacting a project's progress as PVC pipes are breaking down in the exposed sunlight and leaking. The arguer may show evidence of UV radiation and increased temperatures. The challenge to this argument may show an exception: the PVC pipe is not designed or rated for outdoor use. Thus, exceptions can force a claim to be re-examined altogether by presenting new information.
  5. A critical position taken in prose or public speaking as though engaged in dialogue. Often the speaker or author will assume both positions as they argue a point. They attempt to demonstrate the strengths and weaknesses of each position in an unbiased manner so that a third party can make a reasonable decision. The speaker or author will make claims of generally four types: fact, definition, value, and policy. Claims of fact involve descriptions that are verifiable. Claims of definition are a matter of interpretation that provide perspective. Claims of value are judgments that appraise as absolute or comparative as well as instrumental or terminal. Classifying the claim is important in order to determine the kind of proof necessary. Commentary: Although considered informal since inferential links are made, this includes a formal approach, as the sides are presented in the third person in a structured manner. Critical writing or speaking methods attempt to remove bias and focus on the real outcomes. Readers and listeners have little opportunity, if any, to dispute the claims and justifications at the time they review the argument. With the advent of YouTube and other social media channels, though, dissection and commentary are becoming pervasive to the point that learning is no longer taking place. Instead, many are unmovable or pigeonholed while attempting to support a personal view, often with abusive mannerisms. The American Forefathers admired the notion of the public sphere yielding to common people the ability to debate topics of nobility through free speech and freedom of the press, but not at the expense of civility.
The structure and components of an argument are just as important as the claims made. The basic structure of the argument consists of the claim, evidence, an inferential link between the claim and evidence, and a warrant that justifies the inference. 

These components are not always apparent. The advocate advances the claim, which is not immediately accepted, resulting in the advocate producing evidence and support for the claim. If the truth of the claim is not accepted, then a separate argument addresses the truth. If the truth and evidence are accepted but do not justify the claim, then a warrant is made, making the inferential link between the evidence and the claim. If the warrant is not accepted, then another argument is made to support the inferential link. An exception or other explanation may be noted, and the claim may need to be further qualified. This process repeats until the dispute is settled.

In this review, David Zarefsky adapted the work of Stephen Toulmin, who identified the major components of an argument. Claims are the statements that we want listeners to believe and act upon. Evidence is the grounds for making a claim and supports it but is not the claim itself; evidence must be accepted by the audience as truthful. The inference is the main proof line leading from the evidence to the claim. The warrant is the license to make the inference and a general rule that recognizes the possibility of exceptions. Exceptions to the warrant qualify the claim. Figure 1 illustrates the simple argument.
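These Toulmin-style components can be captured in a small data structure. The sketch below is illustrative, not from the course; the project-schedule claim, evidence, and warrant are hypothetical examples.

```python
# Toulmin-style components of a simple argument as a small data structure.
from dataclasses import dataclass, field

@dataclass
class Argument:
    claim: str                                       # what we want the audience to believe
    evidence: list = field(default_factory=list)     # grounds, accepted as truthful
    warrant: str = ""                                # license for the evidence-to-claim inference
    exceptions: list = field(default_factory=list)   # rebuttals that qualify the claim

    def qualified_claim(self):
        """State the claim, qualified by any noted exceptions."""
        if self.exceptions:
            return f"{self.claim} (unless: {'; '.join(self.exceptions)})"
        return self.claim

# Hypothetical project-management example:
arg = Argument(
    claim="The schedule will slip",
    evidence=["two change orders were approved this week"],
    warrant="approved change orders historically extend the critical path",
    exceptions=["the changes touch only non-critical tasks"],
)
print(arg.qualified_claim())
```

Writing the argument out this way makes the qualification step explicit: the exception does not destroy the claim, it narrows it.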

Source: The Teaching Company (C) 2005

Commentary: Informal argumentation is structured and makes an inferential linkage between the claim and the evidence. In making this inferential linkage, the project manager must have a reasonable degree of technical knowledge of the supporting background to make sense to the audience. This may mean the project manager should be well studied in many disciplines in order to converse reasonably well during the dialogues. Listening and skimming skills become important; even small talk is important. Book series such as the "Intellectual Devotionals" aid in growing foundational knowledge from which to build.

In the next posting, we will look at complex structures of an argument.  If you have any comments or ideas please feel free to forward them to me at james.bogden@gmail.com.

References:

Zarefsky, D. (2005). Argumentation: The Study of Effective Reasoning, 2nd ed. The Teaching Company, Chantilly, VA.

Tuesday, August 16, 2011

COMPUTER ATTACKS: TCP/IP SECURITY

TCP/IP SECURITY

Through the years, computers have been subjected to many attacks. Users were advised to choose good passwords, not to share accounts or passwords with other users, and to obtain a strong virus checker and firewall. Resourceful hackers, malcontents, and other attackers have found ways to compromise computer systems by using protocols such as TCP/IP (Transmission Control Protocol/Internet Protocol). Many of these attacks were anticipated long ago, yet the Internet itself is not well protected against them. It is to the benefit of users to understand these methods and to seek defenses and configurations against them.

As with most disciplines, there is a host of jargon that tends to confuse the layperson. The following definitions will aid in your understanding of TCP/IP security.

Chronograph (CRON): A command that executes a list of one or more commands in a computer operating system at a particular time.

Demilitarized Zone (DMZ): A virtual region used to isolate a private network from a public one. It is usually bracketed by an inside and an outside router, with one or more firewalls and security monitoring software watching activity inside the DMZ. The concept is to prevent network access by buffering the network from general internet traffic. If someone makes unauthorized entry into the DMZ, the presumption is that they are hostile, and measures are taken to prevent further access to the systems.

File Transfer Protocol (FTP): This is a command that allows for the transfer of files between systems.

Global/Regular Expression/Print (GREP): A utility used to search one or more files for a given character string or pattern; combined with companion tools such as sed, the matched string can be replaced with another one.

Host Tables: A table that associates IP addresses with host names on the network. Typically it can be found at /etc/hosts on UNIX machines. It is not commonly used since most systems have addresses assigned dynamically.

Internet Assigned Numbers Authority (IANA): An organization that oversees the allocation of IP addresses to Internet Service Providers (ISP).

Internet Protocol Security (IPSec): A series of standards that provide general-purpose security for any IP-based network including Intranets, extranets and the World Wide Web or Internet itself.

Network Address Translation (NAT): Originally intended to substitute official Internet addresses for private and unregistered IP addresses. However, it became an effective method of hiding internal addresses from detection on the Internet. NAT is often targeted by hackers seeking to footprint and enumerate the network.

Port: A logical connection point on a network device, used in programming, that allows an originating device or program to communicate using TCP/IP across a network or the Internet with a destination device or program. Ports are managed by the operating system and the TCP services resident on any network device.

Port Numbers: The number assigned to a specific port corresponding to one of the 65,536 possible ports on any specific device.

Request for Comment (RFC): Internet documents used for everything from general information to the definitions of the TCP/IP protocol standards. RFCs are published through the Internet Engineering Task Force (IETF) as part of the open, democratized design of the Internet.

Routing Table: Directs packets toward their destinations. It may be built by the system administrator or by routing protocols. Hackers target this table as part of their efforts to gain access to a system.

Routing Protocols: Programs that exchange routing information in packets and are used to build routing tables.

Sockets: The combination of an IP address and a port number. Sockets are expressed in this manner: (IP Address):(Port Number).
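In Python's standard socket API, that (IP Address):(Port Number) pair is represented directly as a tuple. A minimal sketch (binding to port 0 asks the operating system to hand back any free port):

```python
import socket

# A socket address is just (IP address, port); Python's socket API
# takes and returns this tuple directly.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(("127.0.0.1", 0))    # port 0 = let the OS pick a free port
ip, port = s.getsockname()  # the (IP Address):(Port Number) pair in use
s.close()
print(f"{ip}:{port}")
```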

Transmission Control Protocol (TCP): A reliable, connection-oriented message delivery service. This protocol manages the disassembly of a message or file into smaller packets that are transmitted over telecommunications media (wires, fiber optics, microwave, etc.) and received by a similar TCP layer in the destination machine, where the packets are reassembled into the original message.

User Datagram Protocol (UDP): An unreliable connectionless delivery service that provides limited services across a network or the Internet.

Virtual Private Network (VPN): A method of point-to-point connections through firewalls that automatically encrypts packets sent between networks and specific devices on those networks. The VPN can use the Internet as a private wide area network without compromising the data, and is considered a reliable and secure form of communication.

TCP/IP Commands: These are some of the classic commands used in command-line administration of network systems. It is not uncommon for administrators to remove these commands or severely restrict access to them. In Windows 7, one can run some of these commands by typing CMD in the Start menu's "Search programs and files" box to launch the command prompt.

Address Resolution Protocol (ARP): Provides information about Ethernet/IP address translation. Used to detect bad IP addresses, incorrect subnet masks, and improper broadcast addresses.
Command line usage: arp -s [ip] [mac], where [ip] is an IP address in dotted decimal notation and [mac] is the hardware physical address. Command line switches:
-a lists the table of values
-d deletes the entry specified by the given IP address (inet_addr)
-s adds an entry in the ARP cache to associate the IP address (inet_addr) with the physical address (ether_addr)

AT: Executes commands at a given time. Command line switches:

-l lists scheduled jobs
-r removes a scheduled job

CRON: Executes scheduled commands on a regular basis.

Find: Used to detect potential filesystem security problems; it allows you to perform many logical tests on files.
Command line switches:
-name file name of a file or wild-carded filename
-links n selects any file that has n links (use +n for more than n)
-size n selects any file that occupies n 512-byte blocks (use +n for larger)
-atime n selects any file last accessed n days ago (use -n for within the past n days)
-print prints out the name and location of any selected file

GREP: Searches the named input files for lines containing a match to a given pattern. Command line switches:
-G interprets pattern as a basic regular expression.
-E interprets pattern as an extended regular expression.
-F interprets pattern as a list of fixed strings, separated by newlines, any of which is to be matched.
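Those switches can be approximated in a few lines of Python — a sketch of the matching behavior, not grep itself, run over a made-up sample of /etc/passwd-style lines:

```python
import re

lines = [
    "root:x:0:0:root:/root:/bin/bash",
    "daemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin",
    "alice:x:1000:1000::/home/alice:/bin/bash",
]

# Like grep -G / -E: the pattern is a regular expression.
bash_users = [l for l in lines if re.search(r":/bin/bash$", l)]

# Like grep -F: the pattern is a fixed string, no metacharacters.
daemon_lines = [l for l in lines if "daemon" in l]
```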

Last: Displays who has logged into a system in the past. It is useful for learning normal login patterns and detecting abnormal login activity.

LS: Shows the ownership, permissions, creation date, and size of every file on your computer. Command line switches:
-a lists all entries
-c use time of last edit (or last mode change) for sorting or printing
-C force multicolumn output with entries sorted down the column
-d if argument is a directory, list only its name
-i print each file's inode number in the first column of the report.
-l list in long format, giving mode, number of links, owner, size in bytes, & last modification
-n list the user and group ID numbers
-q display nongraphic characters in filenames
-r reverse the order of sort

Netstat: Displays statistics about each network interface, sockets, and routing tables.
Command line switches:
-a displays a list of all the ports that programs and users outside the network can use
-r displays the routing table
-n displays addresses and port numbers in numeric form

PS: Display the status of current processes.
Command line switches:
-aux or -ef Displays the user and command that started each process

Telnet: Provides remote login over the network.

Who: Provides information about who is currently logged on the system. It displays the login name, what device they are using, when logged in, and what remote host.
Command line switches:
-w show active processes started by the login name.
-d shows expired processes
-s shows name, line, and time


Network Security and Technologies

Security Planning involves a well-thought-out security plan that decides what needs to be protected, how much to invest, and who will be responsible for carrying out the security measures. Security planning is the building block of security, but a plan must be in place before it can take effect. A strong plan usually entails a system of checks and balances: for example, system administrators who manage the system on a day-to-day basis, users and support staff who operationally observe network performance, and auditors who review and monitor activity and settings on the networks. Network security also involves many technologies.

IP Security (IPSec) consists of a collection of RFC standards. It is not the only standard for Internet-related security, but it is the solution when dependable, general-purpose security is needed for confidential communications over public or private IP networks. IPSec provides three distinct forms of protection for the transfer of secure data:
  • Authentication: The property of knowing that the data received is the same as the data that was sent and that the claimed sender is in fact the actual sender.
  • Integrity: The property of ensuring that data is transmitted from source to destination without undetected alterations.
  • Confidentiality: The property of communicating such that the intended recipients know what was being sent but unintended parties can’t determine what was sent.
An advantage to IPSec is that it can be implemented entirely in shared network access equipment. Doing this eliminates the need to upgrade any network-attached resources.
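IPSec implements these properties with its AH and ESP protocols at the IP layer. As a stand-alone illustration of the authentication and integrity properties (not IPSec itself), a keyed hash (HMAC) over a message lets a receiver detect both forgery and tampering; the key and message below are hypothetical:

```python
import hashlib
import hmac

key = b"shared-secret"            # hypothetical pre-shared key
msg = b"transfer 100 to bob"

# Sender attaches a MAC; the receiver recomputes it and compares.
tag = hmac.new(key, msg, hashlib.sha256).digest()

def verify(key, msg, tag):
    expected = hmac.new(key, msg, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)  # constant-time compare

ok = verify(key, msg, tag)                       # authentic, intact
tampered = verify(key, b"transfer 999 to bob", tag)  # altered in transit
```

Only someone holding the key can produce a valid tag (authentication), and any change to the message invalidates it (integrity); confidentiality would additionally require encryption.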

Firewalls are systems that replace an IP router with a multi-homed host that does not forward packets. There are four types of firewalls:
  • Packet Filtering, which is a simple static means of examining traffic based on addresses and/or packet type.
  • Circuit-level gateways that provide “openings” for all approved sessions based on an assortment of criteria.
  • Proxy or application gateways that perform a more in-depth analysis of traffic, including the higher-level application.
  • Stateful inspection, which combines features of the other types to achieve a truly dynamic way of adapting to changing traffic patterns.
Passwords are the simplest and one of the most important parts of network security, and they should be constructed to resist guessing, since attackers enter many systems by simply guessing passwords. One form of password guessing is dictionary guessing, in which a program draws words from a dictionary and compares each word to a password until a match is found.
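A dictionary-guessing attack is easy to sketch. The stolen hash, the tiny wordlist, and the use of plain SHA-256 below are all illustrative assumptions; real systems should use salted, deliberately slow password hashes precisely to blunt this attack:

```python
import hashlib

# Hypothetical stolen hash of a user's password.
stolen_hash = hashlib.sha256(b"dragon").hexdigest()

wordlist = ["password", "letmein", "dragon", "qwerty"]  # tiny "dictionary"

def dictionary_guess(target_hash, words):
    """Hash each dictionary word and compare until a match is found."""
    for word in words:
        if hashlib.sha256(word.encode()).hexdigest() == target_hash:
            return word
    return None

cracked = dictionary_guess(stolen_hash, wordlist)
```

A password that appears in no wordlist forces the attacker back to far slower brute-force search, which is the point of "cleverly structured" passwords.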

Routing control is necessary for a system, which requires a routing table entry for every network it communicates with. Without the proper routes, the system cannot communicate with remote networks. Because of this, an attacker can control which remote sites are able to communicate with a system by controlling the contents of the routing table. Therefore, controlling access to routing tables is essential. Administrators will typically remove commands that give access to the routing table and place it behind the DMZ, making it difficult to find and exploit.

Security Monitoring

Systems differ in how they are monitored. High-level monitoring, of course, involves firewall logs, intrusion detection system logs, router logs, and establishing a DMZ. These logs are checked by software that looks for patterns of activity. However, there are many additional things an administrator can do to look for activity.

In some UNIX systems, use the command ls -a | grep '^\.' . This will enable the administrator to look for traces of a break-in. Intruders create files that begin with a dot, such as .mail or .xx, that are used to help them in future break-ins.

In most systems, administrators should:
  • Monitor the names of programs started, and ensure that no unexpected shell programs are started.
  • Check that log files are not world-writable; in fact, no files should be world-writable.
  • Check for unexpected directories under home directories.
  • Look for login entries from outside the trusted network, unaccounted-for login names, and changes to the UID or GID of any account, and monitor for abnormal login patterns.
  • Check file activity for jobs run by 'at' or 'cron', looking for new files or unexplained changes; attackers create scripts to re-admit themselves to a network using these commands, even after being kicked off and having previous access methods closed off.
  • Check and monitor executable files' sizes, date changes, and changes in rights.
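The world-writable check, for instance, is easy to automate. A minimal sketch — the directory walked and the single permission bit tested are the only moving parts:

```python
import os
import stat

def world_writable(root):
    """Return paths under root that any user on the system can write to."""
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            mode = os.stat(path).st_mode
            if mode & stat.S_IWOTH:  # the o+w ("other write") bit
                hits.append(path)
    return hits
```

Run periodically (for example from cron) and compared against the previous run, a report like this surfaces exactly the unexplained permission changes described above.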

Defensive Tactics

Denial of Service (DoS) is any action that prevents users from accessing system resources; the resources may be stopped entirely, degraded, or interrupted. UNIX provides a few types of protection against accidental or intentional denial of service attacks. Some versions let you limit the maximum number of files or processes a user is allowed; others let you place limits on the amount of disk space consumed by any single account. A low time-to-live on port wait states also aids in reducing DoS attacks. Manufacturers are offering more sophisticated hardware that identifies the sources of DoS attacks and ignores those requests.

There are two types of denial of service attacks: destructive and overload attacks. Destructive attacks attempt to damage or destroy resources so you can’t use them. Attackers can do this in a number of ways. For example, deleting critical files would be considered a destructive attack. Restricting access to critical accounts and files can prevent this attack.

Overload attacks flood a resource with requests to the point that it is unable to process another user's requests. If the attacker overwhelms a port, others won't be able to access it. The system can be set up to automatically detect overloads and restart the device, or to simply ignore the offending source.
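One common way to "simply ignore the source" is per-source rate limiting. A token-bucket sketch, with arbitrary illustrative capacity and refill values (time is passed in explicitly to keep the example deterministic):

```python
class TokenBucket:
    """Drop requests once a source exceeds its allowed rate."""

    def __init__(self, capacity, refill_per_sec):
        self.capacity = capacity
        self.tokens = capacity
        self.refill = refill_per_sec
        self.last = 0.0

    def allow(self, now):
        # Top up tokens for the time elapsed, capped at capacity.
        elapsed = now - self.last
        self.tokens = min(self.capacity, self.tokens + elapsed * self.refill)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # overload: ignore this request

bucket = TokenBucket(capacity=3, refill_per_sec=1)
burst = [bucket.allow(now=0.0) for _ in range(5)]  # burst of 5 at t=0
```

The first three requests pass, the rest of the burst is dropped, and the source regains service only as tokens refill over time.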

One of the simplest denial of service attacks is a process attack. With this attack, one user makes a computer unusable for others who happen to be using the computer at the same time. These types of attacks are generally of concern only with shared computers.

Conclusion

The Internet's worldwide presence, combined with its affordable access and ever-expanding capabilities, is the most common reason networks need security, because the Internet itself has no inherent security. Networks are invaluable, but they are also vulnerable to industrial espionage and disgruntled employees. The potential exposure is especially profound when the enterprise network is interfaced with the Internet.

References:

Englander, I. (2003). The Architecture of Computer Hardware and Systems Software: An Information Technology Approach (3rd ed.). New York: John Wiley & Sons.

McClure, S. (2009). Hacking Exposed 6. McGraw-Hill. ISBN 9780071613743.

Slade, R. (1994). Computer Viruses: How to Avoid Them, How to Get Rid of Them, and How to Get Help. New York: Springer-Verlag.

Other posts in this series

COMPUTER ATTACKS: HACKERS

COMPUTER ATTACKS: VIRUSES

COMPUTER ATTACKS: PROBING PORTS

COMPUTER ATTACKS: TCP/IP SECURITY



COMPUTER ATTACKS: PROBING PORTS

PROBING PORTS

Port probes and scans are intrusive attacks against networks, and they are nearly continuous: a person or automated program is always seeking to discover and exploit a computer's vulnerabilities. Computer attackers have developed an array of methods and means of probing ports, and these are becoming more advanced with time. Most methods center around port scanning, but many other types of attack exist. In order to maintain a strong information assurance front to combat attacks, you must learn the technology behind the attack methods. First, we need to define many terms and concepts for the not-so-computer-savvy.

The Basic Definitions and Concepts

Transmission Control Protocol (TCP) is a reliable, connection-oriented message delivery service. This protocol manages the disassembly of a message or file into smaller packets that are transmitted over telecommunications media (wires, fiber optics, microwave, etc.) and received by a similar TCP layer in the destination machine, where the packets are reassembled into the original message.

Internet Protocol Address (IP address) is a numerical label assigned to a machine participating on a network such that it can be located electronically.  

Port: A logical connection point on a network device, used in programming, which allows an originating device or program to communicate using TCP/IP across a network or the Internet with a destination device or program. Ports are managed by the operating system and the TCP services resident on any network device.

Port Numbers: The number assigned to a specific port corresponding to one of the 65,536 possible ports on any specific device. 

The three-way handshake process is at the heart of computer communications and is managed by TCP. The process begins with a request-to-synchronize (SYN) message to a destination machine. This is acknowledged by the destination with a synchronize-acknowledged (SYN-ACK) message. The originating machine then responds with an acknowledgement (ACK) message, and the connection between the two machines is complete. The two machines find each other using an IP address, which in practice is more than a simple numerical label.
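The operating system performs this SYN / SYN-ACK / ACK exchange whenever a program calls connect() and the peer calls accept(). A loopback sketch in Python (the greeting payload is illustrative):

```python
import socket
import threading

# A listener on the loopback interface; port 0 lets the OS choose.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
host, port = server.getsockname()

def serve():
    conn, _addr = server.accept()  # completes once the handshake finishes
    conn.sendall(b"hello")
    conn.close()

t = threading.Thread(target=serve)
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect((host, port))       # kicks off SYN / SYN-ACK / ACK
greeting = b""
while len(greeting) < 5:           # read the full 5-byte greeting
    chunk = client.recv(5 - len(greeting))
    if not chunk:
        break
    greeting += chunk
client.close()
t.join()
server.close()
```

Neither side ever sees the individual SYN or ACK packets; the kernel's TCP layer handles the handshake and only then returns from connect() and accept().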

Media Access Control Address (MAC Address):   This is a unique numerical identifier assigned to hardware devices that facilitate connection to the network or Internet. MAC Addresses are in the hexadecimal format 00:00:00:00:00:00. The address identifies the manufacturer and provides a serialized device number at a minimum. 

Sockets: The combination of an IP address and a port number. Sockets are expressed in this manner: (IP Address):(Port Number).

In network architectures it is not uncommon for a single machine to have several network cards. Therefore, for a more identifiable addressing configuration, the IP address is bound to the MAC address of a specific device. Thus, the port, IP address, and MAC address together ensure that the correct communication point is reached.

Other important definitions:

File Transfer Protocol (FTP): A method of file transfer used in TCP/IP protocol environments that allows complete transfer of files between systems.

Internet Assigned Numbers Authority (IANA): An organization that oversees the assignment of Internet Protocol (IP) addresses to Internet Service Providers (ISP).

Port Scan: One of the most popular reconnaissance techniques attackers use to discover services they can use to break into a system. This is the process of searching any number of or all the ports for an opportunity to enter a network, workstation, or server via the Internet using TCP/IP.

User Datagram Protocol (UDP): An unreliable connectionless delivery service that provides limited services across a network or the Internet.

TCP/IP Security and Exploitations

Most intrusive attacks on computers are preceded by port probes and scans by attackers who are searching for vulnerabilities in computer systems to exploit. Port scanning is the most popular method of reconnaissance because victim computers run numerous services that listen on ports. By scanning these ports, attackers can locate a port that can be exploited.

The process of scanning a port is not complex. A client application establishes service with a server application across a network through the three-way handshake process; once the service is established, data begins to pass through the port. An attacker exploits this process to learn, through a variety of methods, about the service, the operating system, or whether the port is listening. There are many such methods that computer attackers use while attempting to penetrate a computer network.

TCP/IP Methods of Exploitation

TCP Port Probing is the most common intrusion detected. It is so common because attackers do frequent widespread scans looking for one specific exploit they can use to break into systems. Some of the methods are listed below.

Network Mapper (also known as NMAP) is designed to allow system administrators to scan large networks to determine which hosts are up and what services they are offering. NMAP does three things: (1) pings a number of hosts to determine if they are alive, (2) portscans hosts to determine what services are listening, and (3) attempts to determine the OS of hosts. This type of scan always gives the port's service name, number, state, and protocol. NMAP uses a lot of good ideas from its predecessors. Some useful features of NMAP are:
  1. Dynamic delay time calculations: Some scanners require a delay time between packets. NMAP tries to determine the best delay time. It also tries to keep track of packet retransmissions, so that it can modify this delay time during the course of the scan.
  2. Retransmission: Some scanners send out query packets and collect the responses. Doing this can lead to false positives and negatives when the packets are dropped. NMAP implements a configurable number of retransmissions for ports that don’t respond.
  3. Detection of down hosts: NMAP pings each host to make sure it is up before wasting time on it. It also does this in parallel, to speed things up. NMAP is capable of bailing on hosts that seem down based on port scanning errors, and it is meant to be tolerant of people who accidentally scan network addresses.
  4. Detection of your IP address: NMAP tries to detect your address during the ping stage. It uses the address that the echo response is received on. If NMAP can’t do this, it will try to detect your primary interface and use that address.
Vanilla TCP scanning is an attempt to connect to all 65,536 ports. This is a frontal assault and the most basic type of TCP scanning. If the port is listening, the connect() call will succeed; if not, the port will not be reachable. One strong advantage of this technique is that you don't need any special privileges. Another advantage is speed: using non-blocking I/O allows you to set a low time-out period and watch all the sockets at once. This is the fastest scanning method supported by NMAP. The big downside is that this sort of scan is easily detectable and filterable. The target host's logs will show a bunch of connection and error messages for the services, and defensive measures can take the connection and then have it immediately shut down.
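A vanilla connect() scan is only a few lines of Python. This sketch checks a list of ports on one host; note that, as described above, every successful probe completes a full handshake that the target can log:

```python
import socket

def connect_scan(host, ports, timeout=0.2):
    """Vanilla connect() scan: a completed connection means 'listening'."""
    open_ports = []
    for port in ports:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(timeout)
        try:
            s.connect((host, port))  # full three-way handshake
            open_ports.append(port)  # the target can log this connection
        except OSError:
            pass                     # refused (closed) or timed out (filtered)
        finally:
            s.close()
    return open_ports
```

Scanning all 65,536 ports this way is simply `connect_scan(host, range(65536))`, which is exactly why the technique is so easy for the target to detect and filter.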
Half Open (also known as SYN scan or Stealth scan). This scan sends only the SYN packet, partially opening a connection and stopping halfway through. Because the handshake never completes, the service is not notified of an incoming connection, which prevents logging of the scan. The same technique, done in volume with faked source IPs, fills the connection queue so that service through the specified port is denied to legitimate users, and the fake addresses make such attacks difficult to trace.

Flags (also known as FIN, XMAS, and NULL scans) are attempts to close a connection that isn’t open. Some attackers try to use their scanners to “open” connections. Others may try to send error messages to “open” ports, hoping to get a message back from “closed” ports. The Flags scan attempts to close a connection that isn’t open. If no service is listening, the operating system will generate an error message. If a service is listening, the operating system will drop the incoming packet. Since packets can be dropped accidentally, this is not an effective scan.

UDP port probe is a method to scan for open UDP ports. The technique is to send UDP packets to each port on the target machine, but interpreting the results is significantly more difficult than with TCP: an open port will usually generate no response, while most hosts answer a packet sent to a closed UDP port with an error. UDP scanners must also implement retransmission of packets that appear to be lost, and if a firewall blocks access to ports, both open and closed ports produce the same results.
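That ambiguity falls directly out of the code. A sketch of the classification logic (the "open|filtered" label for silence follows NMAP's convention; the closed-port error surfaces as a connection-refused exception on Linux):

```python
import socket

def udp_probe(host, port, timeout=0.3):
    """Classify a UDP port: a reply means open; an ICMP port-unreachable
    error means closed; silence could mean either open or filtered."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.settimeout(timeout)
    try:
        s.connect((host, port))  # lets ICMP errors surface on recv
        s.send(b"probe")
        s.recv(1024)
        return "open"            # the service actually replied
    except socket.timeout:
        return "open|filtered"   # no answer either way
    except ConnectionRefusedError:
        return "closed"          # ICMP port-unreachable came back
    finally:
        s.close()
```

Because the common case is a timeout, scanning all UDP ports this way is slow as well as ambiguous, which is why the text calls UDP scanning significantly more difficult.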

FTP Bounce attacks are attempts to impersonate or disguise an attacker's origin by passing through (bouncing off) an FTP server. FTP Bounce seems to be the most popular of the port scans. FTP Bounce uses the PORT command in FTP mode. This command can be misused to open a connection on a machine that an attacker could not have accessed directly. By using the PORT command, an attacker may be able to establish a connection and bypass access controls. Here is an example of how FTP Bounce works:

The attacker locates and finds an FTP server through a firewall. The server has an upload area that the attacker can use. The attacker sends a spoofed mail message to the server. The attacker then FTPs the server and sends a PORT command using the address of the victim’s computer. The FTP server opens a connection for the attacker. Once the connection is established, the attacker can send almost anything through the port, and the FTP server will dutifully transmit.

FTP Bouncing makes verifying an address very difficult, because the attacker is able to make a connection between the FTP server and a port on another system.

Fragmented Packets is a method in which packet fragments are pushed through simple packet filters on a firewall to determine vulnerable ports that may be exploited. This type of scan splits the TCP header across small IP fragments. Attackers using this scan can bypass some firewalls acting as packet filters because the victim can't see a complete TCP header that matches the filter rules. Be careful of this method: some programs have trouble handling these tiny packets, and it won't get by packet filters and firewalls that queue all IP fragments.

Reverse Identification is a method of discovering the user of a port for exploitation. This type of scanner usually works against a UNIX-based computer running the identity service: when a user connects to a server, the server sends a request to the identity service for verification. It can also work in reverse, with the user querying the server for identification.

Strobe is an attempt to connect to only known ports to exploit, typically 5 to 20 services such as FTP or Telnet ports.

Sweep is a type of scan that hits the same port across multiple machines, enabling an attacker to identify which hosts are offering a particular service.

Networks and Firewalls

When a network is young, it is strong and simple. As it matures, it becomes more complex. Eventually, the complexity gets to the point where the smallest change can upset the network. Firewalls are the same way. When they are new, they are strong and resistant. When new users come online or there are new needs to be met, the firewall becomes thinner. Holes can open up inside a firewall, and attackers will be free to do their damage.

No matter how secure a firewall is, nothing can stop an internal attack. The most successful scans come from people inside the network. Without a lot of centralized control, there is more opportunity for unhappy or criminally minded employees to exploit network weaknesses. They do this by planting a backdoor on the network that they can access from home, planting malicious software that can destroy data or systems, or simply browsing the network to obtain confidential information.

Attackers can easily exploit many other access points such as:
  1. Dial-Up Connections. Dial-Up connections often allow users direct access to the internal network, without firewalls or proxies. Through some social engineering, an attacker can find the numbers for a dial-up connection and begin banging away on the modem.
  2. Telecommuters. People with dial-up or ISDN access are especially prone to attacks at home. An attacker can break into a home computer, which is easier than breaking into a company’s server. Once an attacker has access, he can plant a worm or virus to infect the network or act as a probe and send network information back to the attacker.
  3. Remote Sites. It is not uncommon for small companies to have remote offices connected together via the Internet. When one company acquires another, security is often overlooked in the rush to keep employees productive.
Common Attacks

Most attackers use programs such as NMAP and strobe to scan for services with well-known vulnerabilities. Some attackers search the Internet for IMAP servers, compile lists of them, and trade the lists with other hackers. It is a challenge between hackers to see who can crack a system first. Keeping your operating system current with the latest patches and service packs will stop most of these attacks.

TCP SYN Attack: In this attack, the attacker sends a SYN packet as if he were going to open a connection, and waits for a response. A SYN ACK indicates the port is listening; a RST (reset) indicates a non-listener. If a SYN ACK is received, the attacker sends a RST to tear down the half-open connection. Done in volume, this becomes a form of denial of service attack.

Telnet: This is a very powerful Internet tool. A computer using TCP/IP has many ports open. While the ports are open, an attacker can attempt to access your computer through those ports. When the attacker finds an open port, he may be able to retrieve files, place files, or watch Internet communications on the victim computer without anything showing up on the screen.

Social Intrusion: This attack is commonly known as "tricking the user into revealing sensitive information". The attacker poses as an administrator from the Internet. The user may receive an email that they think came from an administrator. In this email, the attacker may request that the user verify their password because there was a problem with the account, or gather other information.

Always-On Connections: More and more people are installing always-on, high-speed Internet connections in their homes. Cable modems and ISDN connections are “always-on”. These types of connections are easier for attackers to target, and they also have a fixed system-addressing scheme, which makes it easier for an attacker to target the user specifically.

Distributed Denial of Service (DDOS) Attack: This attack is designed to overwhelm a site with requests for service using TCP SYN handshakes. The attacker makes use of thousands of computers on the Internet to target a victim site. The process involves infecting Agent computers (PCs that generate a stream of service request packets) with a virus-like software package that is controlled over the Internet or networks by a Client or Attacker. The virus-like software spreads in an automated 4-stage manner as follows:
  1. Scan Phase: A large number of host machines probe target hosts machines for vulnerabilities using a variety of techniques including port probes and scans.
  2. Compromise Phase: A target host is accessed through a discovered vulnerability.
  3. Tooling Phase: The virus-like software is installed on the target or compromised host.
  4. Regenerative Phase: The compromised host begins to scan and probe a number of new target host machines.
In hacker jargon the attacking machines are called Zombies, and the machines controlling the collection of Zombies are called Managers. Managers can recruit as many as 10,000 Zombies an hour and then direct the Zombies to attack once they receive the attack telemetry data.
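The regenerative phase is what makes the growth explosive. A toy model of the 4-stage spread, assuming (worst case) that every scanned target is vulnerable and gets compromised; the per-round scan rate is an illustrative value:

```python
def ddos_spread(rounds, scans_per_host=2):
    """Toy model: each compromised host scans `scans_per_host` fresh
    targets per round (scan phase) and compromises every one of them
    (compromise, tooling, and regenerative phases collapsed together)."""
    compromised = 1          # the attacker's first zombie
    history = [compromised]
    for _ in range(rounds):
        compromised += compromised * scans_per_host
        history.append(compromised)
    return history

growth = ddos_spread(rounds=4)  # → [1, 3, 9, 27, 81]
```

Each round multiplies the zombie count, so even a modest scan rate yields the thousands-per-hour recruitment figures cited above within a handful of rounds.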

Stopping Attackers

There are many different ways to combat port scans and probes. Here are just a few ways that networks can be protected against attackers:

  • Firewalls. Firewalls are a good initial layer of protection. A well-configured, well-maintained, and regularly tested firewall can stop most attackers. To cover the remaining holes and remain effective, a good network also needs a pervasive intrusion detection system plus regular maintenance and testing.


  • Centralized Reporting. Most networks have excellent protection where attacks are expected, but once inside the network there is virtually nothing to stop an attacker. With centralized reporting, each workstation, server, and router becomes intrusion-detection enabled, and reporting tools allow network administrators to spot ping sweeps and port scans. With multiple systems reporting information to one central location, even less-protected sites can help protect the larger network.


  • Pervasive Intrusion Detection Systems (IDS). Pervasive intrusion detection stops attackers on multiple fronts and at many levels by placing alarms and defenses everywhere in your network, making the network a tangled mess for attackers to navigate.


  • Protect Vulnerable Network Areas. Some areas of the network, such as dial-up servers, web servers, and remote sites, are particularly prone to attack. These areas need specialized protection such as VPNs, MAC-address-controlled access, etc.


  • Install Network Firewalls, Defenders, and Virus Checking. These services can not only detect suspicious behavior but also stop attackers before they can access anything on the computer. Some tools can "backtrack" attackers and provide you with information about the person or origin of an attack.


Conclusion

Attackers today are more dedicated than ever to causing trouble, conducting criminal activities, or terrorizing people, and it costs billions of dollars each year to repair the damage they do. The most powerful tool against attackers is knowledge. To make a network truly secure, intrusion detection and protection need to be expanded from firewalls and routers to all devices on the network, especially user workstations.

Reference:

Englander, I. (2003). The Architecture of Computer Hardware and Systems Software: An Information Technology Approach (3rd ed.). New York: John Wiley & Sons.

McClure, S. (2009). Hacking Exposed 6. McGraw-Hill. ISBN 9780071613743.

Slade, R. (1994). Computer Viruses: How to Avoid Them, How to Get Rid of Them, and How to Get Help. New York: Springer-Verlag.

Other posts in this series

COMPUTER ATTACKS: HACKERS

COMPUTER ATTACKS: VIRUSES

COMPUTER ATTACKS: PROBING PORTS

COMPUTER ATTACKS: TCP/IP SECURITY

Monday, August 15, 2011

COMPUTER ATTACKS: VIRUSES

Discussion: It is important for everyone to better understand computer viruses since they have become a real part of our computational life. Viruses are not fully autonomous but are programs written by people. These people often have a point to make, want to test their diabolical skills, want to make a mark in the world, or have malicious intent against an entity or state. Like biological viruses, computer viruses require a "host" to infect. Once a virus program is executed, it is able to do its "dirty work" to the local system, network, and peripheral devices, or to create vulnerabilities to be exploited later. It is necessary to take precautions in order to ensure the integrity of the system. Technological staff should become intimate with virus technology and methodologies in order to understand the vulnerabilities and damage these nuisance programs cause.
DEFINITIONS:
    Cross-Site Scripting: A technique used by attackers to insert malicious code into a web page in place of mobile code or an application module known as an applet. This is a delivery mechanism capable of delivering and executing a virus program as the web page loads. Defense against this kind of assault is difficult. The end user may observe a blanked-out area on the page where the applet or image should have appeared. Current virus checkers use heuristic algorithms to monitor for the behavior of cross-scripted programs, which aids in the identification of newer viruses.
    Logic Bombs, Time Bombs, and Easter Eggs: These are focused viruses designed to trigger when a specific event occurs or after a specified amount of time. The result of this type of virus is almost always catastrophic. They are often scripted into login scripts or batch files, or buried in code placed by an individual on a target system, mostly by disgruntled employees. For example, computer coders have been known to place malicious routines deep inside code in order to assure job security: should they be fired or their user account deleted, the routine executes, wiping drives, shutting the system down, or beginning some other form of hostile action such as emailing sensitive information to competitors. Every version of Microsoft Word to date has had an Easter egg embedded. Typing the text =rand() in a Word document and then pressing the Enter key will cause three paragraphs to appear in the document. Microsoft has officially commented that this is not an Easter egg, but the process of evoking one can be demonstrated.
    Malicious Logic: This is hardware, software, or firmware intentionally included, installed, or delivered to an information system or network for unauthorized purposes.
    Polymorphic Virus: A virus that infects each object differently in an attempt to fool virus scanners. These viruses cannot be detected with a simple pattern match, as most viruses can. They masquerade as a legitimate program such as format.com; the impersonating code can be prepended, embedded, or appended to the legitimate file.
    Resident Virus: A virus usually broken into multiple pieces, such as the main code and a small launcher application. This type of virus attaches itself to the operating system, hiding the small launcher application so that it loads into memory at boot. These viruses attempt to hide their main code between tracks or beyond the writeable area of the disk. In doing so, the code can survive either a low-level or a high-level format of the drive: low-level formats re-establish the cylinders, tracks, and sector locations, while high-level formats zero out the file allocation tables and rewrite the addresses. The launcher may also be polymorphic, infecting a legitimate program like format.com, so that when someone formats the drive to eliminate the virus, the main code is preserved and the small launcher is loaded again.
    Robots (BOTS): Rogue programs that, when placed on a network or the Internet, explore the system by simulating human activity and then communicate their findings to a host. These programs often migrate through networks, gaining access along the way to servers, routers, gateways, and workstations. They are most often used to gather intelligence data, aggregate information, or map pathways. On the Internet, the most common BOTS are the programs that access web sites to gather content for search engines; these are variously called crawlers, aggregators, and spiders, each with a different purpose.
    Trojan Horse: A program with embedded virus code designed to trigger on an event or at a date and time. These programs appear to benefit the user but instead gather intelligence, create back doors, or are designed to damage information stored on the system. The difference between a Trojan Horse and a logic bomb, time bomb, or Easter egg is the end user's involvement: end users unwittingly run a Trojan Horse thinking it is a legitimate application, whereas they have no awareness of or contact with a logic bomb, time bomb, or Easter egg.
    Virus: Any program written with malicious intent that, when executed, causes damage in some form and reproduces its own code or attaches itself to another program to that end.
    Worms: Independent, self-reproducing virus programs distinguished from other virus forms in that they are not attached to another program file. They are able to propagate over a network and increase their activity by gaining access to email contact lists or routing tables.
    LESSON:
    Computer viruses may seem mysterious, but they are easy to understand. Viruses are nothing more than destructive software that spreads from program to program or disk to disk. If you have a virus, you are no longer in control of your personal computer (PC). When you boot your PC or execute a program, the virus may also be executing and spreading its infection. Even though some viruses are less malicious than others, all are disastrous in their own ways.
    Characteristics of Viruses
    There are different ways to categorize viruses depending on their characteristics. They can be slow, fast, sparse, companion, or overwriting.
    Slow viruses - Viruses that take longer to detect because they spread very slowly and often do not cause havoc until they have proliferated in sufficient numbers. They often bury themselves in network noise and attempt to disguise any pattern of their activity from intrusion detection or virus detection systems.
    Fast viruses - Viruses that spread rapidly by aggressively infecting everything they can access. When active in memory, they infect not only the programs that are executed but also the programs that are merely opened.
    Sparse viruses - Viruses that infect files only occasionally, for example infecting only files whose length falls within a certain range, in order to prevent detection.
    Companion viruses - Viruses that create a new program. Companion viruses exploit the fact that two files can share the same filename with different extensions, and that DOS runs a .COM file before an .EXE file of the same name. You will notice that there is a problem when you expect to run an .EXE file but end up running a planted .COM file instead.
    Overwriting viruses - Viruses that overwrite each file they infect with themselves, so that the original program no longer functions. The infected files are considered impostors.
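    The companion-virus trick above can be sketched in a few lines. The filenames here are hypothetical; the point is only the DOS search order, which prefers .COM over .EXE for a bare command name:

```python
# Sketch of DOS-era command resolution (filenames are hypothetical).
# Typing "edit" finds EDIT.COM before EDIT.EXE, so a planted .COM file
# runs in place of the legitimate program.
SEARCH_ORDER = [".COM", ".EXE", ".BAT"]

def resolve(command, files):
    """Return the file DOS would execute for a bare command name."""
    for ext in SEARCH_ORDER:
        candidate = command.upper() + ext
        if candidate in files:
            return candidate
    return None

print(resolve("edit", {"EDIT.EXE"}))              # EDIT.EXE
print(resolve("edit", {"EDIT.EXE", "EDIT.COM"}))  # EDIT.COM runs instead
```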
    Existence of Viruses
    The question on everyone’s mind when there is a discussion of computer viruses is: How do I know when I have a virus? Viruses have different characteristics, but there are small changes you can look for that will let you know you have a virus. Some viruses display messages, music, or pictures. The main indicators are changes in the size and content of your programs. Once you realize that you have a computer virus, you must stop it. Viruses are written to deliberately invade a victim’s computer, which makes them among the most difficult threats to guard against.
    Virus Behavior
    Computer viruses are known to be in different forms, but they all have two phases to their execution: the infection and the attack phases.
    a. Infection phase - When a user executes a program carrying a virus, the virus infects other programs. Some viruses infect programs each time they are executed; others infect only upon a certain trigger, such as a day or time. A virus that infects too aggressively can be discovered before it does its “dirty work”. Virus writers want their programs to spread as far as possible before detection, or before they begin to achieve their objective, at which time they become known.
    Many viruses go resident in memory, just like a terminate-and-stay-resident (TSR) program. This means that the virus can wait an extended period of time for something as simple as the insertion of a floppy disk before it actually infects a program. TSR-style viruses are very dangerous since it is hard to guess what trigger condition they use for their infection. Resident viruses occupy memory space and can cause the infamous Blue Screen of Death in Microsoft operating systems.
    b. Attack phase - Many viruses do unpleasant things such as deleting files, simulating typing, distorting the video display, or slowing down your PC. Others do less harmful things such as displaying messages or animations on your screen. Just as the infection phase can be triggered, the attack phase has its own trigger. Most viruses delay revealing their presence by launching their attack only after they have had time to spread. This could be delayed for days, weeks, or even years.
    The attack phase is optional. Anything that writes itself to your disk without permission is stealing storage space, and many viruses simply reproduce without any trigger for an attack phase. Even these viruses can damage the programs or disks they infect; this is not intentional on the part of the virus, but happens because the virus often contains very poor coding.
    Classes of Viruses
    There are four main classes of viruses: File Infectors, System or Boot Sector infectors, Macro viruses, and Stealth viruses.
    File Infectors- Out of all of the known viruses, these are the most common types. File infectors attach themselves to files that they know how to infect, usually .COM and .EXE, and overwrite part of the program. When the program is executed, the virus is executed and infects more files. Overwriting viruses do not tend to be very successful since the program rarely continues to function properly. When this happens, the virus is almost immediately discovered. The more sophisticated file viruses modify the programs so that the original instructions are saved and executed after the virus finishes. File infectors can also remain resident in memory and use “stealth” techniques to hide their presence.
    System or Boot Sector Infectors- These types of viruses plant themselves in your system sectors. System sectors are special areas on your disk containing programs that are executed when you boot your PC. These sectors are invisible to normal programs but are vital in the operation of your PC. There are two types of system sectors found on DOS PCs: DOS boot sectors and partition sectors (also known as Master Boot Records or MBRs).
    System sector viruses, commonly known as boot sector viruses, modify the program in either the DOS boot sector or the partition sector. One example would be receiving a floppy from a trusted source that carries a boot sector virus. While your operating system is running, files on the floppy can be read without triggering the virus. But if you leave the floppy in the drive and turn the computer off and on again, the computer will look in your floppy drive first, find the floppy with its boot sector virus, load it, and potentially make it temporarily impossible to use your hard drive.
    Macro Viruses- This particular class of virus seems to be the most misunderstood. Macro viruses can also be classified as file viruses because they travel in documents created by Microsoft Office applications, which have their own macro languages built in. These viruses execute because Microsoft has defined special macros that run automatically. The mere act of opening an infected Word document or infected Excel spreadsheet can allow the virus to execute. Macro viruses have been successful because most people regard documents as data, not as programs.
    Stealth Viruses- These viruses attempt to hide their presence. Some techniques include hiding the change in date and time and the increase in file size. Others can prevent anti-virus software from reading the part of the file where the virus is loaded. They can also encrypt the virus code using variable encryption techniques.
    Widespread Myths
    Viruses are often misunderstood. They can only infect your computer if you execute an infected file or boot from an infected floppy disk. Here are a few other common myths being spread regarding viruses.
    You can get a virus from data. Data is not an executable program, so this is a myth. If someone sends you a data file containing virus code, you would have to rename the file to an executable type and run it to become infected. In essence, the virus must be executable in order to be hostile; data is inert and simply consumes space.

    Viruses can infect your CMOS memory. CMOS stands for Complementary Metal Oxide Semiconductor. It is functionally different from the dynamic TTL (Transistor-Transistor Logic) RAM used for executing programs. CMOS memory is very small and is not designed to hold executable routines; it contains system configuration, time, and date information. Viruses can damage your CMOS, but the CMOS will not get infected. If your CMOS memory is corrupted, you may not be able to access your disks or boot your PC.

    You can write-protect your hard drive. Some programs claim to write-protect your hard drive, but this can only be done in software, so it is not absolute. Write protection will stop some viruses and will protect your disk from someone inadvertently writing to it, but it also renders updates and normal operation of the computer ineffective, as swap files and other temporary caching cannot be completed.

    Viruses come from online systems. Online systems are a principal vector in the spread of viruses. After downloading, there are innumerable methods of invoking a virus: macros, the automatic reposting of a web page, automatic previews of emails, and other mechanisms. Even loading a plug-and-play DVD, CD-ROM, or memory stick can invoke a virus.

    You can get a virus from graphic files. Graphic files, such as .JPG or .GIF, contain images that are simply displayed. In order to get a virus, a program has to be executed, and since graphic files are nothing more than data files, they pose no executable threat on their own. However, through the technique of steganography, text and data, including code, can be embedded in an image file. A launcher program may know this and look for the embedded code to call in and execute; the launcher, however, is separate from the image file and is itself the executable.
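    As a rough illustration of the steganography idea, the sketch below hides bytes in the least significant bits of a raw byte buffer standing in for uncompressed pixel data. Real image formats require a proper codec; this only shows why hidden data is invisible to a viewer and inert until some separate program extracts it:

```python
# Minimal sketch of least-significant-bit steganography over a raw byte
# buffer (a stand-in for uncompressed pixel data). Illustrative only.
def embed(carrier: bytes, payload: bytes) -> bytes:
    # Flatten the payload into bits, most significant bit first.
    bits = [(b >> i) & 1 for b in payload for i in range(7, -1, -1)]
    assert len(bits) <= len(carrier), "carrier too small"
    out = bytearray(carrier)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit   # overwrite the lowest bit only
    return bytes(out)

def extract(carrier: bytes, n_bytes: int) -> bytes:
    # Collect the low bits back and reassemble the original bytes.
    bits = [carrier[i] & 1 for i in range(n_bytes * 8)]
    return bytes(
        sum(bit << (7 - j) for j, bit in enumerate(bits[k:k + 8]))
        for k in range(0, len(bits), 8)
    )

pixels = bytes(range(256))   # stand-in "image" data
stego = embed(pixels, b"hi")
print(extract(stego, 2))     # b'hi'
```

    Note that changing only the lowest bit of each byte leaves the "image" visually indistinguishable, which is why such data passes as plain graphics.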
    Virus Protection Software
    There are many techniques that can be used to detect viruses on computers, each with its own strengths and weaknesses. It would be great to actually stop viruses from infecting your computer in the first place. Since that cannot be guaranteed, we can do the next best thing: use anti-virus software and attempt to detect viruses. If you detect a virus, you can remove it and prevent it from spreading.
    Virus Scanners
    Scanning is the only technique that can recognize a virus while it is still active. Once a virus has been detected, it is important to remove it as quickly as possible. Virus scanners look for special code characteristics of a virus. The writer of a scanner extracts identifying pieces from code that the virus inserts. The scanner uses these pieces to search memory, files, and system sectors. If a match is found, the virus scanner will announce that a virus has been found and seek to isolate it.
    If scanning is your only defense against viruses, you can improve the odds of detecting a virus on your computer by using two or more scanners. You should also make sure that you keep your virus scanners and their signatures up to date.
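    The core of signature scanning is substring matching against a database of identifying code fragments. The signatures below are made up for illustration; real scanners ship databases of byte patterns extracted from actual virus code:

```python
# Sketch of signature scanning with invented byte patterns.
SIGNATURES = {
    b"\xde\xad\xbe\xef": "Example.A",   # hypothetical signature
    b"\x90\x90\xcc\xcc": "Example.B",   # hypothetical signature
}

def scan(data: bytes):
    """Return the names of all known signatures found in a byte stream."""
    return [name for sig, name in SIGNATURES.items() if sig in data]

clean = b"\x00" * 64
infected = b"\x00" * 16 + b"\xde\xad\xbe\xef" + b"\x00" * 16
print(scan(clean))      # []
print(scan(infected))   # ['Example.A']
```

    This also makes clear why polymorphic viruses defeat simple scanners: if the inserted code differs in every infection, no fixed byte pattern will match.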
    Disinfector
    Most vendors that sell scanners also offer a disinfector. A disinfector has the same limitations as a scanner, except that it must be current to be safe to use. A disinfector also has an even bigger disadvantage: many viruses cannot be removed without damaging the infected file. There have been many reports of files remaining damaged even when the program claims to have disinfected them. A disinfector is good to use, but use it with care.
    Another disadvantage of a disinfector is that some of your programs may no longer work after being disinfected. Many disinfectors will not tell you that they failed to correctly restore the original program. You can safely use a disinfector only if you have the capability to check and make sure the original file has been restored.
    Interceptors
    Interceptors, also known as resident monitors, are particularly useful for deflecting logic bombs and Trojans. The interceptor monitors operating system requests that write to disk or do other things the program considers threatening. If such a request is found, it generally pops up and asks whether you want to allow the request to continue. There is no reliable way, however, to intercept direct branches into low-level code or direct input and output instructions performed by the virus, and some viruses attempt to modify the interrupt vectors to disable any monitoring code. It is important to realize that relying on monitoring alone is risky. An interception product is best used as a complement to other protection programs; there are many ways to bypass interceptors, so you should not depend on them as a primary defense against viruses.
    Inoculators
    Inoculators are also known as immunizers, and there are two types. The first type modifies your files or system sectors in an attempt to fool viruses into thinking that the file is already infected. It does this by making the same changes that the viruses themselves use to mark a file or sector as infected. Presumably, the virus will then not infect anything because it thinks everything is already infected. This works only for a small number of viruses and is considered unreliable today.
    The second type attempts to make your programs self-checking by attaching a small section of check code to each program. When the program executes, the check code first recalculates the check data and compares it to the stored data, warning you of any changes to the program. One disadvantage is that the self-checking code and check data can themselves be modified or disabled. Another is that some programs refuse to run if they have been modified this way, and the technique can trigger alarms from other anti-virus programs, since the self-check code changes the original program in the same way a virus would. Some products use this technique to substantiate their claim to detect unknown viruses, but for these reasons it is not a reliable primary defense.
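    The self-check idea can be sketched as follows, using a SHA-256 digest appended to a program image as the stored check data. The format is invented for illustration; real inoculators embed the check code and data inside the executable itself:

```python
# Sketch of program self-checking: append stored check data to a program
# image and verify it before "running". The layout here is hypothetical.
import hashlib

def inoculate(program: bytes) -> bytes:
    """Append a 32-byte SHA-256 digest of the program as check data."""
    return program + hashlib.sha256(program).digest()

def self_check(image: bytes) -> bool:
    """Recompute the digest and compare it to the stored check data."""
    program, stored = image[:-32], image[-32:]
    return hashlib.sha256(program).digest() == stored

image = inoculate(b"\x55\xaa original code")
print(self_check(image))        # True
tampered = b"X" + image[1:]     # a virus modifies the program body
print(self_check(tampered))     # False
```

    The weakness noted above is visible here too: a virus that understands the layout can simply recompute and rewrite the stored digest after infecting the program.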
    Integrity Checker
    An integrity checker reads your entire disk and records integrity data, which acts as a signature for the files, boot sectors, and other areas. A virus must change something on your computer; the integrity checker identifies these changes and alerts you to a possible virus. This approach is the only one that can handle all the other threats to your data along with viruses, and it provides the only reliable way to find out what damage a virus has done. A well-written integrity checker should be able to detect any virus, not just known viruses.
    An integrity checker won’t identify a virus by name unless it includes a scanner component; many anti-virus packages now incorporate this technique. Some older integrity checkers were simply too slow or too hard to use to be truly effective. A disadvantage of a bare-bones integrity checker is that it cannot differentiate file corruption caused by a bug from corruption caused by a virus. You should verify that your product reads all files and system sectors in their entirety rather than just spot-checking.

    Other Threats to Computers

    There are many other threats to your computer. Problems with hardware, software, and typos are more likely to cause undetected damage to your data and may appear virus-like. It is easy to understand the threat that disk failure represents. Even though viruses are a threat, we need to address these other threats as well by building fault tolerance into our systems, running multiple processor cores, and installing stable, quality RAM. Even driver updates can cause damage and loss of data; therefore, automatic updates should be turned off and all updates reviewed regularly.

    Conclusion

    There are many variants of viruses out in the real world today. No one is safe from being infected. That’s why it is so important to take precautions. If you receive anything from an unknown source, delete it. Always update your antivirus with the latest signature files. Most viruses do little damage, but there are still others that can delete important files from your hard drive causing your PC to become inoperable. A few minutes of prevention is better than several hours of frustration and lost data.

    Reference:

    Englander, I. (2003). The Architecture of Computer Hardware and Systems Software: An Information Technology Approach (3rd ed.). New York: John Wiley & Sons, Inc.

    McClure, S. (2009). Hacking Exposed 6. McGraw-Hill. ISBN 9780071613743.

    Slade, R. (1994). Computer Viruses: How to Avoid Them, How to Get Rid of Them, and How to Get Help. New York: Springer-Verlag.

    Other posts in this series

    COMPUTER ATTACKS: HACKERS

    COMPUTER ATTACKS: VIRUSES

    COMPUTER ATTACKS: PROBING PORTS

    COMPUTER ATTACKS: TCP/IP SECURITY