Decision Making and IT
Preface
Information systems are changing how we work and make decisions in organizations. More and more senior executives rely on computers and the Internet to gather the information they need to make sound decisions in a fast-changing business environment.
Detailed here are a few information-system-related processes that global businesses use to make decisions and to avoid pitfalls. Outsourcing, perhaps the most controversial widely adopted business activity of the 21st century, has been helped in no small measure by the Internet. Many small and medium-sized businesses have been able to compete head-on with global multinationals simply by unleashing the power of the Internet.
Table of Contents
1. Information
2. The role of Information systems within organizations
3. Aligning Information systems with Business strategy
4. IT enabled transformation
5. Emerging IS trends in organizations
6. Cost and benefits of IS
7. System changeover methods
8. IS implementation - avoiding user resistance and non-usage
9. Outsourcing
10. Approaches to Outsourcing
11. Privacy and Security
1. Information
1.1 Introduction
Information is different from data. Data consists of numbers, letters, symbols, raw facts, events and transactions which have been recorded but not yet processed into a form suitable for making decisions. Information is data that has been processed in such a way that it has meaning to the person who receives it, who may then use it to improve the quality of their decision making.
Value of information
Information may:
- reduce unnecessary costs
- eliminate losses
- result in better marketing strategies
- assist in attaining competitive advantage
Converting data into information
The process of turning data into information may include the following stages:
1. Data collection
2. Data evaluation: collected data is filtered for relevance
3. Data analysis: different dimensions of the data are analyzed, e.g. comparison with a budget or with industry best practice
4. Data interpretation: meaning is added to the data
5. Data reporting: information is disseminated to users
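The five stages above can be sketched in code. This is a minimal illustration only; the regional sales figures and budget values are invented for demonstration.

```python
# Illustrative sketch of the five data-to-information stages.
# All figures are invented for demonstration.

raw_sales = [("North", 120), ("South", None), ("East", 95), ("West", 140)]
budget = {"North": 100, "South": 110, "East": 100, "West": 130}

# 1. Data collection: raw_sales represents recorded transactions.
# 2. Data evaluation: filter out records that are not usable.
evaluated = [(region, value) for region, value in raw_sales if value is not None]

# 3. Data analysis: compare each region's sales with its budget.
analysed = [(region, value, value - budget[region]) for region, value in evaluated]

# 4. Data interpretation: attach meaning (over or under budget).
interpreted = [
    (region, value, "over budget" if variance > 0 else "at or under budget")
    for region, value, variance in analysed
]

# 5. Data reporting: disseminate the result to users.
for region, value, verdict in interpreted:
    print(f"{region}: sales {value} ({verdict})")
```

The unusable "South" record is filtered out at the evaluation stage, so only three regions reach the report.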
Levels of Information
Information can be classified into three levels: strategic, tactical and operational.
Strategic information is mainly used by directors and senior management to choose between alternative courses of action, to plan the organization's overall objectives and strategy, and to measure whether these are being achieved. E.g.:
o Profitability of main business segments
o Prospects for present and potential markets

Tactical information is used by managers at all levels, but mainly at the middle level, for tactical planning and management control activities such as pricing, purchasing, distribution and stocking. E.g.:
o Sales analysis
o Stock levels
o Productivity measures
Operational information is used mainly by managers on the operational level, such as foremen and section heads, who have to ensure that routine tasks are properly planned and controlled. E.g.:
o Listings of debtors and creditors
o Payroll details
o Raw material requirements and usage
Types of information systems
o Transaction processing systems (TPS) - these major applications carry out the essential, routine processing of day-to-day transactional data.
o Management information systems (MIS) - these deliver routine and ad hoc reports to management. Such reports are used extensively by operational managers.
o Decision support systems (DSS) - a combination of technologies used to support the decision making process of tactical and strategic management.
o Expert systems - expert-related DSS can provide expert knowledge and advice. These include a database of decision rules that define expertise in a given area.
o Executive information systems - these are used in large organizations to draw together data from various, usually global, sources. Powerful forecasting and data analysis tools are then used prior to communicating decisions.
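The "database of decision rules" at the heart of an expert system can be sketched very simply: each rule pairs a condition on the known facts with a piece of advice, and the system returns the advice of the first matching rule. The credit-scoring rules and thresholds below are invented for illustration.

```python
# Minimal sketch of an expert system's database of decision rules.
# The credit-scoring rules and thresholds are invented examples.

rules = [
    (lambda f: f["income"] >= 50000 and f["defaults"] == 0, "approve"),
    (lambda f: f["income"] >= 50000 and f["defaults"] > 0, "refer to analyst"),
    (lambda f: f["income"] < 50000, "decline"),
]

def advise(facts):
    """Return the advice of the first rule whose condition matches the facts."""
    for condition, advice in rules:
        if condition(facts):
            return advice
    return "no rule applies"

print(advise({"income": 60000, "defaults": 0}))  # approve
```

Real expert systems add explanation facilities and far richer rule languages, but the rule-matching core is essentially this.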
Illustration 1 - How the Internet has affected travel agents
Demand for the services of traditional high street travel agents has been on the decline due to the growth in use of the Internet. Travel agents have been cut out of many transactions as the public can book directly with hotels, airlines and rail companies. However, there has been growth in online travel agents such as lastminute.com and expedia.com, who make the online experience easier for customers, presenting a wide choice of products and services. Another interesting example is opodo.com, set up by a collaboration of European airlines to encourage customers to book flights directly with them rather than using cost comparison intermediaries such as lastminute.com.
An organization may pursue competitive advantage through three generic strategies: cost leadership, differentiation and focus.
Cost leadership - this strategy seeks to achieve the position of the lowest-cost producer in the industry as a whole.
- Information systems can reduce the staff time spent on clerical work and thus increase time spent on business development.
- Information systems might allow a company to tie up its purchasing services directly with its suppliers, reducing stock holding costs or delays in processing orders.
Differentiation - this strategy assumes that competitive advantage can be gained through particular characteristics of a product or service. The organization seeks to become different from its competitors in a way that its customers value, e.g. brand, design etc.
- IS can enhance an organization's ability to compete by providing it with up-to-the-minute information on customer needs, allowing it to differentiate its products on factors other than price.
Focus - this involves the restriction of activities to only part of the market.
- IT can facilitate the collection of sales and customer information that identifies targetable market segments.
- It may enable a more customized product/service to be produced.
4. IT enabled transformation
4.1 Business transformation
Business transformation is the process of translating a high-level vision for the business into new services. A plan will:
- Set out new ways of working
- Show how components fit together to deliver strategic objectives
- Show how customers' needs will be met
- Show how IS/IT will be used to support the business
IT-enabled transformation can be described as five levels, each building on the last:

1. Localized exploitation - use existing IT functionality to reengineer individual, high-value areas of business operation.
2. Internal integration - use the capabilities inherent in IT to integrate business operations: reflects a seamless process.
3. Business process redesign - redesign key processes to provide business capabilities for the future: use IT as an enabler.
4. Business network redesign - strategic logic used to provide products and services from partners: exploitation of IT for learning.
5. Business scope redefinition - redefinition of corporate scope: what you do, what partners do, and what is enabled by IT.
IT has a significant role in business transformation and has resulted in the emergence of new forms of organization. These include:
Illustration 3
Not on the High Street is an internet-based business selling luxury homeware, clothing and gifts. It has enjoyed rapid growth and success since it launched in 2006. The founders work with over 900 small British businesses. These businesses design, produce and deliver the products to customers. This enables "Not on the High Street" to sell a unique range of products, fulfilling the needs of the demanding modern customer while keeping costs low.

Virtual companies allow executives, writers, researchers and other professionals to collaborate on new products and services without ever meeting face to face. Such organizations feel real to the client and meet their needs at least as adequately as more traditional organizations. A virtual company will outsource most or all of its functions.
Illustration 4
A firm manufactures wedding dresses. It could outsource:
- The design to a wedding dress designer
- Marketing to a specialist marketing firm
- Manufacturing to a sub-contractor
- Delivery to a specialist logistics firm
- Collection of money from customers to a specialist debt collection company
- Tax returns and accounts to a specialist accountancy firm
Drawbacks of virtual companies
- It may be difficult to negotiate a revenue-sharing agreement between the different partners.
- Loss of control may result in a loss of quality.
- The partners may also work for competitors, thus reducing any competitive advantage.
IT enables virtual teams to work together:
- Information can be sent remotely - email, fax and text messaging can be used to send information.
- Electronic meetings can be held - teleconferencing and videoconferencing enable virtual teams to listen and talk to each other.
- Information can be shared remotely - the internet, shared databases and data tracking systems can be accessed by members of the virtual team. This enables information to be shared, e.g. customer and product information, research and project work, stock and delivery information.
Challenges for virtual teams:
- Forming a team - it may be difficult to establish a cohesive and trusting team.
- Knowledge sharing - sharing of knowledge may prove more difficult due to the absence of face-to-face contact.
- Processes and goals - agreeing these may be more difficult since employees will be working at different times, in different locations and in different ways.
- Cultural differences - team members will be from different backgrounds, and cultural differences may make working together more difficult.
- Morale - some team members may find this way of working isolating.
E-commerce
E-commerce refers to conducting business electronically via some form of communication link. The effects of e-commerce on the organization:
- Access to a much larger market.
- Reduced costs, e.g. less staff, fewer buildings, lower transaction costs.
- Elimination of intermediary organizations, e.g. Dell use the internet to sell their computers directly to their customers.
- Reduced use of cash - reliance on debit and credit card transactions.
- Individualized marketing - information can be collected about individual consumers, which allows marketing to take place at an individual customer level.
Networks
A computer network, or simply a network, is a collection of computers and other hardware components interconnected by communication channels that allow sharing of resources and information. Where at least one process in one device is able to send data to, or receive data from, at least one process residing in a remote device, the two devices are said to be in a network. Put simply, more than one computer interconnected through a communication medium for information interchange is called a computer network.

Networks may be classified according to a wide variety of characteristics, such as the medium used to transport the data, the communications protocol used, scale, topology, and organizational scope. Communications protocols define the rules and data formats for exchanging information in a computer network, and provide the basis for network programming. Well-known communications protocols include Ethernet, a hardware and link layer standard that is ubiquitous in local area networks, and the Internet protocol suite, which defines a set of protocols for internetworking, i.e. for data communication between multiple networks, as well as host-to-host data transfer and application-specific data transmission formats.

Computer networking is sometimes considered a sub-discipline of electrical engineering, telecommunications, computer science, information technology or computer engineering, since it relies upon the theoretical and practical application of these disciplines.
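Host-to-host data transfer between two processes can be demonstrated on a single machine: the sketch below opens a TCP server socket on the loopback interface, connects a client to it, and exchanges two short messages. The port is chosen by the operating system; the message contents are arbitrary examples.

```python
# Two processes (here, two threads) exchanging data over TCP on the
# loopback interface - a minimal illustration of host-to-host transfer.
import socket
import threading

def serve(server_sock, received):
    conn, _ = server_sock.accept()          # wait for the client to connect
    received.append(conn.recv(1024))        # read the client's message
    conn.sendall(b"ack")                    # reply
    conn.close()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))               # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

received = []
t = threading.Thread(target=serve, args=(server, received))
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
client.sendall(b"hello")
reply = client.recv(1024)
client.close()
t.join()
server.close()

print(received[0], reply)
```

The same pattern, with real host names instead of 127.0.0.1, underlies communication between machines on any IP network.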
Databases
A database is an organized collection of data, today typically in digital form. The data are typically organized to model relevant aspects of reality (for example, the availability of rooms in hotels), in a way that supports processes requiring this information (for example, finding a hotel with vacancies). The term database is correctly applied to the data and their supporting data structures, and not to the database management system (DBMS). The database data collection together with the DBMS is called a database system.

The term database system implies that the data is managed to some level of quality (measured in terms of accuracy, availability, usability, and resilience), and this in turn often implies the use of a general-purpose database management system (DBMS). A general-purpose DBMS is typically a complex software system that meets many usage requirements to properly maintain its databases, which are often large and complex. The utilization of databases is now so widespread that virtually every technology and product relies on databases and DBMSs for its development and commercialization, or may even have DBMS software embedded in it. Organizations and companies, from small to large, depend heavily on databases for their operations.

Well-known DBMSs include Oracle, IBM DB2, Microsoft SQL Server, Microsoft Access, PostgreSQL, MySQL, and SQLite. A database is not generally portable across different DBMSs, but different DBMSs can inter-operate to some degree by using standards like SQL and ODBC together to support a single application built over more than one database. A DBMS also needs to provide effective run-time execution to properly support (e.g., in terms of performance, availability, and security) as many database end-users as needed.

One way to classify databases involves the type of their contents, for example: bibliographic, document-text, statistical, or multimedia objects. Another way is by their application area, for example: accounting, music compositions, movies, banking, manufacturing, or insurance.
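The hotel-room example above can be made concrete with SQLite, one of the DBMSs named in the text. The sketch below creates a small in-memory database and asks it the question "which hotels have vacancies?"; the hotel names and room numbers are invented.

```python
# The text's hotel-vacancy example, using SQLite (via Python's sqlite3 module).
# Hotel names and room numbers are invented sample data.
import sqlite3

conn = sqlite3.connect(":memory:")   # in-memory database; nothing written to disk
conn.execute("CREATE TABLE rooms (hotel TEXT, room INTEGER, vacant INTEGER)")
conn.executemany(
    "INSERT INTO rooms VALUES (?, ?, ?)",
    [("Grand", 101, 1), ("Grand", 102, 0), ("Plaza", 201, 1)],
)

# "Finding a hotel with vacancies": the DBMS answers a declarative SQL query.
vacancies = conn.execute(
    "SELECT hotel, COUNT(*) FROM rooms WHERE vacant = 1 GROUP BY hotel"
).fetchall()
print(vacancies)
conn.close()
```

The application states *what* it wants; how the rows are stored, indexed and scanned is the DBMS's concern, which is precisely the separation between a database and its management system described above.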
Knowledge Management
Knowledge management software generally enables the combination of unstructured information sources (individual word-processed documents, .pdf files, email, graphic illustrations, unstructured notes, website links, invoices and other information-bearing collections, from a single recorded thought through to millions of interactions from a website) and, through that combination, enables the seeker to obtain knowledge that otherwise would not have been discovered.
Benefits of customer relationship management (CRM) systems include:
- Quality and efficiency
- Decrease in overall costs
- Increased profitability

Successful development, implementation, use and support of customer relationship management systems can provide a significant advantage to the user, but there are often obstacles that prevent the user from using the system to its full potential. A CRM that attempts to contain a large, complex group of data can become cumbersome and difficult to understand for ill-trained users. The lack of senior management sponsorship can also hinder the success of a new CRM system. Stakeholders must be identified early in the process and a full commitment is needed from all executives before beginning the conversion.
Additionally, an interface that is difficult to navigate or understand can hinder the CRM's effectiveness, causing users to pick and choose which areas of the system to use while others are pushed aside. This fragmented implementation causes inherent problems, as only certain parts are used and the system is never fully functional. The increased use of customer relationship management software has also led to an industry-wide shift in evaluating the role of the developer in designing and maintaining its software. Companies are urged to consider the overall impact of a viable CRM software suite and the potential for good or bad in its use.
Sales force automation Sales force automation (SFA) involves using software to streamline all phases of the sales process, minimizing the time that sales representatives need to spend on each phase. This allows a business to use fewer sales representatives to manage their clients. At the core of SFA is a contact management system for tracking and recording every stage in the sales process for each prospective client, from initial contact to final disposition. Many SFA applications also include insights into opportunities, territories, sales forecasts and workflow automation.
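The contact-management core described above, tracking and recording every stage in the sales process for each prospective client, can be sketched as a small data structure. The stage names and client name below are invented examples, not part of any particular SFA product.

```python
# Sketch of an SFA contact-management record: every stage of the sales
# process is logged per prospective client. Stage names are invented.
from dataclasses import dataclass, field

@dataclass
class Prospect:
    name: str
    history: list = field(default_factory=list)  # ordered log of stages

    def record(self, stage: str) -> None:
        """Append the latest stage reached with this prospect."""
        self.history.append(stage)

    def current_stage(self) -> str:
        return self.history[-1] if self.history else "no contact yet"

p = Prospect("Acme Ltd")
for stage in ["initial contact", "demo", "proposal sent", "closed-won"]:
    p.record(stage)

print(p.current_stage())   # closed-won
```

Real SFA applications layer forecasting, territories and workflow automation on top, but all of them rest on this kind of per-prospect stage history.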
Marketing CRM systems for marketing help the enterprise identify and target potential clients and generate leads for the sales team. A key marketing capability is tracking and measuring multichannel campaigns, including email, search, social media, telephone and direct mail. Metrics monitored include clicks, responses, leads, deals, and revenue. Alternatively, Prospect Relationship Management (PRM) solutions offer to track customer behavior and nurture them from first contact to sale, often cutting out the active sales process altogether.
In a web-focused marketing CRM solution, organizations create and track specific web activities that help develop the client relationship. These may include free downloads, online video content, and online web presentations.

Customer service and support
CRM software provides a business with the ability to create, assign and manage requests made by customers. An example would be call center software, which helps to direct a customer to the agent who can best help them with their current problem. Recognizing that this type of service is an important factor in attracting and retaining customers, organizations are increasingly turning to technology to help them improve their clients' experience while aiming to increase efficiency and minimize costs. CRM software can also be used to identify and reward loyal customers, which in turn will help customer retention.
Cost-savings benefits lead to a reduction in administrative and operational costs. A reduction in the size of the clerical staff used in the support of an administrative activity is an example of a cost-saving benefit.
Cost-avoidance benefits are those which eliminate future administrative and operational costs. Not being required to hire additional staff in the future to handle an administrative activity is an example of a cost-avoidance benefit.
Improved-service-level benefits are those where the performance of a system is improved by a new computer-based method.
Improved-information benefits arise where computer-based methods lead to better information for decision-making. For example, a system that reports the fifty most-improved customers, as measured by an increase in sales, provides an improved-information benefit. This information makes it easier to provide better service to major customers.
Categories of Costs and Benefits
The costs associated with a system are the expenses, outlays or losses arising from developing and using it, while the benefits are the advantages received from installing and using it.

1. Tangible or Intangible Costs and Benefits
Tangibility refers to the ease with which costs or benefits can be measured. An outlay of cash for any specific item or activity is referred to as a tangible cost. These costs are known and can be estimated quite accurately. Costs that are known to exist but whose financial value cannot be exactly measured are referred to as intangible costs; any estimate is only an approximation. For example, employee morale problems arising from the installation of a new system are an intangible cost: how much employee morale has been affected cannot be exactly measured in financial terms.

Benefits are often more difficult to specify exactly than costs. For example, suppliers can easily quote the cost of purchasing a terminal but find it difficult to state the specific benefits or financial advantages of using it in a system. Tangible benefits, such as completing jobs in fewer hours or producing error-free reports, are quantifiable. Intangible benefits, such as more satisfied customers or an improved corporate image resulting from the new system, are not easily quantified. Both tangible and intangible costs and benefits should be taken into consideration in the evaluation process. If a project is evaluated on a purely tangible basis, benefits may exceed costs by a substantial margin, making the project appear cost-effective. On the other hand, once intangible costs and benefits are included, the total costs (tangible plus intangible) may exceed the benefits, making the project an undesirable investment. Hence systems projects should not be evaluated on the basis of tangible costs and benefits alone.
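The evaluation point above can be shown with a worked example: a project that looks cost-effective on tangible figures alone may become undesirable once estimated intangible costs are included. All figures below are invented for illustration.

```python
# Worked sketch: tangible-only evaluation vs. evaluation including an
# estimate of intangible costs. All figures are invented.

tangible_costs = {"hardware": 400000, "training": 50000}
tangible_benefits = {"clerical savings": 300000, "error reduction": 250000}
intangible_costs = {"morale disruption (rough estimate)": 200000}

def total(d):
    return sum(d.values())

# Tangible basis only: benefits exceed costs, project looks cost-effective.
tangible_only = total(tangible_benefits) - total(tangible_costs)

# Including the intangible estimate: total costs now exceed the benefits.
with_intangibles = tangible_only - total(intangible_costs)

print("tangible basis:", tangible_only)
print("with intangibles:", with_intangibles)
```

Here the tangible-only margin is +100,000, but the project shows a 100,000 shortfall once the morale estimate is counted, which is exactly why evaluation should not rest on tangible figures alone.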
2. Direct or Indirect Costs and Benefits
Direct costs are those which are directly associated with a system; they apply directly to the operation. For example, the purchase of a DVD for Rs.400 is a direct cost because we can associate the DVD with the money spent. Direct benefits can also be specifically attributed to a given project. For example, a new system that can process 30 per cent more transactions per day is a direct benefit.
Indirect costs are not directly associated with a specific activity in the system; they are often referred to as overhead expenses. For example, the cost of preparing a suitable space to install a system, and the maintenance, heat, light and air-conditioning of the computer centre, are all tangible costs, but it is difficult to calculate the proportion of each attributable to a specific activity such as a report.
Indirect benefits are realized as a by-product of another system. For example, a system that tracks sales calls on customers provides an indirect marketing benefit by giving additional information about the competition. In this case, competition information becomes an indirect benefit, although its worth in terms of money cannot be exactly measured.

3. Fixed or Variable Costs and Benefits
Some costs and benefits remain constant regardless of how a system is used. Fixed costs are considered sunk costs: once incurred, they will not recur. For example, the purchase of equipment for a computer centre is a fixed cost, as it remains constant whether the equipment is used extensively or not. In contrast, variable costs are incurred on a regular basis. They are generally proportional to volume and continue as long as the system is in operation. For example, the cost of computer forms varies in proportion to the amount of processing or the length of the reports desired.
Fixed benefits also remain constant when a new system is used; if the clerical staff is reduced by 20 per cent, that is a fixed benefit.
Variable benefits, on the other hand, are realized on a regular basis. For example, a library information system may save two minutes in providing information about a particular book (i.e. whether or not it is on loan) compared with the manual system. The amount of time saved varies with the number of borrowers given the information.
company converts accounts receivable, accounts payable, payroll, and so on. The advantages of phased changeovers are their low cost and isolated errors. The main disadvantage is that the process takes a long time to complete because the phases need to be implemented separately.

Pilot Changeover
With a pilot changeover, the new system is tried out at a test site before launching it company-wide. For example, a bank may first test the system at one of its branches. This branch is referred to as the pilot, or beta, site for the program. Since parallel changeovers tend to be expensive, the pilot changeover technique allows companies to run the new system next to their old system but on a much smaller scale, making it much more cost-effective. After the kinks are worked out of the system at the test site, companies usually opt to use the direct changeover technique to launch the system company-wide.
Common causes of IS project failure include:
- Inadequately trained and/or inexperienced project managers
- Failure to set and manage expectations
- Poor leadership at any and all levels
- Failure to adequately identify, document and track requirements
- Poor plans and planning processes
- Poor effort estimation
- Cultural and ethical misalignment
- Misalignment between the project team and the business or other organizations it serves
- Inadequate or misused methods
- Inadequate communication, including progress tracking and reporting
The reaction of users is an important factor in causing a new implementation to succeed or fail. Therefore, it is important to know how an organization should implement a new system while facing the least amount of resistance towards it. A critical factor for a project's success is the way that users interact with the Information Technology (IT) project; human reaction to the project causes it to succeed or fail, and the achievement of the IT project depends on how much it is used. An end-user is anyone who uses the system. Resistance is explained as a reaction to any kind of change. End-user resistance has symptoms such as employees coming in late to work or being absent, not using the new system and insisting on working with the old one, making careless mistakes, avoiding work assignments, wasting time, making little effort to enhance work-related knowledge and skills,
requesting job transfers or leaving the job, and accepting a low-quality system. This imposes instability and huge costs on a project. Several factors affect and cause end-user resistance: inherent resistance to change, not participating in the development process, poor technical quality that makes the system not user friendly, lack of support, insufficient training, poor interaction between designers and users, and unclear benefits of the new system. Several terms and theories have been developed to discuss users' acceptance of computer-based technology; these theories model the behavior of users:
Diffusion of Innovation (DOI) Theory is based on the conditions which show how, why and at what rate a new idea, product or practice will be accepted by members of a culture.
Technology Acceptance Model (TAM) consists of two beliefs, perceived usefulness and perceived ease of use, which explain attitudes towards adopting a new technology.
Theory of Planned Behavior (TPB) is a theory about the link between attitude and behavior.

Computer anxiety is a term for being afraid of using computers or computer software. Computer anxiety takes different forms, such as the fear of breaking something, fear for one's health, fear of anything new, fear of having too short a time to learn, and fear of technology.
Computer self-efficacy describes a user's ability to use computer-based technology. Individuals who have high self-efficacy have more self-assurance in using new technology, and so have a lower level of computer anxiety. Self-efficacy can be improved by efficient training and support.
Approaches to deal with user resistance can be divided into three categories: participative, directive and consultative. Participative approaches focus on training and building support structures. Consultative approaches provide moral support and information for users. Directive approaches are used more for the elimination of
individuals who don't use the new system, and focus on financial incentives and power redistribution. According to surveys of IT managers in New Zealand, user participation, efficient training and support, communication and consultant contribution on a project are frequently used approaches to deal with end-user resistance:
User participation
Participation comprises the activities performed by users during the development of the system. User participation creates user empowerment by providing a sense of ownership. User empowerment is one of the factors which decreases end-user resistance to computer-based technology development. It increases user adoption of a new system and technology and therefore reduces resistance toward change.
End-user training
Training is the key to overcoming resistance towards new technology. It is the process by which knowledge and skills are transferred to users to enable them to utilize the new system. Training can take several forms:
- theoretical training (explaining to end-users how the system works)
- self-taught (learning the new system by trial and error)
- just-in-time (delivered just prior to implementing the new system)
- staged training (breaking training up into smaller sessions)
End-user support
Support is what helps end-users to work with computer systems or applications. For a successful IT project, support is a key element both before and after implementing the new system.
Communication
Communication is the process of transferring information. It can take place in several forms, such as oral, written, face-to-face contact and computer-based
technology. It makes users feel important and helps them overcome their resistance by increasing trust toward the new technology.
Role of consultants
Consultants provide users with experience, knowledge, information and moral support. Since consultants are not part of the organization, they are not affected by organizational behavior and politics. Their decisions can provide better conditions for user adaptation and help the system become successful.
E-Health projects and other IT projects can use these approaches, but depending on the requirements, scope and required organizational changes, specific approaches should be chosen for each project and organization. According to the Isabella lecture, since the terms "user friendly" and "software quality" are unclear, the term "usability" can be used instead; it includes effectiveness, efficiency and user satisfaction. Usability can be an important factor in overcoming end-user resistance. Training is very important, but some resistance arises because users must give up their own free time to learn the new technology. Training before implementing the new system is particularly important: some organizations focus on getting a lot of work done during the day, and if the system change comes suddenly, users will get confused and this causes more resistance. Therefore, the organization must set a proper schedule to train the users completely before implementing the new system. Trained users can also be used to train other individuals in the organization. If users still resist, the organization must act with directive approaches, for example rewarding ideas that will improve the system, changing the positions of users with no interest in the new system, increasing job responsibility, and so on. In the end, if there are some users who haven't adapted, the organization will be left with no choice but to make them redundant.
It is important to explain to users that the new technology will increase the speed and efficiency of their work, and the organization should make it obvious to users that the new system has more benefits, thereby reducing external costs. End-user resistance can be viewed as a positive or a negative outcome. For instance, in IT companies user resistance might indicate a system fault, but in a hospital user resistance might indicate that the fault lies with the user.
Conclusions
User resistance is an important occurrence when information systems are implemented and can cause implementation failure. Approaches to managing end-user resistance and change include participation, training, support, communication and consultant involvement. Participative, consultative and directive approaches are used to deal with end-user resistance. End-user resistance can be viewed positively or negatively: it can increase the costs and duration of a project, but it can also enhance the quality and usability of the system.
constructs. The "field" is very dynamic, changing with time and experience. When fully constructed, an individual's "field" (Lewin used the term "life space") describes that person's motives, values, needs, moods, goals, anxieties, and ideals. Lewin believed that changes of an individual's "life space" depend upon that individual's internalization of external stimuli (from the physical and social world) into the "life space."

Although Lewin did not use the word "experiential" (see experiential learning), he nonetheless believed that interaction (experience) of the "life space" with "external stimuli" (at what he calls the "boundary zone") was important for development (or regression). For Lewin, development (or regression) of an individual occurs when their "life space" has a "boundary zone" experience with external stimuli. Note that it is not merely the experience that causes change in the "life space," but the acceptance (internalization) of external stimuli.

Lewin took these same principles and applied them to the analysis of group conflict, learning, adolescence, hatred, morale, German society, etc. This approach allowed him to break down common misconceptions of these social phenomena, and to determine their basic elemental constructs. He used theory, mathematics, and common sense to define a force field, and hence to determine the causes of human and group behavior.

Force Field Analysis was developed by Kurt Lewin (1951) and is widely used to inform decision making, particularly in planning and implementing change management programmes in organizations. It is a powerful method of gaining a comprehensive overview of the different forces acting on a potential organizational change issue, and of assessing their source and strength.

How to Use Force-Field Analysis
1. A facilitator writes a short description of a problem or situation where all members of the group can see it.
2. The facilitator directs the group to name forces that either help or hinder the movement from the problem situation to its solution.
3. With the consensus of the group, the facilitator lists each force on the "help" or "hinder" area, above and below the description of the problem or current situation.
4. Arrows can be used to indicate the distance at which the forces help or hinder a solution as well as the strength at which each force or combination of forces drives toward or away from a solution.
5. After the group appears to have exhausted the list of forces, the facilitator leads a discussion, the aim of which is to find combinations of forces that will strengthen the positive and weaken the negative forces that act upon the situation.
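The scoring behind the steps above can be reduced to a simple exercise: sum the strength of the helping forces, subtract the strength of the hindering forces, and see which side prevails. A minimal sketch in Python; the forces and their weights are illustrative assumptions, not taken from the text:

```python
# Minimal force-field analysis sketch: weigh helping vs. hindering forces.
# Force names and 1-5 weights are illustrative, not from the text.

def net_force(helping, hindering):
    """Return the net score: positive favors the change, negative opposes it."""
    return sum(helping.values()) - sum(hindering.values())

helping = {
    "management support": 4,
    "expected cost savings": 3,
    "user frustration with old system": 2,
}
hindering = {
    "staff fear of job loss": 4,
    "retraining cost": 2,
}

score = net_force(helping, hindering)
print(score)  # 9 - 6 = 3: helping forces slightly outweigh hindering ones
```

In step 5, the group would then look for ways to raise the helping weights or lower the hindering ones rather than simply reading off the total.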
0 Outsourcing
Outsourcing is the process of contracting out a business process, which an organization may have previously performed internally or which the company deems necessary or important, to an independent organization, where the process is purchased as a service. Though the practice of purchasing a business function instead of providing it internally is a common feature of any modern economy, the term outsourcing became popular in America near the turn of the 21st century. An outsourcing deal may also involve the transfer of the employees involved to the outsourcing business partner.

Although the definition of outsourcing includes both foreign and domestic contracting, the term is sometimes used exclusively to refer to the former. The clearer term for this is offshoring: a company taking a function out of its business and relocating it to another country, whether the external country is physically offshore or not. The opposite of outsourcing is insourcing, which is sometimes accomplished via vertical integration. However, a business can provide a contract service to another business without necessarily insourcing that business process; two organizations may simply enter into a contractual agreement involving an exchange of services and payments.

Outsourcing is said to help firms perform well in their core competencies and to mitigate shortages of skill or expertise in the areas they want to outsource. In the early 21st century businesses increasingly outsourced to suppliers outside their own country, sometimes referred to as offshoring or offshore outsourcing. Several related terms have emerged to refer to various aspects of the complex relationship between economic organizations or networks, such as nearshoring, crowdsourcing, multisourcing and strategic outsourcing.

Outsourcing can offer greater budget flexibility and control, letting organizations pay for only the services they need, when they need them. It also
reduces the need to hire and train specialized staff, brings in fresh engineering expertise, and reduces capital and operating expenses.

One of the biggest changes in the early 21st century came from the growth of groups of people using online technologies to build viable service delivery businesses that can be run from virtually anywhere in the world. The preferential contract rates that can be obtained by temporarily employing experts in specific areas to deliver elements of a project purely online mean that a growing number of small businesses operate entirely online, using offshore contractors to deliver the work before repackaging it for the end user. One common area where this business model thrives is website creation, analysis and marketing services. All elements can be done remotely and delivered digitally, and service providers can leverage the scale and economy of outsourcing to deliver high-value services at reduced end-customer prices.

The most common reasons companies decide to outsource include cost reduction and cost savings, the ability to focus on the core business, access to more knowledge, talent and experience, and increased profits. Many companies outsource because it cuts costs such as labor, regulatory and training costs. Workers in some foreign countries will complete the same amount of work as workers in the United States for less than half the salary an American employee would make, which motivates companies to outsource overseas to find workers willing to work for these lower wages. The company may also spend as little as half the usual cost to train these workers to become experts in another country. Lower regulatory costs add to the savings: comparing the cost of employing a worker in the United States with a worker in China, an employer in the U.S. has to pay higher taxes and contributions (Social Security and Medicare payroll taxes under FICA, plus OSHA safety compliance).

Companies that outsource are able to focus their money and resources on improving the core aspects of their business. For example, an insurance company may outsource its landscaping functions to a service provider that specializes in
landscaping, since landscaping is irrelevant to the core operations of insurance. The landscaping is performed by an expert outsourced organization, and the insurance company can focus on what it specializes in. This allows the outsourcing company to build on the core functions that keep the business running smoothly. Another example is companies and public entities, such as a public school district, outsourcing functions such as payroll to companies like ADP or Ceridian, which specialize in payroll processing.

In some cases, firms may find that workers in other countries can provide better customer support than their domestic counterparts. For example, an online coffee shop owner who moved his call center to the Philippines found that his customers received better customer support from workers there.

Revenue and profit play a large role in the decision to outsource. Because it is cheaper in some countries to run operations and to train employees, outsourcing saves the company a large sum of money. More profit comes in when vendors are able to purchase products at a lower rate and continue to sell them at a reasonable price to consumers. Prices are reduced for services as well as products when they are purchased more cheaply.

Risks

When companies offshore services, even if they are not core parts of the business, those jobs leave the home country for foreign countries. Outsourcing may also increase the risk of leakage, reduce confidentiality, and introduce additional privacy and security concerns.

Advantages

Companies are able to provide services and products to consumers at a cheaper price while still keeping a large profit margin, which benefits both the company and the consumer. Although losing jobs hurts the economy because more citizens become unemployed, the cheaper prices allow customers to purchase more products and services, which helps to rebuild an economy.
Management processes
Greater physical distance between higher management and the production floor employees often requires a change in management methodologies, as inspection and feedback may not be as direct and frequent as in internal processes. This often requires the adoption of new communication methods such as voice over IP, instant messaging and issue tracking systems, new time management methods such as time tracking software, and new cost and schedule assessment tools such as cost estimation software.
Quality of service
Quality of service is best measured through customer satisfaction questionnaires which are designed to capture an unbiased view.
Language skills
In the area of call centers, end-user experience is deemed to be of lower quality when a service is outsourced. This is exacerbated when outsourcing is combined with offshoring to regions where the first language and culture are different. Foreign call center agents may speak with different linguistic features such as accents, word use and phraseology, which may impede comprehension. The visual cues that are missing in a telephone call may lead to misunderstandings and difficulties.
Security
Before outsourcing, an organization is responsible and liable for the actions of all its staff. When these same people are transferred to an outsourcer, they may not change desks, but their legal status has changed. They are no longer
directly employed by or responsible to the organization. This causes legal, security and compliance issues that need to be addressed through the contract between the client and the supplier. This is one of the most complex areas of outsourcing and requires a specialist third-party adviser.

Fraud is a specific security issue: it is criminal activity whether committed by employees or by supplier staff. It can be argued that fraud is more likely when outsourcers are involved, for example credit card theft when there is scope for fraud by credit card cloning. In April 2005, a high-profile case involving the theft of $350,000 from four Citibank customers occurred when call center workers acquired the passwords to customer accounts and transferred the money to their own accounts opened under fictitious names. Citibank did not find out about the problem until the American customers noticed discrepancies with their accounts and notified the bank.
Qualifications of outsourcers
In the engineering discipline there has been a debate about the number of engineers being produced by the major economies of the United States, India and China. The argument centers on the definition of an engineering graduate and on disputed numbers. The closest comparable numbers of annual graduates of four-year degrees are: United States (137,437), India (112,000) and China (351,537). Companies looking to outsource their engineering activities should evaluate the capabilities of the providers. There are many benchmarking reports by independent research and consulting firms which analyze vendors' capabilities.
Diversification
The early trend in outsourcing was manifest in a financial construct where a function's associated capital and personnel were sold to a vendor and then rented back over a series of years. Early benefits were a boost in expertise and efficiency as outsource vendors had more focus and capability in their specialization. As time progressed, the year 0 benefit was off the books, customer needs evolved and contracts generally aged poorly. Rigid contracts hampered the ability of customers to
respond to emerging business drivers, and simultaneously tied the hands of the vendor's team, which was focused on increasing efficiencies for static problems. The result tended to be additional "project" contracts for incremental changes in a monopoly environment. Many deals became contentious, and many customers became very uncomfortable surrendering so much power to a single vendor. As a contract aged, it became increasingly difficult even to negotiate with vendors with confidence, because the customer lacked any real knowledge of the cost structure of the function, or of the competitive situation of the vendor. Industry leaders turned to each other, trade journals and management consultants to try to regain control of the situation, and the next answer that took hold of the industry was labor arbitrage: leveraging cheap offshore resources to replace or pressure increasingly expensive legacy outsource vendors. This pressure led incumbent vendors to move resources offshore, or to be replaced wholesale. As this renegotiation got under way, many customers seized the opportunity to restructure to gain more control, transparency and negotiating power. The end result has been fragmentation of outsourcing contracts and a decline in mega-deals. Many companies now rely on several vendors who each offer specialization and/or lowest cost.
Insourcing
As mentioned above, outsourcing has gone through many iterations and reinventions. Some outsourcing deals have been partially or fully reversed, citing an inability to execute strategy, lost transparency and control, onerous contractual models, a lack of competition, recurring costs, hidden costs, and so on. Many companies are now moving to more tailored models where, along with outsource vendor diversification, key parts of what was previously outsourced have been insourced. Insourcing has been identified as a means to ensure control and compliance and to gain competitive differentiation through vertical integration or the development of shared services (commonly called a 'center of excellence'). Insourcing at some level also tends to be leveraged to enable organizations to undergo significant transformational change.
Further, the label outsourcing has been found to be used for too many different kinds of exchange, in confusing ways. For example, global software development, which often involves people working in different countries, cannot simply be called outsourcing. The outsourcing-based market model fails to explain why these development projects are jointly developed, and not simply bought and sold in the marketplace. Recently, a study identified an additional system of governance, termed algocracy, that appears to govern global software projects alongside bureaucratic and market-based mechanisms. The study distinguishes this code-based governance system from bureaucracy and the market and underscores the prominent features of each organizational form in terms of its ruling mechanism: bureaucracy (legal-rational), the market (price), and algocracy (programming or algorithms). So global software development projects, though not insourced, are not outsourced either; rather, they are developed together, with a common software platform allowing different teams around the world to work on the same project.

Standpoint of labor

From the standpoint of labor, outsourcing may represent a new threat, contributing to worker insecurity and reflective of the general process of globalization. On June 26, 2009, Jeff Immelt, the CEO of General Electric, called for the United States to increase its manufacturing base employment to 20% of the workforce, commenting that the U.S. has outsourced too much and can no longer rely on consumer spending to drive demand.

Standpoint of government

Western governments may attempt to compensate workers affected by outsourcing through various forms of legislation. In Europe, the Acquired Rights Directive attempts to address the issue; the Directive is implemented differently in different nations.
In the United States, the Trade Adjustment Assistance Act is meant to provide compensation for workers directly affected by international trade agreements. Whether or not these policies provide the security and fair compensation they promise is debatable.
0 Approaches to outsourcing
Selective Outsourcing
With this option an organization can select a best-of-breed provider for each activity rather than being tied to one vendor. It creates a competitive environment to overcome organizational impediments and motivate performance. Selective outsourcing provides flexibility to adapt to changes and capitalizes on organizational learning, with less risk than total outsourcing. On the other hand, it has higher transaction costs due to:
multiple endeavors
multiple contract negotiations
multiple vendors to manage and coordinate
Total Outsourcing

With this option there is consistency: the same vendor performs numerous activities, providing stability. Total outsourcing lowers transaction costs because there is only one vendor, but it is more vulnerable to vendor manipulation of pricing, maintenance costs and vendor support.
Joint Venture

Multiple service providers form a collaborative business venture to serve one or more clients. Often, the first client may be a part of the joint venture.
In-sourcing

A group within the client organization is selected as a service provider, but it is largely managed as an external entity. Often this group must compete with external suppliers or service providers for work.
45
Alliance

Multiple service providers collaborate to serve one or more clients. Often, one service provider has a primary role in interfacing with the client on behalf of the alliance.
Swiftness and Expertise: Most of the time, tasks are outsourced to vendors who specialize in their field. The outsourced vendors also have specific equipment and technical expertise, often better than that of the outsourcing organization. Effectively, the tasks can be completed faster and with better-quality output.
Concentrating on core process rather than the supporting ones: Outsourcing the supporting processes gives the organization more time to strengthen their core business process.
Risk-sharing: one of the most crucial factors determining the outcome of a campaign is risk analysis. Outsourcing certain components of a business process helps the organization shift certain responsibilities to the outsourced vendor. Since the outsourced vendor is a specialist, it can plan risk mitigation better.
Reduced Operational and Recruitment costs: Outsourcing eliminates the need to hire individuals in-house, so recruitment and operational costs can be minimized to a great extent. This is one of the prime advantages of offshore outsourcing.
Risk of exposing confidential data: When an organization outsources HR, payroll and recruitment services, it runs the risk of exposing confidential company information to a third party.
Synchronizing the deliverables: If you do not choose the right partner for outsourcing, common problem areas include stretched delivery time frames, sub-standard quality output and inappropriate categorization of responsibilities. At times it is easier to regulate these factors inside an organization than with an outsourced partner.
Hidden costs: Although outsourcing is usually cost-effective, the hidden costs involved in signing a contract across international boundaries may pose a serious threat.
Lack of customer focus: An outsourced vendor may be catering to the expertise needs of multiple organizations at a time. In such situations vendors may lack complete focus on your organization's tasks.
0 Privacy and Security
The field of information security has grown and evolved significantly in recent years. There are many ways of gaining entry into the field as a career. It offers many areas for specialization, including securing networks and allied infrastructure, securing applications and databases, security testing, information systems auditing, business continuity planning and digital forensics.

Key concepts

The CIA triad (confidentiality, integrity and availability) is one of the core principles of information security. There is continuous debate about extending this classic trio. Other principles, such as accountability, have sometimes been proposed for addition; it has been pointed out that issues such as non-repudiation do not fit well within the three core concepts, and as regulation of computer systems has increased (particularly amongst Western nations), legality is becoming a key consideration for practical security installations.

In 1992, and revised in 2002, the OECD's Guidelines for the Security of Information Systems and Networks proposed nine generally accepted principles: awareness, responsibility, response, ethics, democracy, risk assessment, security design and implementation, security management, and reassessment. Building upon those, in 2004 the NIST's Engineering Principles for Information Technology Security proposed 33 principles, from each of which guidelines and practices were derived.

In 2002, Donn Parker proposed an alternative model to the classic CIA triad that he called the six atomic elements of information: confidentiality, possession, integrity, authenticity, availability and utility. The merits of the Parkerian hexad are a subject of debate amongst security professionals.
Confidentiality
Confidentiality refers to preventing the disclosure of information to unauthorized individuals or systems. For example, a credit card transaction on the Internet requires the credit card number to be transmitted from the buyer to the
merchant and from the merchant to a transaction processing network. The system attempts to enforce confidentiality by encrypting the card number during transmission, by limiting the places where it might appear (in databases, log files, backups, printed receipts, and so on), and by restricting access to the places where it is stored. If an unauthorized party obtains the card number in any way, a breach of confidentiality has occurred. Confidentiality is necessary (but not sufficient) for maintaining the privacy of the people whose personal information a system holds.
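One of the controls mentioned above is limiting the places where the card number may appear, for example by masking it in logs, backups and printed receipts. A minimal sketch; the mask-all-but-last-four rule is an illustrative assumption, not a requirement quoted from the text:

```python
# Confidentiality sketch: never store or print a full card number where it
# is not needed. The masking rule (keep last 4 digits) is illustrative.

def mask_pan(pan: str) -> str:
    """Show only the last four digits of a card number in logs/receipts."""
    digits = pan.replace(" ", "")
    return "*" * (len(digits) - 4) + digits[-4:]

print(mask_pan("4111 1111 1111 1234"))  # ************1234
```

Masking is a complement to, not a substitute for, encrypting the number during transmission and restricting access to where it is stored.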
Integrity
In information security, integrity means that data cannot be modified undetectably. This is not the same thing as referential integrity in databases, although it can be viewed as a special case of Consistency as understood in the classic ACID model of transaction processing. Integrity is violated when a message is actively modified in transit. Information security systems typically provide message integrity in addition to data confidentiality.
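Message integrity of the kind described above can be sketched with a keyed hash (HMAC) from the Python standard library: the receiver recomputes the tag and rejects the message if it was modified in transit. The shared key and messages are illustrative assumptions:

```python
# Integrity sketch: an HMAC tag lets the receiver detect modification.
import hashlib
import hmac

key = b"shared-secret"  # assumed pre-shared key between sender and receiver

def tag(message: bytes) -> bytes:
    """Compute an integrity tag over the message."""
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(message: bytes, received_tag: bytes) -> bool:
    """Recompute the tag and compare in constant time."""
    return hmac.compare_digest(tag(message), received_tag)

msg = b"transfer $100 to account 42"
t = tag(msg)
print(verify(msg, t))                              # True: unmodified
print(verify(b"transfer $900 to account 42", t))   # False: tampering detected
```

Note that an HMAC provides integrity and authenticity with respect to the shared key, but not confidentiality; the message itself is still readable.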
Availability
For any information system to serve its purpose, the information must be available when it is needed. This means that the computing systems used to store and process the information, the security controls used to protect it, and the communication channels used to access it must be functioning correctly. High availability systems aim to remain available at all times, preventing service disruptions due to power outages, hardware failures, and system upgrades. Ensuring availability also involves preventing denial-of-service attacks.
Authenticity
In computing, e-Business, and information security, it is necessary to ensure that the data, transactions, communications or documents (electronic or physical) are genuine. It is also important for authenticity to validate that both parties involved are who they claim they are.
Non-repudiation
In law, non-repudiation implies one's intention to fulfill one's obligations under a contract. It also implies that one party to a transaction cannot deny having received the transaction, nor can the other party deny having sent it. Electronic commerce uses technologies such as digital signatures and public key encryption to establish authenticity and non-repudiation.

Risk management

A comprehensive treatment of risk management is beyond the scope of this section. However, a useful definition of risk management is provided below, along with some basic terminology and a commonly used process for risk management. The CISA Review Manual 2006 provides the following definition of risk management:
"Risk management is the process of identifying vulnerabilities and threats to the information resources used by an organization in achieving business objectives, and deciding what countermeasures, if any, to take in reducing risk to an acceptable level, based on the value of the information resource to the organization."
There are a few terms in this definition that may need some clarification.
Risk is the likelihood that something bad will happen that causes harm to an informational asset (or the loss of the asset). A vulnerability is a weakness that could be used to endanger or cause harm to an informational asset. A threat is anything (man-made or act of nature) that has the potential to cause harm. The likelihood that a threat will use a vulnerability to cause harm creates a risk. When a threat does use a vulnerability to inflict harm, it has an impact. In the context of information security, the impact is a loss of availability, integrity or confidentiality, and possibly other losses (lost income, loss of life, loss of real property). It should be pointed out that it is not possible to identify all risks, nor is it possible to eliminate all risk; the remaining risk is called residual risk.

A risk assessment is carried out by a team of people who have knowledge of specific areas of the business. Membership of the team may vary over time as different parts of the business are assessed. The assessment may use a subjective qualitative analysis based on informed opinion or, where reliable dollar figures and historical information are available, quantitative analysis. Research has shown that the most vulnerable point in most information systems is the human user, operator, designer or other human.

The ISO/IEC 27002:2005 Code of practice for information security management recommends the following be examined during a risk assessment:
security policy, organization of information security, asset management, human resources security, physical and environmental security, communications and operations management, access control, information systems acquisition, development and maintenance, information security incident management, business continuity management, and regulatory compliance.
In broad terms, the risk management process consists of:
1. Identification of assets and estimation of their value. Include: people, buildings, hardware, software, data (electronic, print and other), and supplies.
2. Conduct a threat assessment. Include: acts of nature, acts of war, accidents, and malicious acts originating from inside or outside the organization.
3. Conduct a vulnerability assessment, and for each vulnerability, calculate the probability that it will be exploited. Evaluate policies, procedures, standards, training, physical security, quality control and technical security.
4. Calculate the impact that each threat would have on each asset. Use qualitative or quantitative analysis.
5. Identify, select and implement appropriate controls. Provide a proportional response. Consider productivity, cost effectiveness and the value of the asset.
6. Evaluate the effectiveness of the control measures. Ensure the controls provide the required cost-effective protection without discernible loss of productivity.

For any given risk, executive management can choose to accept the risk based upon the relatively low value of the asset, the relatively low frequency of occurrence, and the relatively low impact on the business. Or leadership may choose to mitigate the risk by selecting and implementing appropriate control measures to reduce the risk. In some cases, the risk can be transferred to another business by buying insurance or outsourcing to another business. The reality of some risks may be disputed; in such cases leadership may choose to deny the risk.

Controls

When management chooses to mitigate a risk, it will do so by implementing one or more of three different types of controls.
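Steps 3 and 4 of the process above are often combined into a qualitative scoring exercise: score each risk as likelihood times impact on a small scale and rank the results. A minimal sketch; the assets, threats and 1-5 scores are illustrative assumptions:

```python
# Qualitative risk scoring sketch: score = likelihood x impact on a 1-5 scale.
# The assets, threats and scores below are illustrative assumptions.

risks = [
    {"asset": "customer database", "threat": "SQL injection", "likelihood": 4, "impact": 5},
    {"asset": "payroll server",    "threat": "power outage",  "likelihood": 2, "impact": 3},
    {"asset": "office laptops",    "threat": "theft",         "likelihood": 3, "impact": 2},
]

# Rank highest-scoring risks first so controls can be prioritized (step 5).
for r in sorted(risks, key=lambda r: r["likelihood"] * r["impact"], reverse=True):
    print(r["asset"], "/", r["threat"], "->", r["likelihood"] * r["impact"])
```

A quantitative analysis would replace the 1-5 scales with dollar figures and historical frequencies, where such data is reliable.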
Administrative
Administrative controls (also called procedural controls) consist of approved written policies, procedures, standards and guidelines. Administrative controls form the framework for running the business and managing people. They inform people on
53
how the business is to be run and how day-to-day operations are to be conducted. Laws and regulations created by government bodies are also a type of administrative control because they inform the business. Some industry sectors have policies, procedures, standards and guidelines that must be followed; the Payment Card Industry (PCI) Data Security Standard required by Visa and MasterCard is one example. Other examples of administrative controls include the corporate security policy, password policy, hiring policies, and disciplinary policies. Administrative controls form the basis for the selection and implementation of logical and physical controls; logical and physical controls are manifestations of administrative controls. Administrative controls are of paramount importance.
Logical
Logical controls (also called technical controls) use software and data to monitor and control access to information and computing systems. For example: passwords, network and host based firewalls, network intrusion detection systems, access control lists, and data encryption are logical controls. An important logical control that is frequently overlooked is the principle of least privilege. The principle of least privilege requires that an individual, program or system process is not granted any more access privileges than are necessary to perform the task. A blatant example of the failure to adhere to the principle of least privilege is logging into Windows as user Administrator to read Email and surf the Web. Violations of this principle can also occur when an individual collects additional access privileges over time. This happens when employees' job duties change, or they are promoted to a new position, or they transfer to another department. The access privileges required by their new duties are frequently added onto their already existing access privileges which may no longer be necessary or appropriate.
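The principle of least privilege described above can be sketched as a role-to-permission table in which each role holds only the permissions its task requires and everything else is denied by default. The roles and permissions here are illustrative assumptions:

```python
# Least-privilege sketch: each role is granted only what its task needs;
# anything not explicitly granted is denied. Roles/permissions are illustrative.

PERMISSIONS = {
    "mail_user": {"read_email", "send_email"},
    "web_admin": {"read_email", "send_email", "restart_webserver"},
}

def allowed(role: str, action: str) -> bool:
    """Deny by default: unknown roles and unlisted actions are refused."""
    return action in PERMISSIONS.get(role, set())

print(allowed("mail_user", "send_email"))         # True
print(allowed("mail_user", "restart_webserver"))  # False: not needed for the task
```

Reading email as "mail_user" rather than as an administrator is exactly the discipline the Windows Administrator example above says is often violated; periodic review of these tables also addresses the privilege creep that follows job changes.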
Physical
Physical controls monitor and control the environment of the work place and computing facilities. They also monitor and control access to and from such facilities. For example: doors, locks, heating and air conditioning, smoke and fire alarms, fire
suppression systems, cameras, barricades, fencing, security guards, cable locks, etc. Separating the network and the workplace into functional areas is also a physical control. An important physical control that is frequently overlooked is separation of duties, which ensures that an individual cannot complete a critical task alone. For example, an employee who submits a request for reimbursement should not also be able to authorize payment or print the check; an applications programmer should not also be the server administrator or the database administrator. These roles and responsibilities must be separated from one another.

Defense in depth

Information security must protect information throughout its life span, from initial creation through to final disposal. The information must be protected while in motion and while at rest. During its lifetime, information may pass through many different information processing systems and through many different parts of information processing systems. There are many ways the information and information systems can be threatened. To fully protect the information during its lifetime, each component of the information processing system must have its own protection mechanisms. The building up, layering on and overlapping of security measures is called defense in depth. The strength of any system is no greater than its weakest link; using a defense in depth strategy, should one defensive measure fail, other defensive measures are in place that continue to provide protection. Recall the earlier discussion about administrative, logical and physical controls: the three types of controls can be used to form the basis upon which to build a defense-in-depth strategy. With this approach, defense-in-depth can be conceptualized as three distinct layers or planes laid one on top of the other.
Additional insight into defense-in-depth can be gained by thinking of it as forming the layers of an onion, with data at the core, people the next layer out, and network security, host-based security and application security forming the outermost layers. Both perspectives are equally valid, and
each provides valuable insight into the implementation of a good defense-in-depth strategy.

Security classification for information

An important aspect of information security and risk management is recognizing the value of information and defining appropriate procedures and protection requirements for it. Not all information is equal, so not all information requires the same degree of protection. This requires information to be assigned a security classification. The first step in information classification is to identify a member of senior management as the owner of the particular information to be classified. Next, develop a classification policy. The policy should describe the different classification labels, define the criteria for information to be assigned a particular label, and list the required security controls for each classification. Factors that influence which classification should be assigned include how much value the information has to the organization, how old the information is and whether or not the information has become obsolete. Laws and other regulatory requirements are also important considerations when classifying information. The type of information security classification labels selected and used will depend on the nature of the organization, with examples being:
In the business sector, labels such as: Public, Sensitive, Private, and Confidential. In the government sector, labels such as: Unclassified, Sensitive But Unclassified, Restricted, Confidential, Secret, Top Secret and their non-English equivalents.
In cross-sectoral formations, the Traffic Light Protocol, which consists of: White, Green, Amber and Red.
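A classification policy of the kind described above can be expressed as a simple lookup from label to required controls. The sketch below uses the business-sector labels listed above; the control names themselves are illustrative assumptions, not drawn from any standard.

```python
# Hypothetical classification policy: each label maps to the minimum
# security controls required for information carrying that label.
# The labels follow the business-sector examples; the controls are
# illustrative assumptions.

CLASSIFICATION_POLICY = {
    "Public":       {"encryption_at_rest": False, "access_logging": False},
    "Sensitive":    {"encryption_at_rest": False, "access_logging": True},
    "Private":      {"encryption_at_rest": True,  "access_logging": True},
    "Confidential": {"encryption_at_rest": True,  "access_logging": True},
}

def required_controls(label):
    """Return the controls mandated for a classification label."""
    if label not in CLASSIFICATION_POLICY:
        raise ValueError(f"Unknown classification label: {label}")
    return CLASSIFICATION_POLICY[label]

print(required_controls("Confidential"))
# {'encryption_at_rest': True, 'access_logging': True}
```

Keeping the policy in one place, as a single table, makes the periodic review described below easier: the reviewer can inspect and update the table rather than hunting through scattered configuration.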
All employees in the organization, as well as business partners, must be trained on the classification schema and understand the required security controls and handling procedures for each classification. The classification assigned to a particular information asset should be reviewed periodically to ensure the classification is still appropriate for the information and to ensure the security controls required by the classification are in place.

Access control

Access to protected information must be restricted to people who are authorized to access the information. The computer programs, and in many cases the computers that process the information, must also be authorized. This requires that mechanisms be in place to control the access to protected information. The sophistication of the access control mechanisms should be in parity with the value of the information being protected: the more sensitive or valuable the information, the stronger the control mechanisms need to be. The foundation on which access control mechanisms are built starts with identification and authentication. Identification is an assertion of who someone is or what something is. If a person makes the statement "Hello, my name is John Doe", they are making a claim of who they are. However, their claim may or may not be true. Before John Doe can be granted access to protected information it will be necessary to verify that the person claiming to be John Doe really is John Doe. Authentication is the act of verifying a claim of identity. When John Doe goes into a bank to make a withdrawal, he tells the bank teller he is John Doe (a claim of identity). The bank teller asks to see a photo ID, so he hands the teller his driver's license. The bank teller checks the license to make sure it has John Doe printed on it and compares the photograph on the license against the person claiming to be John Doe.
If the photo and name match the person, then the teller has authenticated that John Doe is who he claimed to be. There are three different types of information that can be used for authentication: something you know, something you have, or something you are. Examples of
something you know include such things as a PIN, a password, or your mother's
maiden name. Examples of something you have include a driver's license or a magnetic swipe card. Something you are refers to biometrics. Examples of biometrics include palm prints, fingerprints, voice prints and retina (eye) scans. Strong authentication requires providing information from two of the three different types of authentication information, for example, something you know plus something you have. This is called two-factor authentication. On computer systems in use today, the username is the most common form of identification and the password is the most common form of authentication. Usernames and passwords have served their purpose, but in our modern world they are no longer adequate. Usernames and passwords are slowly being replaced with more sophisticated authentication mechanisms.

After a person, program or computer has successfully been identified and authenticated, it must be determined what informational resources they are permitted to access and what actions they will be allowed to perform (run, view, create, delete, or change). This is called authorization. Authorization to access information and other computing services begins with administrative policies and procedures. The policies prescribe what information and computing services can be accessed, by whom, and under what conditions. The access control mechanisms are then configured to enforce these policies. Different computing systems are equipped with different kinds of access control mechanisms; some may even offer a choice of different access control mechanisms. The access control mechanism a system offers will be based upon one of three approaches to access control, or it may be derived from a combination of the three approaches. The non-discretionary approach consolidates all access control under a centralized administration. The access to information and other resources is usually based on the individual's function (role) in the organization or the tasks the individual must perform.
The discretionary approach gives the creator or owner of the information resource the ability to control access to those resources. In the Mandatory access
control approach, access is granted or denied based upon the security classification assigned to the information resource. Examples of common access control mechanisms in use today include role-based access control, available in many advanced database management systems; simple file permissions provided in the UNIX and Windows operating systems; Group Policy Objects provided in Windows network systems; Kerberos, RADIUS, TACACS; and the simple access lists used in many firewalls and routers. To be effective, policies and other security controls must be enforceable and upheld. Effective policies ensure that people are held accountable for their actions. All failed and successful authentication attempts must be logged, and all access to information must leave some type of audit trail. The need-to-know principle also applies to access control: it grants a person only the access rights needed to perform their job functions. This principle is used in government when dealing with different clearances. Even though two employees in different departments have a top-secret clearance, they must have a need to know in order for information to be exchanged. Within the need-to-know principle, network administrators grant employees the least amount of privileges, to prevent them from accessing or doing more than what they are supposed to. Need-to-know helps to enforce the confidentiality-integrity-availability (CIA) triad; it directly impacts the confidentiality aspect of the triad.

Cryptography

Information security uses cryptography to transform usable information into a form that renders it unusable by anyone other than an authorized user; this process is called encryption. Information that has been encrypted (rendered unusable) can be transformed back into its original usable form by an authorized user who possesses the cryptographic key, through the process of decryption.
Cryptography is used in information security to protect information from unauthorized or accidental disclosure while the information is in transit (either electronically or physically) and while information is in storage.
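The encrypt/decrypt round trip just described can be illustrated with a deliberately simple cipher. To be clear: XOR with a repeating key is NOT secure and is used here only to show the symmetry of encryption and decryption with a shared key; real systems should rely on a vetted algorithm such as AES from a maintained cryptographic library.

```python
from itertools import cycle

# Toy illustration of encryption and decryption with a shared secret key.
# This XOR cipher is NOT secure; it exists only to demonstrate the
# round trip: encrypt, then decrypt with the same key, recovers the data.

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """XOR each byte of data with the repeating key.

    Applying the same operation twice with the same key restores
    the original bytes, mirroring encryption followed by decryption.
    """
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

plaintext = b"transfer $100 to account 42"
key = b"shared-secret"

ciphertext = xor_cipher(plaintext, key)  # unusable without the key
recovered = xor_cipher(ciphertext, key)  # decryption: same key, same operation

print(recovered == plaintext)  # True
```

Note how the whole scheme depends on protecting the key: anyone who obtains it can decrypt the ciphertext, which is why the text below stresses that keys deserve the same rigor of protection as the information itself.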
Cryptography provides information security with other useful applications as well, including improved authentication methods, message digests, digital signatures, non-repudiation, and encrypted network communications. Older, less secure applications such as Telnet and FTP are slowly being replaced with more secure applications such as SSH that use encrypted network communications. Wireless communications can be encrypted using protocols such as WPA/WPA2 or the older (and less secure) WEP. Wired communications (such as ITU-T G.hn) are secured using AES for encryption and X.1035 for authentication and key exchange. Software applications such as GnuPG or PGP can be used to encrypt data files and email.

Cryptography can introduce security problems when it is not implemented correctly. Cryptographic solutions need to be implemented using industry-accepted algorithms that have undergone rigorous peer review by independent experts in cryptography. The length and strength of the encryption key is also an important consideration. A key that is weak or too short will produce weak encryption. The keys used for encryption and decryption must be protected with the same degree of rigor as any other confidential information. They must be protected from unauthorized disclosure and destruction, and they must be available when needed. PKI solutions address many of the problems that surround key management.

Process

The terms reasonable and prudent person, due care and due diligence have been used in the fields of finance, securities, and law for many years. In recent years these terms have found their way into the fields of computing and information security. U.S. Federal Sentencing Guidelines now make it possible to hold corporate officers liable for failing to exercise due care and due diligence in the management of their information systems.
In the business world, stockholders, customers, business partners and governments have the expectation that corporate officers will run the business in accordance with accepted business practices and in compliance with laws and other regulatory requirements. This is often described as the "reasonable and prudent person" rule. A prudent person takes due care to ensure that everything necessary is done to
operate the business by sound business principles and in a legal, ethical manner. A prudent person is also diligent (mindful, attentive, and ongoing) in their due care of the business. In the field of information security, Harris offers the following definitions of due care and due diligence:
"Due care are steps that are taken to show that a company has taken responsibility for the activities that take place within the corporation and has taken the necessary steps to help protect the company, its resources, and employees." And, [due diligence are the] "continual activities that make sure the protection mechanisms are continually maintained and operational."
Effective information security governance has the following characteristics:

An enterprise-wide issue
Leaders are accountable
Viewed as a business requirement
Risk-based
Roles, responsibilities, and segregation of duties defined
Addressed and enforced in policy
Adequate resources committed
Staff aware and trained
A development life cycle requirement
Planned, managed, measurable, and measured
Below is a partial listing of European, United Kingdom, Canadian and USA governmental laws and regulations that have, or will have, a significant effect on data processing and information security. Important industry sector regulations have also been included when they have a significant impact on information security.
The UK Data Protection Act 1998 makes new provisions for the regulation of the processing of information relating to individuals, including the obtaining, holding, use or disclosure of such information. The European Union Data Protection Directive (EUDPD) requires that all EU member states adopt national regulations to standardize the protection of data privacy for citizens throughout the EU.
The Computer Misuse Act 1990 is an Act of the UK Parliament making computer crime (e.g. hacking) a criminal offence. The Act has become a model upon which several other countries including Canada and the Republic of Ireland have drawn inspiration when subsequently drafting their own information security laws.
EU data retention laws require Internet service providers and phone companies to keep data on every electronic message sent and phone call made for between six months and two years.
The Family Educational Rights and Privacy Act (FERPA) (20 U.S.C. 1232 g; 34 CFR Part 99) is a USA Federal law that protects the privacy of student education records. The law applies to all schools that receive funds under an applicable program of the U.S. Department of Education. Generally, schools must have written permission from the parent or eligible student in order to release any information from a student's education record.
Health Insurance Portability and Accountability Act (HIPAA) of 1996 requires the adoption of national standards for electronic health care transactions and national identifiers for providers, health insurance plans, and employers. And,
it requires health care providers, insurance providers and employers to safeguard the security and privacy of health data.
Gramm-Leach-Bliley Act of 1999 (GLBA), also known as the Financial Services Modernization Act of 1999, protects the privacy and security of private financial information that financial institutions collect, hold, and process.
Sarbanes-Oxley Act of 2002 (SOX). Section 404 of the act requires publicly traded companies to assess the effectiveness of their internal controls for financial reporting in annual reports they submit at the end of each fiscal year. Chief information officers are responsible for the security, accuracy and reliability of the systems that manage and report the financial data. The act also requires publicly traded companies to engage independent auditors who must attest to, and report on, the validity of their assessments.
Payment Card Industry Data Security Standard (PCI DSS) establishes comprehensive requirements for enhancing payment account data security. It was developed by the founding payment brands of the PCI Security Standards Council, including American Express, Discover Financial Services, JCB, MasterCard Worldwide and Visa International, to help facilitate the broad adoption of consistent data security measures on a global basis. The PCI DSS is a multifaceted security standard that includes requirements for security management, policies, procedures, network architecture, software design and other critical protective measures.
State Security Breach Notification Laws (California and many others) require businesses, nonprofits, and state institutions to notify consumers when unencrypted "personal information" may have been compromised, lost, or stolen.
The Personal Information Protection and Electronic Documents Act (PIPEDA) is an Act to support and promote electronic commerce by protecting personal information that is collected, used or disclosed in certain circumstances, by providing for the use of electronic means to communicate or record information or transactions, and by amending the Canada Evidence Act, the Statutory Instruments Act and the Statute Revision Act.