
Chapter 8

System Development Life Cycle

LEARNING OBJECTIVES
1. Discuss the system development life cycle (SDLC) and its common phases.
2. Discuss additional risks and associated controls related to the SDLC phases.
3. Explain common approaches used for software development.
4. Discuss the IT auditor’s involvement in the system development and implementation process.

Organizations are constantly building, replacing, and maintaining information systems. There are
many different approaches to systems development, but the most successful systems follow a well-
defined development methodology. The success of a systems development project is dependent
on the success of key processes: project management, analysis, design, testing, and implementa-
tion. Because development efforts can be costly, organizations have recognized the need to build
well-controlled quality systems. IT processes information that is integral to the financial stability
and profitability of organizations. Therefore, these systems must be built with adequate internal
controls to ensure the completeness and accuracy of transaction processing.

System Development Life Cycle


As discussed in the previous chapter, a project management life cycle provides guidelines to project
managers on the processes that must be followed to ensure the overall success of a project. In a
similar fashion, the system development life cycle (SDLC), also referred to as the application development life cycle, provides a framework for effectively developing application systems. It specifically describes a standard process for planning, creating, testing, and deploying new information systems (i.e., new development or modified systems). Whether developing a new system or adding changes to an existing one, the SDLC provides the framework and steps necessary for an adequate implementation.
an adequate implementation. Although there are many variations of the traditional SDLC, they
all have the following common phases in one form or another (refer to Exhibit 8.1):

202  ◾  Information Technology Control and Audit

Exhibit 8.1  System development life cycle phases.

1. Planning
2. System Analysis and Requirements
3. System Design
4. Development
5. Testing
6. Implementation
7. Operations and Maintenance

Planning
The planning phase sets the stage for the success of the system development effort. It documents
the reasons to develop the new system (as opposed to purchase it from an external source) in
order to achieve the organization’s strategic goals and objectives. During planning, organizations
establish the scope of the work (considering costs, time, benefits, and other items), set initiatives
to acquire the necessary resources, and determine solutions. If planning is not done properly, the
budget and schedule may not be sufficient, the business problem or need to be addressed by the
new system may not be adequately defined, the final product may not solve the problem or need,
and the right people may not be involved. These are typical risks encountered by IT auditors and
organization personnel during this phase. To be effective, planning should include and describe
the following:

◾◾ Needs analysis. A study to determine whether a new system should be developed internally or purchased from external sources.
◾◾ Current system review. A study of the current system to identify existing processes and procedures that will continue in the new system.
◾◾ Conceptual design. Preparation and assessment of the proposed design alternatives, system
flows, and other information illustrating how the new system will operate.
◾◾ Equipment requirements. Identification of the hardware configuration needed to use the new
system (e.g., processing speed, storage space, transmission media, etc.).
◾◾ Cost/benefit analysis. Detailed financial analysis of the cost to develop and operate the new
system, the savings or additional expense, and the return on investment.

◾◾ Project team formation. Identification and selection of resources needed (e.g., programmers,
end users, etc.) to develop and implement the new system.
◾◾ Tasks and deliverables. Establishment of defined tasks and deliverables to monitor actual results and ensure successful progress.
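The cost/benefit analysis described above can be sketched as a simple calculation. The figures, the function name, and the undiscounted ROI formula below are illustrative assumptions, not part of any prescribed methodology:

```python
# Hypothetical cost/benefit sketch: net benefit and return on investment
# (ROI) for a proposed system over a fixed analysis horizon.
# All figures are invented for illustration.

def simple_roi(development_cost, annual_operating_cost,
               annual_benefit, years):
    """Return (net benefit, ROI) over the analysis horizon."""
    total_cost = development_cost + annual_operating_cost * years
    total_benefit = annual_benefit * years
    net_benefit = total_benefit - total_cost
    return net_benefit, net_benefit / total_cost

net, roi = simple_roi(development_cost=500_000,
                      annual_operating_cost=50_000,
                      annual_benefit=250_000, years=5)
print(f"Net benefit: ${net:,}  ROI: {roi:.0%}")
```

A fuller analysis would discount future cash flows; this sketch only shows where the planning-phase numbers come together.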

System Analysis and Requirements


In this phase, system analysts identify and assess the needs of end users with the ultimate purpose
of ensuring that, once developed, the new system will meet their expectations. During this phase,
end users and system analysts define functional requirements for the new software/system in
terms that can be measured and functionally tested. The functionality of the existing system is
matched with the new functionality and requirements are defined and validated with users so
that they can become the basis for the system design phase. This phase also identifies and documents the resources responsible for individual pieces of the system, as well as the expected timeline. Common tools and practices used by organizations during this phase include:

◾◾ Computer Aided Systems/Software Engineering (CASE)—software tool with methods to design, develop, and implement applications and information systems;
◾◾ Requirements Gathering—practice of collecting the requirements of a system from users, customers, and other stakeholders via meetings or interviews; and
◾◾ Structured Analysis—software engineering technique that uses graphical diagrams to analyze and interpret requirements, and to depict the necessary steps (and data) required to meet the design function of the particular system or software.

System Design
In the system design phase, the systems analyst defines and documents all system interfaces,
reporting, screen layouts, and specific program logic necessary to build the new system consistent
with the requirements. The system design phase describes, in detail, the specifications, features, and operations that will meet the requirements previously defined. At this phase, system analysts and end users, once again, review the specific business needs and determine (or confirm) the final requirements for the new system. Technical details of the proposed system are also discussed with the various stakeholders, including the hardware/software needed, networking capabilities, and the processing and procedures for the system to accomplish its objectives. Other more general and administrative topics within this phase include identifying existing risks, technologies to be used, capability of the team, project constraints, timing, and budget restrictions.
Consideration of the aforementioned will aid in selecting the best design approach.
At the systems design stage, controls should be defined for input points and processing. Screen
layouts, controls, and reports should be reviewed and approved by the end user before moving on
to the next phase. Programmers will use the detailed specifications and output from the design
phase to move on into the development or construction phase.

Development
In the development phase, programmers build or construct the new system based on the analyses, requirements, and design previously agreed upon. The construction or coding phase is complete once the programmer validates the new system code through individual unit testing (full testing of the system is performed in the next phase). The code is tested for both syntax and logic flow. All logic paths are exercised to ensure error routines work and the program terminates processing normally.
When new systems are developed, appropriate security access controls need to be developed as
well to safeguard information against unapproved disclosure or modification, and damage or loss.
Logical access controls, for instance, are used to ensure that access to systems, data, and programs is limited to appropriate users and IT support personnel.
Organizations must also keep in mind that development efforts generate code, and that this
is where security and control of any system starts. In March 2011, the United States Computer
Emergency Readiness Team (US-CERT) issued its top 10 Secure Coding Practices. These practices
should be adhered to as one plans, designs, develops, tests, implements, and maintains a system:

1. Validate input. Validate input from all untrusted data sources. Proper input validation can
eliminate the vast majority of software vulnerabilities. Be suspicious of most external data
sources, including command line arguments, network interfaces, environmental variables,
and user-controlled files.
2. Heed compiler warnings. Compile code using the highest warning level available for your
compiler and eliminate warnings by modifying the code. Use static and dynamic analysis
tools to detect and eliminate additional security flaws.
3. Architect and design for security policies. Create a software architecture and design your
software to implement and enforce security policies. For example, if your system requires
different privileges at different times, consider dividing the system into distinct intercom-
municating subsystems, each with an appropriate privilege set.
4. Keep the design as simple and small as possible. Complex designs increase the likelihood that
errors will be made in their implementation, configuration, and use. Additionally, the effort
required to achieve an appropriate level of assurance increases dramatically as security mechanisms become more complex.
5. Default deny. Base access decisions on permission rather than exclusion. This means that, by
default, access is denied and the protection scheme identifies conditions under which access
is permitted.
6. Adhere to the principle of least privilege. Every process should execute with the least set of
privileges necessary to complete the job. Any elevated permission should be held for a mini-
mum time. This approach reduces the opportunities an attacker has to execute arbitrary
code with elevated privileges.
7. Sanitize data sent to other systems. Sanitize all data passed to complex subsystems such as
command shells, relational databases, and commercial off-the-shelf components. Attackers
may be able to invoke unused functionality in these components through the use of SQL,
command, or other injection attacks. This is not necessarily an input validation problem
because the complex subsystem being invoked does not understand the context in which the
call is made. Because the calling process understands the context, it is responsible for sanitizing
the data before invoking the subsystem.
8. Practice defense in depth. Manage risk with multiple defensive strategies, so that if one layer of
defense turns out to be inadequate, another layer of defense can prevent a security flaw from
becoming an exploitable vulnerability and/or limit the consequences of a successful exploit.
For example, combining secure programming techniques with secure runtime ­environments
should reduce the likelihood that vulnerabilities remaining in the code at deployment time
can be exploited in the operational environment.

9. Use effective quality assurance techniques. Good quality assurance techniques can be effec-
tive in identifying and eliminating vulnerabilities. Fuzz testing, penetration testing, and
source code audits should all be incorporated as part of an effective quality assurance pro-
gram. Independent security reviews can lead to more secure systems. External reviewers bring
an independent perspective, for example, in identifying and correcting invalid assumptions.
10. Adopt a secure coding standard. Develop and/or apply a secure coding standard for your target development language and platform.
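A minimal sketch of three of these practices (validate input, default deny, and sanitize data sent to other systems) applied together. The user-lookup scenario, table layout, and whitelist pattern are hypothetical, and SQLite stands in for any relational subsystem:

```python
import re
import sqlite3

# Whitelist pattern (an assumption for this example): lowercase letter
# followed by 2-15 letters, digits, or underscores.
USERNAME_RE = re.compile(r"^[a-z][a-z0-9_]{2,15}$")

def lookup_user(conn, username):
    # Practice 1: validate input from untrusted sources against a whitelist.
    # Practice 5: default deny -- reject anything that does not match.
    if not USERNAME_RE.fullmatch(username):
        raise ValueError("invalid username")
    # Practice 7: pass data to the database subsystem via a parameterized
    # query; never build SQL by string concatenation, which invites injection.
    cur = conn.execute("SELECT id, name FROM users WHERE name = ?",
                       (username,))
    return cur.fetchone()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('alice')")
print(lookup_user(conn, "alice"))  # the matching (id, name) row
```

Note that the validation and the parameterized query are independent layers, which is itself an instance of practice 8, defense in depth.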

Other well-known practices referred to when developing and securing systems or applications
include the secure coding principles described in the Open Web Application Security Project
(OWASP) Secure Coding Guidelines. While OWASP’s secure coding principles below specifically
reference Web applications, such principles should be applied to non-Web applications as well.*

1. Input Validation
2. Output Encoding
3. Authentication and Password Management
4. Session Management
5. Access Control
6. Cryptographic Practices
7. Error Handling and Logging
8. Data Protection
9. Communication Security
10. System Configuration
11. Database Security
12. File Management
13. Memory Management
14. General Coding Practices
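As a small illustration of the output encoding category, untrusted text can be escaped before it is embedded in HTML, so attacker-supplied markup is displayed rather than executed. The example below is hypothetical and not taken from the OWASP guidelines themselves:

```python
import html

def render_comment(comment):
    # Output encoding: escape untrusted text before embedding it in HTML.
    # html.escape converts <, >, &, and quotes to entity references.
    return "<p>" + html.escape(comment) + "</p>"

# Attacker-supplied input is rendered inert, not executed by the browser.
print(render_comment("<script>alert('x')</script>"))
```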

The Software Engineering Institute (SEI) has also developed US-CERT coding standards for
common programming languages like C++, Java, Perl, and the Android platform. They include
rules for developing safe, reliable, and secure systems. They identify sources of today's software vulnerabilities and provide guidance on how to avoid or mitigate them. Downloads of these standards are
available to the community online.†

Testing
Testing is by far the most critical part of any system development and implementation. However,
it is also the first to get short-changed when go-live dates are challenged. The primary purpose of system testing is to validate that the system works as expected and to identify errors, flaws, failures, or faults at an early stage, because defects discovered later are more costly to fix.
An overall testing strategy should be developed to define the individual test events, roles and
responsibilities, test environment, problem reporting and tracking, and test deliverables. The testing process should be based on existing testing methodologies established by the organization.
An effective testing process allows for documentation that will prevent duplicate testing efforts.

* https://security.berkeley.edu/secure-coding-practice-guidelines.
† www.securecoding.cert.org/confluence/display/seccode/SEI+CERT+Coding+Standards.

A testing plan should be made in accordance with the organization’s standards. The plan
should include test scenarios, the role of the test participants, acceptance criteria, and testing
logistics. It should also identify responsibility for documentation, review, and approval of tests
and test results. End users and system owners should perform the required testing rather than
programmers or developers. They should sign off that appropriate testing was performed with
expected results for all requirements. Senior management sign-off is also required before programs
are promoted to production environments.
Although each system may require different test events, in general, test events include
unit testing, integration testing, technical testing, functional testing, performance load test-
ing, and acceptance testing. Acceptance testing, for instance, verifies that acceptance criteria
defined during the system definition stage are tested. Test cases should include system usability, management reports, performance measurements, documentation and procedures, training, and system readiness (operations/systems sign-off). Exhibit 8.2 summarizes the user acceptance testing event.

Exhibit 8.2  User Acceptance Testing


User Acceptance Testing

User acceptance testing (UAT) is key to a successful application system development and
implementation. It ensures that the application fulfills the agreed-upon functional
requirements (expectations) of the users, meets established usability criteria, and satisfies
performance guidelines before being implemented into production. UAT minimizes the
risks that the new application system will cause business interruptions or be disjointed with
business processes. UAT should include inspections, functional tests, and workload trials. It
should include all components of the application system (e.g., facilities, application
software, procedures, etc.), and involve having the right team, agreeing on the testing
requirements, and obtaining results approval from management.

Acceptance Team

The process owner should establish the acceptance team. The team is responsible for
developing and implementing the acceptance process. The acceptance team should be
composed of representatives from various functions including computer operators,
technical support, capacity planning, help desk personnel, and database administrators.

Agreed-Upon Requirements

Requirements for UAT need to be identified, agreed upon, and prioritized. Acceptance
requirements or criteria should be specific with detailed measures. Indirectly, the
acceptance requirements become the criteria for making the “go/no-go decisions” or
determining if the application system satisfies the critical requirements before being
implemented into the live environment.

Management Approval

Acceptance plans and test results need to be approved by the affected functional department
as well as the IT department. To avoid surprises, users should be involved in the application
system testing throughout the development and implementation processes. This minimizes
the risk of key functionality being excluded or not working properly.
Source: Adapted from Senft, S., Gallegos, F., and Davis, A. 2012. Information Technology Control
and Audit. Boca Raton: CRC Press/Taylor & Francis.

Each test event should have a plan that defines the test scope, resources (i.e., people and environment), and test objectives with expected results. They should provide test case documentation
and a test results report. It is often desirable to have the end user participate in the functional
testing, although all fundamental tests mentioned earlier should be applied and documented. At a minimum, the thoroughness of the testing should be reviewed by the development team and quality assurance staff. Quality of testing within each application and at the integration stage is extremely important.
Test scenarios, associated data, and expected results should be documented for every condition
and option. Test data should include data that are representative of relevant business scenarios,
which could be real or generated test data. Regardless of the type of test data chosen, it should rep-
resent the quality and volume of data that is expected. However, controls over the production data
used for testing should be evaluated to ensure that the test data are not misused or compromised.
Testing should also include the development and generation of management reports. The man-
agement reports generated should be aligned with business requirements. The reports should be
relevant to ensure effectiveness and efficiency of the report development effort. In general, report
specifications should include recipients, usage, required details, and frequency, as well as the method
of generation and delivery. The format of the report needs to be defined so that the report is clear,
concise, and understandable. Each report should be validated to ensure that it is accurate and com-
plete. The control measures for each report should be evaluated to ensure that the appropriate con-
trols are implemented so that availability, integrity, and confidentiality are assured. Test events that
may be relevant, depending on the type of system under development, are described in Exhibit 8.3.

Exhibit 8.3  System Test Events


Unit testing. Verifies that stand-alone programs match specifications. Test cases should exercise every line of code.

Integration testing. Confirms that all software and hardware components work well together. Data are passed effectively from one program to the next. All programs and subroutines are tested during this phase.

Technical testing. Verifies that the application system works in the production environment. Test cases should include error processing and recovery, performance, storage requirements, hardware compatibility, and security (e.g., screens, data, programs, etc.).

Functional testing. Corroborates that the application system meets user requirements. Test cases should cover screens, navigation, function keys, online help, processing, and output (reports) files.

Performance load testing. Defines and tests the performance expectations of the application system in advance. It ensures that the application is scalable (functionally and technically), and that it can be implemented without disruption to the organization. The entire infrastructure should be tested for performance load to ensure adequate capacity and throughput at all levels: central processing, input and output media, networks, and so on. The test environment should also reflect the production/live environment as much as possible.
Black-box testing. Software testing method that examines the overall operation and functionality of an application system without looking into its internal structure (e.g., design, implementation, internal paths, etc.). In other words, testers are not aware of the application's internal structure when employing black-box testing. Although black-box testing applies most to higher level testing, it can also cover virtually every level of software testing (i.e., unit, integration, system, and acceptance).

White-box testing. Software testing method that goes beyond the user interface and into the essentials of a system. It examines the internal structure of an application, as opposed to its operations and functionality. Contrary to black-box testing (which focuses on the application's operations and functionality), white-box testing allows testers to know about the application's internal structure (e.g., design, implementation, internal paths, etc.) when conducting tests.

Regression testing. Software testing method that follows the implementation of a change or modification to a given system. It examines implemented changes and modifications to ensure that the existing system (and its programming) is still functional and operating effectively. Once changes and modifications have been implemented, regression testing re-executes existing tests against the modified system's code to ensure the new changes or modifications do not break the previously working system.

Automated software testing. Software testing tools or techniques that simplify the testing process by automating the execution of pre-scripted tests on software applications before they are implemented into the production environment. Automating the tests of units (e.g., individual programs, classes, methods, functions, etc.) that currently demand significant team resources can result in a more effective and efficient testing process. Automated software testing can also compare current test results against previous outcomes.

Software performance testing. Key to determining the quality and effectiveness of a given application. This testing method determines how a system (i.e., computer, network, software program, or device) performs in terms of speed, responsiveness, and stability under a particular scenario.
Source: Adapted from Senft, S., Gallegos, F., and Davis, A. 2012. Information Technology Control
and Audit. Boca Raton: CRC Press/Taylor & Francis.
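To make the unit and regression testing events concrete, the sketch below tests a hypothetical discount-calculation routine with Python's unittest framework; the business rule itself is invented for illustration:

```python
import unittest

def apply_discount(price, percent):
    """Hypothetical business rule under test: apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent out of range")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    # Unit testing: verify the stand-alone routine matches its specification.
    def test_typical_case(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    # Exercising the error routine, as the unit testing entry above suggests.
    def test_rejects_invalid_percent(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()
```

Re-running this same suite automatically after every code change is regression testing in miniature: existing tests are re-executed against the modified code to confirm nothing previously working has broken.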

Implementation
This phase involves the actual deployment and installation of the new system and its delivery to end users. System implementation verifies that the new system meets its intended purpose and that the necessary processes and procedures are in place for production. Implementing a system involves incorporating several controls (i.e., an implementation plan, conversion procedures, IT disaster/continuity plans, system documentation, training, and support) to ensure a smooth installation and transition to the users. It is also important that users and technical support staff be aware of and on board with these controls.

Implementation Plan
An implementation plan should be documented to guide the implementation team and users
in the implementation process. The documentation should cover the implementation schedule,
the resources required, roles and responsibilities of the implementation team, means of com-
munication between the implementation team and users, decision processes, issue management procedures, and a training plan for the implementation team and end users. In simple terms, the
plan should cover the who, what, when, where, and how of the implementation process.

Data Conversion and Cleanup Processes


Unless a process is new, existing information will need to be converted to the new system.
Conversion is the process where information is either entered manually or transferred program-
matically from an old system into the new one. In either case, procedures should exist to verify the conversion of all records, files, and data into the new system for completeness and accuracy.
Data conversion procedures may fall into one of the following four generally recognized conversion methods:

◾◾ Direct conversion. Also referred to as “Direct cutover,” it is a conversion method that involves
shutting down the current system entirely and switching to a new system. The organization
basically stops using the old system, say overnight, and begins using the new one the next
day and thereafter. It is the riskiest of all methods because of the immediate learning curve
required by users to effectively interact with the new system. A second risk would be the
potential malfunction of the new system, which would significantly impact the organization
as the old system is no longer available.
◾◾ Pilot conversion. Method where a small group of users and participants is established to interact with the new system while the rest continue to use the old/current one. This method assists organizations in identifying potential problems with the new system so that they can be corrected before switching from the old one. Once corrected, the pilot/new system is installed permanently and the old one is switched off. Retail chains typically benefit from this method; for example, installing a new point-of-sale system in one store for trial purposes and, once it operates properly, rolling out the working system to the remaining stores.
◾◾ Phased conversion. Also referred to as the "Modular conversion," it is a method that gradually introduces the new system until the old system is completely replaced. This method helps organizations identify problems early in a specific phase or module, and then schedule resources to correct them before switching over to the new system. Given that the current system is still partly operational when implementing this method, the risk tends to be relatively low compared to other methods. In the case of unexpected performance issues with the new system, the old system can still be used as it remains fully operational. In terms of disadvantages, the gradual replacement may be considered significant (i.e., implementation may take a longer period of time). Another disadvantage would be training, which must be continuously provided to ensure that users understand the new system while it is being converted.
◾◾ Parallel conversion. Method that involves running both the old and the new system simultaneously for some pre-determined period of time. In this method, the two systems perform all necessary processing together and results are compared for accuracy and completeness. Once all issues (if any) have been addressed and corrected and the new system operates properly as expected, the old system is shut down and users start interacting solely with the new system. The advantage of this conversion method is that it provides redundancy should the new system not work as expected or system failures occur. Switching to the new system only takes place upon successfully passing all necessary tests, ensuring the new system will likely perform as originally designed and intended. Common disadvantages of this method involve the financial burden of having two systems running simultaneously, the double-handling of data and associated operations, and the potential for data entry errors when users input data into the new system.

A conversion plan defines how the data are collected and verified for conversion. Before conversion, the data should be "cleaned" to remove any inconsistencies that could introduce errors during the conversion or when the data are placed in the new application.
Tests to be performed while converting data include comparing the original and converted
records and files, checking the compatibility of the converted data with the new system, and
ensuring the accuracy and completeness of transactions affecting the converted data. A detailed
verification of the processing with the converted data in the new system should be performed to
confirm successful implementation. The system owners are responsible for ensuring that data are
successfully converted.
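The completeness and accuracy checks described above can be sketched as follows; the record layout, field names, and control-total approach are illustrative assumptions:

```python
# Illustrative conversion verification: compare record counts (completeness)
# and a control total (accuracy) between the old and new systems, and flag
# any records missing after conversion. Field names are hypothetical.

old_records = [{"id": 1, "amount": 100.00}, {"id": 2, "amount": 250.50}]
new_records = [{"id": 1, "amount": 100.00}, {"id": 2, "amount": 250.50}]

def conversion_checks(old, new):
    """Return True only if counts, control totals, and record IDs all agree."""
    counts_match = len(old) == len(new)
    totals_match = (sum(r["amount"] for r in old)
                    == sum(r["amount"] for r in new))
    missing_ids = {r["id"] for r in old} - {r["id"] for r in new}
    return counts_match and totals_match and not missing_ids

print(conversion_checks(old_records, new_records))  # True
```

In practice these checks run against the full converted files, and any discrepancy is reported back to the conversion and cleanup teams before sign-off.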
The data conversion process often gets intermingled with data cleanup. Data cleanup is a
process that organizations embark upon to ensure that only accurate and complete data get trans-
ferred into the new system. A common example is company names in a vendor file. A company can
be entered into a vendor file multiple times in multiple ways. For example, “ABC Manufacturing”
can be “ABC mfg,” “abc Mfg.,” and so on. Many of these data cleanup changes can be dealt with
systematically because many errors happen consistently.
The data cleanup effort should happen before executing data conversion procedures. This allows
the conversion programmers to focus on converting the data as opposed to coding for data differ-
ences. In reality, however, exceptions from data conversion become issues for the data cleanup
team to deal with. Data conversion and data cleanup teams should work closely with one another
to ensure that only the most accurate and complete data are converted. Management should sign
off on test results for converted data as well as approve changes identified by the data cleanup team.
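The vendor-name example lends itself to a systematic cleanup rule. The sketch below is one possible approach; the abbreviation table and normalization choices are assumptions for illustration:

```python
# Sketch of a systematic data cleanup rule for the vendor-name example in
# the text: inconsistencies that happen consistently can be normalized
# programmatically before conversion. The abbreviation map is hypothetical.

ABBREVIATIONS = {"mfg": "manufacturing", "inc": "incorporated"}

def normalize_vendor(name):
    # Lowercase, trim whitespace, drop trailing periods, expand known
    # abbreviations, then title-case the result.
    words = name.lower().strip().rstrip(".").split()
    words = [ABBREVIATIONS.get(w.rstrip("."), w.rstrip(".")) for w in words]
    return " ".join(words).title()

variants = ["ABC Manufacturing", "ABC mfg", "abc Mfg."]
print({normalize_vendor(v) for v in variants})  # {'Abc Manufacturing'}
```

All three variants collapse to a single canonical name, so only one vendor record is converted instead of three duplicates.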

IT Disaster Plan
This is another key review point for management and the IT auditor. As part of implementation,
requirements for the system’s recovery in the event of a disaster or other disruption should be
accounted for. The IT disaster plan should be reviewed to ensure that the organization incor-
porates procedures and resources necessary to recover the new application system. Significant
upgrades to existing applications may also require modification to disaster recovery requirements
in areas such as processor requirements, disk storage, or operating system versions. Recovery pro-
cedures related to the new system should be tested shortly after it is put into production. Such
recovery procedures must also be documented.
System Development Life Cycle  ◾  211

In the rush to implement a system, documentation can be the first to “slide.” However, the price
is paid when decisions to address problems become reactionary. Formalizing documentation and
procedures is the difference between delivering a technology versus delivering a service. The disaster
recovery plan should be in place at the point of implementation and carried through into operations.

System Documentation
System documentation ensures maintainability of the system and its components and minimizes
the likelihood of errors. Documentation should be based on a defined standard and consist of
descriptions of procedures, instructions to personnel, flowcharts, data flow diagrams, display or
report layouts, and other materials that describe the system. System documentation should pro-
vide programmers with enough information to understand how the system works to decrease the
learning cycle, as well as ensure effective and efficient analysis of program changes and trouble-
shooting. Documentation should be updated as the system is modified.
The processing logic of the system should be documented in a manner that is understandable
(e.g., using flowcharts, etc.), while containing sufficient detail to allow programmers to accurately
support the application. The system’s software must also include documentation within the code,
with descriptive comments embedded in the body of the source code. These comments should
include cross-references to design and requirements documentation. The documentation should
describe the sequence of programs and the steps to be taken in case of a processing failure.
User documentation should include automated and manual workflows for initial training and
ongoing reference. User reference materials (processes and procedures) should be included as part
of the development, implementation, and maintenance of associated application systems. They
should be reviewed and approved as part of the acceptance testing. User reference materials should
be designed for all levels of user expertise and should instruct users on the use of the application
system. Such documentation should be kept current as changes are made to the dependent systems.

Training
Training is an important aspect of any project implementation. Training provides users with the
necessary understanding, skills, and tools to effectively and efficiently utilize a system in their daily
tasks. Training is critical to deliver a successful implementation because it introduces users to the
new system and shows them how to interact with it. Delivering effective training engages users,
motivates them to embrace change, and ultimately assists the organization in achieving its desired
business results. On the other hand, the cost of not training users may exceed the investment
organizations would make for training purposes in the new system. One reason for this paradox is
that it may take users longer to learn the system on their own and to become productive with it.
Effective training and education also enable organizations to realize financial gains in the long
term, reducing support costs significantly. This results from users making fewer mistakes and hav-
ing fewer questions. Training and education along with effective project management are critical
factors for a successful implementation of any system.

Support
Continuing user support is another important component needed to ensure a successful
implementation. Support includes having a help desk to provide assistance to users, as well as
problem reporting solutions allowing the submission, search, and management of problem reports.
212  ◾  Information Technology Control and Audit

Effective support involves strategies to work closely with the users in order to ensure issues are
resolved promptly, ultimately enhancing productivity and user experience.
Help desk support ensures that problems experienced by the user are appropriately addressed.
A help desk function should provide first-line support to users. Help requests should be monitored
to ensure that all problems are resolved in a timely manner. Trend analysis should be conducted
to identify patterns in problems or solutions. Problems should be analyzed to identify root causes.
Procedures need to be in place for escalating problems based on inadequate response or level of
impact. Questions that cannot be resolved immediately should be escalated to higher levels of
management or expertise.
Organizations with established help desks will need to staff and train help desk personnel
to handle the new application system. Good training will minimize the volume of calls to the
help desk and thereby keep support costs down. Help desks can be managed efficiently with the
use of problem management software, automated telephone systems, expert systems, e-mail,
voicemail, etc.
Ongoing user support allows organizations to handle and address incoming user requests in a
timely and accurate fashion. For instance, support can be provided by establishing a centralized
call center (similar to having a help desk) that not only reports issues with the new system, but also
finds the right solution. By assisting users in the appropriate use of the new system, organizations
can ensure a successful system implementation.
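The escalation procedures described above, prioritizing problems by severity and response time, might be sketched as follows. The SLA hours and ticket fields are illustrative assumptions, not a prescribed standard:

```python
from datetime import datetime, timedelta

# Hypothetical response-time targets per severity level; real targets
# come from the organization's service-level agreements.
SLA_HOURS = {"critical": 2, "high": 8, "medium": 24, "low": 72}

def needs_escalation(ticket, now):
    """A ticket escalates when it remains unresolved past the SLA
    window for its severity level."""
    deadline = ticket["opened"] + timedelta(hours=SLA_HOURS[ticket["severity"]])
    return ticket["status"] != "resolved" and now > deadline
```

Running such a check on the open-ticket queue also supports the trend analysis mentioned earlier, since chronically escalating categories point to root causes worth investigating.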

Operations and Maintenance


No matter how well a system is designed, developed, and/or tested, there will always be problems
discovered or enhancements needed after implementation. In this phase, programmers maintain
systems by correcting problems and/or installing necessary enhancements in order to fine-
tune the new system, improve its performance, add new capabilities, or meet additional user
requirements. Maintenance of systems can be separated into three categories:

◾◾ Corrective maintenance—involves resolving errors, flaws, failures, or faults in a computer
program or system, causing it to produce incorrect or unexpected results. These are commonly
known as “bugs.” The purpose of corrective maintenance is to fix existing functionality
to make it work as opposed to providing new functionality. This type of maintenance can
occur at any time during system use and usually is a result of inadequate system testing.
Corrective maintenance can be required to accommodate a new type of data that were
inadvertently excluded, or to modify code related to an assumption of a specific type of data
element or relationship. As an example of the latter, a report assumed that each employment
application in the system had an employee requisition (or request to hire) associated with it.
However, when users did not see a complete listing of their employment applications, they
discovered that not every application had an associated hiring request. In this case, the requirement
for each application to be associated with a hiring request was a new system feature provided in the
latest software release. As a result, employment applications entered into the system prior to the
installation of the new release did not have hiring requests associated with them.
◾◾ Adaptive maintenance—results from regulatory and other environmental changes. The
purpose of adaptive maintenance is to adapt or adjust to some change in business conditions,
as opposed to fixing existing functionality or providing new functionality. An example of adaptive
maintenance is modifications to accommodate changes in tax laws. Annually, federal and state
laws change, which require changes to financial systems and their associated reports. A past
example of this type of issue was the Year 2000 (Y2K) problem. Many software programs
were written to handle dates up to 1999 and were rewritten at significant costs to handle
dates beginning January 1, 2000. Although these changes cost organizations many millions
of dollars in maintenance effort, the goal of these changes was not to provide users with new
capabilities, but simply to allow users to continue using programs the way they are using
them today. Some people argue that fixing code to accommodate Y2K was actually correc-
tive maintenance, as software should have been designed to accommodate years beyond
1999. However, due to the expense and limitations of storage, older systems used two digits
to represent the year as a means to minimize the cost and limits of storage.
◾◾ Perfective maintenance—includes incorporation of new user needs and enhancements not
met by the current system. The goal of perfective maintenance is to modify software to
support new requirements. Perfective maintenance can be relatively simple, such as chang-
ing the layout of an input screen or adding new columns to a report. Complex changes can
involve sophisticated new functionality. In one example, a university wanted to provide its
students with the ability to pay for their fees online. A requirement for such a system involves
a number of complexities including the ability to receive, process, and confirm payment.
These requirements include additional requirements such as the ability to secure the infor-
mation and protect the student and institution by maintaining the integrity of the data and
information. Along with this, additional requirements are necessary to protect the process
in its ability to recover and continue processing, as well as the ability to validate, verify, and
audit each transaction.
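As an aside on the Y2K discussion above, a common low-cost fix was "windowing": interpreting the stored two-digit year relative to a pivot instead of widening every record to four digits. A sketch, where the pivot value of 50 is an arbitrary illustration:

```python
def expand_two_digit_year(yy, pivot=50):
    """Interpret a stored two-digit year relative to a pivot:
    years at or above the pivot map to 19xx, years below to 20xx.
    This avoids rewriting every stored record to four digits."""
    return 1900 + yy if yy >= pivot else 2000 + yy
```

Windowing is adaptive in spirit: it changes no user-visible capability, it only lets existing programs keep working past the date boundary.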

A reporting system should be established for users to report system problems and/or enhancements
to the programmers, and in turn for the programmers to communicate to the users when those
problems have been fixed or addressed. Such a reporting system should consist of audit trails for
problems, their solutions, and enhancements made. The system should document resolution,
prioritization, and escalation procedures; incident reports; accessibility to configuration
information; coordination with change management; and a definition of any dependencies on
outside services, among others.
Reporting systems should ensure that all unexpected events, such as errors, problems, etc. are
recorded, analyzed, and resolved in a timely manner. Incident reports should be established in the
case of significant problems. Escalation procedures should also be in place to ensure that problems
are resolved in the most timely and efficient way possible. Escalation procedures include prioritiz-
ing problems based on the impact severity as well as the activation of a business continuity plan
when necessary. A reporting system that is also closely associated with the organization’s change
management process is essential to ensure that problems are resolved or enhancements are made
and, most importantly, that their recurrence is prevented.
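A problem report with the audit trail described above could be modeled minimally as follows; the field names are hypothetical:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class ProblemReport:
    """Minimal audit-trail record for a reported system problem."""
    report_id: int
    description: str
    priority: str                      # e.g., "high", "medium", "low"
    history: list = field(default_factory=list)
    status: str = "open"

    def log(self, event):
        # Every state change is timestamped so the trail of problem,
        # solution, and escalation survives for later review.
        self.history.append((datetime.now().isoformat(), event))

    def resolve(self, solution):
        self.status = "resolved"
        self.log(f"resolved: {solution}")
```

The append-only history list is the audit trail: nothing is overwritten, so auditors can reconstruct who did what and when.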
Maintaining a system also requires keeping up-to-date documentation related to the new
system. Documentation builds at each phase in the SDLC. System documentation can be created
as flowcharts, graphs, tables, or text for organization and ease of reading. System documentation
includes:

◾◾ Source of the data
◾◾ Data attributes
◾◾ Input screens
◾◾ Data validations
◾◾ Data selection criteria
◾◾ Security procedures
◾◾ Description of calculations
◾◾ Program design
◾◾ Interfaces to other applications
◾◾ Control procedures
◾◾ Error handling
◾◾ Operating instructions
◾◾ Archive, purge, and retrieval
◾◾ Backup, storage, and recovery

There is a definite correlation between a well-managed system development process and a successful
system. A system development process provides an environment that is conducive to successful
systems development. Such a process increases the probability that a new system will be successful
and that its internal controls will be effective and reliable.

Additional Risks and Associated Controls Related to the SDLC Phases
Additional risks attributable to the SDLC phases just discussed, and that are significant to the
organization and the IT auditor, are listed below. These may result in invalid or misleading data,
bypassed automated controls, and/or fraud.

◾◾ Developers or programmers with unauthorized access to promote incorrect or inappropriate
changes to data, application programs, or settings into the production processing
environment.
◾◾ Changes to applications, databases, networks, and operating systems are not properly autho-
rized and/or their testing is not appropriately performed before implementation into the
production environment.
◾◾ Change management procedures related to applications, databases, networks, and operating
systems are inadequate, ineffective, or inconsistent, thus affecting the stability or manner in
which data are processed within the production environment.
◾◾ Existing controls and procedures related to data conversion are non-existent, inadequate,
or ineffective, thus affecting the quality, stability, or manner in which data are processed
within the production environment.

Relevant IT controls and procedures to assess the SDLC process just discussed include ensuring
that:

◾◾ Business risks and the impact of proposed system changes are evaluated by management
before implementation into production environments. Assessment results are used when
designing, staffing, and scheduling implementation of changes in order to minimize disrup-
tions to operations.
◾◾ Requests for system changes (e.g., upgrades, fixes, emergency changes, etc.) are properly
documented and approved by management before any change-related work is done.
◾◾ Documentation related to the change implementation is accurate and complete.
◾◾ Change documentation includes the date and time at which changes were (or will be)
installed.
◾◾ Documentation related to the change implementation has been released and communicated
to system users.
◾◾ System changes are successfully tested before implementation into the production environment.
◾◾ Test plans and cases involving complete and representative test data (instead of production
data) are approved by application owners and development management.

Additional controls over the change control process are shown in Appendix 3 from Chapter 3. The
appendix lists controls applicable to most organizations that are considered guiding procedures for
both IT management and IT auditors.
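Several of the control expectations listed above, documented approval, successful testing, and complete documentation before promotion, can be expressed as a simple gate check. The artifact names here are illustrative assumptions:

```python
# Hypothetical control artifacts a change must carry before promotion;
# a real gate would mirror the organization's change-control policy.
REQUIRED_ARTIFACTS = {"management_approval", "test_signoff", "documentation"}

def ready_for_production(change_request):
    """A change promotes only when every required control artifact is
    present; missing items are returned for follow-up."""
    missing = REQUIRED_ARTIFACTS - set(change_request.get("artifacts", []))
    return (len(missing) == 0, sorted(missing))
```

An IT auditor reviewing change records would expect evidence of an equivalent gate, whether enforced by tooling or by manual sign-off.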

Approaches to System Development


There are various approaches applicable to system development. Although each approach is unique,
they all have similar steps that must be completed. For example, each approach will have to define
user requirements, design programs to fulfill those requirements, verify that programs work as
intended, and implement the system. IT auditors need to understand the different approaches, the
risks associated with the particular approach, and help ensure that all the necessary components
(controls) are included in the development process. Following are descriptions of common sys-
tem development approaches starting from the traditional waterfall system development method.
Other modern and non-sequential methods, such as agile and lightweight methodologies (e.g.,
Scrum, Kanban, Extreme Programming (XP), etc.) are also discussed.

Waterfall System Development


The waterfall (also referred to as the traditional method) approach to system development is a
sequential process with defined phases beginning with the identification of a need and ending with
implementation of the system. The traditional approach uses a structured SDLC that provides a
framework for planning and developing application systems. Although there are many variations of
this traditional method, they all have the seven common phases just discussed: Planning, System
Analysis and Requirements, System Design, Development, Testing, Implementation, and Operations
and Maintenance. Refer to Exhibit 8.4 for an illustration of the waterfall development approach.
Although the waterfall development process provides structure and organization to systems
development, it is not without risks. The waterfall approach can be a long development process
that is costly due to the amount of resources and length of time required. The business environ-
ment may change between the time the requirements are defined and when the system is imple-
mented. The users may have a long delay before they see how the system will look and feel. To
compensate for these challenges, a project can be broken down into smaller subprojects where
modules are designed, coded, and tested. The challenge in this approach is to bring all the modules
together at the end of the project to test and implement the fully functional system.

Agile System Development


Agile system development practices are transforming the business of creating and/or maintaining
information systems. Agile means able to move quickly and easily. The Agile System
[Figure: the waterfall phases shown in sequence: Planning → System Analysis and Requirements → System Design → Development → Testing → Implementation → Operations and Maintenance.]

Exhibit 8.4  Waterfall system development.

Development methodology (ASD) is used on projects that need extreme agility in requirements
(e.g., to deliver products to the customer rapidly and continuously, etc.). ASD focuses on the
adaptability to changing situations and constant feedback. With ASD, there is no clearly defined
end product at the beginning stage. This is contrary to the traditional waterfall approach, which
requires detailed end-product requirements to be set at the starting phase. ASD's key features
involve short-term delivery cycles (or sprints), agile requirements, a dynamic team culture, less
restrictive project control, and emphasis on real-time communication. Even though ASD is most
commonly used in software development projects, the approach can also assist other types of
projects. The ASD approach is typically a good choice for relatively smaller software projects or
projects with accelerated development schedules. Refer to Exhibit 8.5 for an illustration of the
Agile development approach.
Agile practices are growing in use by industry. In a study performed by Protiviti in 2016, 44%
of companies overall, including 58% of technology companies and 53% of consumer products
and retail companies, were investing in and adopting these practices. Thus, it is safe to conclude that ASD
will continue to be a standard practice for a significant percentage of IT functions. Common ASD
methodologies include Scrum, Kanban, and Extreme Programming, and they are discussed next.

1. Scrum. A derivative of the ASD approach, Scrum is an iterative and incremental software
development framework for managing product development. Its main goal is to engage a
flexible, holistic product development strategy that improves productivity by enabling small,
cross-functional, and self-managing teams to work as a unit to reach a common goal. As an
iterative/agile approach, Scrum promotes various “sessions” (also referred to as “sprints”),
which typically last for 30 days. These sessions promote prioritization of tasks and ensure
they are completed in a timely manner. Because of this, teams switching to Scrum tend to
[Figure: the agile cycle: Plan → System Analysis and Requirements → Design and Develop → Integrate and Test → Deploy → Implement → Evaluate and Provide Feedback → repeat.]

Exhibit 8.5  Agile system development.

see great gains in productivity. Scrum’s project manager is referred to as a Scrum Master. The
Scrum Master’s main responsibility is to enable daily project communications and to remove
distractions between team members that prevent the successful completion of the job at hand.
The Scrum Master conducts regular meetings with the teams to discuss status, progress,
results, and timelines, among others. These meetings are also very useful to identify either
new tasks or existing ones that need to be reprioritized. Scrum is applicable in certain types
of environments, particularly those with members located at the same physical location
where face-to-face collaboration among the team members is possible and practiced (i.e.,
co-located teams).
2. Kanban. Kanban is also a type of agile methodology that is used to increase visibility of the
actual development work, allowing for better understanding of the work flow, as well as rapid
identification of its status and progress. Visualizing the flow of work is also useful in order to
balance demand with available capacity. With Kanban, development work items and tasks
are pictured to provide team members a better idea of “what’s going on” and “what’s left to
finish.” Kanban diagrams or graphs are also typically used to depict general categories of
activities or tasks, such as “activities-in-progress,” “activities-in-queue,” or “activities that
have been just completed.” This visualization allows team members, including management
personnel, to view current work and what is left to complete; reprioritize if necessary; and
assess the effect of additional, last-minute tasks should their incorporation become required.
Kanban focuses on the actual work from small, co-located project teams rather than on
individuals’ activities (though many individuals also promote the use of personal Kanban
boards). It is argued that Kanban exposes (visualizes) operational problems early, and stimu-
lates collaboration to correct them and improve the system. There are six general practices
used in Kanban: visualization, limiting work in progress, flow management, making policies
explicit, using feedback loops, and collaborative or experimental evolution.
3. Extreme Programming. Extreme Programming (XP) is another type of agile software devel-
opment methodology intended to improve productivity and quality by taking traditional
software engineering basic elements and practices to “extreme” levels. For instance, incor-
porating continuous code review checkpoints (rather than having just the traditional, one-
time-only code review) on which new customer requirements can be evaluated, added, and
processed. Another example of reaching “extreme” levels would be the implementation of
automated tests (perhaps inside of software modules) to validate the operation and function-
ality of small sections of the code, rather than testing only the larger features. XP’s goal is
to increase a software organization’s responsiveness while decreasing development overhead.
Similar to Scrum and other agile methods, XP focuses on delivering executable code and
effectively and efficiently utilizing personnel throughout the software development process.
XP emphasizes fine-scale feedback, continuous process, shared understanding, and programmer
welfare.
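Kanban's practice of limiting work in progress, described in item 2 above, can be sketched with a small board model. The column names and limits are illustrative assumptions:

```python
class KanbanBoard:
    """Minimal board: columns hold task names, each with a WIP limit."""

    def __init__(self, wip_limits):
        self.wip_limits = wip_limits
        self.columns = {name: [] for name in wip_limits}

    def move(self, item, column):
        """Pull an item into a column only if its WIP limit allows it;
        the limit is what forces the team to finish work before
        starting more."""
        if len(self.columns[column]) >= self.wip_limits[column]:
            return False
        for tasks in self.columns.values():
            if item in tasks:
                tasks.remove(item)
        self.columns[column].append(item)
        return True
```

A rejected move makes the overload visible immediately, which is exactly the early exposure of operational problems that Kanban is credited with.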

Adaptive Software Development


Adaptive Software Development (ASWD) is a development approach designed for building
complex software and systems. It is focused on rapid creation and evolution of software systems
(i.e., consistent with the principle of continuous adaptation). ASWD follows a dynamic lifecy-
cle instead of the traditional, static Plan-Design-Build lifecycle. It is characterized by constant
change, re-evaluation, peering into an uncertain future, and intense collaboration among
developers, testers, and customers.
ASWD is similar to the rapid application development approach. It replaces the traditional
waterfall cycle with a repeating series of speculate, collaborate, and learn cycles. These dynamic
cycles provide for continuous learning and adaptation to the emergent state of the project. During
these cycles or iterations, knowledge results from making small mistakes based on false assump-
tions (speculate), re-organizing teams to work together in finding a solution (collaborate), and
finally correcting (and becoming proficient with) those mistakes (learn), thus leading to greater
experience and eventually mastery in the problem domain. Refer to Exhibit 8.6 for an illustration
of the ASWD approach.

Joint Application Development


Joint Application Development (JAD) is an approach or methodology developed in the late 1970s
that involves participation of either the client or end user in the stages of design and development
of an information system, through a succession of collaborative workshops called JAD sessions.
Through these JAD sessions, end users, clients, business staff, IT auditors, IT specialists, and
other technical staff, among others are able to resolve their difficulties or differences concerning

[Figure: a repeating cycle: 1. Speculate → 2. Collaborate → 3. Learn.]

Exhibit 8.6  Adaptive software development.


[Figure: JAD session for information system design and development. Sessions or workshops are conducted by a JAD facilitator to discuss the system's design and development. Participants may include the client, end-user representative(s), IS analyst(s), business analyst(s), IS manager, systems architect, data architect, IT auditors, etc.]

Exhibit 8.7  Joint application design/development.

the new information system. The sessions follow a detailed agenda to prevent miscommunications
and to guarantee that all uncertainties between the parties are covered. Miscommunications left
unaddressed until later in the process can carry far more serious repercussions.
The JAD approach leads to faster development times and greater client satisfaction than the
traditional approach because the client is involved throughout the whole design and develop-
ment processes. In the traditional approach, on the other hand, the developer investigates the
system requirements and develops an application with client input typically consisting of an initial
interview.
A variation on JAD is prototyping and rapid application development, which creates applications
faster through strategies such as using fewer formal methodologies and reusing software
components. In the end, JAD results in a new information system that is feasible and appealing to
both the client and end users. Refer to Exhibit 8.7 for an illustration of the JAD approach.

Prototyping and Rapid Application Development


In general, Prototyping and Rapid Application Development (RAD) includes:

◾◾ the transformation and quick design of the user’s basic requirements into a working model
(i.e., prototype);
◾◾ the building of the prototype;
◾◾ the revision and enhancement of the prototype; and
◾◾ the decision whether to accept the prototype as the final simulation of the actual system
(hence, no further changes are needed), or to go back and redesign based on the user
requirements.

Exhibit 8.8 illustrates this Prototyping and RAD process.


[Figure: flowchart. Gather user requirements → Transform requirements and quick design → Build prototype → Review and enhance prototype → Changes? If yes, loop back; if no, deploy and implement.]

Exhibit 8.8  Prototyping and RAD process.
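The prototyping loop in Exhibit 8.8 amounts to iterating until the user requests no further changes. A minimal sketch, where the callback names are hypothetical:

```python
def rad_cycle(build_prototype, get_feedback, max_iterations=5):
    """Sketch of the RAD loop: build, review with users, and enhance
    until the users accept the prototype (or iterations run out)."""
    prototype = build_prototype(None)          # initial quick design
    for _ in range(max_iterations):
        feedback = get_feedback(prototype)
        if feedback is None:                   # no change requests: accept
            return prototype
        prototype = build_prototype(feedback)  # revise and enhance
    raise RuntimeError("prototype not accepted within iteration budget")
```

The iteration cap reflects the risk noted below: without a budget, users may keep treating successive prototypes as the delivery system.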

Prototyping and RAD can facilitate interaction between the users, system analysts, and the IT
auditor. These techniques can be applied to production report development, a specific application
module, or the entire support system. Some advantages of prototyping and RAD include:

◾◾ Prototypes can be viewed and analyzed before commitment of large funding for systems.
◾◾ User approval and final satisfaction are enhanced because of increased participation in the
design of the project.
◾◾ The cost of modifying systems is reduced because users and designers can foresee problems
earlier and are able to respond to the users’ rapidly changing business environment.
◾◾ A rudimentary prototype can be redesigned and enhanced many times before the final form
is accepted.
◾◾ Many systems are designed “from scratch” and no current system exists to serve as a guide.

On the other hand, because prototypes appear to be final when presented to the users, program-
mers may not be given adequate time to complete the system and implement the prototype as the
final product. Often the user will attempt to use the prototype instead of the full delivery system.
The user must understand that the prototype is not a completed system. Risks associated with
prototyping and RAD include:

◾◾ Incomplete system design
◾◾ Inefficient processing performance
◾◾ Inadequate application controls
◾◾ Inadequate documentation
◾◾ Ineffective implementations

Lean Software Development


Lean Software Development (LSD) is a translation of lean manufacturing and lean IT principles
and practices to the software development domain. LSD is a type of agile approach that can be
summarized by seven key principles, which are very close in concept to lean manufacturing prin-
ciples. They are:

1. Eliminate waste—identify what creates value to the customer
2. Build quality in—integrate quality in the process; prevent defects
3. Create knowledge—investigate and correct errors as they occur; challenge and improve
standards; learn from mistakes
4. Defer commitment—learn constantly; perform only when needed, and perform fast
5. Deliver as fast as possible—deliver value to customer quickly; high quality and low cost
6. Empower team and respect people—engage everyone; build integrity; provide stable environment
7. Optimize the whole—deliver complete product; monitor for quality; continuous improvement

Refer to Exhibit 8.9 for an illustration of the LSD approach.

End-User Development
End-user development (EUD) (also known as end-user computing) refers to applications that are
created, operated, and maintained by people who are not professional software developers (end
users). Many factors have led end users to build their own systems. First, and probably
foremost, is the shift in technology toward personal computers (PCs) and later-generation
programming languages (e.g., fourth-generation languages [4GL], 5GL, etc.). This shift has
been due, in part, to the declining hardware and software costs that have enabled individuals to
own computers. Because of this, individuals have become more computer literate. At the same
time, users are frustrated with the length of time that it takes for traditional systems development
efforts to be completed. Fourth-generation programming languages, for example, have provided
users with the tools to create their own applications. Examples of such tools include:

[Figure: the seven LSD principles arranged in a cycle: 1. Eliminate waste → 2. Build quality → 3. Create knowledge → 4. Defer commitment → 5. Deliver fast → 6. Empower team → 7. Optimize whole.]

Exhibit 8.9  Lean software development.


◾◾ Mainframe-based query tools that enable end users to develop and maintain reports. This
includes fourth-generation languages such as EZ-TRIEVE and SAS or programmer-developed
report generation applications using query languages.
◾◾ Vendor packages that automate a generic business process. This includes accounting pack-
ages for generating financial statements and legal packages for case management.
◾◾ EUD applications using PC-based tools, databases, or spreadsheets to fulfill a department or
individual information processing need.

Because PCs seem relatively simple and are perceived as personal productivity tools, their effect on
an organization has largely been ignored. In many organizations, EUD applications have limited
or no formal procedures. End users may not have the background knowledge to develop applica-
tions with adequate controls or maintainability. This becomes an issue when organizations rely on
user-developed systems for day-to-day operations and important decision making. Simultaneously,
end-user systems are becoming more complex and are distributed across platforms and organiza-
tional boundaries. Some of the risks associated with EUD applications include the following.

◾◾ Higher organizational costs
◾◾ Incompatible systems
◾◾ Redundant systems
◾◾ Ineffective implementations
◾◾ Absence of segregation of duties
◾◾ Incomplete system analysis
◾◾ Unauthorized access to data or programs
◾◾ Copyright violations
◾◾ Destruction of information by computer viruses
◾◾ Lack of back-up and recovery options

Exhibit 8.10 summarizes the EUD approach to system development.

End-user development (EUD) promotes a culture of user involvement and participation. EUD systems are created, operated, and maintained by people who are not professional software developers (i.e., end users). EUD applications often have limited or no formal procedures, with risks resulting from end users lacking the background knowledge to develop applications with adequate controls and maintainability.

Exhibit 8.10  End-user development.


IT Auditor’s Involvement in System Development and Implementation
IT auditors can assist organizations by reviewing their systems development and implementation
(SD&I) projects to ensure that new systems comply with the organization’s strategy and standards.
Each SD&I project will need to be risk assessed to determine the level of audit’s involvement. The
type of review will also vary depending on the risks of a particular project. IT auditors may only be
involved in key areas or the entire SD&I project. In any case, IT auditors need to understand the
process and application controls to add value and ensure adequate controls are built into the system.
SD&I audits are performed to evaluate the administrative controls over the authorization,
development, and implementation of new systems (i.e., applications), and to review the design of
the controls/audit trails of the proposed system. The scope of an SD&I audit includes an evaluation
of the overall SDLC approach or methodology. The audit also focuses on the evaluation of the
quality of the deliverables from each system development phase (e.g., evaluation of the controls
design and audit trails, system test plan and results, user training, system documentation, etc.).
Recommendations from SD&I audits might include improvements in user requirements, applica-
tion controls, or the need to document test plans and expected test results.
Developing and implementing new systems can be a costly and time-consuming endeavor. A
well-controlled environment with an overall strategy, standards, policies, and procedures helps
ensure the success of development efforts. There are many processes that need to be well controlled
to ensure the overall success of a system. Because of the significant cost to implement controls after
a system has already gone into production, controls should be defined before a system is built.
There are many opportunities for auditor involvement in the SD&I process. IT auditors need
to develop the skills and relationships to work with the SD&I team to ensure that controls are
built into the system. IT auditors can assist organizations by:

◾◾ reviewing the SD&I environment
◾◾ evaluating standards for SD&I
◾◾ evaluating phases in the SD&I process
◾◾ reviewing critical systems for input, processing, and output
◾◾ verifying that the new system provides an adequate audit trail

The IT auditor’s role in an SD&I project depends on the organization’s culture, maturity of the IS
function, and philosophy of the auditing department. Auditing SD&I requires specific knowledge
about the process (i.e., development and implementation) and application controls. Understanding
the process allows the auditor to identify key areas that would benefit from independent verifica-
tion. Understanding application controls allows the auditor to evaluate and recommend controls
to ensure complete and accurate transaction processing.
IT auditors can take on two different roles in an SD&I project: control consultant or indepen-
dent reviewer.

◾◾ As a control consultant, the auditor becomes a member of the SD&I team and works with
analysts and programmers to design application controls. In this role, the auditor is no
longer independent of the SD&I team.
◾◾ As an independent reviewer, the auditor has no design responsibilities and does not report
to the team, but can provide recommendations to be acted on or not by the project/system
manager.

By becoming involved at strategic points, the auditor can ensure that a system is well controlled
and auditable. The following highlights some of the key responsibilities for the auditor when
involved in an SD&I project:

1. Review user requirements
2. Review manual and application controls
3. Check all technical specifications for compliance with company standards
4. Perform design walk-throughs at the end of each development phase
5. Submit written recommendations for approval after each walk-through
6. Ensure implementation of recommendations before beginning the next phase
7. Review test plans
8. Present findings to management
9. Maintain independence to remain objective

These can help minimize control weaknesses and problems before the system is implemented in
production and becomes operational rather than after it is in use.
IT auditors determine their level of involvement in an SD&I audit by completing a risk assessment
of the SD&I process. Results from the risk assessment also drive the amount of time to allocate
to the particular project, the required resources, and so on. Preparation of an audit plan
follows. The plan describes the audit objectives and procedures to be performed in each phase of
the SD&I process. IT auditors communicate not only the scope of their involvement, but also the
findings and recommendations resulting from the audit, to development personnel, users, and
management.

Risk Assessment
IT auditors may not have enough time to be involved in all phases of every SD&I project.
Involvement will depend on the assessment of process and application risks. Process risks may
include a negative organizational climate, as well as a lack of strategic direction, development
standards, or a formal systems development process. Application risks, on the other hand,
relate to application complexity and magnitude; inexperienced staff; lack of end-user involvement;
and lack of management commitment.
The level of risk may be a function of the need for timely information, complexity of the appli-
cation, degree of reliance for important decisions, length of time the application will be used, and
the number of people who will use it.
The risk assessment defines which aspects of a particular system or application are covered
by the audit. Depending on the risk, the scope of the assessment may include evaluating system
requirements, as well as reviewing design and testing deliverables, application controls, operational
controls, security, problem management, change controls, or the post-implementation phase.
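One simple way to turn such risk factors into a scoping decision is a weighted risk score. The factors, weights, and thresholds below are illustrative assumptions, not a prescribed model:

```python
# Hypothetical weighted risk-scoring sketch for deciding audit involvement.
# Each factor is rated 1 (low) to 5 (high); weights are illustrative.
WEIGHTS = {
    "application_complexity": 0.30,
    "decision_reliance": 0.25,
    "staff_inexperience": 0.20,
    "user_involvement_gaps": 0.15,
    "expected_lifetime": 0.10,
}

def risk_score(ratings: dict) -> float:
    """Weighted average of factor ratings, on the same 1-5 scale."""
    return sum(WEIGHTS[f] * ratings[f] for f in WEIGHTS)

def involvement_level(score: float) -> str:
    """Map a score to the breadth of audit involvement."""
    if score >= 4.0:
        return "full SD&I review"
    if score >= 2.5:
        return "key-phase review"
    return "post-implementation review only"

project = {
    "application_complexity": 5,
    "decision_reliance": 4,
    "staff_inexperience": 3,
    "user_involvement_gaps": 2,
    "expected_lifetime": 4,
}
print(involvement_level(risk_score(project)))  # score 3.8 -> "key-phase review"
```

In practice the factors and cutoffs would come from the audit department’s own methodology; the value of the exercise is making the scoping decision explicit and repeatable.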

Audit Plan
IT auditors may be involved in the planning process of an SD&I project in order to: develop an
understanding of the proposed system; ensure time is built into the schedule to define controls;
and verify that all the right people are involved. The audit plan will also detail the steps and pro-
cedures to fulfill the audit objectives. As in any audit, an SD&I audit begins with a preliminary
analysis of the control environment by reviewing existing standards, policies, and procedures.
During the audit, these standards, policies, and procedures should be assessed for completeness
and operational efficiency. The preliminary analysis should identify the organization’s strategy and
the responsibilities for managing and controlling applications.
The audit plan will further document the necessary procedures to review the SD&I process to
ensure that the system is designed consistent with user requirements, that management approves
such design, and that the system or application is adequately tested before implementation. An
additional focus of the audit plan is making certain that the end user is able to use the system
based on a combination of skills and supporting documentation.
An SD&I audit assesses the adequacy of the control environment for developing effective
systems, to provide reasonable assurance that the following tasks are performed:

◾◾ Comply with standards, policies, and procedures
◾◾ Achieve efficient and economical operations
◾◾ Conform systems to legal requirements
◾◾ Include the necessary controls to protect against loss or serious errors
◾◾ Provide controls and audit trails needed for management, auditor, and for operational review
purposes
◾◾ Document an understanding of the system (also required for appropriate maintenance and
auditing)

For any kind of a partnership involving IT auditors, users, and IS management, it is important that
the organization plans for and establishes a formal procedure for the development and implementa-
tion of a system. Auditor influence is significantly increased when there are formal procedures and
required guidelines identifying each phase and project deliverable in the SDLC and the extent of
auditor involvement. Without formal SDLC procedures, the auditor’s job is much more difficult and
recommendations may not be as readily accepted. Formal procedures in place allow IT auditors to:

◾◾ Review all relevant areas and phases of the SDLC
◾◾ Identify any missing areas for the development team
◾◾ Report independently to management on the adherence to planned objectives and procedures
◾◾ Identify selected parts of the system and become involved in the technical aspects based on
their skills and abilities
◾◾ Provide an evaluation of the methods and techniques applied in the SD&I process, as
defined earlier

The audit plan must also document the auditor’s activities and responsibilities (tasks) to be per-
formed within each remaining SD&I phase (i.e., System Analysis and Requirements, System
Design, Development, Testing, Implementation, and Operations and Maintenance). These are
described below.

Auditor Task: System Analysis and Requirements


The project team typically expends considerable effort toward the analysis of the business problem
and what the system is to produce without initially attempting to develop the design of the system.
The IT auditor should observe that the primary responsibility is not to develop a product but to
satisfy the user. Often, the user does not understand what is truly needed. Only by understanding
the user’s business, its problems, goals, constraints, weaknesses, and strengths can the project team
deliver the product the user needs. IT auditors can participate by reviewing requirements, and
verifying user understanding and sign-off. A good checkpoint for IT auditors is to ensure that in
the System Analysis and Requirements phase, defining security requirements is included. The IT
auditor should identify and document security requirements early in the development life cycle,
and make sure that subsequent development artifacts are evaluated for compliance with those
requirements. When security requirements are not defined, the security of the resulting system
cannot be effectively evaluated, and retrofitting security into the system later in its life cycle can be
extremely costly.
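A lightweight way to make that early definition checkable is a traceability matrix that flags security requirements no development artifact addresses. The requirement IDs and artifact names below are hypothetical:

```python
# Hypothetical traceability sketch: security requirements defined during
# analysis, mapped to the design/test artifacts that claim to address them.
security_requirements = {
    "SEC-01": "role-based access control",
    "SEC-02": "audit logging",
    "SEC-03": "input validation",
}

artifact_coverage = {
    "design-spec-v2": ["SEC-01", "SEC-03"],
    "test-plan-v1": ["SEC-01"],
}

def uncovered(requirements, coverage):
    """Requirements no artifact addresses -- candidate audit findings."""
    covered = {req for reqs in coverage.values() for req in reqs}
    return sorted(set(requirements) - covered)

print(uncovered(security_requirements, artifact_coverage))  # ['SEC-02']
```

Reviewing such a matrix at each phase gate is one way the auditor can verify that subsequent artifacts are evaluated against the requirements defined up front.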

Auditor Task: System Design


The IT auditor may review the design work for any possible exposures or forgotten controls, as
well as for adherence with company standards, policies, and procedures. Standards, policies, and
procedures should be documented as part of the SDLC methodology and defined before the
beginning of the project. If exposures, missing controls, and/or lack of compliance are identified,
the IT auditor should recommend the appropriate controls or procedures.
As seen earlier, JAD is a methodology or technique that brings users and project team members
together for an intensive workshop in which they develop a system proposal into a detailed design.
Usually a trained JAD facilitator, having some claim to neutrality, takes the group through
structured discussions or sessions about the system. The IT auditor may be an active participant in
this process. The result of the JAD session is a user view of the system for further development.
This is an excellent setting for the discussion of the advantages and cost effectiveness of controls.
In addition, analysis time is compressed, discrepancies resolved, specification errors reduced, and
communications greatly enhanced. IT auditors can review deliverables and recommend applica-
tion controls. Application controls are discussed in more detail in a later chapter.

Auditor Task: Development


The IT auditor may review the new system’s programs to verify compliance with programming and
coding standards. These standards help ensure that the code is well-structured, tracks dependen-
cies, and makes maintenance easier. The IT auditor may review a sample of programs to verify that
the standards are being followed and that the programs conform to systems design. In addition,
programs may be checked for possible control exposures and for the placement of proper controls
per design. If it is determined that controls are needed, the IT auditor should make recommenda-
tions, following the same criteria that were used during the System Design phase. During this
Development phase, however, cost and time factors must be carefully considered because the cost
of changing programs to include controls increases as the project progresses.
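Parts of such a standards review can be automated. As a sketch (the specific rules here are assumptions; every organization’s programming standards differ), a reviewer might scan sampled source text and flag files missing a required header comment or containing over-long lines:

```python
# Illustrative standards check: flag source text that lacks a required
# header comment or contains lines over a set length. Rules are hypothetical.
MAX_LINE = 80

def standards_violations(source: str) -> list:
    """Return a list of standards violations found in the source text."""
    violations = []
    lines = source.splitlines()
    if not lines or not lines[0].lstrip().startswith("#"):
        violations.append("missing header comment")
    for n, line in enumerate(lines, start=1):
        if len(line) > MAX_LINE:
            violations.append(f"line {n} exceeds {MAX_LINE} characters")
    return violations

sample = "def f(x):\n    return x * 2\n"
print(standards_violations(sample))  # ['missing header comment']
```

Automated checks like this only cover mechanical standards; verifying that programs conform to the systems design still requires manual review.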

Auditor Task: Testing


The IT auditor may be called on to assure management that both developers and users have
thoroughly tested the system to ensure that it:

◾◾ possesses the built-in controls necessary to provide reasonable assurance of proper operation;
◾◾ provides the capability to track events through the systems and, thus, supports audit review
of the system in operation; and
◾◾ meets the needs of the user and management.

If the level of testing does not meet standards, the IT auditor must notify the development team
or management who should then take corrective action.

Auditor Task: Implementation


System implementation is a key IT audit review point because implementation is often where
critical controls may be overwritten or deactivated to bring the system up and operational to meet
organizational needs and requirements. The IT auditor should review implementation materials
related to strategy, communication, training, documentation, and conversion procedures, among
others. Production readiness should also be reviewed, which may include evaluating the readiness
of the system in relation to the results of testing, the readiness of production support program-
mers, computer operations, and users in terms of training, and the readiness of the help desk with
trained staff and a problem-tracking process.
Once the system is implemented in production, the IT auditor may survey users to: evaluate
its effectiveness from a workflow perspective; review error detection and correction procedures
to confirm they are working as intended; and perform tests of data to confirm completeness of
transaction processing and audit trail.
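The completeness test mentioned above can be sketched as a reconciliation between submitted transactions and the audit trail the system wrote. The data structures here are assumed for illustration:

```python
# Illustrative completeness check: every submitted transaction ID should
# appear in the system's audit trail.
submitted = ["T001", "T002", "T003", "T004"]
audit_trail = [
    {"txn": "T001", "event": "posted"},
    {"txn": "T002", "event": "posted"},
    {"txn": "T004", "event": "posted"},
]

def completeness_exceptions(inputs, trail):
    """Transactions missing from the trail -- candidates for an audit finding."""
    logged = {entry["txn"] for entry in trail}
    return [t for t in inputs if t not in logged]

print(completeness_exceptions(submitted, audit_trail))  # ['T003']
```

A gap such as T003 above is exactly the kind of exception the post-implementation test of data is meant to surface.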

Auditor Task: Operations and Maintenance


In this phase, the IT auditor evaluates post-implementation processes and procedures. For instance,
code modifications and testing procedures should be assessed in order to determine whether the
organization’s standards, policies, and/or procedures are being followed. The IT auditor also con-
ducts procedures to ensure systems are well maintained; that is, programmers correct problems or
make necessary enhancements in a timely and adequate fashion. When maintaining application
systems, the IT auditor must ensure that corrections of problems and/or installations of enhancements
are both worked in a separate test environment and, upon successful results, promoted to
the production environment. There are other common metrics that should be reviewed by the IT
auditor to evaluate the effectiveness and efficiency of the maintenance process:

◾◾ The ratio of actual maintenance cost per application versus the average of all applications.
◾◾ Average time to deliver change requests.
◾◾ The number of change requests for the application that were related to bugs, critical errors,
and new functional specifications.
◾◾ The number of production problems per application and per respective maintenance changes.
◾◾ The number of divergences from standard procedures, such as undocumented applications,
unapproved design, and testing reductions.
◾◾ The number of modules returned to development due to errors discovered in acceptance
testing.
◾◾ Time elapsed to analyze and fix problems.
◾◾ Percent of application software effectively documented for maintenance.
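Several of these metrics can be derived directly from change-request records. A small sketch, with hypothetical record fields:

```python
# Hypothetical change-request log for one application.
requests = [
    {"type": "bug", "days_to_deliver": 3},
    {"type": "enhancement", "days_to_deliver": 12},
    {"type": "bug", "days_to_deliver": 5},
    {"type": "critical", "days_to_deliver": 1},
]

def avg_delivery_days(log):
    """Average time to deliver change requests."""
    return sum(r["days_to_deliver"] for r in log) / len(log)

def count_by_type(log):
    """Number of change requests related to bugs, critical errors, etc."""
    counts = {}
    for r in log:
        counts[r["type"]] = counts.get(r["type"], 0) + 1
    return counts

print(avg_delivery_days(requests))  # 5.25
print(count_by_type(requests))      # {'bug': 2, 'enhancement': 1, 'critical': 1}
```

Comparing such figures across applications, or against the portfolio average, is what turns raw maintenance records into the ratios listed above.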

Another relevant procedure performed during this phase is the review and assessment of all related
system, user, or operating documentation. Such documentation should be evaluated for complete-
ness and accuracy. Documentation should be practical and easily understandable by all user types
(e.g., end users, programmers, senior management, etc.). Diagrams of information flow, samples
of possible input documents/screens, and output reports are some examples of information that
enhances user understanding of the system and, therefore, should be documented.

Exhibit 8.11 illustrates a template of a standard audit checklist that can be used as a starting
point when assessing the SDLC phases for a project. The checklist is based on the ISO/IEC 12207
standard, Systems and Software Engineering: Software Life Cycle Processes, which provides
guidance for defining, developing, controlling, improving, and maintaining system/software
life cycle processes. The standard can also be adapted according to the particular system/software
project.

Exhibit 8.11  Sample SDLC Audit Checklist


Sample SDLC Audit Checklist: Development and Implementation of a Financial Application
System

Task | Yes, No, N/A | Comments

Phase 1: Planning
1. Establish and prepare an overall project plan with defined tasks and deliverables.
2. Plan includes scope of the work (e.g., period, name of the new system, schedule, restrictions, etc.), necessary resources, and required deadlines.
3. Plan describes the extent of the responsibilities of all involved personnel (e.g., management, internal audit, end users, quality assurance (QA), etc.).
4. Plan identifies equipment requirements, such as the hardware configuration needed to use the new system (e.g., processing speed, storage space, transmission media, etc.).
5. Plan includes detailed financial analyses, including costs to develop and operate the new system and the return on investment.
6. Plan is reviewed and approved at appropriate levels.

Phase 2: System Analysis and Requirements
1. Analysis includes a study to determine whether the new system should be developed or purchased.
2. Analysis includes a study of the current system to identify existing processes and procedures that may continue in the new system.
3. Procedures for performing a needs analysis are appropriately assessed, and conform to the organization’s standards, policies, and/or procedures.
4. Expectations from end users and system analysts for the new or modified system are clearly translated into requirements.

5. Requirements of the new software/system are defined in terms that can be measured.
6. Requirements are quantifiable, measurable, relevant, and detailed.

Phase 3: System Design
1. Design describes the proposed system flow and other information on how the new system will operate (i.e., conceptual design).
2. System design specifications, features, and operations meet the requirements previously defined.
3. System design specifications are approved and comply with the organization’s standards, policies, and/or procedures.
4. The systems analyst defines and documents all system interfaces, reporting, screen layouts, and specific program logic necessary to build the system consistent with the requirements.
5. The systems analyst and end users review and ensure that the specific business needs have been translated into the final requirements for the new system.
6. Technical details of the proposed system (e.g., hardware and/or software needed, networking capabilities, procedures for the system to accomplish its objectives, etc.) have been discussed with appropriate stakeholders.
7. The design of the system describes controls for input points, processing, and screen layout/output.
8. The design of the system incorporates audit trails and programmed controls.

Phase 4: Development
1. Obtain and review the source/program code for the new or modified system or application.
2. Determine whether the source/program code meets the organization’s programming standards, policies, and/or procedures.
3. The source/program code is tested for both syntax and logic flow.

4. All logic paths within the source/program code are exercised to ensure error routines work and the program terminates processing normally.
5. Logical security access controls are configured and incorporated within the source/program code.
6. Security controls configured are designed to address requirements related to the confidentiality, integrity, and availability of information.
7. Security controls configured are designed to address authorization and authentication processes, as well as business access requirements and monitoring.
8. The source/program code is validated through individual unit testing (thorough system testing is assessed in the Testing phase). The Development phase is final once the source/program code is validated.

Phase 5: Testing
1. A plan for system testing is prepared that conforms with the organization’s standards, policies, and/or procedures.
2. The testing plan defines: individual test events and scenarios; roles and responsibilities of the test participants; test environments; acceptance criteria; testing logistics; problem reporting and tracking; test deliverables; and personnel responsible for review and approval of tests and test results.
3. Testing is based on existing testing methodologies established by the organization.
4. Tests include (real or generated) test data that are representative of relevant business scenarios.
5. Test scenarios, associated data, and expected results are documented for every test condition.
6. Testing performed is documented to prevent duplicate testing efforts.
7. End users perform the testing, not developers or programmers.
8. Testing is performed in separate/development environments, not in production environments.
9. Test results are signed off and approved, as applicable, to support
that appropriate testing was performed and ensure such testing
results are consistent with the requirements.
10. Systems with unsuccessful test results are not implemented in the
production environment.
11. Upon successful test results, management personnel sign-off and
approve promotion of new or modified systems into production
environments.
12. The system is assessed by a QA professional to verify that it works
as intended and that it meets all design specifications.
13. Documented testing procedures, test data, and resulting outputs
are reviewed to determine if they are comprehensive and follow the
organization’s standards, policies, and/or procedures.
14. System testing results validate that the system works as
expected and that all errors, flaws, failures or faults identified
have been corrected and do not prevent the system from operating
effectively.

Phase 6: Implementation
1. An implementation plan is documented and put in place to guide
the implementation team and users throughout the
implementation process.
2. The implementation plan covers the implementation schedule,
conversion procedures (if any), resources required, roles and
responsibilities of team members, means of communication,
issue management procedures, and training plans for the
implementation process.
3. The implementation plan includes the date and time at which the
new system or changes to an existing one will be installed.
4. Documentation related to system implementation has been
released and communicated to the users.
5. System was successfully tested before its implementation into the
production environment consistent with organization standards,
policies, and procedures.
6. Documentation related to implementation provides programmers
with enough information to understand how the system works,
and to ensure effective and efficient analysis of changes and
troubleshooting.

7. Documentation related to implementation is updated as the system is modified.
8. Training procedures have been incorporated in the system implementation process.
9. Support (e.g., via a help desk, etc.) is provided to users following implementation.
10. Determine if the organization’s standards, policies, and/or procedures are followed and if documentation supporting compliance with the standards is available.

Phase 7: Operations and Maintenance
1. Review and evaluate the procedures for performing post-implementation reviews.
2. Review system modifications, testing procedures, and supporting documentation to determine if the organization’s standards, policies, and/or procedures have been followed.
3. A reporting system is established for users to communicate problems or enhancements needed.
4. Procedures are in place for programmers to correct problems with the new system or make necessary enhancements.
5. Corrections of problems or enhancements to the new system are worked in a separate/test environment and, upon successful results, promoted to the production environment.
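A checklist such as Exhibit 8.11 can also be tracked programmatically. As a minimal sketch (the phases, tasks, and statuses below are sample data), each entry carries a Yes/No/N/A status, and the open items are the ones an audit report would highlight:

```python
# Sample checklist entries: (phase, task, status), where status is
# "Yes", "No", or "N/A" -- mirroring the columns in the exhibit.
checklist = [
    ("Planning", "Overall project plan prepared", "Yes"),
    ("Planning", "Plan approved at appropriate levels", "No"),
    ("Testing", "End users perform the testing", "Yes"),
    ("Testing", "Testing done outside production", "N/A"),
]

def open_items(entries):
    """Tasks answered 'No' -- the items an audit report would raise."""
    return [(phase, task) for phase, task, status in entries if status == "No"]

print(open_items(checklist))
# [('Planning', 'Plan approved at appropriate levels')]
```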

Communication
The first area to communicate is the IT auditor’s scope of involvement in the SD&I project. It is
very important to make sure that the management and development teams’ expectations of the IT
auditor’s role are understood and communicated to all participants. To influence the SD&I effort,
the IT auditor must develop an open line of communication with both management and users. If
a good relationship between these groups does not exist, information might be withheld from the
IT auditor. This type of situation could prevent the IT auditor from doing the best job possible.
In addition, the IT auditor must develop a good working relationship with analysts and program-
mers. Although the IT auditor should cultivate good working relationships with all groups with
design responsibilities, the IT auditor must remain independent.
Throughout the SD&I project, the IT auditor will be making control recommendations result-
ing from identified findings. Depending on the organization’s culture, these recommendations
may need to be handled informally by reviewing designs with the project team or formally by
presenting them to the steering committee. In either case, the IT auditor must always con-
sider the value of the control recommendation versus the cost of implementing the control.
Recommendations should be specific. They should identify the problem and not the symp-
tom, and allow for the proper control(s) to be implemented and tested. Findings, risks as a
result of those findings, and audit recommendations are usually documented in a formal letter
(i.e., Management Letter). Refer to Exhibit 3.9 in Chapter 3 for an example of the format of a
Management Letter from an IT audit.
On receipt of the Management Letter, IT management and affected staff should review the
document. Issues and matters not already completed should be handled and followed up. Within
a relatively short time, the fact that all discrepancies have been corrected should be transmitted to
the audit staff in a formal manner. These actions are noted in the audit files, and such cooperation
reflects favorably in future audits.
Recommendations may nevertheless be rejected because of time and cost factors. Managers
may feel that implementing an auditor’s recommendations will delay their schedule.
The IT auditor must convince management of the value of the recommendations, and that if they
are not implemented, more time and money will be spent in the long run. Informing management
of the cost of implementing a control now, rather than shutting down the system later (leaving
potential exposures open), will help convince management of the need to take appropriate and
immediate action.

Conclusion
Developing new systems can be a costly and time-consuming endeavor. A well-controlled
environment with an overall strategy, standards, policies, and procedures in place helps ensure
the success of system development and implementation efforts. There are many processes that
need to be followed to ensure the overall success of a system. These processes or phases are
provided by a SDLC. The SDLC provides a framework for effectively developing applica-
tion systems. It specifically describes a standard process for planning, analyzing, designing,
creating, testing, deploying, and maintaining information systems (i.e., new development or
modified system).
Risks related to the SDLC phases should constantly be assessed by the organization. These
risks are significant to both the organization and the IT auditor and should prompt for the iden-
tification (and implementation) of controls that can mitigate them. Because of the cost to imple-
ment controls after a system has already been implemented into production, controls should be
defined before a system is built.
There are various approaches applicable to system development. Although each approach is
unique, they all have similar steps that must be completed. For example, each approach will
have to define user requirements, design programs to fulfill those requirements, verify that pro-
grams work as intended, and implement the system. IT auditors need to understand the different
approaches, the risks associated with the particular approach, and help ensure that all the neces-
sary procedures and controls are included in the development process.
There are many opportunities for auditor involvement in the SD&I process. IT auditors
can assist organizations by reviewing the SD&I environment; evaluating standards for SD&I;
monitoring project progress; evaluating phases in the SD&I process; reviewing critical systems
for input, processing, and output; verifying that the new system provides an adequate audit
trail; and by ensuring that risks are identified and proper controls are considered during the
implementation process.
SD&I audits, for example, are performed to evaluate the administrative controls over the
authorization, development, and implementation of new systems (i.e., applications), and to review
the design of the controls and audit trails of the proposed system. The scope of an SD&I audit includes
an evaluation of the overall SDLC approach or methodology. The audit also focuses on the evalu-
ation of the quality of the deliverables from each system development phase (e.g., evaluation of the
controls design and audit trails, system test plan and results, user training, system documentation,
etc.). Recommendations from SD&I audits might include improvements in user requirements,
application controls, or the need to document test plans and expected test results.

Review Questions
1. How does a system development life cycle (SDLC) provide an environment that is conducive
to successful systems development?
2. Describe the purpose of test data.
3. Explain what conversion procedures refer to as part of implementing a new system.
4. Why should disaster recovery plans be addressed during an implementation as opposed to
after?
5. Why is a help desk function critical to system development? Discuss its interrelationship
with the problem management and reporting system.
6. Why is it necessary for programmers to have good documentation as part of the operations
and maintenance phase of the SDLC?
7. Discuss how the IT auditor can benefit an organization’s system development and imple-
mentation process.
8. Differentiate between the two roles IT auditors can take on in an SD&I project.
9. What methodology or technique is used to bring users and project team members together
to create a detailed design?
10. Throughout the system development and implementation project, the IT auditor will make
control recommendations to management resulting from identified findings. Explain why
recommendations from IT auditors may often be rejected.

Exercises
1. Summarize the common phases in the traditional system development life cycle (SDLC)
approach.
2. A company is developing a new system. As the internal IT auditor, you recommend that
planning for the new system development should be consistent with the SDLC framework.
IT personnel have identified the following as major activities to be completed within the
upcoming system development.
– Ensure a help desk is in place to provide support
– Integrate security access controls within the code
– Correct problems and implement enhancements
