Lecture Notes 1-14
Spring Semester-2021
CONTENTS
Aim Of Lesson
Software engineering is about imagination and creativity. The process of creating something
apparently tangible from nothing. Software engineering methods have not yet been completely
analyzed and systematized. Software engineering is about the creation of large pieces of
software that consist of thousands of lines of code and involve many person months of human
effort.
Books, Videos and Online Course Documents
1. Software Engineering, Ian Sommerville, 9th edition, Addison-Wesley
2. Software Engineering For Students, Douglas Bell, 4th edition, Addison-Wesley
3. Object-Oriented Software Engineering, Timothy C. Lethbridge and Robert Laganière,
2nd edition, McGraw-Hill
Evaluations
1 Mid-term, 1 Final Exam and 1 Project
WEEK 1: DEFINITION AND IMPORTANCE of SOFTWARE ENGINEERING
➢ What is software?
❖ Software includes all methods, tools, information and documents that can be used to
combine and manage logic, data, documents, human and program components
for a specific production purpose [1, 2].
❖ Software can be examined in two groups: general software and special software
developed for a specific customer [3].
➢ Software Deteriorates
Very large projects: ~10 years, thousands of programmers, ~10M LOC. Examples: air
traffic control, telecommunications, the space shuttle.
The term Software Engineering first emerged in 1968 at the NATO Software Engineering
Conference in Germany; it emerged with the evolution of the computer science
discipline [4].
Software Engineering has been described in many ways, some of which are as follows:
➢ Software Engineer
A software engineer is the person who practices software engineering; the job cannot be
done without formal training. A software engineer is not just a coder, but the person who
best knows how to express user requests to the computer. The work is mostly concerned
with people and deals with the logical dimension of software. Today, software engineering
has become a profession with its own schools.
Computer software is now everywhere in our lives, and this shapes the goal of Software
Engineering.
Errors in software production propagate, so error correction costs increase steadily in the
later stages. The main goal is to carry out production at the lowest cost and the highest
quality; consequently, hardware cost is now insignificant next to the cost of the software.
Table 1 shows relative error correction costs in software production [2].
Table 1. Relative error correction cost by phase.
Phase              Relative cost
Analysis             1
Design               5
Coding              10
Test                25
Acceptance Test     50
Operating          100
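The cost escalation in Table 1 can be read as a multiplier; a minimal sketch (the phase names and relative costs come from the table above, the helper function itself is illustrative):

```python
# Relative cost of fixing a defect, by the phase in which it is found (Table 1).
ERROR_COST = {
    "Analysis": 1,
    "Design": 5,
    "Coding": 10,
    "Test": 25,
    "Acceptance Test": 50,
    "Operating": 100,
}

def cost_ratio(found_in: str, could_have_been_found_in: str) -> float:
    """How many times more expensive a fix is when a defect slips to a later phase."""
    return ERROR_COST[found_in] / ERROR_COST[could_have_been_found_in]

# A defect that could have been caught in analysis but only surfaces in
# operation costs 100 times as much to fix.
print(cost_ratio("Operating", "Analysis"))  # 100.0
```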
The objectives of software quality assurance activities can be summarized as follows [2]:
➢ Maintainability
➢ Efficiency
➢ Acceptability
o Acceptable to the type of users for which it is designed
Management, customers and analysts are the most important stakeholders in the realization of
the software.
Management: according to an old belief, a good manager can manage any project. This is
not true for today's projects: in a constantly evolving world, a good manager must also keep
up with the latest technology.
The customer is the person who directs the project to be developed and determines the
qualities of the project in line with his or her own wishes. Before software development starts,
the customer has a major role in ensuring the subject is thoroughly understood and analyzed.
The developer should analyze the subject very well before coding begins, design it with all
the details, and only then start coding.
Case of roles:
➢ Why is software development difficult?
o The “Entropy” of a software system increases with each change: each implemented
change erodes the structure of the system, which makes the next change even more
expensive (“Second Law of Software Dynamics”).
o As time goes on, the cost to implement a change will be too high, and the system
will then be unable to support its intended task. This is true of all systems,
independent of their application domain or technological base.
1. Abstraction
1. Task Model:
o PERT Chart: What are the dependencies between tasks?
o Schedule: How can this be done within the time limit?
o Organization Chart: What are the roles in the project?
2. Issues Model:
o What are the open and closed issues?
o What constraints were imposed by the client?
o What resolutions were made?
2. Decomposition
3. Hierarchy
➢ Software Classification
It is possible to classify software into a number of classes according to development and
design.
✓ General Software
Produced to be sold to many different customers (Commercial Off-The-Shelf, COTS).
✓ Special Software
✓ System Software
It is the software that is loaded every time the computer is turned on and makes the computer
ready for use. On PCs, the BIOS program performs this task: it is loaded into RAM when the
computer is started and remains in memory until the computer is turned off.
✓ Application Software
Usually, these are all programs other than the system software: programs written to solve a
specific problem using appropriate data processing techniques. Unlike system and support
software, application software is written for a single, specific application. Examples include
Microsoft Excel and Microsoft Word.
✓ Support Software
They are general-purpose computer programs that are not specific to any application and allow
certain common operations to be performed, such as software that handles sorting, copying,
and formatting.
REFERENCES
followed throughout the software development life cycle for project success.
o Physical design: the components that make up the software and their
details.
Life cycle data must be able to be updated, read, deleted, archived and, if necessary,
transferred with ownership rights. The main purpose of preparing life cycle data is to
identify and record the information required throughout the life cycle of the software
product.
Life cycle data of projects are usually stored in documents, so these documents are very
important for healthy project and product maintenance, because documents are the organized
media in which information is collected. The information should have the following
characteristics:
➢ Non-contradiction
➢ Completeness
➢ Verifiability
➢ Consistency
➢ Interchangeability
➢ Traceability
➢ Exhibitability
➢ Privacy
➢ Preservation
➢ Sensitivity
Using appropriate software development models plays a very important role in developing
software that is more secure, accurate, understandable, testable and maintainable.
➢ What is Process?
1. Waterfall Model
The waterfall model is a classical model used in the system development life cycle to create a
system with a linear and sequential approach. It is termed a waterfall because the model
flows systematically from one phase to another in a downward fashion. The model is
divided into phases, and the output of one phase is used as the input of the next
phase. Every phase has to be completed before the next phase starts, and there is no
overlapping of the phases.
Phases: Requirements → Analysis → Design → Coding → Testing → Deployment → Maintenance
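The strictly sequential hand-off can be sketched as a toy pipeline (the phase names come from the model above; the string "artifacts" are purely illustrative, not a real process):

```python
# Toy sketch of the waterfall hand-off: each phase consumes the previous
# phase's output and produces the input of the next. No phase overlaps another.
PHASES = ["Requirements", "Analysis", "Design", "Coding",
          "Testing", "Deployment", "Maintenance"]

def run_waterfall(initial_input: str) -> list:
    artifacts = []
    current = initial_input
    for phase in PHASES:
        current = f"{phase} output (from: {current})"  # phase completes fully...
        artifacts.append(current)                      # ...before the next starts
    return artifacts

for artifact in run_waterfall("customer needs"):
    print(artifact)
```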
2. Analysis: Read the requirements and, based on the analysis, define the schemas, models and
business rules.
4. Implementation - Coding: Development of the software in small units, with functional
testing.
5. Integration and Testing: Integration of each unit developed in the previous phase; after
integration, the entire system is tested for any faults.
6. Deployment of System: Make the product live in the production environment after all
functional and non-functional testing is completed.
7. Maintenance: Fix issues and release new versions with the issue patches as required.
2. Easy to manage as each phase has specific outputs and review process,
3. Clearly-defined stages,
4. Works well for smaller projects where requirements are very clear,
5. Process and output of each phase are clearly mentioned in the document.
➢ Disadvantages:
1. It doesn’t allow much reflection or revision. When the product is in the testing phase, it is
very difficult to go back and change something that was missed during the requirement
analysis phase.
5. Testing is done only at a later phase, so there is a chance that challenges and risks of
earlier phases are not identified in time.
2. V Model
The V-Model is also referred to as the Verification and Validation Model. In this model, each
phase of the SDLC (Software Development Life Cycle) must be completed before the next
phase starts. It follows a sequential design process, like the waterfall model, but testing of
the product is planned in parallel with the corresponding stage of development.
SDLC: SDLC is Software Development Life Cycle. It is the sequence of activities carried out
by Developers to design and develop high-quality software.
STLC: STLC is Software Testing Life Cycle. It consists of a series of activities carried out by
Testers methodologically to test your software product.
Figure 3. V model process steps.
Verification: It involves a static analysis method (review) done without executing code. It is
the process of evaluating the product development process to find out whether the specified
requirements are met.
The V-Model contains the Verification phases on one side and the Validation phases on the
other. The two branches are joined by the coding phase, giving the model its V shape and
hence its name.
Business requirement analysis: This is the first step, where product requirements are understood
from the customer's side. This phase involves detailed communication to understand the
customer's expectations and exact requirements.
System Design: In this stage system engineers analyze and interpret the business of the
proposed system by studying the user requirements document.
Architecture Design: The baseline in selecting the architecture is that it should realize all
the requirements. The architecture design typically consists of the list of modules, a brief
description of the functionality of each module, their interface relationships, dependencies,
database tables, architecture diagrams, technology details, etc. Integration test plans are
prepared in this phase.
Module Design: In the module design phase, the system is broken down into small modules.
The detailed design of the modules is specified; this is known as Low-Level Design (LLD).
Coding Phase: After designing, the coding phase is started. Based on the requirements, a
suitable programming language is decided. There are some guidelines and standards for
coding. Before checking in the repository, the final build is optimized for better performance,
and the code goes through many code reviews to check the performance.
Unit Testing: In the V-Model, Unit Test Plans (UTPs) are developed during the module
design phase. These UTPs are executed to eliminate errors at code level or unit level. A unit is
the smallest entity which can independently exist, e.g., a program module. Unit testing
verifies that the smallest entity can function correctly when isolated from the rest of the codes/
units.
Integration Testing: Integration Test Plans are developed during the Architectural Design
Phase. These tests verify that groups created and tested independently can coexist and
communicate among themselves.
System Testing: System Test Plans are developed during the System Design phase. Unlike Unit
and Integration Test Plans, System Test Plans are composed by the client's business team.
System testing ensures that the expectations from the developed application are met.
Acceptance Testing: Acceptance testing is related to the business requirement analysis part.
It includes testing the software product in the user's environment. Acceptance tests reveal
compatibility problems with the other systems available within the user's environment. They
also discover non-functional problems, such as load and performance defects, in the real user
environment.
➢ When to use V-Model?
• When the requirement is well defined and not ambiguous.
• The V-shaped model should be used for small to medium-sized projects where
requirements are clearly defined and fixed.
• The V-shaped model should be chosen when ample technical resources are available
with essential technical expertise.
➢ Advantage of V-Model:
• Easy to Understand.
• Testing Methods like planning, test designing happens well before coding.
• This saves a lot of time. Hence a higher chance of success over the waterfall model.
• Avoids the downward flow of the defects.
• Works well for small projects where requirements are easily understood.
➢ Disadvantage of V-Model:
• Very rigid and least flexible.
• Not a good fit for complex projects.
Software is developed during the implementation stage, so no early prototypes of the software
are produced.
3. Prototype Model
The prototype model requires that, before carrying out the development of the actual software, a
working prototype of the system should be built. A prototype is a toy implementation of the
system: usually a very crude version of the actual system, possibly exhibiting limited
functional capabilities, low reliability, and inefficient performance compared to the actual
software.
In many instances, the client only has a general view of what is expected from the software
product. In such a scenario where there is an absence of detailed information regarding the
input to the system, the processing needs, and the output requirement, the prototyping model
may be employed.
Figure 4. Prototyping model activity steps.
4. Spiral model
The spiral model, initially proposed by Boehm, is an evolutionary software process model that
couples the iterative feature of prototyping with the controlled and systematic aspects of the
linear sequential model. It provides the potential for rapid development of new versions of
the software. Using the spiral model, the software is developed in a series of incremental
releases. During the early iterations, the incremental release may be a paper model or prototype.
During later iterations, more and more complete versions of the engineered system are
produced.
Figure 5. Spiral model.
Objective setting: Each cycle in the spiral starts with the identification of the purpose for that
cycle, the various alternatives that are possible for achieving the targets, and the constraints
that exist.
Risk assessment and reduction: The next phase in the cycle is to evaluate these various
alternatives based on the goals and constraints. The focus of evaluation in this stage is the
risk perception for the project.
Development and validation: The next phase is to develop strategies that resolve
uncertainties and risks. This process may include activities such as benchmarking, simulation,
and prototyping.
Planning: Finally, the next step is planned. The project is reviewed, and a choice is made
whether to continue with a further cycle of the spiral. If it is decided to continue, plans are
drawn up for the next phase of the project.
The development phase depends on the remaining risks. For example, if performance or user-
interface risks are considered more significant than the program development risks, the next
phase may be an evolutionary development that includes developing a more detailed prototype
for resolving the risks.
The risk-driven feature of the spiral model allows it to accommodate any mixture of a
specification-oriented, prototype-oriented, simulation-oriented, or another type of approach.
An essential element of the model is that each cycle of the spiral is completed by a review
that covers all the products developed during that cycle, including plans for the next cycle.
The spiral model works for development as well as enhancement projects.
5. RAD Model
RAD (Rapid Application Development) is a concept holding that products can be developed
faster and with higher quality through:
6. Iterative Model
In this model, you can start with some of the software specifications and develop the first
version of the software. If, after the first version, there is a need to change the software, a
new version of the software is created in a new iteration. Each release of the iterative
model is completed in an exact, fixed period called an iteration.
The iterative model allows revisiting earlier phases, in which the variations are made
accordingly. The final output of the project is renewed at the end of the Software Development
Life Cycle (SDLC) process.
1. Requirement gathering & analysis: In this phase, requirements are gathered from
customers and checked by an analyst to see whether they can be fulfilled. The analyst also
checks whether the need can be met within budget. After all of this, the software team moves
to the next phase.
2. Design: In the design phase, the team designs the software using different diagrams, such
as data flow diagrams, activity diagrams, class diagrams, state transition diagrams, etc.
4. Testing: After completing the coding phase, software testing starts using different test
methods. There are many test methods, but the most common are white box, black box, and
grey box test methods.
5. Deployment: After completing all the phases, software is deployed to its work
environment.
6. Review: In this phase, after product deployment, a review is performed to check the
behaviour and validity of the developed product. If any errors are found, the process starts
again from requirement gathering.
7. Maintenance: In the maintenance phase, after deployment of the software in the working
environment, there may be bugs, errors, or required updates. Maintenance involves debugging
and adding new options.
7. Incremental Model
1. Requirement analysis: In the first phase of the incremental model, product analysis
experts identify the requirements, and the system's functional requirements are understood
by the requirement analysis team. This phase plays a crucial role in developing software
under the incremental model.
2. Design & Development: In this phase of the incremental model of the SDLC, the design of
the system functionality and the development method are carried out. Whenever the software
gains new functionality, the incremental model goes through the design and development phase
again.
3. Testing: In the incremental model, the testing phase checks the performance of each
existing function as well as additional functionality. In the testing phase, the various methods
are used to test the behavior of each task.
After each completion of this phase, the working functionality of the product is enhanced and
upgraded, step by step, up to the final system product.
➢ When we use the Incremental Model?
• When the major requirements are known in advance.
• A project has a lengthy development schedule.
• When the software team is not very skilled or well trained.
• When the customer demands a quick release of the product.
• You can develop prioritized requirements first.
Within the software development process there are many metrics, and they are all connected.
Software metrics map onto the four functions of management: planning, organization, control,
and improvement.
1. Product Metrics: These are the measures of various characteristics of the software product.
➢ Types of Metrics
Internal metrics: Internal metrics are the metrics used for measuring properties that are viewed
to be of greater importance to a software developer. For example, Lines of Code (LOC)
measure.
External metrics: External metrics are the metrics used for measuring properties that are
viewed to be of greater importance to the user, e.g., portability, reliability, functionality,
usability, etc.
Hybrid metrics: Hybrid metrics are the metrics that combine product, process, and resource
metrics. For example, cost per FP where FP stands for Function Point Metric.
Project metrics: Project metrics are the metrics used by the project manager to check the
project's progress. Data from past projects are used to collect various metrics, such as time and
cost; these estimates serve as a baseline for new software. Note that as the project proceeds, the
project manager will check its progress from time to time and will compare the actual effort,
cost, and time with the original estimates. These metrics are also used to decrease development
cost, time, effort and risk. Project quality can also be improved; as quality improves, the
number of errors, and hence the time and cost required, is reduced.
LOC (Lines of Code) is one of the earliest and simplest metrics for calculating the size of a
computer program. It is generally used in calculating and comparing the productivity of
programmers. These metrics are derived by normalizing the quality and productivity measures
by the size of the product.
Based on the LOC/KLOC count of software, many other metrics can be computed:
1. Errors/KLOC.
2. $/ KLOC.
3. Defects/KLOC.
4. Pages of documentation/KLOC.
5. Errors/PM.
6. Productivity = KLOC/PM (effort is measured in person-months).
7. $/ Page of documentation.
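These derived ratios are simple divisions; a sketch with invented project figures (only the formulas come from the list above, all numbers are hypothetical):

```python
# Hypothetical project data -- the numbers are illustrative; the ratios are
# the LOC/KLOC-based metrics listed above.
loc = 33_600            # delivered lines of code
errors = 84             # errors found
defects = 42            # defects reported after release
cost = 168_000          # total cost ($)
doc_pages = 365         # pages of documentation
effort_pm = 12          # effort in person-months (PM)

kloc = loc / 1000
print(f"Errors/KLOC:    {errors / kloc:.2f}")
print(f"Defects/KLOC:   {defects / kloc:.2f}")
print(f"$/KLOC:         {cost / kloc:.2f}")
print(f"Doc pages/KLOC: {doc_pages / kloc:.2f}")
print(f"Errors/PM:      {errors / effort_pm:.2f}")
print(f"Productivity:   {kloc / effort_pm:.2f} KLOC/PM")
```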
➢ Advantages of LOC
1. Simple to measure
➢ Disadvantage of LOC
1. It is defined on the code. For example, it cannot measure the size of the specification.
2. It characterizes only one specific view of size, namely length; it takes no account of
functionality or complexity.
3. Bad software design may cause excessive lines of code.
4. It is language dependent
5. Users cannot easily understand it
Allan J. Albrecht initially developed Function Point Analysis (FPA) in 1979 at IBM, and it has
been further modified by the International Function Point Users Group (IFPUG). FPA is used to
estimate the software project, including its testing, in terms of the functionality or functional
size of the software product. Functional point analysis may also be used for test estimation of
the product. The functional size of the product is measured in function points, a standard unit
of measurement for software applications.
o Objectives of FPA
The basic and primary purpose of the functional point analysis is to measure and provide the
software application functional size to the client, customer, and the stakeholder on their request.
Further, it is used to measure the software project development along with its maintenance,
consistently throughout the project irrespective of the tools and the technologies.
1. The FPs of an application are found by counting the number and types of functions used
in the application. The various functions used in an application can be put under five types,
as shown in the Table:
3. The effort required to develop the project depends on what the software does.
5. FP method is used for data processing systems, business systems like information systems.
6. The five parameters mentioned above are also known as information domain
characteristics.
7. All the parameters mentioned above are assigned some weights that have been
experimentally determined and are shown in Table.
The functional complexities are multiplied with the corresponding weights against each
function, and the values are added up to determine the UFP (Unadjusted Function Point) of the
subsystem.
Here that weighing factor will be simple, average, or complex for a measurement parameter
type.
9. FP metrics is used mostly for measuring the size of Management Information System (MIS)
software.
10. But the function points obtained above are unadjusted function points (UFPs). These (UFPs)
of a subsystem are further adjusted by considering some more General System Characteristics
(GSCs). It is a set of 14 GSCs that need to be considered. The procedure for adjusting UFPs is
as follows:
a. The Degree of Influence (DI) for each of these 14 GSCs is assessed on a scale of 0 to 5:
if a particular GSC has no influence, its weight is taken as 0, and if it has a strong
influence, its weight is 5.
b. The score of all 14 GSCs is totaled to determine Total Degree of Influence (TDI).
c. Then Value Adjustment Factor (VAF) is computed from TDI by using the
formula: VAF = (TDI * 0.01) + 0.65
Remember that the value of VAF lies within 0.65 to 1.35, because TDI ranges from 0 (all 14
GSCs scored 0) to 70 (all 14 GSCs scored 5): 0 * 0.01 + 0.65 = 0.65 and 70 * 0.01 + 0.65 = 1.35.
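The adjustment steps above can be captured in a few lines (the formula and the 0-5 DI scale come from the procedure; the sample scores are illustrative):

```python
# Value Adjustment Factor from the 14 General System Characteristics (GSCs).
# Each Degree of Influence (DI) is scored 0 (no influence) to 5 (strong influence).
def value_adjustment_factor(gsc_scores):
    assert len(gsc_scores) == 14, "exactly 14 GSCs are considered"
    assert all(0 <= s <= 5 for s in gsc_scores), "each DI is on a 0-5 scale"
    tdi = sum(gsc_scores)          # Total Degree of Influence, 0..70
    return tdi * 0.01 + 0.65       # VAF = (TDI * 0.01) + 0.65

print(round(value_adjustment_factor([0] * 14), 2))  # 0.65 (minimum)
print(round(value_adjustment_factor([5] * 14), 2))  # 1.35 (maximum)
print(round(value_adjustment_factor([3] * 14), 2))  # 1.07
```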
Example: Compute the function point, productivity, documentation, cost per function for the
following data:
Solution:
Measurement parameter                    Count       Weight    Total
1. Number of external inputs (EI)          24    *     4    =    96
2. Number of external outputs (EO)         46    *     4    =   184
3. Number of external inquiries (EQ)        8    *     6    =    48
4. Number of internal files (ILF)           4    *    10    =    40
5. Number of external interfaces (EIF)      2    *     5    =    10
Count-total →                                                   378
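The unadjusted count in the example can be reproduced directly (the counts and weights are taken from the solution above; the TDI of 30 used in the adjustment step is an invented illustration):

```python
# (count, weight) pairs from the worked example above.
functions = {
    "EI":  (24, 4),   # external inputs
    "EO":  (46, 4),   # external outputs
    "EQ":  (8,  6),   # external inquiries
    "ILF": (4, 10),   # internal logical files
    "EIF": (2,  5),   # external interface files
}

ufp = sum(count * weight for count, weight in functions.values())
print(ufp)  # 378 (Unadjusted Function Points)

# Adjusting with a hypothetical TDI of 30: VAF = 30 * 0.01 + 0.65 = 0.95
vaf = 30 * 0.01 + 0.65
fp = ufp * vaf
print(round(fp, 1))  # 359.1
```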
➢ Differentiate between FP and LOC
FP LOC
Week 4: Planning
The first stage of the software development process is the planning stage. To develop a
successful project, the whole picture of the project must be seen; this picture is produced
as a result of the project planning phase. Project plan components are as follows:
o Project Scope
o Project time-Work plan
o Project team structure
o Technical definitions of the proposed system, Special development
tools and environments
o Project standards, methods and methodologies
o Quality assurance plan
o Environmental management plan
o Resource management plan
o Education plan
o Test plan
o Maintenance plan
The project plan, which is the main output of the planning phase, is a document
that will be used, reviewed and updated throughout the project. Therefore, the
Planning stage is different from other stages.
The resources to be considered when planning a software project are:
➢ Human Resources: It is determined who will take part, for which period,
and at which stages of the project.
Project manager Hardware Team Leader
➢ Project Classes
Information such as the total duration of the project, the total cost of the
project, the total number of lines of code, the number, qualifications and
working time of the staff, and the cost of a person-month provides important
input for the cost estimation of other projects, once the project is finished
or mostly finished. The most commonly used cost estimation methods are
shown in Table 1.
COCOMO is a cost estimation model that has received a lot of attention since it
was published by Boehm in 1981. It can be applied in three different model forms,
depending on the level of detail to be used:
✓ Basic model
✓ Intermediate model
✓ Detail model
All COCOMO models take an estimate of the number of lines of code as their basic
input and produce effort (workforce) and time as output. By dividing the effort value
by the time value, the approximate number of people is estimated.
All COCOMO models use nonlinear exponential formulas for the effort and time
values. The formulas can be seen in Figure 2; they vary for different project types.
Figure 2. COCOMO model formulas.
✓ Basic model: It is used for fast estimation of small and medium projects.
Formulas used (E: effort in person-months, T: development time in months,
KLOC: thousands of lines of code):
Discrete (organic) projects: E = 2.4 (KLOC)^1.05, T = 2.5 E^0.38
Semi-Embedded (semi-detached) projects: E = 3.0 (KLOC)^1.12, T = 2.5 E^0.35
Embedded projects: E = 3.6 (KLOC)^1.20, T = 2.5 E^0.32
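As a sketch, the Basic model can be computed with Boehm's published Basic-COCOMO coefficients (the constants below are the standard published values; the 32-KLOC input is an invented example):

```python
# Basic COCOMO: E = a * KLOC**b (person-months), T = c * E**d (months).
# Coefficients are Boehm's published Basic-model constants per project type.
COEFFS = {
    "organic":       (2.4, 1.05, 2.5, 0.38),  # "discrete" in these notes
    "semi-detached": (3.0, 1.12, 2.5, 0.35),  # "semi-embedded" in these notes
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, project_type):
    a, b, c, d = COEFFS[project_type]
    effort = a * kloc ** b    # workforce in person-months
    duration = c * effort ** d  # development time in months
    people = effort / duration  # approximate average team size
    return effort, duration, people

effort, duration, people = basic_cocomo(32, "organic")
print(f"{effort:.1f} PM, {duration:.1f} months, ~{people:.1f} people")
```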
o Project Control Unit: Consists of the top executives who are responsible for
developing the project. Since high-level problems accumulate here, it is
necessary to keep top management constantly interested in and involved with
the project.
o Education Unit: This unit is responsible for any training related to the
project.
o Application Support Unit: The unit that provides instant support, for example
by phone.
CPE 310 SOFTWARE ENGINEERING
Requirement Definition: The requirements of a system are the definition of the services and
functional constraints provided by that system.
IEEE 729: A condition or capability needed by the user to solve a problem or achieve a goal.
A requirement is not concerned with how the system or its functions will be realized; it is
about what the system will do.
• which database,
• which tables,
• Identifying,
• Analyzing,
• Certifying,
• Checking
Mistakes arising from requirements are noticed in the late stages. Often wrong information is caused by
negligence and inconsistency. In this case, correction costs will be high.
➢ Change of Request
Although the analysis of the requirements may be very good, changes may occur during the
process, for these reasons:
It should not be forgotten that, no matter how much changes, the customer must be informed
of all changes and they must be certified.
Requirements analysis, also called requirements engineering, is the process of determining user
expectations for a new or modified product. Requirements engineering is a major software engineering
action that begins during the communication activity and continues into the modeling activity. It must
be adapted to the needs of the process, the project, the product, and the people doing the work.
Requirements engineering builds a bridge to design and construction. It is a four-step process:
Feasibility Study: When the client approaches the organization to get the desired product
developed, the client comes up with a rough idea about what functions the software must
perform and which features are expected from the software.
Referring to this information, the analysts do a detailed study of whether the desired system and
its functionality are feasible to develop.
This feasibility study is focused on the goals of the organization. The study analyzes whether the
software product can be practically materialized in terms of implementation, the contribution of
the project to the organization, cost constraints, and the values and objectives of the
organization. It explores technical aspects of the project and product, such as usability,
maintainability, productivity and integration ability.
The output of this phase should be a feasibility study report that should contain adequate comments and
recommendations for management about whether or not the project should be undertaken.
Requirement Gathering: If the feasibility report is positive towards undertaking the project, next phase
starts with gathering requirements from the user. Analysts and engineers communicate with the client
and end-users to know their ideas on what the software should provide and which features they want the
software to include.
Software Requirement Specification (SRS): The SRS is a document created by the system
analyst after the requirements are collected from various stakeholders.
SRS defines how the intended software will interact with hardware, external interfaces, speed of
operation, response time of system, portability of software across various platforms, maintainability,
speed of recovery after crashing, Security, Quality, Limitations etc.
The requirements received from the client are written in natural language. It is the responsibility
of the system analyst to document the requirements in technical language so that they can be
comprehended and used by the software development team.
Software Requirement Validation: After the requirement specifications are developed, the
requirements mentioned in the document are validated. The user might ask for an illegal or
impractical solution, or experts may interpret the requirements incorrectly. This results in a huge
increase in cost if not nipped in the bud.
Requirements can be checked against following conditions:
Requirements gathering: The developers discuss with the client and end users and know their
expectations from the software.
Organizing Requirements: The developers prioritize and arrange the requirements in order of
importance, urgency and convenience.
Negotiation & discussion: If requirements are ambiguous or there are conflicts among the
requirements of various stakeholders, they are negotiated and discussed with the stakeholders.
Requirements may then be prioritized and reasonably compromised.
The requirements come from various stakeholders. To remove the ambiguity and conflicts, they are
discussed for clarity and correctness. Unrealistic requirements are compromised reasonably.
Documentation: All formal & informal, functional and non-functional requirements are documented
and made available for next phase processing.
Requirements Elicitation is the process of finding out the requirements for an intended software
system by communicating with the client, end users, system users and others who have a stake
in the software system development.
1. Interviews
Interviews are a strong medium to collect requirements. An organization may conduct several
types of interviews, such as:
• Structured (closed) interviews, where every piece of information to gather is decided in
advance; they follow the pattern and matter of discussion firmly.
• Non-structured (open) interviews, where the information to gather is not decided in
advance; they are more flexible and less biased.
• Oral interviews
• Written interviews
• One-to-one interviews which are held between two persons across the table.
• Group interviews which are held between groups of participants. They help to uncover any
missing requirement as numerous people are involved.
2. Surveys
An organization may conduct surveys among various stakeholders, querying about their expectations of
and requirements for the upcoming system.
3. Questionnaires
A document with pre-defined set of objective questions and respective options is handed over to all
stakeholders to answer, which are collected and compiled.
A shortcoming of this technique is that if an option for some issue is not mentioned in the questionnaire, the
issue might be left unattended.
4. Task analysis
A team of engineers and developers may analyze the operation for which the new system is required. If
the client already has some software performing a certain operation, it is studied and the requirements of
the proposed system are collected.
5. Domain Analysis
Every software product falls into some domain category. Expert people in the domain can be a great help
in analyzing general and specific requirements.
6. Brainstorming
An informal debate is held among various stakeholders and all their inputs are recorded for further
requirements analysis.
7. Prototyping
Prototyping is building a user interface, without adding detailed functionality, for the user to interpret the features
of the intended software product. It helps give a better idea of the requirements. If there is no software installed
at the client's end for the developer's reference, and the client is not aware of their own requirements, the developer
creates a prototype based on the initially mentioned requirements. The prototype is shown to the client and
the feedback is noted. The client feedback serves as an input for requirement gathering.
8. Observation
A team of experts visits the client's organization or workplace. They observe the actual working of the
existing installed systems. They observe the workflow at the client's end and how execution problems
are dealt with. The team itself draws some conclusions, which aid in forming the requirements expected from the
software.
Gathering software requirements is the foundation of the entire software development project. Hence
they must be clear, correct and well-defined.
A complete Software Requirement Specifications must be:
• Clear
• Correct
• Consistent
• Coherent
• Comprehensible
• Modifiable
• Verifiable
• Prioritized
• Unambiguous
• Traceable
• Credible source
➢ Software Requirements
We should try to understand what sort of requirements may arise in the requirement elicitation phase
and what kinds of requirements are expected from the software system.
1. Functional Requirements
Requirements which are related to the functional aspect of the software fall into this category. They define
functions and functionality within and from the software system.
Examples: a search option given to the user to search from various invoices; the user should be able to mail
any report to management.
2. Non-Functional Requirements
Requirements which are not related to the functional aspect of the software fall into this category. They are
implicit or expected characteristics of the software, which users assume. Non-functional
requirements include:
• Security
• Logging
• Storage
• Configuration
• Performance
• Cost
• Interoperability
• Flexibility
• Disaster recovery
• Accessibility
Requirements are logically categorized as:
• Must have: the software cannot be called operational without them.
• Should have: these requirements enhance the functionality of the software.
• Could have: the software can still properly function even without these requirements.
• Wish list: these requirements do not map to any objectives of the software.
While developing software, 'Must have' requirements must be implemented, 'Should have' is a matter of debate
and negotiation with stakeholders, whereas 'Could have' and 'Wish list' can be kept for software updates.
UI is an important part of any software, hardware or hybrid system. A software product is widely accepted if
it is:
• easy to operate
• quick in response
• effectively handling operational errors
• providing simple yet consistent user interface
User acceptance majorly depends upon how the user can use the software. UI is the only way for users to
perceive the system. A well-performing software system must also be equipped with an attractive, clear,
consistent and responsive user interface; otherwise, the functionalities of the software system cannot be
used in a convenient way. A system is said to be good if it provides means to use it efficiently. User interface
requirements are briefly mentioned below:
• Content presentation
• Easy Navigation
• Simple interface
• Responsive
• Consistent UI elements
• Feedback mechanism
• Default settings
• Purposeful layout
• Strategic use of color and texture.
• Provide help information
• User centric approach
• Group based view settings
A system analyst in an IT organization is a person who analyzes the requirements of the proposed system and
ensures that the requirements are conceived and documented properly and correctly. The role of an analyst starts
during the Software Analysis Phase of the SDLC. It is the responsibility of the analyst to make sure that the
developed software meets the requirements of the client.
Software Measures can be understood as a process of quantifying and symbolizing various attributes
and aspects of software. Software Metrics provide measures for various aspects of software process and
software product.
Software measures are fundamental requirement of software engineering. They not only help to control
the software development process but also aid to keep quality of ultimate product excellent.
According to Tom DeMarco, a software engineer, "You cannot control what you cannot measure."
This saying makes it very clear how important software measures are.
Size Metrics: LOC (Lines of Code), mostly calculated in thousands of delivered source code lines,
denoted as KLOC.
Function Point Count is measure of the functionality provided by the software. Function Point count
defines the size of functional aspect of software.
Complexity Metrics: McCabe’s Cyclomatic complexity quantifies the upper bound of the number of
independent paths in a program, which is perceived as complexity of the program or its modules. It is
represented in terms of graph theory concepts by using control flow graph.
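McCabe's measure can be computed directly from a control-flow graph as V(G) = E - N + 2P (edges, nodes, connected components). A minimal sketch, using a hypothetical graph for a function with one if/else:

```python
# McCabe's cyclomatic complexity, V(G) = E - N + 2P, computed from a
# control-flow graph given as an adjacency list. The example graph is
# a made-up illustration.

def cyclomatic_complexity(cfg, components=1):
    """cfg maps each node to the list of nodes it can branch to."""
    nodes = len(cfg)
    edges = sum(len(targets) for targets in cfg.values())
    return edges - nodes + 2 * components

# Control flow of a function with a single if/else:
#   entry -> cond; cond -> then or else; both paths -> exit
cfg = {
    "entry": ["cond"],
    "cond": ["then", "else"],
    "then": ["exit"],
    "else": ["exit"],
    "exit": [],
}
print(cyclomatic_complexity(cfg))  # 2 (one decision point)
```

Here E = 5 and N = 5, so V(G) = 5 - 5 + 2 = 2, matching the intuition that one decision point yields two independent paths.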
Quality Metrics: Defects, their types and causes, consequence, intensity of severity and their
implications define the quality of product.
The number of defects found in development process and number of defects reported by the client after
the product is installed or delivered at client-end, define quality of product.
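One common way to turn these defect counts into a number is defect density, i.e. defects per KLOC. A small sketch with made-up counts:

```python
# Defect density: total defects per KLOC (thousand lines of delivered
# source code). The counts below are illustrative values, not real data.

def defect_density(defects_found, defects_reported, loc):
    """Defects found in development plus defects reported by the client,
    normalized by the size of the delivered code in KLOC."""
    kloc = loc / 1000
    return (defects_found + defects_reported) / kloc

print(defect_density(defects_found=40, defects_reported=10, loc=25_000))  # 2.0
```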
Process Metrics: In various phases of SDLC, the methods and tools used, the company standards and
the performance of development are software process metrics.
Resource Metrics: Effort, time and various resources used, represents metrics for resource
measurement.
The requirements modeling action results in one or more of the following types of models:
➢ Scenario-based models of requirements from the point of view of various system
"actors"
➢ Data models that depict the information domain for the problem
➢ Class-oriented models that represent object-oriented classes (attributes and operations)
➢ Flow-oriented models that represent the functional elements of the system and how they
transform data as they move through the system
➢ Behavioral models that depict how the software behaves as a consequence of external
"events".
The requirements model serves as a bridge between the system description and the design model:
1. to describe what the customer requires, 2. to establish a basis for the creation of a software design,
and 3. to define a set of requirements that can be validated once the software is built.
The analysis model bridges the gap between a system-level description that describes overall system or
business functionality as it is achieved by applying software, hardware, data, human, and other system
elements and a software design that describes the software’s application architecture, user interface, and
component-level structure.
1. Scenario-Based Modeling
Scenario-based elements depict how the user interacts with the system and the specific sequence of
activities that occur as the software is used.
A use case describes a specific usage scenario in straightforward language from the point of view of a
defined actor. The questions that must be answered if use cases are to provide value as a requirements
modeling tool are: (1) what to write about, (2) how much to write about it,
(3) how detailed to make the description, and (4) how to organize the description.
To begin developing a set of use cases, list the functions or activities performed by a specific actor.
• The typical outline for formal use cases can be in the following manner:
• The goal in context identifies the overall scope of the use case.
• The precondition describes what is known to be true before the use case is initiated.
• The trigger identifies the event or condition that “gets the use case started”
• The scenario lists the specific actions that are required by the actor and the appropriate system
responses.
• Exceptions identify the situations uncovered as the preliminary use case is refined. Additional
headings may or may not be included and are reasonably self-explanatory.
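The outline above can be captured as a structured record. The "Withdraw Cash" content below is a hypothetical illustration, not an example taken from the notes:

```python
# A formal use case captured as a structured record, following the
# outline headings above. The ATM scenario content is invented.
use_case = {
    "goal_in_context": "Account holder withdraws cash from an ATM",
    "precondition": "The card is valid and the account is in good standing",
    "trigger": "The customer inserts a card and selects 'Withdraw'",
    "scenario": [
        "customer enters PIN",
        "system verifies PIN",
        "customer enters amount",
        "system dispenses cash and prints a receipt",
    ],
    "exceptions": ["invalid PIN", "insufficient funds"],
}
print(len(use_case["scenario"]))  # 4
```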
Every modeling notation has limitations, and the use case is no exception. A use case focuses on
functional and behavioral requirements and is generally inappropriate for nonfunctional requirements.
However, scenario-based modeling is appropriate for a significant majority of all situations that you will
encounter as a software engineer. A simple use case diagram is shown in the following figure.
2. Data Modeling Concepts
Data modeling is the process of documenting a complex software system design as an easily understood
diagram, using text and symbols to represent the way data needs to flow. The diagram can be used as a
blueprint for the construction of new software or for re-engineering a legacy application. The most
widely used data model among software engineers is the Entity Relationship Diagram (ERD); it addresses
these issues and represents all data objects that are entered, stored, transformed, and produced within an
application.
• Data Objects
A data object is a representation of composite information that must be understood by software. A data
object can be an external entity (e.g., anything that produces or consumes information), a thing (e.g.,
a report or a display), an occurrence (e.g., a telephone call) or event (e.g., an alarm), a role (e.g.,
salesperson), an organizational unit (e.g., accounting department), a place (e.g., a warehouse), or a
structure (e.g., a file).
For example, a person or a car can be viewed as a data object in the sense that either can be defined in
terms of a set of attributes. The description of the data object incorporates the data object and all of its
attributes.
A data object encapsulates data only; there is no reference within a data object to operations that act on
the data. Therefore, the data object can be represented as a table, as shown in the following table. The
headings in the table reflect the attributes of the object. A tabular representation of data objects is shown
in the following figure.
• Data Attributes
Data attributes define the properties of a data object and take on one of three different characteristics.
They can be used to (1) name an instance of the data object, (2) describe the instance, or (3) make
reference to another instance in another table.
• Relationships
Data objects are connected to one another in different ways. Consider the two data objects, person and
car. These objects can be represented using a simple notation, and their relationships are: 1) a
person owns a car, 2) a person is insured to drive a car. The relationships between data objects are shown
in the following figure.
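The person/car example can be sketched as two attribute-only data objects linked by an "owns" relationship; the attribute names below are illustrative assumptions:

```python
# Data objects as attribute-only records: no operations, just attributes
# and a relationship between them. Attribute names are invented.
from dataclasses import dataclass, field

@dataclass
class Car:                      # data object: attributes only
    make: str
    model: str
    license_plate: str

@dataclass
class Person:                   # data object related to Car
    name: str
    owns: list = field(default_factory=list)   # "a person owns a car"

peter = Person("Peter")
peter.owns.append(Car("Ford", "Focus", "34-AB-123"))
print(len(peter.owns))  # 1
```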
3. Class-Based Modeling
Class-based modeling represents the objects that the system will manipulate, the operations that will be
applied to the objects to effect the manipulation, relationships between the objects, and the
collaborations that occur between the classes that are defined. The elements of a class-based model
include classes and objects, attributes, operations, class responsibility collaborator (CRC) models,
collaboration diagrams, and packages.
We can begin to identify classes by examining the usage scenarios developed as part of the requirements
model and performing a "grammatical parse" on the use cases developed for the system to be built.
Analysis classes manifest themselves in one of the following ways:
• External entities (e.g., other systems, devices, people) that produce or consume information to
be used by a computer-based system.
• Things (e.g., reports, displays, letters, signals) that are part of the information domain for the
problem.
• Occurrences or events (e.g., a property transfer or the completion of a series of robot
movements) that occur within the context of system operation.
• Roles (e.g., manager, engineer, salesperson) played by people who interact with the system.
• Organizational units (e.g., division, group, team) that are relevant to an application.
• Places (e.g., manufacturing floor or loading dock) that establish the context of the problem and
the overall function of the system.
• Structures (e.g., sensors, four-wheeled vehicles, or computers) that define a class of objects or
related classes of objects.
Attributes describe a class that has been selected for inclusion in the requirements model. It is the attributes
that define the class and clarify what is meant by the class in the context of the problem space.
To develop a meaningful set of attributes for an analysis class, you should study each use case and select
those “things” that reasonably “belong” to the class.
Operations define the behavior of an object. Although many different types of operations exist, they
can generally be divided into four broad categories: (1) operations that manipulate data in some way
(e.g., adding, deleting, reformatting, selecting), (2) operations that perform a computation, (3) operations
that inquire about the state of an object, and (4) operations that monitor an object for the occurrence of
a controlling event. A class diagram for the system class is shown in the following figure.
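The four operation categories can be illustrated on a single hypothetical class; the Sensor class and its attributes are assumptions made for the example:

```python
# One operation from each of the four broad categories, on an invented
# analysis class.
class Sensor:
    def __init__(self):
        self.readings = []          # data manipulated by the operations
        self.armed = False

    def add_reading(self, value):   # (1) manipulates data
        self.readings.append(value)

    def average(self):              # (2) performs a computation
        return sum(self.readings) / len(self.readings)

    def is_armed(self):             # (3) inquires about the object's state
        return self.armed

    def exceeds(self, threshold):   # (4) monitors for a controlling event
        return any(r > threshold for r in self.readings)

s = Sensor()
s.add_reading(10)
s.add_reading(20)
print(s.average(), s.exceeds(threshold=15))  # 15.0 True
```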
Solution Domain (Design, Implementation): the technologies used to build the system; the modeling
space of all possible systems.
Both domains contain abstractions that we can use for the construction of the system model.
Example:
UML is a language for:
➢ visualizing
➢ specifying
➢ constructing
➢ documenting the artifacts of a software-intensive system
General Definition: The UML offers a standard way to write a system's blueprints, including
conceptual things such as business processes and system functions as well as concrete things such as
programming language statements, database schemas, and reusable software components.
The goal of UML is to provide a standard notation that can be used by all object-oriented methods
and to select and integrate the best elements of precursor notations. UML has been designed for a
broad range of applications. Hence, it provides constructs for a broad range of systems and activities.
2. Booch [Grady Booch 1994] - was excellent for design and implementation. Grady Booch
had worked extensively with the Ada language, and had been a major player in the
development of Object Oriented techniques for the language. Although the Booch method
was strong, the notation was less well received (lots of cloud shapes dominated his models -
not very tidy)
The reason for this is that it is possible to look at a system from many different viewpoints.
For Example:
• Analysts
• Designers
• Coders
• Testers
• QA
• The Customer
• Technical Authors
All of these people are interested in different aspects of the system, and each of them requires a
different level of detail. For example, a coder needs to understand the design of the system and be
able to convert the design to low-level code. By contrast, a technical writer is interested in the
behavior of the system as a whole, and needs to understand how the product functions. The UML
attempts to provide a language so expressive that all stakeholders can benefit from at least one UML
diagram.
The class diagram is a central modeling technique that runs through nearly all object-oriented
methods. This diagram describes the types of objects in the system and the various kinds of static
relationships that exist among them.
In the Unified Modeling Language, a component diagram depicts how components are wired together
to form larger components or software systems. It illustrates the architectures of the software
components and the dependencies between them, including run-time components, executable
components, and source code components.
The Deployment Diagram helps to model the physical aspect of an Object-Oriented software system.
It is a structure diagram which shows the architecture of the system as the deployment (distribution) of
software artifacts to deployment targets. Artifacts represent concrete elements in the physical world
that are the result of a development process. It models the run-time configuration in a static view and
visualizes the distribution of artifacts in an application. In most cases, it involves modeling the
hardware configurations together with the software components that live on them.
An object diagram is a graph of instances, including objects and data values. A static object diagram
is an instance of a class diagram; it shows a snapshot of the detailed state of a system at a point in
time. The difference is that a class diagram represents an abstract model consisting of classes and their
relationships, whereas an object diagram represents an instance at a particular moment, which is
concrete in nature. The use of object diagrams is fairly limited, namely to show examples of data
structures.
Class Diagram vs Object Diagram - An Example
Some people may find it difficult to understand the difference between a UML Class Diagram and a
UML Object Diagram, as they both comprise named "rectangle blocks" with attributes in them and
with linkages in between, which makes the two UML diagrams look similar. Some people may even
think they are the same, because in the UML tool they use, the notations for both the Class Diagram and the
Object Diagram are put inside the same diagram editor - the Class Diagram.
But in fact, a Class Diagram and an Object Diagram represent two different aspects of a code base. In this
article, we will provide you with some ideas about these two UML diagrams: what they are and what
their differences are. For example, in a banking system you may create classes like 'User', 'Account',
'Transaction', etc.; in a classroom management system you may create classes like 'Teacher' and 'Student'.
In each class, there are attributes and operations that represent the characteristics and behavior of the
class. A Class Diagram is a UML diagram where you can visualize those classes, along with their
attributes, operations and the inter-relationships.
A UML Object Diagram shows how object instances in your system interact with each other at a
particular state. It also represents the data values of those objects at that state. In other words, a UML
Object Diagram can be seen as a representation of how the classes (drawn in a UML Class Diagram) are
utilized at a particular state.
If you are not a fan of definitions, take a look at the following UML diagram examples. In the Class
Diagram example, a user can upload multiple attachments, so the two classes are connected with an
association, with 0..* as the multiplicity on the Attachment side.
The following Object Diagram example shows how the object instances of the User and Attachment
classes "look" at the moment Peter (i.e. the user) is trying to upload two attachments. So there are
two instances of Attachment in the Object Diagram.
A package diagram is a UML structure diagram which shows packages and the dependencies between the
packages. Model diagrams allow showing different views of a system, for example, as a multi-layered
application model.
A use-case model describes a system's functional requirements in terms of use cases. It is a model of
the system's intended functionality (use cases) and its environment (actors). Use cases enable you to
relate what you need from a system to how the system delivers on those needs.
Think of a use-case model as a menu, much like the menu you'd find in a restaurant. By looking at the
menu, you know what's available to you, the individual dishes as well as their prices. You also know
what kind of cuisine the restaurant serves: Italian, Mexican, Chinese, and so on. By looking at the
menu, you get an overall impression of the dining experience that awaits you in that restaurant.
Because it is a very powerful planning instrument, the use-case model is generally used in all phases
of the development cycle.
Activity diagrams are graphical representations of workflows of stepwise activities and actions with
support for choice, iteration and concurrency. They describe the flow of control of the target system,
for example when exploring complex business rules and operations, or when describing a use case or a
business process. In the Unified Modeling Language, activity diagrams are intended to model both
computational and organizational processes.
A state diagram is a type of diagram used in UML to describe the behavior of systems; it is based
on the concept of state diagrams by David Harel. State diagrams depict the permitted states and
transitions as well as the events that affect these transitions. They help to visualize the entire lifecycle
of an object.
The Sequence Diagram models the collaboration of objects based on a time sequence. It shows how
the objects interact with others in a particular scenario of a use case. With advanced visual
modeling capability, you can create a complex sequence diagram in a few clicks. Besides, some modeling
tools, such as Visual Paradigm, can generate a sequence diagram from the flow of events which you have
defined in the use case description.
A Timing Diagram shows the behavior of the object(s) in a given period of time. A timing diagram is a
special form of a sequence diagram. The differences between a timing diagram and a sequence diagram
are that the axes are reversed, so that time increases from left to right, and the lifelines are shown in
separate compartments arranged vertically.
We can say that use cases are nothing but the system functionalities written in an
organized manner. The second thing which is relevant to use cases are the actors.
Actors can be defined as something that interacts with the system.
Actors can be a human user, some internal applications, or some external
applications. When we are planning to draw a use case diagram, we should have
the following items identified:
• Actors
• Functionalities to be represented as use cases
• Relationships among the use cases and actors
Use case diagrams are drawn to capture the functional requirements of a system.
After identifying the above items, we have to use the following guidelines to
draw an efficient use case diagram
• The name of a use case is very important. The name should be chosen in
such a way so that it can identify the functionalities performed.
• Do not try to include all types of relationships, as the main purpose of the
diagram is to identify the requirements.
The SpecialOrder and NormalOrder use cases are extended from the Order use case.
Hence, they have an extend relationship. Another important point is to identify
the system boundary, which is shown in the picture. The actor Customer lies
outside the system as it is an external user of the system.
• Does the system store information? What actors will create, read, update
or delete this information?
• Does the system need to notify an actor about changes in the internal
state?
• Are there any external events the system must know about? What actor
informs the system of those events?
Notation Description Visual Representation
Actor
• Named by noun.
• For example:
Use Case
• i.e. Do something
• Each Actor must be linked to a use
case, while some use cases may not
be linked to actors.
Communication Link
Extends
• Indicates that an "Invalid Password" use case may include (subject to conditions specified in the
extension) the behavior specified by the base use case "Login Account".
• Depicted as a directed arrow with a dotted line. The tip of the arrowhead points to the base use case,
and the child use case is connected at the base of the arrow.
• The stereotype "<<extends>>" identifies it as an extend relationship.
Include
• A uses relationship from a base use case to a child use case indicates that an instance of the base
use case will include the behavior as specified in the child use case.
• An include relationship is depicted as a directed arrow with a dotted line. The tip of the arrowhead
points to the child use case, and the parent use case is connected at the base of the arrow.
• The stereotype "<<include>>" identifies the relationship as an include relationship.
Generalization
• A generalization relationship is a parent-child relationship between use cases.
• Generalization is shown as a directed arrow with a triangle arrowhead.
• The child use case is connected at the base of the arrow. The tip of the arrow is connected to the
parent use case.
The use case model also shows the use of extend and include. Besides, there are
associations that connect between actors and use cases.
6. Use-Case Examples
7. Simple example on Use case scenarios
Software design is a mechanism to transform user requirements into some suitable form, which
helps the programmer in software coding and implementation. It deals with representing the
client's requirements, as described in the SRS (Software Requirement Specification) document, in
a form that is easily implementable using a programming language.
The software design phase is the first step in the SDLC (Software Development Life Cycle) that
moves the concentration from the problem domain to the solution domain. In software design,
we consider the system to be a set of components or modules with clearly defined behaviors and
boundaries.
8.1. Objectives of Software Design
For a small problem, we can handle the entire problem at once, but for a significant problem we
divide and conquer: the problem is divided into smaller pieces so that each piece can be handled
separately.
For software design, the goal is to divide the problem into manageable pieces.
These pieces cannot be entirely independent of each other as they together form the system.
They have to cooperate and communicate to solve the problem. This communication adds
complexity.
2. Abstraction
1. Functional Abstraction
2. Data Abstraction
Functional Abstraction
Functional abstraction forms the basis for Function oriented design approaches.
Data Abstraction
Details of the data elements are not visible to the users of data. Data Abstraction forms the basis
for Object Oriented design approaches.
3. Modularity
Modularity refers to the division of software into separate modules which are differently
named and addressed and are integrated later to obtain the completely functional software.
It is the only property that allows a program to be intellectually manageable. Single large
programs are difficult to understand and read due to a large number of reference variables,
control paths, global variables, etc.
o Each module is a well-defined system that can be used with other applications.
o Each module has single specified objectives.
o Modules can be separately compiled and saved in the library.
o Modules should be easier to use than to build.
o Modules are simpler from outside than inside.
Advantages of Modularity
Disadvantages of Modularity
Modular Design
Modular design reduces the design complexity and results in easier and faster implementation
by allowing parallel development of various parts of a system.
Independent modules are easier to maintain and test; they reduce error propagation and can be
reused in other programs as well. Thus, functional independence is a good design feature which
ensures software quality.
2. Information hiding: The principle of information hiding suggests that modules should be
characterized by design decisions that are hidden from the others; in other words, modules
should be specified and designed so that data included within a module is inaccessible to other
modules that have no need for such information.
The use of information hiding as a design criterion for modular systems provides the most
significant benefits when modifications are required during testing and, later, during software
maintenance. This is because, as most data and procedures are hidden from other parts of the
software, inadvertent errors introduced during modifications are less likely to propagate to
different locations within the software.
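A minimal sketch of information hiding: the module's internal data structure is kept private, and other modules see only the public operations. The Inventory class is an invented example:

```python
# Information hiding: callers depend only on the public interface; the
# internal dict can later be swapped for another structure without
# affecting them. The class and its methods are invented for illustration.
class Inventory:
    def __init__(self):
        self._items = {}            # hidden: other modules never touch this

    def add(self, name, qty):       # public operation
        self._items[name] = self._items.get(name, 0) + qty

    def quantity(self, name):       # public operation
        return self._items.get(name, 0)

inv = Inventory()
inv.add("bolt", 5)
inv.add("bolt", 3)
print(inv.quantity("bolt"))  # 8
```

If `_items` were later replaced by, say, a database table, a modification error there could not silently propagate into calling modules, which is exactly the benefit described above.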
4. Strategy of Design
A good system design strategy is to organize the program modules in such a way that they are
easy to develop and, later, to change. Structured design methods help developers to deal with
the size and complexity of programs. Analysts generate instructions for the developers about
how code should be composed and how pieces of code should fit together to form a program.
1. Top-down Approach
2. Bottom-up Approach
1. Top-down Approach: This approach starts with the identification of the main components
and then decomposing them into their more detailed sub-components.
2. Bottom-up Approach: A bottom-up approach begins with the lower-level details and moves
up towards the higher levels of the hierarchy, as shown in the figure. This approach is suitable in
the case of an existing system.
8.3. Coupling and Cohesion
1. Module Coupling
A good design is the one that has low coupling. Coupling is measured by the number of
relations between the modules. That is, the coupling increases as the number of calls
between modules increase or the amount of shared data is large. Thus, it can be said that
a design with high coupling will have more errors.
o Types of Module Coupling
1. No Direct Coupling: When there is no direct coupling between two modules and they do
not depend on each other, this is called no direct coupling.
2. Data Coupling: When data of one module is passed to another module, this is called
data coupling.
3. Stamp Coupling: Two modules are stamp coupled if they communicate using
composite data items such as structure, objects, etc. When the module passes non-global
data structure or entire structure to another module, they are said to be stamp coupled.
For example, passing structure variable in C or object in C++ language to a module.
4. Control Coupling: Control Coupling exists among two modules if data from one
module is used to direct the structure of instruction execution in another.
5. External Coupling: External Coupling arises when two modules share an externally
imposed data format, communication protocols, or device interface. This is related to
communication to external tools and devices.
6. Common Coupling: Two modules are common coupled if they share information
through some global data items.
7. Content Coupling: Content Coupling exists among two modules if they share code,
e.g., a branch from one module into another module.
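Three of the coupling types above can be contrasted in a few lines of code; all function names and values are invented for the example:

```python
# Sketches of data, stamp, and control coupling between modules.

# Data coupling: only elementary data items are passed.
def net_pay(gross, tax_rate):
    return gross * (1 - tax_rate)

# Stamp coupling: a composite data item (a dict here) is passed whole,
# even though the callee uses only part of it.
def print_payslip(employee):
    print(employee["name"], employee["gross"])

# Control coupling: a flag from the caller directs the callee's
# instruction flow.
def report(data, summary_only):
    return data[:1] if summary_only else data

print(net_pay(1000, 0.2))                    # 800.0
print(report([1, 2, 3], summary_only=True))  # [1]
```

Lower forms (data coupling) are preferred over higher forms (control, common, content coupling), in line with the goal of low coupling stated above.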
2. Module Cohesion
In computer programming, cohesion refers to the degree to which the elements of a module
belong together. Thus, cohesion measures the strength of the relationships between pieces of
functionality within a given module. For example, in highly cohesive systems, functionality is
strongly related.
Coupling vs. Cohesion:
o Coupling shows the relationships between modules; cohesion shows the relationship within a
module.
o In coupling, modules are linked to other modules; in cohesion, the module focuses on a single
thing.
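The contrast can be sketched as one class with functional cohesion versus a hypothetical grab-bag module with coincidental cohesion; both classes are invented examples:

```python
# High (functional) cohesion: every element serves one single purpose.
class TaxCalculator:
    def __init__(self, rate):
        self.rate = rate

    def tax(self, amount):
        return amount * self.rate

# Low (coincidental) cohesion: unrelated tasks lumped into one module.
class Utils:
    def tax(self, amount, rate):
        return amount * rate

    def send_email(self, to, body):
        ...                         # unrelated to taxes

    def parse_log_line(self, line):
        ...                         # unrelated to both of the above

print(TaxCalculator(0.25).tax(100))  # 25.0
```

The cohesive module is the one that is easy to name, test, and reuse; the `Utils` grab-bag is the classic low-cohesion smell.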
Function Oriented design is an approach to software design where the model is decomposed into
a set of interacting units or modules, where each unit or module has a clearly defined function.
Thus, the system is designed from a functional viewpoint.
4. Design Notations
Design Notations are primarily meant to be used during the process of design and are used to
represent design or design decisions. For a function-oriented design, the design can be
represented graphically or mathematically by the following:
1. Data Flow Diagram
Data-flow design is concerned with designing a series of functional transformations that convert system
inputs into the required outputs. The design is described as data-flow diagrams. These diagrams show how
data flows through a system and how the output is derived from the input through a series of functional
transformations.
Data-flow diagrams are a useful and intuitive way of describing a system. They are generally
understandable without specialized training, notably if control information is excluded. They show end-to-
end processing; that is, the flow of processing from when data enters the system to where it leaves the
system can be traced.
Data-flow design is an integral part of several design methods, and most CASE tools support data-flow
diagram creation. Different methods may use different icons to represent data-flow diagram entities, but their
meanings are similar.
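The same end-to-end idea can be sketched in code as a chain of functional transformations from system input to system output; the order-processing functions below are invented for the illustration:

```python
# A data-flow view as a pipeline of functional transformations:
# input -> validate -> price -> format_invoice -> output.

def validate(order):                # transformation 1
    return {**order, "valid": order["qty"] > 0}

def price(order):                   # transformation 2
    return {**order, "total": order["qty"] * order["unit_price"]}

def format_invoice(order):          # transformation 3: system output
    return f"Invoice: {order['qty']} x {order['unit_price']} = {order['total']}"

order = {"qty": 2, "unit_price": 5.0}          # data entering the system
print(format_invoice(price(validate(order))))  # Invoice: 2 x 5.0 = 10.0
```

Each function corresponds to a bubble in a DFD, and the values passed between them correspond to the labeled data flows.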
2. Data Dictionaries
A data dictionary lists all data elements appearing in the DFD model of a system. The data items listed
include all data flows and the contents of all data stores appearing on the DFDs in the DFD model of a
system.
A data dictionary lists the objective of all data items and the definition of all composite data elements in
terms of their component data items. For example, a data dictionary entry may contain that the
data grossPay consists of the parts regularPay and overtimePay.
For the smallest units of data elements, the data dictionary lists their name and their type.
A data dictionary plays a significant role in any software development process because of the following
reasons:
o A data dictionary provides a standard language for all relevant information for use by engineers
working on a project. A consistent vocabulary for data items is essential since, in large projects,
different engineers of the project tend to use different terms to refer to the same data, which
unnecessarily causes confusion.
o The data dictionary provides the analyst with a means to determine the definition of various data
structures in terms of their component elements.
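The grossPay example above can be sketched as a tiny in-code data dictionary; this is only an illustration of the idea, with invented names, not a tool described in the notes.

```cpp
#include <map>
#include <string>
#include <vector>

// Each entry records a data item's type and, for composite items,
// the component items it consists of.
struct DictEntry {
    std::string type;                    // "composite" or a primitive type name
    std::vector<std::string> components; // empty for the smallest units
};

std::map<std::string, DictEntry> buildDataDictionary() {
    return {
        {"grossPay",    {"composite", {"regularPay", "overtimePay"}}},
        {"regularPay",  {"decimal",   {}}},
        {"overtimePay", {"decimal",   {}}},
    };
}
```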
3. Structured Charts
It partitions a system into black boxes. A black box is a system whose functionality is known to the user
without knowledge of its internal design.
Entity-Relationship Model
Entity-Relationship model is a type of database model based on the notion of real world entities
and relationship among them. We can map real world scenario onto ER database model. ER
Model creates a set of entities with their attributes, a set of constraints and relation among
them.
The ER Model is best used for the conceptual design of a database. The ER Model can be represented as
follows:
• Entity - An entity in the ER Model is a real-world object, which has some properties
called attributes. Every attribute is defined by its corresponding set of values,
called its domain.
For example, Consider a school database. Here, a student is an entity. Student has
various attributes like name, id, age and class etc.
• Relationship - The logical association among entities is called relationship.
Relationships are mapped with entities in various ways. Mapping cardinalities define
the number of associations between two entities.
Mapping cardinalities:
o one to one
o one to many
o many to one
o many to many
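The school example above might be sketched in C++ as follows: Student carries the attributes, and a vector inside SchoolClass models a one-to-many relationship (one class, many students). The class and field names are illustrative assumptions.

```cpp
#include <string>
#include <vector>

// Entity: a real-world object with attributes drawn from their domains.
struct Student {
    int         id;    // attribute, domain: integers
    std::string name;  // attribute, domain: strings
    int         age;   // attribute, domain: integers
};

// Relationship with mapping cardinality one-to-many:
// one class is associated with many students.
struct SchoolClass {
    std::string name;
    std::vector<Student> students;
};

SchoolClass makeExampleClass() {
    SchoolClass c{"CPE310", {}};
    c.students.push_back({1, "Ada", 20});
    c.students.push_back({2, "Alan", 21});
    return c;
}
```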
3. Messages: Objects communicate by message passing. Messages consist of the identity of the
target object, the name of the requested operation, and any other information needed to perform the
function. Messages are often implemented as procedure or function calls.
5. Encapsulation: Encapsulation is also known as information hiding. The data and
operations are linked into a single unit. Encapsulation not only bundles the essential information of an
object together but also restricts access to the data and methods from the outside world.
6. Inheritance: OOD allows similar classes to stack up in a hierarchical manner where the lower or
sub-classes can import, implement, and re-use allowed variables and functions from their
immediate superclasses. This property of OOD is called inheritance. This makes it easier to
define a specific class and to create generalized classes from specific ones.
7. Polymorphism: OOD languages provide a mechanism where methods performing similar tasks
but varying in arguments can be assigned the same name. This is known as polymorphism, which
allows a single interface to perform functions for different types. Depending upon how the
service is invoked, the respective portion of the code gets executed.
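A minimal C++ sketch, with hypothetical class names, ties these concepts together: Circle and Square encapsulate their data, inherit from Shape, and respond polymorphically to the same area() message.

```cpp
class Shape {
public:
    virtual ~Shape() = default;
    virtual double area() const = 0;  // one interface, many implementations
};

class Circle : public Shape {
    double radius;  // encapsulated: hidden from the outside world
public:
    explicit Circle(double r) : radius(r) {}
    double area() const override { return 3.14159265 * radius * radius; }
};

class Square : public Shape {
    double side;    // encapsulated
public:
    explicit Square(double s) : side(s) {}
    double area() const override { return side * side; }
};

// Message passing: the caller names the operation (area); the receiving
// object decides which code runs (polymorphism).
double totalArea(const Shape& a, const Shape& b) {
    return a.area() + b.area();
}
```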
Text-Based User Interface: This method relies primarily on the keyboard. A typical example of this is
UNIX.
Advantages
o More options and easier customization.
Disadvantages
o Relies heavily on recall rather than recognition.
Graphical User Interface (GUI): A GUI relies much more heavily on the mouse. A typical example of this
type of interface is any version of the Windows operating systems.
Characteristics and Descriptions
Windows: Multiple windows allow different information to be displayed simultaneously on the user's screen.
Advantages
o Less expert knowledge is required to use it.
o Easier to Navigate and can look through folders quickly in a guess and check manner.
o The user may switch quickly from one task to another and can interact with several different
applications.
Disadvantages
o Typically decreased options.
o Usually less customizable. Not easy to use one button for tons of different variations.
5. UI Design Principles
Structure: The design should organize the user interface purposefully, in meaningful and useful ways based on
precise, consistent models that are apparent and recognizable to users, putting related things together and
separating unrelated things, differentiating dissimilar things and making similar things resemble one
another. The structure principle is concerned with the overall user interface architecture.
Simplicity: The design should make simple, common tasks easy, communicating clearly and directly in
the user's language, and providing good shortcuts that are meaningfully related to longer procedures.
Visibility: The design should make all required options and materials for a given function visible without
distracting the user with extraneous or redundant data.
Feedback: The design should keep users informed of actions or interpretations, changes of state or
condition, and errors or exceptions that are relevant and of interest to the user, through clear, concise, and
unambiguous language familiar to users.
Tolerance: The design should be flexible and tolerant, decreasing the cost of errors and misuse by allowing
undoing and redoing while also preventing bugs wherever possible by tolerating varied inputs and
sequences and by interpreting all reasonable actions.
CPE 310 SOFTWARE ENGINEERING
Spring Semester-2021
Week 8: Coding
Coding is the process of transforming the design of a system into a computer language
format. This phase of software development is concerned with translating the design
specification into source code. It is necessary to write source code and internal
documentation so that conformance of the code to its specification can be easily verified.
Coding is done by coders or programmers, who may be different people from the designers.
The goal is not so much to reduce the effort and cost of the coding phase as to cut the cost of later
stages: the cost of testing and maintenance can be significantly reduced with efficient coding.
➢ Goals of Coding
To translate the design of the system into a computer language format: Coding is the process
of transforming the design of a system into a computer language format which can be executed
by a computer and which performs the tasks specified during the design
phase.
To reduce the cost of later phases: The cost of testing and maintenance can be significantly
reduced with efficient coding.
Making the program more readable: A program should be easy to read and understand. Having
readability and understandability as a clear objective of the coding
activity can itself help in producing more maintainable software.
For implementing our design in code, we require a high-level language. A
programming language should have the following characteristics:
Readability: A good high-level language will allow programs to be written in a way
that resembles a quite English-like description of the underlying functions. The coding may be done
in an essentially self-documenting way.
Generality: Most high-level languages allow the writing of a vast collection of programs, thus
relieving the programmer of the need to develop into an expert in many diverse languages.
Brevity: The language should have the ability to implement an algorithm with a small amount of
code. Programs expressed in high-level languages are often significantly shorter than their low-level
equivalents.
Error checking: A programmer is likely to make many errors in the development of a computer
program. Many high-level languages provide extensive error checking, both at compile-time and
run-time.
Cost: The ultimate cost of a programming language is a function of many of its characteristics.
Quick translation: It should permit quick translation.
Modularity: It is desirable that programs can be developed in the language as several separately
compiled modules, with an appropriate mechanism for ensuring self-consistency among these
modules.
Widely available: Language should be widely available, and it should be feasible to provide
translators for all the major machines and all the primary operating systems.
A coding standard lists several rules to be followed during coding, such as the way variables
are to be named, the way the code is to be laid out, error return conventions, etc.
➢ Coding Standards
General coding standards refer to how the developer writes code, so here we will discuss some
essential standards regardless of the programming language being used.
➢ Coding Guidelines
General coding guidelines provide the programmer with a set of best practices which can be
used to make programs easier to read and maintain. Most of the examples use the
C language syntax, but the guidelines can be applied to all languages.
The following are some representative coding guidelines recommended by many software
development organizations.
1. Line Length: It is considered a good practice to keep the length of source code lines at or
below 80 characters. Lines longer than this may not be visible properly on some terminals and
tools. Some printers will truncate lines longer than 80 columns.
2. Spacing: The appropriate use of spaces within a line of code can improve readability.
Example:
Bad:
cost=price+(price*sales_tax)
fprintf(stdout ,"The total cost is %5.2f\n",cost);
Good:
cost = price + (price * sales_tax);
fprintf(stdout, "The total cost is %5.2f\n", cost);
3. The code should be well-documented: As a rule of thumb, there should be at least one
comment line on average for every three source lines.
4. The length of any function should not exceed 10 source lines: A very lengthy function is
generally difficult to understand, as it probably carries out many different functions. For the
same reason, lengthy functions are likely to have a disproportionately larger number of bugs.
5. Do not use goto statements: Use of goto statements makes a program unstructured and very
tough to understand.
7. Error Messages: Error handling is an essential aspect of computer programming. This does
not only include adding the necessary logic to test for and handle errors but also involves
making error messages meaningful.
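A short C++ function that follows the guidelines above (under 10 source lines, spaced for readability, commented, and with no goto); the function and its names are illustrative, not taken from the notes.

```cpp
#include <cstdio>

// Computes and reports a total cost (about one comment per few lines,
// per guideline 3).
double totalCost(double price, double salesTax) {
    // Guideline 2: spaces around operators improve readability.
    double cost = price + (price * salesTax);

    // Guideline 7: status/error messages should be meaningful.
    std::printf("The total cost is %5.2f\n", cost);
    return cost;
}
```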
➢ Programming Style
Programming style refers to the technique used in writing the source code for a computer
program. Most programming styles are designed to help programmers quickly read and
understand the program as well as avoid making errors. (Older programming styles also
focused on conserving screen space.) A good coding style can overcome the many deficiencies
of a first programming language, while poor style can defeat the intent of an excellent language.
2. Naming: In a program, you are required to name modules, processes, variables, and so
on. Care should be taken that the naming style is not cryptic or unrepresentative.
3. Control Constructs: It is desirable that, as much as possible, single-entry, single-exit
constructs be used.
4. Information hiding: The information contained in data structures should be hidden from
the rest of the system where possible. Information hiding can decrease the coupling between
modules and make the system more maintainable.
5. Nesting: Deep nesting of loops and conditions greatly harms the static and dynamic behavior
of a program. It also makes the program logic difficult to understand, so it is desirable to
avoid deep nesting.
6. User-defined types: Make heavy use of user-defined data types like enum, class, structure,
and union. These data types make your program code easy to write and easy to understand.
7. Module size: The module size should be uniform. The size of the module should not be too
big or too small. If the module size is too large, it is not generally functionally cohesive. If the
module size is too small, it leads to unnecessary overheads.
9. Side-effects: When a module is invoked, it sometimes has a side effect of modifying the
program state. Such side-effects should be avoided wherever possible.
➢ Structured Programming
In structured programming, we sub-divide the whole program into small modules so that the
program becomes easy to understand.
The purpose of structured programming is to linearize control flow through a computer program
so that the execution sequence follows the sequence in which the code is written.
The dynamic structure of the program then resembles the static structure of the program. This
enhances the readability, testability, and modifiability of the program.
This linear flow of control can be managed by restricting the set of allowed program
constructs to single-entry, single-exit forms.
We use structured programming because it allows the programmer to understand the program
easily.
If the entry conditions are correct, but the exit conditions are wrong, the error must be in the
block. This is not true if the execution is allowed to jump into a block. The error might be
anywhere in the program. Debugging under these circumstances is much harder.
A sequence of blocks is correct if the exit conditions of each block match the entry conditions
of the following block. Execution enters each block at the block's entry point and leaves through
the block's exit point. The whole series can be regarded as a single block, with an entry point
and an exit point.
Rule 2 of Structured Programming: Two or more code blocks in the sequence are structured,
as shown in the figure.
o Structured Rule Three: Alternation
If-then-else is frequently called alternation (because there are alternative options). In structured
programming, each choice is a code block. If alternation is organized as in the flowchart at
right, then there is one entry point (at the top) and one exit point (at the bottom). The structure
should be coded so that if the entry conditions are fulfilled, then the exit conditions are satisfied
(just like a code block).
An example of an entry condition for an alternation structure is: register $8 contains a signed
integer. The exit condition may be: register $8 contains the absolute value of the signed number.
The branch structure is used to fulfill the exit condition.
Iteration (while-loop) is organized as at right. It also has one entry point and one exit point. The
entry point has conditions that must be satisfied, and the exit point has requirements that will
be fulfilled. There are no jumps into the form from external points of the code.
Rule 4 of Structured Programming: The iteration of a code block is structured, as shown in
the figure.
In flowcharting terms, any code block can be substituted into any of the structures. If there is
a portion of the flowchart that has a single entry point and a single exit point, it can be
summarized as a single code block.
Rule 5 of Structured Programming: A structure (of any size) that has a single entry point and
a single exit point is equivalent to a code block. For example, we are designing a program to go
through a list of signed integers calculating the absolute value of each one. We may (1) first
regard the program as one block, then (2) sketch in the iteration required, and finally (3) put in
the details of the loop body, as shown in the figure.
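The absolute-value example above can be sketched in C++, where every construct has a single entry at the top and a single exit at the bottom; this is a hedged illustration of the technique, not the notes' original flowchart.

```cpp
#include <vector>

// (1) the whole program is one block; (2) an iteration is sketched in;
// (3) the loop body is refined last. No jumps enter or leave any construct.
std::vector<int> absoluteValues(const std::vector<int>& input) {
    std::vector<int> result;
    for (int value : input) {        // Rule 4: structured iteration
        if (value < 0) {             // Rule 3: structured alternation
            result.push_back(-value);
        } else {
            result.push_back(value);
        }
    }
    return result;                   // single exit point
}
```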
The other control structures, such as case, do-until, do-while, and for, are not strictly needed. However,
they are sometimes convenient and are usually regarded as part of structured programming. In
assembly language, they add little convenience.
• Static Methods: This method is done without running the code. (IEEE Std 1028-
2008 Software Review)
IEEE Std 1028-2008: IEEE Standard for Software Reviews and Audits “A process or meeting
during which a software product, set of software products, or a software process is presented to
project personnel, managers, users, customers, user representatives, auditors or other interested
parties for examination, comment or approval.”
[Figure: a product or process undergoes human review, producing information about product or process quality.]
• Dynamic Methods: These methods are done by running the code, through tests such as:
✓ Unit Test
✓ Integration Test
✓ System Test / Functional Test / Qualification Test
✓ Acceptance Test
➢ What is Quality?
We can ensure quality using two approaches: the traditional mentality or the sophisticated mentality.
✓ All planned and systematic activities required to meet the defined requirements of a
product or service.
✓ The understanding of ensuring quality through the system that creates the product /
service.
1. The quality of the software depends largely on how we develop the software.
3. So, we have to put quality into the software product during the software
development stages.
4. Therefore, attempting to ensure quality only at the end of development is both difficult
and costly.
Review and test complement one another, and both are used in the Verification and Validation
process.
Verification: “Are we developing the product correctly?”
➢ Management and technical reviews are made according to the needs of the project.
➢ The status and the outputs of a process, and its effectiveness, are evaluated with review
activities.
➢ Review results are announced to all affected units.
➢ Corrective actions resulting from reviews are monitored until they are closed.
➢ Risks and problems are identified and recorded.
➢ Why Review?
• It ensures that the product or the process is systematically evaluated from different
perspectives.
• Improves project schedule and cost.
• It supports the test efficiency and reduces the cost.
• Return on investment is high.
• It is a kind of training method.
➢ Technical reviews
Technical reviews will be conducted to evaluate software products or services and provide
evidence of:
➢ Review Process
Roles
a. Review leader ("review leader")
b. Reviewer ("reviewer")
c. Registrar ("recorder")
d. Author ("author")
Steps
a. Planning ("planning")
b. Opening meeting ("kickoff meeting")
c. Individual review ("individual checking")
d. Collective review ("logging meeting")
e. Correction and follow-up ("rework and follow up")
Review Checklists
The ISO 9000 series of standards is based on the assumption that if a proper process is followed
for production, then good quality products are bound to follow automatically. The types of
industries to which the various ISO standards apply are as follows.
1. ISO 9001: This standard applies to the organizations engaged in design, development,
production, and servicing of goods. This is the standard that applies to most software
development organizations.
2. ISO 9002: This standard applies to those organizations which do not design products
but are only involved in production. Examples of this category include the
steel and car manufacturing industries, which buy product and plant designs from
external sources and are engaged only in manufacturing those products. Therefore,
ISO 9002 does not apply to software development organizations.
3. ISO 9003: This standard applies to organizations that are involved only in the
installation and testing of the products. For example, Gas companies.
➢ How to get ISO 9000 Certification?
An organization that decides to obtain ISO 9000 certification applies to an ISO registrar's office for
registration. The process consists of the following stages:
The Capability Maturity Model (CMM) is a procedure used to develop and refine an
organization's software development process.
The model defines a five-level evolutionary stage of increasingly organized and consistently
more mature processes.
CMM was developed and is promoted by the Software Engineering Institute (SEI), a research
and development center sponsored by the U.S. Department of Defense (DoD).
Methods of SEICMM
Capability Evaluation: Capability evaluation provides a way to assess the software process
capability of an organization. The results of capability evaluation indicate the likely contractor
performance if the contractor is awarded a work. Therefore, the results of the software process
capability assessment can be used to select a contractor.
Level 1: Initial
Ad hoc activities characterize a software development organization at this level. Very few or
no processes are defined and followed. Since software production processes are not defined,
different engineers follow their own processes, and as a result development efforts become chaotic.
Therefore, this is also called the chaotic level.
Level 2: Repeatable
At this level, the fundamental project management practices like tracking cost and schedule are
established. Size and cost estimation methods, like function point analysis, COCOMO, etc. are
used.
Level 3: Defined
At this level, the methods for both management and development activities are defined and
documented. There is a common organization-wide understanding of operations, roles, and
responsibilities. Although the processes are defined, the process and product qualities are not measured.
ISO 9000 aims at achieving this level.
Level 4: Managed
At this level, the focus is on software metrics. Two kinds of metrics are collected.
Product metrics measure the features of the product being developed, such as its size,
reliability, time complexity, understandability, etc.
Process metrics measure the effectiveness of the process being used, such as average defect
correction time, productivity, the average number of defects found per hour of inspection, the
average number of failures detected during testing per LOC, etc. The software process and
product quality are measured, and quantitative quality requirements for the product are met.
Various tools like Pareto charts, fishbone diagrams, etc. are used to measure the product and
process quality. The process metrics are used to analyze if a project performed satisfactorily.
Thus, the outcome of process measurements is used to evaluate project performance rather than
to improve the process.
Level 5: Optimizing
At this phase, process and product metrics are collected. Process and product measurement data
are evaluated for continuous process improvement.
Except for SEI CMM level 1, each maturity level is characterized by several Key Process Areas
(KPAs) that identify the areas an organization should focus on to improve its software process
to the next level. The focus of each level and the corresponding key process areas are shown in
the figure.
SEI CMM provides a series of key areas on which to focus to take an organization from one
level of maturity to the next. Thus, it provides a method for gradual quality improvement over
various stages. Each step has been carefully designed such that one step enhances the capability
already built up.
When it first appeared, software testing was carried out for debugging. Over time, it began to
be performed to verify that the software works correctly. The criteria for performing software
tests became clearer after the 1980s. Tests gradually started to be carried out
within the scope of the entire software development process. Today, in addition, tests are
carried out to prevent errors.
• Trial Tests
These are the pretests performed to understand whether the software is working properly. Trial
tests seek to answer the following questions:
Acceptance tests question whether the software is working correctly on the target system.
Answers to the following questions are sought.
5. How much is the performance of the program affected when the system is under
heavy load?
➢ Software Testing
Test scenarios are then applied individually. The results are collected and recorded; the records are
compared with expected values and conclusions are drawn.
When the results are obtained, software with an acceptable level of errors is delivered to the user.
➢ Test Methods
This test is done at the software interface level. These are the tests performed to check
whether the software performs its functions. Input values are provided to the software,
the application is treated as a closed box, and only the output is examined. If the
software gives the expected output, the test is successful.
These are the tests in which the entire internal structure of the software is tested.
It is the test of the program carried out by considering the modules' interfaces as black
boxes. For example, evaluating whether the components in the program's interface are
suitable for the expected result, according to different input-output values.
Design-based testing is not sufficient when the internal structure of the modules needs
to be tested. In this case, code-based testing is done. To test the internal structure of the
modules, the source code is searched for errors such as logic errors, coding errors, and spelling
errors.
2. Coding errors: for example, exceeding bounds while moving data in dynamic memory.
3. Flow path assumption: Assuming an input has a value between 1 and 10, but the input takes
the value 'a'.
Embedded system software tests need to be done with the hardware, as they control the
hardware. For example, testing a printer.
Since real-time systems are not easy to find, tests are carried out with the means at hand.
This means that if we cannot find the system itself, we use a similar one; if we cannot create the conditions,
we use a laboratory; and if we cannot carry out the testing stages, we make assumptions.
3. Security Systems
Security systems should work without interruption. The receivers, sensors, and other modules
located on the system must be tested to ensure they are active at all times and do not generate
false alarms.
The full version of packaged software should be released only after intense testing. First,
versions such as alpha and beta should be released.
Database systems are important because they provide facilities such as record keeping, querying,
and access to records. Failures can lead to irrecoverable losses. For this reason, these
systems must be tested with great precision before they are put into use.
1. Smart Compilers: They scan the source code, check types, and produce machine code.
The checking process can be strict or lenient. Strictly checking smart compilers generate more reliable
code.
2. Static Analyzers: By examining the source code, they find weak points in its
structure and give warnings.
3. Simulation Environments: They enable the software to run on a virtual system.
4. Test Software: Tools that provide data or event input to a software unit that needs to be
tested.
5. Environmental Simulators: Specifically, they are simulators that can enter and exit from
the system through simulation, in order to dynamically test embedded, dedicated and real-time
control systems under real operating conditions.
6. Display Software: These are tools that present measurement and calculation results
on charts. They provide an easy evaluation of the results.
➢ Test Strategies
The major test strategies in system development are the "V Model" and the "W Model". These test models are
formed by combining different tests that address different parts of the system. Those tests are
given below:
➢ Trial Tests
1. Unit Test
The smallest testable units of computer software are individual functions or modules. The unit test is the
testing of these units in isolation.
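As a sketch, a unit test exercises one function in isolation against known inputs; the clamp function and its checks here are hypothetical examples, not from the notes.

```cpp
#include <cassert>

// The unit under test: a single, small function.
int clamp(int value, int low, int high) {
    if (value < low)  return low;
    if (value > high) return high;
    return value;
}

// The unit test: checks the unit's behavior in isolation.
void testClamp() {
    assert(clamp(5, 0, 10) == 5);    // in range: unchanged
    assert(clamp(-2, 0, 10) == 0);   // below range: clamped to low
    assert(clamp(99, 0, 10) == 10);  // above range: clamped to high
}
```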
2. Integration Test
Integration testing brings together software units that work smoothly on their own to
check whether they run smoothly on the system as a whole. For example, unexpected results
are detected when the operation of one unit affects another unit.
a. Top-down Integration
First, the main control unit is tested, and then it is tested together with the units closest to it.
b. Bottom-up Integration
It starts by running and testing atomic units. Lower-level units are combined into clusters.
3. Proficiency Tests
After the software's errors are corrected, proficiency testing checks whether the software is adequate.
Tests against system requirements are based on the Software Requirements Specification document.
It usually consists of two stages.
a. Verification
It is the verification that the software performs all the functions it needs to perform properly.
For example, if the required output time of a procedure is given as 2 seconds, but that procedure calls 3
different subroutines and the run time of each subroutine is 1 second, the desired output is
produced in 3 seconds; this is an example of a situation that cannot pass the verification test.
b. Validation
Validation is the test of whether the results of the verification phase are really correct. The
prepared software is tested on real systems. For example, it may take 2 seconds for a module to
complete its function in theory, but this function may be completed in 2.5 seconds when integrated into the
system. This means that our 2-second run time available in theory is actually 2.5 seconds in the
real system.
Software that passes all tests is then used randomly and arbitrarily, in an attempt to
provoke an error. If no error occurs, this is good news.
4. System Test
System concept includes hardware as well as software. Therefore, when it comes to system
testing, the tests performed on computer-based systems for verification and integration
purposes should come to mind. For example, we can consider stages such as controlling
hardware connections, investigating software's possible interface problems, following the data
flow and preparing debugger test designs, preparing mechanisms to report potential errors.
5. Loading Test
The purpose of this test is to measure the data processing capacity by pushing the boundaries
of the system. It also allows us to calculate what can happen in case of overloading and to
prepare measures to control the situation beforehand. Generally, loading tests are performed for
systems with dense data flow. This test can be done in different ways, such as loading the
system with a high amount of data, forcing the memory and disk usage of the system, and
feeding all inputs with high-speed data.
6. Stretching Test
It is done to determine how software and hardware behave when abnormal conditions are
created. It can also be compared to the loading test. Examples of these abnormal
conditions are as follows:
1. When some of the system hardware crashes, can the software survive the rest?
3. To measure the response time of the system to sudden effects in case of overload.
7. Recovery Test
These are the tests carried out to ensure that the system is able to recover itself and return
to its last working condition after errors that occur in the software and hardware units.
Usually, this recovery is achieved in two different ways.
In the first way, a backup software unit runs continuously alongside the main software. When the
main software crashes, the backup software takes over, thus preventing the user from losing data.
The second way is to design fault-tolerant software. Namely, the software is designed as
separate modules. If any module crashes, the crashed module is restarted and the software
is restored to its former healthy state. Do you think this is possible?
8. Safety Test
Some computerized systems must perform their functions safely. In other words, a job is either carried
out or it is not; there is no middle way.
For example: if the password is correct, connect to the database; if the room temperature
exceeds 27 degrees, operate the alarm system. In such systems, testing how the behavior
of the system changes when a software or hardware defect occurs is a safety test.
9. Success Test
It is done to evaluate the performance of the system. For example: how long is the time between
data entry and output? What is the total capacity of information our system can process?
Answers to such questions are sought.
Performance tests are sometimes performed with stretching tests and the performance of the
system is measured in case of overloads.
➢ Acceptance Tests
These are tests that inform the designer about whether the system is acceptable to the customer.
1. Factory Acceptance Tests
This test is based on running the manufactured software with artificial data on defined test
equipment at the manufacturer's own facilities. It is also called the Factory Sufficiency Test
or Factory Acceptance Test. It is the first-line acceptance test applied, together with
production-line tests, to equipment that is to be put into mass production.
2. Site Acceptance Tests
These tests are also called campus tests. They are performed with real data, under site
conditions, on the hardware where the system will actually be used. Conditions checked include
whether the system is working, the electrical connections are correct, the software installed
on the system can control the system's peripherals, and the system can communicate with those
peripherals.
3. Trial Tests
In the field of application, tests are performed with real data during actual use, to try out
possible situations in real runs. Any extraordinary situation imaginable should be tried in
these tests.
If the software package is being prepared for a large number of users, official acceptance cannot
be made for each customer individually. The software is released under the trial process. These
processes are alpha and beta processes.
Alpha Test: The software developer presents the product to the user in a controlled environment.
The user uses the product and conveys their impressions to the developer.
Beta Test: The difference from the alpha test is that there is no obligation to use the product
in an environment controlled by the developer. For example, the customer takes the system,
integrates it into their own system and uses it there, then communicates with the developer and
shares their experiences.
➢ Acceptance Criteria
The criteria for system acceptance must be determined in advance, agreed upon and officially
documented. Test scenarios should cover both valid and invalid cases, and a Test Result Report
should be prepared. Errors found are typically classified as follows:
Cosmetic Errors: Errors in colors, shapes, fonts, abbreviations and alignments in the user
interface.
Minor Errors: Errors that do not affect the system and are easy to fix.
Major Errors: Serious errors that may require some part of the development process to be redone.
Fatal Errors: Errors that cause the system to malfunction. Important functions cannot be
performed.
➢ Test Management
Test management is important for large projects. There should be a test manager in the project.
A test group should be created from relevant and enthusiastic people who will assist this
manager. Task distribution should be made among these people. A system test plan should be
prepared, stating when, how and in what order. Customers should also participate in the
acceptance tests of the system, monitor and declare their thoughts. At the end of the test, the
results are announced. The product quality is evaluated by looking at the results. Cost, quality
and necessary improvement options are discussed. If there is a necessary improvement, task
distribution is made. Improvements are made or it is declared to the customer that it will be
made in the next version and the product is delivered to the customer.
CPE 310 SOFTWARE ENGINEERING
Spring Semester-2021
Software maintenance is not periodic but occurs according to the developing conditions. For
example, adapting a previously prepared software to new computer architectures and operating
systems due to changing technology is an important maintenance task.
• Types of Maintenance
Maintenance work covers not only the correction of errors but also the various types of work
that need to be done after delivery of the product.
o Corrective Maintenance
Software testing does not always ensure that all defects are found and removed; there is always
a possibility of errors in running software. Some software defects only appear during use, and
the developer is informed so they can be eliminated. Work to investigate and eliminate the cause
of such a defect is called corrective maintenance. Defects found and reported during use are
either corrected immediately or, depending on the importance of the error, several corrections
are collected together and released as a new version. This maintenance is an activity that the
developer must definitely carry out during the maintenance phase.
o Adaptive Maintenance
Adapting the software to new hardware and operating systems, and upgrading and updating its
functions in response to the rapid changes and technological developments in the world of
information processing, is called adaptive maintenance.
Businesses may need to make changes in the way things are run and the methods used over
time. They may want to update the software.
Looking at developments in recent years, we see that the average life of a software product is
about 10 years, while hardware typically remains current for only 1-2 years. In this case,
either the software continues to be used with old technology, or current technology is adopted
by keeping portability in the foreground. The main problems of staying with outdated technology
are the difficulty of supplying additional hardware and spare parts, and of adding new useful
software tools to the system.
o Perfective Maintenance
After the software has been developed, tested and successfully presented to the user, adding
new functions and making adjustments that increase the performance and efficiency of existing
ones are among the perfective maintenance tasks. In this way, new versions of the software are
created and made available to the user.
For example, if a software system with an average database access time of 5 seconds can reduce
the search time to 3 seconds using a new search algorithm, implementing this change is
perfective maintenance. Giving the software new functions according to the demands of new users
also falls within the scope of perfective maintenance.
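The search-time example can be illustrated by swapping in a faster algorithm. Both functions
below are generic textbook implementations, not taken from any particular system; the point is
that the perfective change keeps behaviour identical while improving performance.

```python
from bisect import bisect_left

def linear_search(items, target):
    """Old algorithm: scans every element (slower)."""
    for i, x in enumerate(items):
        if x == target:
            return i
    return -1

def binary_search(items, target):
    """New algorithm: requires sorted data, far fewer comparisons."""
    i = bisect_left(items, target)
    if i < len(items) and items[i] == target:
        return i
    return -1

data = list(range(1_000_000))
# Behaviour is unchanged -- only the response time improves:
assert linear_search(data, 999_999) == binary_search(data, 999_999)
```

Regression tests that compare old and new results like this give confidence that a perfective
change has not altered functionality.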
o Preventive Maintenance
Preliminary measures taken to increase the reliability of the software and to provide a better
basis for future changes fall within the scope of preventive maintenance. For example,
making the design of a module that needs frequent changes more flexible is a preventive
maintenance that makes subsequent changes easier. This type of maintenance may be covered
by the developer's long-term maintenance agreement.
• Maintenance Team
In general, software developers do not allocate a dedicated team for maintenance work.
However, a specific team needs to be identified to evaluate and prioritize software problems or
new requirements.
The structure of the maintenance team is important. In this structure, there is at least one
technical advisor who has knowledge about the previously developed system. This person is
someone who knows the components and technical features of the system. The user's
maintenance requests are forwarded to a maintenance supervisor. The maintenance supervisor
creates a change proposal with the help of the technical advisor who has been involved with
that project. This proposal is discussed in the Change Control Board. If the proposal is not
accepted, the user is informed of the situation. If the proposal is accepted, the board appoints
a responsible person for that change. The responsible person makes a work plan and assigns the
maintenance personnel. The maintenance personnel create a new version in coordination with
configuration management. After the new version has been tested, it is delivered to the user by
the delivery staff, together with training.
Instead of the maintenance supervisor, the developer's technical support unit, product manager,
or the manager of that project can collect the problems and new requests for the software
reported by the user.
In a small-scale development group, the Maintenance Supervisor and the Change Control Board
may be the same person. This task can also be done by the project manager or product manager.
In large projects, this board consists of managers and senior technical staff.
If responsibilities are distributed before the maintenance phase of the project and an
appropriate team structure is in place, the confusion that can arise when a maintenance request
actually occurs is prevented. Defining responsibilities in advance also prevents the problems
that arise when staff working on another development job must suddenly be pulled off it and
assigned to an emergency maintenance task.
➢ Maintenance Steps
When a maintenance job arises for software in the maintenance phase, a standard process should
be followed by the developer. The stages of this process are similar to those of a software
development process. The only difference is that the changes are applied to an existing set of
documents and code. The maintenance steps can be summarized as follows.
Requirement Analysis: At this stage, the problem or change is identified and classified.
Requirements for new arrangements and functions expected in the system are defined.
Accordingly, the existing Software Requirements Specification and Software Test
Identification document is updated.
Design: The current design is reviewed, new requests are added and the Software Design
Definition document is updated.
Implementation: The new design is reflected in the code and the necessary code change or
module development is made. If necessary, unit tests are made with the newly added tests and
made ready for integration. The software elements are integrated with each other and then
with the hardware.
Test: In addition to the newly added tests, the tests of the entire software are performed by
repeating certain tests that are systematically selected. Tests for this purpose are called
regression tests. Acceptance tests are conducted in front of the customer to prove the
reliability of the overall system.
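The regression idea can be sketched with Python's standard unittest module. The test case
below is a made-up example, not from any real system; what matters is that the same suite is
systematically re-run after every maintenance change.

```python
import unittest

class CheckoutTests(unittest.TestCase):
    """Existing tests that are re-run after each maintenance change
    (regression testing), alongside any newly added tests."""

    def test_total_is_sum_of_items(self):
        self.assertEqual(sum([3, 4]), 7)

    def test_empty_cart_total_is_zero(self):
        self.assertEqual(sum([]), 0)

# Re-run the whole suite after a change and inspect the result:
suite = unittest.defaultTestLoader.loadTestsFromTestCase(CheckoutTests)
outcome = unittest.TextTestRunner(verbosity=0).run(suite)
```

In practice the suite grows with each change, and a systematically selected subset is rerun as
the regression tests described above.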
• Reporting
Generally, requests for a change in the software are reported with a Change Proposal, and
problems are reported with a Software Trouble Report. The Change Proposal can be prepared by the
user to explain what kind of change is wanted in the software, or it can be prepared as a
recommendation by the developer's staff.
The Change Proposal prepared to meet the user's new request must contain at least the following
information:
• System or subsystem name, item name
• Definition of change
• The item, component or unit where the change will be made
• Other items, components or units that may be affected by the change made
• Estimated workforce to spend on making changes
• The priority level of the request
• Number of the request (to be able to track)
• Other explanatory information
• Signatures and dates of the authorities reviewing the proposal
• Decision (created at the end of the review)
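A Change Proposal record covering the minimum fields listed above can be sketched as a
dataclass. The field names here are my own, not a standard template.

```python
from dataclasses import dataclass, field

@dataclass
class ChangeProposal:
    """One record per change request, mirroring the fields listed above."""
    request_number: int                 # number of the request, for tracking
    system_name: str                    # system or subsystem name
    definition: str                     # definition of the change
    target_item: str                    # item/component/unit to be changed
    affected_items: list = field(default_factory=list)  # may be affected
    estimated_effort_days: float = 0.0  # estimated workforce
    priority: str = "normal"            # priority level of the request
    decision: str = ""                  # created at the end of the review

cp = ChangeProposal(41, "Billing", "Add a VAT field", "invoice module")
```

Stored in a tracking system, such records are exactly what the Change Control Board reviews
and fills the decision field of.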
The Software Problem Report used to report any software defect should contain the following
information:
Sometimes these two documents are combined and a single template is used in the form of a
table in which blanks will be filled. These documents are filled in on a computer or manually.
➢ Ease of Maintenance
One of the important features of a qualified software is its ease of maintenance. We can define
this feature as the ease of understanding, correcting, improving and adapting the software.
Technically, it is possible to carry out any change request. However, the important thing is to
do this at the lowest cost, in the shortest time, correctly and without degrading the software's
qualities. It should not be forgotten that software which has changed once is likely to change
again in the future. Therefore, the various factors affecting maintenance should be taken into
account, maintenance work should be carried out within a quality assurance framework, and
quantitative values should be collected for future use.
• Control Factors
The importance of software maintenance is better understood by both the developer and the
customer in the long run. A good understanding of this importance also depends on several
factors that control software maintenance. It is possible to group them as follows.
➢ Quality of Maintenance
In order for a software project to have a qualified maintenance phase, the following points
should be given importance at the beginning of the work:
o Quantitative Measurements
Software attributes such as maintainability, quality and reliability are very difficult to
measure. However, it is possible to make some quantitative measurements based on the
characteristics of the maintenance procedures. The important and common ones are:
Almost all of the above can be recorded without difficulty during maintenance work. The recorded
data give managers information about the effectiveness of the methods and tools they use, and
support cost estimation.
In order for software maintenance works to be evaluated, these data must be collected and
recorded. In particular, this information should be collected in every maintenance work and
stored according to the program or project identifier (name or code number) so that the
maintenance phases of software intended for long-term use can continue in a healthy manner.
Personnel information and experiences gained at the end of the maintenance work should also
be among the things to be recorded.
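Such record-keeping per project identifier can be sketched as a small log. The field names and
values below are invented for illustration; any structured store (CSV, spreadsheet, database)
would serve.

```python
import csv
import io

# Hypothetical maintenance log: quantitative values recorded per
# project identifier, as the text recommends.
rows = [
    {"project": "CPE310-demo", "change_no": 1, "effort_h": 6, "defects": 2},
    {"project": "CPE310-demo", "change_no": 2, "effort_h": 3, "defects": 0},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=list(rows[0]))
writer.writeheader()
writer.writerows(rows)

# Later, the stored data supports cost estimation:
total_effort_hours = sum(r["effort_h"] for r in rows)
```

Keyed by project name or code number, such records let the maintenance phases of long-lived
software continue in a healthy manner.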
➢ Maintenance Issues
Knowing the problems encountered during maintenance helps in being prepared for them. Generally,
the source of the problems is that the developed software lacks maintainability and was not
developed in a disciplined way. We can list these problems as follows:
• If too many versions of the software emerge, maintenance will be more difficult. It
is not even possible to maintain it if the necessary changes are not fully recorded.
• It is often impossible to keep track of the software development process in terms of
time and labor.
• It usually takes a lot of time to understand the code someone else wrote. If the
documentation and explanations in the code are insufficient, serious problems arise.
• Documentation may be inadequate, incomplete, inaccurate or absent. In this case, it
will only be necessary to read and understand the code.
• Staff continuity is a general problem that is likely to be encountered at all times.
o Rules for the Developer
Among the rules that developers aiming to produce high quality software should apply during
the maintenance phase are:
One problem that is difficult for groups tasked with software development to deal with is
maintaining older software that has no records or documentation. For various reasons, it may be
necessary to reuse code written ten or twenty years ago, adapt it to new environments or
continue making changes to it. However, those who developed this software may no longer be in
the group, or even in the company or organization. No particular methodology may have been
applied during development, and the documentation that should have been produced may have been
done very little or not at all. In such cases, the method for maintenance can be:
• Reverse Engineering
Reverse engineering in software follows the same idea as in other engineering fields: an
existing product is analyzed to recover its design. The product being studied may belong to
another developer, or it may be software previously produced by the same developer. In either
case the source code is a mystery, because no identifying documents are available. Software that
was well understood at the time of its development turns into complete chaos after years, when
none of its developers can be found. Reverse engineering applied in such cases is the process of
trying to re-describe the software at a higher level of abstraction than the source code. This
is, in effect, recovery of the design. With the help of various tools developed for this
purpose, the data, architectural and procedural design can be obtained from the software source
code. After that, the necessary maintenance can be done.
In reverse engineering, the aim is to take the source code as input and produce the full design
documentation as output. In practice, however, this is not always possible. The reason is that
the level of abstraction the necessary auxiliary tools can reach may or may not be acceptable.
Each tool works according to its own logic and converts the source code into a document;
afterwards, the result should be examined by software engineers and corrected manually where
necessary. A high level of abstraction gives maintenance engineers a better understanding of
the design, especially in large software.
Another area where reverse engineering is applied is recovering the design from running programs
for which no source code exists at all. The object code of a running program contains machine
code in binary form. Today there are tools that understand this machine code and turn it into a
readable programming language. However, the module, procedure and variable names they produce
may carry no meaning; they are only identifiers that make tracking easier. It is possible to
replace these identifiers later with more meaningful words.
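A small analogue of such tooling is Python's standard dis module, which recovers a readable
instruction listing from compiled bytecode. The function below is a made-up example; unlike
stripped machine code, Python bytecode still keeps the original names.

```python
import dis

def secret(x):
    """A compiled function whose 'source' we pretend to have lost."""
    return x * 2 + 1

# Recover a readable instruction listing from the compiled code object:
ops = [ins.opname for ins in dis.Bytecode(secret)]
```

Real decompilers for machine code work at a lower level, but the goal is the same: raising the
representation to a level of abstraction a maintainer can read.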
In both methods, reverse engineering should be complete. In other words, no part of the code
should be skipped, and the design should be fully covered.
Dr. Nesrin AYDIN ATASOY
Week 13: Automated Testing
Automated tests use software to perform tasks without the manual instruction of a tester.
In manual testing, the tester writes the code they want to execute or plans the software path
they want to verify. Automated tests take care of such tasks on the tester's behalf. Here is a
quick list of automated software and QA tools that QA analysts should know:
• Selenium
• Cucumber
• Katalon Studio
1. Selenium
Selenium is an open source, flexible library used to automate the testing of Web applications.
Selenium test scripts can be written in different programming languages such as Java, Python,
C# and many more. These test scripts can run on various browsers like Chrome, Safari, Firefox,
Opera and support various platforms like Windows, Mac OS, Linux, Solaris.
Selenium is mainly for automating web applications for testing purposes, but certainly not
limited to that. It allows you to open a browser of your choice and perform tasks like a human
would. For example:
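A minimal WebDriver sketch looks like the following. It assumes the `selenium` package and a
matching browser driver are installed, and the URL is just an example; the imports live inside
the function so the file can be read even where Selenium is absent.

```python
EXAMPLE_URL = "https://example.com"   # placeholder target page

def run_demo():
    """Open a browser, load a page, and read an element like a human would."""
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()              # open a real Chrome window
    try:
        driver.get(EXAMPLE_URL)              # navigate to the page
        heading = driver.find_element(By.TAG_NAME, "h1")
        print(heading.text)                  # read content from the page
    finally:
        driver.quit()                        # always close the browser

if __name__ == "__main__":
    run_demo()
```

The same `find_element` calls can click buttons, fill forms, and submit them, which is how
test scripts exercise a web application.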
It should be noted that data scraping with libraries like Selenium is against most sites' terms of
service. If you pull data too often or maliciously, your IP address may be banned from that web
page.
• Selenium IDE
• Support for many languages (Java, .NET, Python, Ruby, PHP, Perl)
• Because Selenium is open source, it works on many platforms (Windows, Linux, iOS) without
any problems.
• It is preferred over other test tools (UFT, QTP) thanks to its multi-language and
multi-platform support.
• Selenium WebDriver
Selenium WebDriver is a browser automation framework that accepts commands and sends them to a
browser. It is implemented via a browser-specific driver: it communicates directly with the
browser and controls it. Selenium WebDriver supports various programming languages such as
Java, C#, PHP, Perl and JavaScript.
For example, suppose you need to test whether the comment form at the bottom of a page works.
First we enter the site, and in the next step we scroll down to the part of the page where the
comment form is located. Plain Selenium cannot express the scroll operation directly; this is
where WebDriver's flexible structure comes into play. Using WebDriver elements makes our test
cases more workable and controllable.
• Selenium Grid
Selenium Grid is a tool used with Selenium RC. It is used to run tests on different machines in
parallel with different browsers.
Selenium Grid, which has been and continues to be developed by the Selenium project, runs tests
on different servers in parallel with different browsers. The main purpose is to see test
results on combinations of different operating systems, hardware and devices, to run test
processes in parallel in a distributed environment, and to obtain test results quickly.
Hub: A Selenium Grid contains exactly one Hub. This structure acts as a server, hosts many
processes, and routes the same test code to different environments.
Node: The structure consisting of one or more clients connected to the Hub is called a Node.
Tests are run by sending requests from many Nodes to a single Hub using Selenium Grid.
1.2. Web Applications Test Methods
• Functional Test
• Database Test
• Interface Test
• Usability Test
• Compatibility Test
• Performance Test
• Security Test
1. Functional Test
Outgoing links: Each link should be checked to verify that it opens or redirects correctly. In
this way, it can be ensured that there are no dead pages or invalid redirects.
Links on the same page: Links on the page should be checked one by one, including e-mail links.
2. Cucumber
Cucumber is a widely used tool for Behaviour Driven Development because it provides an
easily understandable testing script for system acceptance and automation testing.
BDD includes test case development in the form of simple English statements inside a feature
file, which is human-generated. Test case statements are based on the system's behavior and
more user-focused.
BDD is written in simple English language statements rather than a typical programming
language, which improves the communication between technical and non-technical teams and
stakeholders.
In other words, "Cucumber is a software tool used by the testers to develop test cases for the
testing of behavior of the software."
Cucumber tool plays a vital role in the development of acceptance test cases for automation
testing. It is mainly used to write acceptance tests for web applications as per the behavior of
their functionalities.
In the Cucumber testing, the test cases are written in a simple English text, which
anybody can understand without any technical knowledge. This simple English
text is called the Gherkin language.
We can use Cucumber along with Watir, Selenium, and Capybara, etc. It supports many other
languages like PHP, .NET, Python, Perl, etc.
The use of the application consists of two parts: Features and Glue Code. These can also easily
be converted into a different programming language. In addition, the following headings are
part of Cucumber terminology.
• Feature: We define a behavior in the Feature part (e.g. display the home page of the
application, make sure as a user that the page is loaded, I want to see the home page, etc.).
• Scenario: In the feature section, we create the scenario that fits our request.
• Given-When-Then: Given defines the precondition, When defines the event, and Then defines
the expected outcome of the event.
• And, But: used to chain additional conditions or outcomes onto the previous step.
Feature and Scenario statements for application use cases, written in almost plain text, can
easily be converted into runnable unit tests on Java and other platforms. No coding knowledge
is required to write or read scenarios and feature files, so business analysts and domain
experts can easily understand the features when determining the scope and limits of the tests.
So how does the Behavior driven development method work with Cucumber?
1. The expected behavior of the code to be tested is written in plain text.
2. Write several Step definitions that will perform the test.
3. The code to be tested is written.
4. The test is run.
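The four steps can be sketched with a toy Given-When-Then run, here in plain Python rather
than real Cucumber; the scenario text, step wiring and `create_account` function are all
invented for illustration.

```python
# 1. Expected behaviour written in (almost) plain text, Gherkin style:
SCENARIO = [
    ("Given", "the user is on the sign-up page"),
    ("When", "the user submits a valid email and password"),
    ("Then", "an account is created"),
]

state = {}

# 2. Step definitions that perform the test:
def run_step(keyword, text):
    if text == "the user is on the sign-up page":
        state["page"] = "sign-up"
    elif text == "the user submits a valid email and password":
        state["account"] = create_account("a@b.com", "secret123")
    elif text == "an account is created":
        assert state["account"]["email"] == "a@b.com"

# 3. The code under test:
def create_account(email, password):
    return {"email": email, "active": True}

# 4. The test is run:
for keyword, text in SCENARIO:
    run_step(keyword, text)
```

Real Cucumber keeps the scenario in a `.feature` file and matches steps with regular
expressions, but the division of labour is the same.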
In terms of test automation, Cucumber does not provide the opportunity to control the browser
from the screen, for such needs it is necessary to use tools such as selenium web driver.
Example:
Test scenarios start with Feature and continue with the name of the scenario to be tested on the
Scenario side. Then, the steps continue to be written using sub-headings and commands such
as Given, When, Then.
Here, as mentioned above, we describe a test application written in Ruby for a sign-up scenario
(Cucumber Feature and Scenario steps). First, we point the test at our application running
locally, then we give the links to be clicked during the test and the URLs of the pages to be
visited accordingly. Finally, the e-mail and password validation required for the sign-up
process is performed, the necessary information is entered into the form, and the scenario is
tested.
✓ BDD Benefits
• Planning
• Design
• Development
• Test
• Delivery
A problem with this method is communicating with the customer in the process from the Planning
phase to the Design phase. After all, you cannot make the client read code; that is why a BA
(Business Analyst) is needed in between.
With BDD there is a slight displacement in the process;
• Planning
• Design
• Test
• Development
• Delivery
The benefit of BDD under this reordering is that it becomes easier to create BDD scenarios from
the stories written by the BA (Business Analyst) and to build the relationship with customers
through these scenarios.
If we talk about the Agile method that we saw above, Agile includes the user in the process.
Agile, together with the user, adjusts all its organization and systems according to the customer.
“If you are doing business with Agile methodology and not using BDD for application testing,
you are contradicting yourself. ”
Of course, testing can cover not only the behavior described here, but also the operation of
all the units running in the background, both individually (unit testing) and together
(integration testing).
3. Katalon
Katalon Platform is an automation testing software tool developed by Katalon, Inc. The
software is built on top of the open-source automation frameworks Selenium, Appium with a
specialized IDE interface for web, API, mobile and desktop application testing. Its initial release
for internal use was in January 2015. Its first public release was in September 2016. In 2018,
the software acquired 9% of market penetration for UI test automation, according to The State
of Testing 2018 Report by SmartBear.
Katalon is recognized as a March 2019 and March 2020 Gartner Peer Insights Customers’
Choice for Software Test Automation.
You can test your Web, Mobile and Desktop applications (as of version 7.0), as well as your
backend services, within your test automation processes. Thus, you can manage your testing
processes in a hybrid way on a single platform. You can easily integrate the scripts you have
prepared into your CI/CD processes, automating your software quality processes.
✓ Katalon Features
✓ It is a Java-based application.
✓ Scripts prepared without writing an additional script can be run
separately or simultaneously in many browsers such as
Chrome, Firefox, Safari, Edge.
✓ Thanks to the Record&Play feature, the processes can be
prepared easily without having knowledge about script writing.
✓ With Slack integration, real-time feedback and communication
between team members can be provided.
✓ You can enable git integration for source control.
✓ You can start the application by running the run file without
any installation, you can start using it quickly with the
keywords in it. (https://katalon.com/download)
✓ Many of its features are free; newer versions have introduced paid features.
✓ It works with the Page Object Model (POM) design model, which
aims to improve test maintenance and eliminate code duplication.
✓ It uses the selenium library in the background for web automation
and the appium library for mobile automation.
✓ For test data, it provides a data file object that can query data from
external sources such as CSV file, excel file, relational database.
✓ Katalon Studio offers BDD testing capability with files with the
.feature extension.
✓ Katalon Studio uses Grid — TestOps Cloud to run tests entirely in the
cloud and automatically deliver results to Katalon Analytics. Katalon
Analytics is an artificial intelligence supported platform that provides
users with detailed dashboards and reports about test executions.
SOURCES
2. https://www.mobilhanem.com/web-uygulama-testleri-nelerdir/
3. https://teknoloji.org/selenium-kutuphanesi-nedir-nasil-kullanilir/
4. https://medium.com/@ilkebasalak/selenium-nedir-8c7d908c93e6
5. https://cucumber.io/
6. https://www.javatpoint.com/cucumber-testing
7. http://www.defnesarlioglu.com/cucumber-ile-behaviour-driven-development/
8. https://www.linkedin.com/pulse/cucumber-ile-behaviour-driven-development-bdd-
halil-bozan/?originalSubdomain=tr