
CPE 310 SOFTWARE ENGINEERING

Spring Semester-2021

Dr. Nesrin AYDIN ATASOY

CONTENTS

Aim Of Lesson
Software engineering is about imagination and creativity: the process of creating something
apparently tangible from nothing. Software engineering methods have not yet been completely
analyzed and systematized. Software engineering is about the creation of large pieces of
software that consist of thousands of lines of code and involve many person-months of human
effort.
Books, Videos and Online Course Documents
1. Software Engineering, Ian Sommerville, 9th edition, Addison-Wesley
2. Software Engineering For Students, Douglas Bell, 4th edition, Addison-Wesley
3. Object-Oriented Software Engineering, Timothy C. Lethbridge and Robert Laganière,
2nd edition, McGraw-Hill
Evaluations
1 Mid-term, 1 Final Exam and 1 Project
WEEK 1: DEFINITION AND IMPORTANCE of SOFTWARE ENGINEERING

➢ What is software?

❖ A computer program or set of programs that performs a defined function, with inputs and
outputs, running on hardware

❖ Configuration files used to set up these programs

❖ User documentation explaining how to use the software

❖ System documentation describing the structure of the software

❖ All methods, tools, information and documents that can be used to combine and manage
logic, data, documents, human and program components for a specific production
purpose [1, 2].


Software can be examined in two categories: general software and customer-specific software [3].

✓ General Software: software whose features are determined according to the general
need in the market. Ex: MS Office, Photoshop, AutoCAD.

✓ Customer-Specific Software: software whose features are determined according to the
needs of a particular customer. Ex: Pharmacy Stock Program, University Automation
Program.
➢ Hardware (Manufacturing)- Software (Development) Comparison

❖ Software is engineered, not manufactured.


❖ Once a hardware product has been manufactured, it is difficult or impossible to modify.
In contrast, software products are routinely modified and upgraded.
❖ In hardware, hiring more people allows you to accomplish more work, but the same
does not necessarily hold true in software engineering.
❖ Unlike hardware, software costs are concentrated in design rather than production.

➢ Software Deteriorates

❖ Software does not wear out, but it does deteriorate due to changes


❖ Most software models a part of reality and reality evolves.
❖ If software does not evolve with the reality that is being modeled, then it deteriorates

➢ Where is the Software?


❖ In computer systems

Operating systems (e.g., Windows, Linux)

End-user programs (e.g., Photoshop, Dreamweaver)

Compilers (e.g., javac, Pascal, gcc)

❖ Aircraft, Space Shuttles (e.g., F-16, Discovery Space Shuttle)


❖ Cellular Phones (e.g., iOS, Android, etc.)
❖ Education (e.g., Distance Learning)
❖ Entertainment, Transportation
❖ Health systems, Military
❖ And many more….
➢ Granularity of Software

Trivial: 1 month, 1 programmer, 500 LOC, Ex: Intro programming assignments

Very small: 3 months, 1 programmer, 2000 LOC, Ex: Course project

Small: 1 year, 3 programmers, 50K LOC, Ex: Mobile App

Medium: 3 years, 10s of programmers, 100K LOC, Ex: Optimizing compiler

Large: 5 years, 100s of programmers, 1M LOC, Ex: MS Word, Excel

Very large: 10 years, 1000s of programmers, 10M LOC, Ex: Air traffic control,
Telecommunications, space shuttle

➢ What type of software?

❖ Small single-developer projects can typically get by without Software


Engineering.
o Typically no deadlines, small budget (freeware), not safety-critical

❖ Software Engineering is especially required for


o Medium to large projects (50,000 lines of code and up)
o Multiple subsystems
o Teams of developers (often geographically dispersed)
o Safety-critical systems (software that can kill people...)
➢ What is Software Engineering?

The term software engineering first emerged in 1968 at the NATO Software Engineering
Conference in Germany. It emerged with the evolution of the computer science discipline [4].

Software Engineering has been described in many ways, some are as follows:

✓ Software engineering is the application of sound engineering principles to obtain,
economically, software that is reliable and works efficiently on real machines [5].

✓ Software engineering is an engineering science dealing with all situations related


to software production.

✓ Software engineering is a systematic, disciplined and measurable approach applied to
the development, operation and maintenance of software [6].

➢ Software Engineering is concerned with …


o Technical processes of software development
o Software project management
o Development of tools, methods and theories to support software production
o Getting results of the required quality within the schedule and budget
o Often involves making compromises
o Often adopt a systematic and organized approach
o Less formal development is particularly appropriate for the development of web-based
systems

➢ Software Engineering is important because

o Individuals and society rely on advanced software systems


o Produce reliable and trustworthy systems economically and quickly
o Cheaper in the long run to use software engineering methods and techniques for
software systems
➢ Fundamental activities being common to all software processes:

o Software specification: customers and engineers define software that is to be produced


and the constraints on its operation
o Software development: software is designed and programmed
o Software validation: software is checked to ensure that it is what the customer requires
o Software evolution: software is modified to reflect changing customer and market
requirements

➢ Software Engineer

A software engineer is the person who practices software engineering; this job cannot be done
without formal training. A software engineer is not just a coder, but the person who knows best
how to translate user requests for the computer. The work is largely concerned with people and
deals with the logical dimension of the software. Today, software engineering has become a
profession and has its own schools.

➢ Importance of Software Engineering

Computer software is now everywhere in our lives. For this reason, the goals of software
engineering are:

• To eliminate the complexity of software development,
• To produce correct, reliable and suitable products.

Errors in software production propagate, so the cost of correcting an error increases steadily in
the later stages. The main goal is to carry out production with the lowest cost and the highest
quality. Compared with the cost of the software, the hardware cost is often negligible. Table 1
shows error correction costs in software production [2].

Table 1. Error correction costs in software production.

Phase              Relative cost

Analysis              1
Design                5
Coding               10
Test                 25
Acceptance Test      50
Operating           100

➢ Software Quality Assurance

The objectives of software quality assurance activities can be summarized as follows [2]:

✓ Reducing software costs,


✓ Facilitating software production management,
✓ Eliminating documentation and standards problems.

Some software quality criteria are shown in Table 2.

Table 2. Sample software quality criteria.

Economy Completeness Reusability Effectiveness Integrity

Reliability Modularity Documentation Convenience Cleaning

Interchangeability Validity Generality Portability Maintainability

Essential attributes of good software:

➢ Maintainability

o Evolve to meet the changing needs of customers

o Software change is inevitable (see changing business environment)

➢ Dependability and security

o Includes reliability, security and safety

o Should not cause physical or economic damage in case of system failure

o Take special care for malicious users

➢ Efficiency

o Includes responsiveness, processing time, memory utilization

o Care about memory and processor cycles

➢ Acceptability
o Acceptable to the type of users for which it is designed

o Includes understandable, usable and compatible with other systems

➢ Software Perception by Individuals

Management, customers and analysts are the most important stakeholders in the perception of
software.

Management: according to an old belief, a good manager can manage any project. This is not
true for today's projects, because in a constantly evolving world a good manager must also be
familiar with the latest technology.

Customer: the person who directs the project to be developed and determines the qualifications
of the project in line with their own wishes. Before software development starts, the customer
has a great role in understanding and analyzing the subject thoroughly.

Analyst: should analyze the subject very well before coding begins and design it with all the
details; only then should coding begin.

➢ Participants and Roles

• Developing software requires collaboration of many people with different


backgrounds and interests.

• All the persons involved in the software project are called


participants (stakeholders)

• The set of responsibilities in the project of a system are defined


as roles.

• A role is associated with a set of tasks assigned to a participant.

• A participant is also called a stakeholder.

• The same participant can fulfill multiple roles.

➢ Why is software development difficult?

▪ Change (in requirements & technology)

o The “Entropy” of a software system increases with each change: Each implemented
change erodes the structure of the system which makes the next change even more
expensive (“Second Law of Software Dynamics”).
o As time goes on, the cost to implement a change will be too high, and the system
will then be unable to support its intended task. This is true of all systems,
independent of their application domain or technological base.

➢ The problem domain (also called application domain) is difficult


➢ The solution domain is difficult
➢ The development process is difficult to manage
➢ Software offers extreme flexibility

▪ Dealing with Complexity

1. Abstraction

We use Models to describe Software Systems:

1. Object model: What is the structure of the system?


2. Functional model: What are the functions of the system?
3. Dynamic model: How does the system react to external events?
4. System Model: Object model + functional model + dynamic model

Other models used to describe Software System Development:

1. Task Model:
o PERT Chart: What are the dependencies between tasks?
o Schedule: How can this be done within the time limit?
o Organization Chart: What are the roles in the project?
2. Issues Model:
o What are the open and closed issues?
o What constraints were imposed by the client?
o What resolutions were made?
2. Decomposition

➢ A technique used to master complexity (“divide and conquer”)


➢ Functional decomposition
➢ Object-oriented decomposition

3. Hierarchy

➢ We have abstraction and decomposition


o This leads us to chunks (classes, objects) which we view with object model
➢ Another way to deal with complexity is to provide simple relationships between the
chunks
➢ One of the most important relationships is hierarchy

➢ Software Classification

It is possible to classify the software into a number of classes according to development and
design.

✓ Ready commercial products

Produced to be sold to many different customers (Commercial Off The Shelf - COTS)

✓ Special Software

They are products prepared according to the needs of a single customer.

✓ System Software
It is the software that is loaded every time the computer is turned on and makes the computer
ready for use. The BIOS program on PCs performs this task. It is loaded into RAM when the
computer is started and remains in memory until the computer is turned off.

✓ Application Software

They are usually all programs outside of the system software. These are programs written to
solve a specific problem using appropriate data processing techniques. Unlike system and
support software, they are written for a single, specific application. For example: Microsoft
Excel, Microsoft Word, etc.

✓ Support Software

They are general-purpose computer programs that are not specific to any application and allow
certain operations to be performed, such as sorting, copying and formatting.

REFERENCES

1. Yazılım Mühendisliği Ders Notları; Yrd.Doç.Dr. Buket Doğan.


2. Yazılım Mühendisliği; Ali Arifoğlu, Ali Doğru.
3. BBS-651 Yazılım Mühendisliği Ders Notu; A. Tarhan,2010
4. Yazılım Mühendisliği Yöntemleri İleri Konular, Dr. Çağatay Çatal, 2012.
5. Naur and Randell, 1969.
6. IEEE, 1990
7. Yazılım Mühendisliği Temelleri; Dr. M. Erhan Sarıdoğan
8. Software Engineering Lecture Notes, Benjamin Sommer
9. Object- Oriented Software Engineering Using UML, Patterns, and Java , Bernd
Bruegge & Allen H. Dutoit
CPE 310 SOFTWARE ENGINEERING

Spring Semester-2023

Dr. Nesrin AYDIN ATASOY

Week 2: Software Life Cycles Process


Various process models and new technologies are studied in order to systematize software
development. A model is a guide to how the software development activity is carried out and
what the general order of development will be. The main goal of these models is to define the
engineering processes (workflows consisting of predetermined steps) that are recommended to
be followed throughout the software development life cycle for project success.

➢ Software Development Life Cycle


➢ It is defined as all stages that a software goes through, including the
production phase and the usage phase.
➢ Since software functions and needs are constantly changing and
developing, they are thought of in the form of a cycle.
➢ Software life cycles should not be considered as unidirectional, linear.

➢ Software Life Cycle Basic Steps


• Planning: It is the stage where personnel and equipment requirements are
determined, feasibility study is carried out and the project plan is created.
• Analysis: The stage in which system requirements and functions are detailed.
The existing works are examined, basic problems are revealed.
• Design: It is the stage in which the basic structure of the software system that
will respond to the specified requirements is formed.

o Logical design: the structure of the proposed system is described.

o Physical design: the components that make up the software and their details are
specified.

• Implementation: It is the stage where coding, testing and installation work is


done.
• Maintenance: It is the stage of troubleshooting and making new additions
after delivery.

Figure 1: Software lifecycle key steps.

➢ Software Life Cycle Data


Software development standards require the preparation of life cycle data. Some standards
prescribe the way and the format in which the data are prepared, while others only define the
content and leave the document format free.

Lifecycle data must be able to be updated, read, deleted, archived and, if necessary,
transferred with ownership rights. The main purpose of preparing a life cycle data is to:

Identifying and recording the information required throughout the life cycle of the software
product,

✓ Helping the usability of the software product,


✓ Defining and controlling lifecycle processes,
✓ Keeping a history of data change,

Life cycle data of projects are usually stored in documents; these documents are therefore very
important for healthy project and product maintenance, because documents are the organized
media in which information is collected. The characteristics that the information should have
are as follows:

➢ Non-contradiction
➢ Completeness
➢ Verifiability
➢ Consistency
➢ Interchangeability
➢ Traceability
➢ Exhibitability
➢ Privacy
➢ Preservation
➢ Sensitivity

➢ Software Process Models


With the development of software technologies, existing models and methodologies also
develop and new models emerge. The hardware and software technologies of the relevant
period and the needs of the sector played an important role in the emergence of the models.

Using appropriate software development models plays a very important role in developing
software that is more secure, accurate, understandable, testable and maintainable.

➢ What is Process?

✓ It is a chain of steps performed for a specific goal. [IEEE]


✓ A process is a series of steps involving activities, constraints, and resources that
produce intended output of some kind.
✓ A process involves a set of tools and techniques.

➢ What is the software process?

• Activities, methods, practices and transformations used to develop and


maintain the software and its related products.
• It is a set of activities aimed at software development and maintenance.
➢ What is the Software Process Model?
It is a simplified representation of a software process from a specific point of view. Sample
perspectives:

Workflow: how are the activities sequenced?

Data flow: how does the information flow?

Role/action: who does what?

➢ Reasons for Modeling a Process


➢ To form a common understanding

➢ To find inconsistencies, redundancies, omissions

➢ To find and evaluate appropriate activities for reaching process goals

➢ To tailor a general process for a particular situation in which it will be used

➢ Software Process Model Types


1. Waterfall Model
2. V Model
3. Prototype Model
4. Spiral model
5. RAD Model
6. Iterative Model
7. Incremental Model

Let's examine these methods:

1. Waterfall Model
The waterfall model is a classical model used in the system development life cycle to create a
system with a linear and sequential approach. It is termed a waterfall because the model
develops systematically from one phase to another in a downward fashion. This model is
divided into different phases and the output of one phase is used as the input of the next
phase. Every phase has to be completed before the next phase starts and there is no
overlapping of the phases.

The sequential phases described in the Waterfall model are:

Requirements

Analysis

Design

Coding

Testing

Deployment
Maintenance

Figure 2 : The different sequential phases of the classical waterfall model.

1. Requirement Gathering: All possible requirements are captured in product requirement


documents.

2. Analysis: Read the requirements and, based on the analysis, define the schemas, models and
business rules.

3. System Design: Based on the analysis, design the software architecture.

4. Implementation (Coding): Develop the software in small units with functional testing.

5. Integration and Testing: Integrate each unit developed in the previous phase and, after
integration, test the entire system for faults.

6. Deployment of System: Make the product live in the production environment after all
functional and non-functional testing is completed.

7. Maintenance: Fix issues and release new versions with patches as required.

➢ Advantages of the Model:


1. Easy to use, simple and understandable,

2. Easy to manage as each phase has specific outputs and review process,

3. Clearly-defined stages,

4. Works well for smaller projects where requirements are very clear,

5. Process and output of each phase are clearly mentioned in the document.

➢ Disadvantages:

1. It doesn't allow much reflection or revision. When the product is in the testing phase, it is very
difficult to go back and change something that was missed during the requirements analysis phase.

2. Risk and uncertainty are high.

3. Not advisable for complex and object-oriented projects.

4. Changing requirements can't be accommodated in any phase.

5. Because testing is done at a late phase, there is a chance that challenges and risks of earlier
phases are not identified in time.

2. V Model
The V-Model is also referred to as the Verification and Validation Model. In it, each phase of the
SDLC (Software Development Life Cycle) must be completed before the next phase starts. It
follows a sequential design process, the same as the waterfall model. Testing of the product is
planned in parallel with the corresponding stage of development.

SDLC: SDLC is Software Development Life Cycle. It is the sequence of activities carried out
by Developers to design and develop high-quality software.

STLC: STLC is Software Testing Life Cycle. It consists of a series of activities carried out by
Testers methodologically to test your software product.
Figure 3. V model process steps.

Verification: It involves static analysis methods (reviews) done without executing code. It is the
process of evaluating the product development process to find out whether the specified
requirements are met.

Validation: It involves dynamic analysis methods (functional and non-functional); testing is done
by executing code. Validation is the process of evaluating the software after the completion of
the development process to determine whether it meets the customer's expectations and
requirements.

The V-Model contains the verification phases on one side and the validation phases on the other.
The verification and validation branches are joined by the coding phase, forming a V shape; thus
it is known as the V-Model.

➢ The various phases of the verification side of the V-Model are:

Business requirement analysis: This is the first step, where the product requirements are
understood from the customer's point of view. This phase involves detailed communication to
understand the customer's expectations and exact requirements.

System Design: In this stage, system engineers analyze and interpret the business logic of the
proposed system by studying the user requirements document.

Architecture Design: The baseline in selecting the architecture is that it should realize all
requirements; it typically consists of the list of modules, the brief functionality of each module,
their interface relationships, dependencies, database tables, architecture diagrams, technology
details, etc. Integration test design is carried out in this phase.

Module Design: In the module design phase, the system is broken down into small modules.
The detailed design of the modules is specified, which is known as Low-Level Design.

Coding Phase: After designing, the coding phase starts. Based on the requirements, a suitable
programming language is decided. There are guidelines and standards for coding. Before being
checked into the repository, the final build is optimized for better performance, and the code
goes through many code reviews to check the performance.

➢ The various phases of the validation side of the V-Model are:

Unit Testing: In the V-Model, Unit Test Plans (UTPs) are developed during the module
design phase. These UTPs are executed to eliminate errors at code level or unit level. A unit is
the smallest entity which can independently exist, e.g., a program module. Unit testing
verifies that the smallest entity can function correctly when isolated from the rest of the codes/
units.

Integration Testing: Integration Test Plans are developed during the Architectural Design
Phase. These tests verify that groups created and tested independently can coexist and
communicate among themselves.

System Testing: System test plans are developed during the system design phase. Unlike unit
and integration test plans, system test plans are composed by the client's business team. System
testing ensures that the expectations from the developed application are met.

Acceptance Testing: Acceptance testing is related to the business requirement analysis part.
It includes testing the software product in the user environment. Acceptance tests reveal
compatibility problems with the other systems available within the user environment. They also
discover non-functional problems such as load and performance defects in the real user
environment.
➢ When to use V-Model?
• When the requirement is well defined and not ambiguous.
• The V-shaped model should be used for small to medium-sized projects where
requirements are clearly defined and fixed.
• The V-shaped model should be chosen when ample technical resources are available
with essential technical expertise.

➢ Advantage of V-Model:
• Easy to Understand.
• Testing Methods like planning, test designing happens well before coding.
• This saves a lot of time. Hence a higher chance of success over the waterfall model.
• Avoids the downward flow of the defects.
• Works well for small projects where requirements are easily understood.

➢ Disadvantage of V-Model:
• Very rigid and least flexible.
• Not good for complex projects.
• Software is developed during the implementation stage, so no early prototypes of the
software are produced.

3. Prototype Model
The prototype model requires that, before carrying out the development of the actual software,
a working prototype of the system should be built. A prototype is a toy implementation of the
system. A prototype usually turns out to be a very crude version of the actual system, possibly
exhibiting limited functional capabilities, low reliability, and inefficient performance compared
to the actual software.

In many instances, the client only has a general view of what is expected from the software
product. In such a scenario where there is an absence of detailed information regarding the
input to the system, the processing needs, and the output requirement, the prototyping model
may be employed.
Figure 4. Prototyping model activity steps.

➢ Steps of Prototype Model


• Requirement Gathering and Analysis
• Quick Decision
• Build a Prototype
• Assessment or User Evaluation
• Prototype Refinement
• Engineer Product
➢ Advantage of Prototype Model
• Reduces the risk of incorrect user requirements
• Good where requirements are changing or uncommitted
• Regular visible progress aids management
• Supports early product marketing
• Reduces maintenance cost
• Errors can be detected much earlier, as the system is built side by side with its evaluation

➢ Disadvantage of Prototype Model


• An unstable or badly implemented prototype often becomes the final product.
• Requires extensive customer collaboration
• Costs the customer money
• Needs a committed customer
• Difficult to finish if the customer withdraws
• May be too customer-specific, with no broad market
• Difficult to know how long the project will last.
• Easy to fall back into the code and fix without proper requirement analysis,
design, customer evaluation, and feedback.
• Prototyping tools are expensive.
• Special tools & techniques are required to build a prototype.
• It is a time-consuming process.

4. Spiral model
The spiral model, initially proposed by Boehm, is an evolutionary software process model that
couples the iterative feature of prototyping with the controlled and systematic aspects of the
linear sequential model. It implements the potential for rapid development of new versions of
the software. Using the spiral model, the software is developed in a series of incremental
releases. During the early iterations, the additional release may be a paper model or prototype.
During later iterations, more and more complete versions of the engineered system are
produced.
Figure 4. Spiral model.

➢ Each cycle in the spiral is divided into four parts:

Objective setting: Each cycle in the spiral starts with the identification of purpose for that
cycle, the various alternatives that are possible for achieving the targets, and the constraints
that exists.

Risk Assessment and reduction: The next phase in the cycle is to evaluate these various
alternatives based on the goals and constraints. The focus of evaluation in this stage is on the
risks perceived for the project.

Development and validation: The next phase is to develop strategies that resolve
uncertainties and risks. This process may include activities such as benchmarking, simulation,
and prototyping.

Planning: Finally, the next step is planned. The project is reviewed, and a choice is made
whether to continue with a further loop of the spiral. If it is decided to continue, plans are
drawn up for the next phase of the project.

The development phase depends on the remaining risks. For example, if performance or user-
interface risks are considered more significant than the program development risks, the next
phase may be an evolutionary development that includes developing a more detailed prototype
for resolving those risks.

The risk-driven feature of the spiral model allows it to accommodate any mixture of
specification-oriented, prototype-oriented, simulation-oriented, or other types of approach.
An essential element of the model is that each loop of the spiral is completed by a review
that covers all the products developed during that cycle, including plans for the next cycle.
The spiral model works for development as well as enhancement projects.

➢ When to use Spiral Model?

• When frequent releases are required.
• When the project is large
• When requirements are unclear and complex
• When changes may be required at any time
• Large and high-budget projects

➢ Advantages
• High amount of risk analysis
• Useful for large and mission-critical projects.

➢ Disadvantages
• Can be a costly model to use.
• Risk analysis requires highly specialized expertise
• Doesn't work well for smaller projects.

5. RAD (Rapid Application Development) Model


RAD is a linear sequential software development process model that emphasizes a concise
development cycle using a component-based construction approach. If the requirements are well
understood and described, and the project scope is constrained, the RAD process enables a
development team to create a fully functional system within a very short time period.

RAD is based on the concept that products can be developed faster and with higher quality through:

• Gathering requirements using workshops or focus groups


• Prototyping and early, reiterative user testing of designs
• The re-use of software components
• A rigidly paced schedule that refers design improvements to the next product version
• Less formality in reviews and other team communication

6. Iterative Model

In this model, you can start with some of the software specifications and develop the first
version of the software. After the first version, if there is a need to change the software, a
new version of the software is created with a new iteration. Every release of the iterative
model finishes in an exact and fixed period that is called an iteration.

The iterative model allows access to earlier phases, in which variations are made
accordingly. The final output of the project is renewed at the end of the Software Development
Life Cycle (SDLC) process.

The various phases of Iterative model are as follows:

1. Requirement gathering & analysis: In this phase, requirements are gathered from
customers and checked by an analyst to see whether they can be fulfilled. The analyst also
checks whether the needs can be achieved within budget. After this, the software team moves
to the next phase.

2. Design: In the design phase, the team designs the software using different diagrams, such as
data flow diagrams, activity diagrams, class diagrams, state transition diagrams, etc.

3. Implementation: In the implementation phase, the requirements are written in a coding
language and transformed into computer programs, which are called software.

4. Testing: After completing the coding phase, software testing starts using different test
methods. There are many test methods, but the most common are white box, black box, and
grey box test methods.

5. Deployment: After completing all the phases, software is deployed to its work
environment.

6. Review: In this phase, after product deployment, a review is performed to check the
behaviour and validity of the developed product. If any errors are found, the process starts
again from requirement gathering.

7. Maintenance: In the maintenance phase, after deployment of the software in the working
environment, there may be bugs, errors, or new updates required. Maintenance involves
debugging and adding new options.

➢ When to use the Iterative Model?


• When requirements are defined clearly and easy to understand.
• When the software application is large.
• When there is a requirement of changes in future.

➢ Advantage of Iterative Model


• Testing and debugging during a smaller iteration is easy.
• Parallel development can be planned.
• It easily adapts to the ever-changing needs of the project.
• Risks are identified and resolved during iteration.
• Limited time is spent on documentation and extra time on designing.

➢ Disadvantage of Iterative Model


• It is not suitable for smaller projects.
• More resources may be required.
• The design can be changed again and again because of imperfect requirements.
• Requirement changes can cause budget overruns.
• The project completion date is not confirmed because of changing requirements.

7. Incremental Model

The incremental model is a process of software development where requirements are divided
into multiple standalone modules of the software development cycle. In this model, each module
goes through the requirements, design, implementation and testing phases. Every subsequent
release of a module adds function to the previous release. The process continues until the
complete system is achieved.
➢ The various phases of incremental model are as follows:

1. Requirement analysis: In the first phase of the incremental model, the product analysis
experts identify the requirements, and the system's functional requirements are understood by
the requirement analysis team. This phase plays a crucial role in developing the software under
the incremental model.

2. Design & Development: In this phase of the incremental model of the SDLC, the design of
the system functionality and the development method are completed successfully. Whenever the
software adds new functionality, the incremental model goes through the design and
development phase again.

3. Testing: In the incremental model, the testing phase checks the performance of each
existing function as well as additional functionality. In the testing phase, the various methods
are used to test the behavior of each task.

4. Implementation: The implementation phase covers the coding of the development system.
It involves the final coding of the design produced in the design and development phase and
tests the functionality verified in the testing phase.

After completion of this phase, the working product is enhanced and upgraded incrementally
up to the final system product.
➢ When to use the Incremental Model?
• When the most important requirements are known up front.
• A project has a lengthy development schedule.
• When the software team is not very well skilled or trained.
• When the customer demands a quick release of the product.
• You can develop prioritized requirements first.

➢ Advantage of Incremental Model


• Errors are easy to recognize.
• Easier to test and debug
• More flexible.
• Simple to manage risk because it is handled during its iteration.
• The client gets important functionality early.

➢ Disadvantage of Incremental Model


• Need for good planning
• Total Cost is high.
• Well defined module interfaces are needed.
CPE 310 SOFTWARE ENGINEERING

Spring Semester-2021

Dr. Nesrin AYDIN ATASOY

CONTENTS

Week 3: Software Metrics


A software metric is a measure of software characteristics which are measurable or countable.
Software metrics are valuable for many reasons, including measuring software performance,
planning work items, measuring productivity, and many other uses.

Within the software development process there are many metrics, and they are all related.
Software metrics are similar to the four functions of management: planning, organization,
control, and improvement.

• Classification of Software Metrics


Software metrics can be classified into two types as follows:

1. Product Metrics: These are the measures of various characteristics of the software product.

The two important software characteristics are:

o Size and complexity of software.


o Quality and reliability of software.

These metrics can be computed for different stages of SDLC.


2. Process Metrics: These are the measures of various characteristics of the software
development process. For example, the efficiency of fault detection. They are used to measure
the characteristics of methods, techniques, and tools that are used for developing software.

➢ Types of Metrics
Internal metrics: Internal metrics are the metrics used for measuring properties that are viewed
to be of greater importance to a software developer. For example, Lines of Code (LOC)
measure.

External metrics: External metrics are the metrics used for measuring properties that are
viewed to be of greater importance to the user, e.g., portability, reliability, functionality,
usability, etc.

Hybrid metrics: Hybrid metrics are the metrics that combine product, process, and resource
metrics. For example, cost per FP where FP stands for Function Point Metric.
Project metrics: Project metrics are the metrics used by the project manager to check the
project's progress. Data from the past projects are used to collect various metrics, like time and
cost; these estimates are used as a base of new software. Note that as the project proceeds, the
project manager will check its progress from time-to-time and will compare the effort, cost, and
time with the original effort, cost and time. Also understand that these metrics are used to
decrease the development costs, time efforts and risks. The project quality can also be
improved. As quality improves, the number of errors and time, as well as cost required, is also
reduced.

➢ Advantage of Software Metrics

• Comparative study of various design methodologies of software systems.


• For analysis, comparison, and critical study of different programming languages with
respect to their characteristics.
• In comparing and evaluating the capabilities and productivity of people involved in
software development.
• In the preparation of software quality specifications.
• In the verification of compliance of software systems requirements and specifications.
• In making inference about the effort to be put in the design and development of the
software systems.
• In getting an idea about the complexity of the code.
• In deciding whether further division of a complex module should be done or not.
• In guiding resource manager for their proper utilization.
• In comparison and making design tradeoffs between software development and
maintenance cost.
• In providing feedback to software managers about the progress and quality during
various phases of the software development life cycle.
• In the allocation of testing resources for testing the code.

➢ Disadvantage of Software Metrics


• The application of software metrics is not always easy, and in some cases, it is difficult
and costly.
• The verification and justification of software metrics are based on historical/empirical
data whose validity is difficult to verify.
• These are useful for managing software products but not for evaluating the performance
of the technical staff.
• The definition and derivation of software metrics are usually based on assumptions that
are not standardized and may depend upon the tools available and the working environment.
• Most of the predictive models rely on estimates of certain variables which are often not
known precisely.
➢ Size Oriented Metrics
o LOC Metrics

It is one of the earliest and simplest metrics for calculating the size of a computer program.
It is generally used in calculating and comparing the productivity of programmers. These
metrics are derived by normalizing the quality and productivity measures by considering
the size of the product as a metric.

Following are the points regarding LOC measures:

1. In size-oriented metrics, LOC is considered to be the normalization value.


2. It is an older method that was developed when FORTRAN and COBOL
programming were very popular.
3. Productivity is defined as KLOC / EFFORT, where effort is measured in person-
months.
4. Size-oriented metrics depend on the programming language used.
5. As productivity depends on KLOC, assembly language code will appear to have higher
productivity.
6. The LOC measure requires a level of detail which may not be practically achievable.
7. The more expressive the programming language, the lower the apparent productivity.
8. The LOC method of measurement does not apply to projects that deal with visual (GUI-
based) programming. As already explained, Graphical User Interfaces (GUIs) basically use
forms, so the LOC metric is not applicable there.
9. It requires that all organizations use the same method for counting LOC. This is
necessary because some organizations count only executable statements, some include
comments, and some do not. Thus, a standard needs to be established.
10. These metrics are not universally accepted.

Based on the LOC/KLOC count of software, many other metrics can be computed:

1. Errors/KLOC.
2. $/ KLOC.
3. Defects/KLOC.
4. Pages of documentation/KLOC.
5. Errors/PM.
6. Productivity = KLOC/PM (effort is measured in person-months).
7. $/ Page of documentation.
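For illustration, a minimal Python sketch of how these LOC/KLOC-based ratios are computed; the
project figures below are made-up example values, not taken from any real project:

    # Hypothetical project figures, for illustration only.
    loc = 33_200              # delivered lines of code
    effort_pm = 12.0          # effort in person-months
    errors = 134              # errors found during development
    defects = 29              # defects reported after release
    cost = 168_000            # total cost in dollars
    doc_pages = 365           # pages of documentation

    kloc = loc / 1000
    print("Errors/KLOC          =", round(errors / kloc, 2))
    print("Defects/KLOC         =", round(defects / kloc, 2))
    print("$/KLOC               =", round(cost / kloc, 2))
    print("Doc pages/KLOC       =", round(doc_pages / kloc, 2))
    print("Productivity KLOC/PM =", round(kloc / effort_pm, 2))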

➢ Advantages of LOC

1. Simple to measure

➢ Disadvantage of LOC
1. It is defined on the code. For example, it cannot measure the size of the specification.
2. It characterizes only one specific view of size, namely length; it takes no account of
functionality or complexity.
3. Bad software design may cause an excessive number of lines of code.
4. It is language dependent
5. Users cannot easily understand it

o Functional Point (FP) Analysis

Allan J. Albrecht initially developed Function Point Analysis in 1979 at IBM, and it has been
further modified by the International Function Point Users Group (IFPUG). FPA is used to
estimate a software project, including its testing, in terms of the functionality or functional size
of the software product. Thus, function point analysis may also be used for test estimation of the
product. The functional size of the product is measured in function points, which are a standard
unit of measurement for a software application.

o Objectives of FPA
The basic and primary purpose of function point analysis is to measure and provide the
functional size of the software application to the client, customer, and stakeholders on request.
Further, it is used to measure software project development and maintenance consistently
throughout the project, irrespective of the tools and technologies used.

Following are the points regarding FPs:

1. The FP count of an application is found by counting the number and types of functions used
in the application. The various functions used in an application can be put into five types,
as shown in the table:

Measurement Parameter                      Examples

1. Number of External Inputs (EI)          Input screens and tables
2. Number of External Outputs (EO)         Output screens and reports
3. Number of External Inquiries (EQ)       Prompts and interrupts
4. Number of Internal Files (ILF)          Databases and directories
5. Number of External Interfaces (EIF)     Shared databases and shared routines

All these parameters are then individually assessed for complexity.


2. FP characterizes the complexity of the software system and hence can be used to depict the
project time and the manpower requirement.

3. The effort required to develop the project depends on what the software does.

4. FP is programming language independent.

5. The FP method is used for data processing systems and business systems such as information systems.

6. The five parameters mentioned above are also known as information domain
characteristics.

7. All the parameters mentioned above are assigned some weights that have been
experimentally determined and are shown in Table.

The functional complexities are multiplied with the corresponding weights against each
function, and the values are added up to determine the UFP (Unadjusted Function Point) of the
subsystem.
The weighting factor for each measurement parameter type is chosen as simple, average, or
complex.

Based on the FP measure of software, many other metrics can be computed:

1. Errors/FP
2. $/FP
3. Defects/FP
4. Pages of documentation/FP
5. Errors/PM
6. Productivity = FP/PM (effort is measured in person-months)
7. $/Page of Documentation
8. LOCs of an application can be estimated from FPs. That is, they are interconvertible. This
process is known as backfiring. For example, 1 FP is equal to about 100 lines of COBOL
code.

9. FP metrics is used mostly for measuring the size of Management Information System (MIS)
software.

10. But the function points obtained above are unadjusted function points (UFPs). The UFPs
of a subsystem are further adjusted by considering some more General System Characteristics
(GSCs). There is a set of 14 GSCs that need to be considered. The procedure for adjusting UFPs
is as follows:

a. The Degree of Influence (DI) of each of these 14 GSCs is assessed on a scale of 0 to 5: if a
particular GSC has no influence, its DI is taken as 0, and if it has a strong influence, its DI is 5.
b. The scores of all 14 GSCs are totaled to determine the Total Degree of Influence (TDI).
c. Then the Value Adjustment Factor (VAF) is computed from the TDI using the
formula: VAF = (TDI * 0.01) + 0.65

Remember that the value of VAF lies within 0.65 to 1.35 because

a. When TDI = 0, VAF = 0.65


b. When TDI = 70, VAF = 1.35
c. VAF is then multiplied with the UFP to get the final FP count: FP = VAF * UFP

Example: Compute the function point, productivity, documentation, cost per function for the
following data:

1. Number of user inputs = 24


2. Number of user outputs = 46
3. Number of inquiries = 8
4. Number of files = 4
5. Number of external interfaces = 2
6. Effort = 36.9 p-m
7. Technical documents = 265 pages
8. User documents = 122 pages
9. Cost = $7744/ month
Various processing complexity factors are: 4, 1, 0, 3, 3, 5, 4, 4, 3, 3, 2, 2, 4, 5.

Solution:

Measurement Parameter                    Count   Weighting factor   Result

1. Number of external inputs (EI)          24           4              96
2. Number of external outputs (EO)         46           4             184
3. Number of external inquiries (EQ)        8           6              48
4. Number of internal files (ILF)           4          10              40
5. Number of external interfaces (EIF)      2           5              10

Count total (UFP)                                                     378
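The remaining quantities asked for in the example can be completed from the formulas above
(VAF from the 14 processing complexity factors, FP = VAF * UFP, and the per-FP ratios). A minimal
Python sketch of this calculation, assuming the count total of 378 is the UFP, the 14 factors are
the GSC degrees of influence, and the total cost is the monthly cost multiplied by the effort:

    # FP example above: counts and weights from the solution table, GSCs from the given factors.
    counts  = {"EI": 24, "EO": 46, "EQ": 8, "ILF": 4, "EIF": 2}
    weights = {"EI": 4,  "EO": 4,  "EQ": 6, "ILF": 10, "EIF": 5}
    ufp = sum(counts[k] * weights[k] for k in counts)      # 378

    gsc = [4, 1, 0, 3, 3, 5, 4, 4, 3, 3, 2, 2, 4, 5]       # 14 degrees of influence
    tdi = sum(gsc)                                         # 43
    vaf = tdi * 0.01 + 0.65                                # 1.08
    fp  = ufp * vaf                                        # about 408.2

    effort_pm = 36.9                                       # person-months
    doc_pages = 265 + 122                                  # technical + user documents
    cost_per_month = 7744                                  # dollars per person-month (assumed)

    productivity  = fp / effort_pm                         # about 11.1 FP per person-month
    documentation = doc_pages / fp                         # about 0.95 pages per FP
    cost_per_fp   = cost_per_month * effort_pm / fp        # about 700 dollars per FP

    print(round(fp, 1), round(productivity, 2), round(documentation, 2), round(cost_per_fp))

With these assumptions the example works out to roughly FP = 408, productivity = 11.1 FP/PM,
documentation = 0.95 pages/FP, and cost = about $700/FP.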
➢ Differentiate between FP and LOC

FP                                LOC

1. FP is specification based.     1. LOC is analogy based.
2. FP is language independent.    2. LOC is language dependent.
3. FP is user-oriented.           3. LOC is design-oriented.
4. It is extendible to LOC.       4. It is convertible to FP (backfiring).


CPE 310 SOFTWARE ENGINEERING

Spring Semester-2021

Dr. Nesrin AYDIN ATASOY

Week 4: Planning

The first stage of the software development process is the planning stage. In order
to develop a successful project, the whole picture of the project must be seen.
This picture is produced as a result of the project planning phase. The project plan
components are as follows:
o Project Scope
o Project time-Work plan
o Project team structure
o Technical definitions of the proposed system, Special development
tools and environments
o Project standards, methods and methodologies
o Quality assurance plan
o Environmental management plan
o Resource management plan
o Education plan
o Test plan
o Maintenance plan

Things to be done at the project planning stage can be summarized as follows:


o Determination of project resources
o Project cost estimation
o Creating the project team structure
o Detailed project plan
o Project monitoring

The project plan, which is the main output of the planning phase, is a document
that will be used, reviewed and updated throughout the project. Therefore, the
Planning stage is different from other stages.
The resources to be considered when planning a software project are:

➢ Human Resources: It is determined who will take part in the project, for which
period and at which stages.
Project Manager               Hardware Team Leader
Software Team Leader          Hardware Engineer
Web Designer                  Network Specialist
System Designer               Software Support Staff
Programmer                    Hardware Support Staff
System Administrator          Instructor
Database Manager              Supervisor
Quality Assurance Manager     Call Center Staff

➢ Software Resources: Software resources are computer aided platforms


used to develop code.

➢ Hardware Resources: Nowadays, hardware systems are gradually moving to open
system architectures, and the brand loyalty of the past is disappearing. Hardware
resource planning is the determination of the environment: hosts, servers (web,
e-mail, database), user computers (PCs), local area network (LAN) infrastructure
and wide area network (WAN) infrastructure.

➢ Project Classes

o Discrete Projects: Small projects developed by experienced staff, such as a
human resources management system running on a LAN.

o Semi-Embedded Projects: Projects with both an information dimension and a
hardware deployment dimension.

o Embedded Projects: Projects aiming to drive equipment (e.g., software driving
an unmanned aircraft; hardware constraints are high).
➢ Project Costs

Cost management has an important place in the development process. Thanks to


cost management;

• Facilitating the information system development process,
• Preventing delays,
• Providing more effective resource use,
• Implementing the work schedule effectively,
• Pricing the product soundly,
• Ensuring that the product is finished on time and within the targeted budget limits.

✓ Cost Estimation Methods

Information such as the total duration of the project, the total cost of the project,
the total number of lines, the number and qualifications of staff and their working
time, and the cost of a person-month, collected after a project is finished (or mostly
finished), gives important input for the cost estimation of other projects. The most
commonly used cost estimation methods are shown in Table 1.

Table 1. The most commonly used cost estimation methods


These methods are classified according to the size of the project and the way in
which the methods are applied. The common features of these methods are as
follows:

✓ They take the main steps of the project as input,
✓ They take the environmental features of the project as input,
✓ They use linear or nonlinear equations,
✓ They produce outputs such as the number of lines, number of function points,
time, labor and monetary cost.

o Constructive Cost Model (COCOMO)

COCOMO is a cost estimation model that has received a lot of attention since it was
published by Boehm in 1981. It can be applied in three different model forms
depending on the level of detail to be used:

✓ Basic model
✓ Intermediate model
✓ Detailed model

All COCOMO models take an estimate of the number of lines of code as their basic input
and produce labor (effort) and time as output. Dividing the labor value by the time value
gives an approximate estimate of the number of people.

Figure 1. COCOMO models

All COCOMO models use nonlinear exponential formulas for the labor and time values.
The formulas can be seen in Figure 2; the COCOMO formulas vary for different project
types.
Figure 2. COCOMO model formulas.

✓ Basic model: It is used for fast estimation for small and medium-sized projects.
The formulas used are:

Discrete projects:

K = 2.4 x S^1.05   (Labor)
T = 2.5 x K^0.38   (Time)

S: number of lines of code in thousands (KLOC)

Semi-embedded projects:

K = 3.0 x S^1.12   (Labor)
T = 2.5 x K^0.35   (Time)

Embedded projects:

K = 3.6 x S^1.20   (Labor)
T = 2.5 x K^0.32   (Time)

Figure 3. COCOMO model formulas used in different projects.

Example: Calculate the time and labor costs of a 50,000-line semi-embedded


type project using the COCOMO basic model.
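A minimal Python sketch of this calculation, applying the semi-embedded basic-model formulas
above with S = 50 (50,000 lines = 50 KLOC) and estimating the team size as labor divided by time:

    # COCOMO basic model, semi-embedded project (formulas from Figure 3).
    S = 50.0                  # size in KLOC (50,000 lines)
    K = 3.0 * S ** 1.12       # labor in person-months, about 240
    T = 2.5 * K ** 0.35       # development time in months, about 17
    staff = K / T             # approximate average team size, about 14
    print(round(K, 1), round(T, 1), round(staff, 1))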

➢ Creating a Project Team Structure


In order for a software project to work effectively, the project team needs to be
determined well. Generally, all project management methodologies suggest a
project team structure. One of these, the PANDA project team structure is
basically created on the basis that each project unit operates directly under project
management. The main components are:

o Project Control Unit: Consists of the top executives who are responsible for the
project. Since high-level problems accumulate at this level, it is necessary to keep
top management's interest in the project constant and to keep them involved in the
project.

o Project Management Unit: It is the highest responsible unit for project


management. It consists of one or more managers according to the project
size. It is responsible to the Project Audit Unit and directly to the project
owner.

o Quality Management Unit: Checks and approves the compliance of the project
with its purpose throughout the production process. It reports only to the project
management unit and keeps an equal distance from all other units.

o Project Office: It is the unit responsible for all kinds of administrative


works (correspondence, personnel monitoring) related to the project.

o Technical Support Unit: Technical support unit such as hardware,


operating system, database.

o Software Production Coordination Unit: Consists of software production teams
(4-7 people; team size should not grow much beyond this). If there is more than
one software production team, a joint Software Support Team becomes responsible
for developing the shared software parts.

o Education Unit: This unit is responsible for any training related to the
project.

o Application Support Unit: The unit that provides instant support, for example
by phone.
CPE 310 SOFTWARE ENGINEERING

Spring Semester-2021

Dr. Nesrin AYDIN ATASOY

Week 5: Software Request Analysis


➢ What is Request?

Request: something that is required, desired or needed.

Request Definition: The requirements of a system are the definition of the services provided by
that system and the constraints on its operation.

IEEE 729: A condition or capability needed by the user to solve a problem or achieve a goal.

A requirement is not about how the system or its functions will be realized; it is about what the
system will do. Questions such as:

• which database,

• which tables,

• how much memory is used,

these are handled during the realization phase.

➢ What is Request Engineering?

The process of

• identifying,
• analyzing,
• documenting (certifying), and
• checking

all services and constraints is called request (requirements) engineering.

➢ Why are request important?

Mistakes arising from requirements are often noticed only in the late stages. Wrong information
is frequently caused by negligence and inconsistency. In such cases, correction costs will be high.

➢ Change of Request

Even if the analysis of the requirements is very good, changes may occur during the process.
The reasons for this are:

• Communication between the customer and the developer is not sufficient,
• Assumptions have been made in order to get through the analysis of the requests quickly,
• The customer does not know exactly what they want and changes ideas frequently,
• The developer's lack of experience,
• The emergence of new requests when moving on to detailed design.

It should not be forgotten that, no matter how much they change, the customer must be informed
of all requests and they must be documented (certified).

➢ Requirements Engineering Process

Requirements analysis, also called requirements engineering, is the process of determining user
expectations for a new or modified product. Requirements engineering is a major software engineering
action that begins during the communication activity and continues into the modeling activity. It must
be adapted to the needs of the process, the project, the product, and the people doing the work.
Requirements engineering builds a bridge to design and construction. It has four step process:

Feasibility Study: When the client approaches the organization to get the desired product developed, they come with a rough idea of the functions the software must perform and the features expected from it.

Referring to this information, the analysts do a detailed study of whether the desired system and its functionality are feasible to develop.

This feasibility study is focused on the goals of the organization. It analyzes whether the software product can be practically materialized in terms of implementation, the project's contribution to the organization, cost constraints, and the values and objectives of the organization. It also explores technical aspects of the project and product such as usability, maintainability, productivity and integration ability.

The output of this phase should be a feasibility study report that should contain adequate comments and
recommendations for management about whether or not the project should be undertaken.

Requirement Gathering: If the feasibility report is positive towards undertaking the project, the next phase starts with gathering requirements from the users. Analysts and engineers communicate with the client and end users to learn their ideas on what the software should provide and which features they want it to include.

Software Requirement Specification (SRS): SRS is a document created by system analyst after the
requirements are collected from various stakeholders.

SRS defines how the intended software will interact with hardware and external interfaces, the speed of operation, the response time of the system, the portability of the software across various platforms, maintainability, the speed of recovery after crashing, security, quality, limitations, etc.

The requirements received from the client are written in natural language. It is the responsibility of the system analyst to document the requirements in technical language so that they can be comprehended and used by the software development team.

SRS should come up with following features:

• User Requirements are expressed in natural language.


• Technical requirements are expressed in structured language, which is used inside the
organization.
• Design description should be written in Pseudo code.
• Format of Forms and GUI screen prints.
• Conditional and mathematical notations for DFDs etc.

Software Requirement Validation: After the requirement specifications are developed, the requirements mentioned in this document are validated. Users might ask for illegal or impractical solutions, or experts may interpret the requirements incorrectly. This results in a huge increase in cost if not nipped in the bud. Requirements can be checked against the following conditions:

• If they can be practically implemented


• If they are valid and as per functionality and domain of software
• If there are any ambiguities
• If they are complete
• If they can be demonstrated

➢ Requirement Elicitation Process

Requirement elicitation process can be depicted using the following diagram:

Requirements gathering: The developers discuss with the client and end users and know their
expectations from the software.

Organizing Requirements: The developers prioritize and arrange the requirements in order of
importance, urgency and convenience.

Negotiation & discussion: If requirements are ambiguous or there are conflicts between the requirements of various stakeholders, they are negotiated and discussed with the stakeholders. Requirements may then be prioritized and reasonably compromised.
Because the requirements come from various stakeholders, they are discussed for clarity and correctness in order to remove ambiguity and conflicts, and unrealistic requirements are compromised reasonably.

Documentation: All formal & informal, functional and non-functional requirements are documented
and made available for next phase processing.

➢ Requirement Elicitation Techniques

Requirements Elicitation is the process to find out the requirements for an intended software system
by communicating with client, end users, system users and others who have a stake in the software
system development.

There are various ways to discover requirements:

1. Interviews

Interviews are strong medium to collect requirements. Organization may conduct several types of
interviews such as:

• Structured (closed) interviews, where every piece of information to be gathered is decided in advance; they follow the pattern and matter of discussion firmly.
• Non-structured (open) interviews, where the information to be gathered is not decided in advance; they are more flexible and less biased.
• Oral interviews
• Written interviews
• One-to-one interviews which are held between two persons across the table.
• Group interviews which are held between groups of participants. They help to uncover any
missing requirement as numerous people are involved.

2. Surveys

The organization may conduct surveys among various stakeholders by querying about their expectations and requirements from the upcoming system.

3. Questionnaires

A document with a pre-defined set of objective questions and respective options is handed over to all stakeholders to answer, and the answers are collected and compiled.
A shortcoming of this technique is that if an option for some issue is not mentioned in the questionnaire, the issue might be left unattended.

4. Task analysis

A team of engineers and developers may analyze the operation for which the new system is required. If the client already has some software that performs a certain operation, it is studied and the requirements of the proposed system are collected.

5. Domain Analysis

Every piece of software falls into some domain category. Experts in that domain can be a great help in analyzing general and specific requirements.

6. Brainstorming

An informal debate is held among various stakeholders and all their inputs are recorded for further
requirements analysis.

7. Prototyping

Prototyping is building a user interface without adding detailed functionality, so that the user can interpret the features of the intended software product. It helps give a better idea of the requirements. If there is no software installed at the client's end for the developer's reference and the client is not fully aware of its own requirements, the developer creates a prototype based on the initially mentioned requirements. The prototype is shown to the client and the feedback is noted. The client feedback serves as an input for requirement gathering.

8. Observation

A team of experts visits the client's organization or workplace. They observe the actual working of the existing installed systems, the workflow at the client's end, and how execution problems are dealt with. The team then draws conclusions which help to form the requirements expected from the software.

➢ Software Requirements Characteristics

Gathering software requirements is the foundation of the entire software development project. Hence
they must be clear, correct and well-defined.
A complete Software Requirement Specifications must be:

• Clear
• Correct
• Consistent
• Coherent
• Comprehensible
• Modifiable
• Verifiable
• Prioritized
• Unambiguous
• Traceable
• Credible source

➢ Software Requirements

We should try to understand what sort of requirements may arise in the requirement elicitation phase
and what kinds of requirements are expected from the software system.

Software requirements can be categorized into two categories:

1. Functional Requirements

Requirements, which are related to functional aspect of software fall into this category. They define
functions and functionality within and from the software system.

Examples:

• A search option is given to the user to search among various invoices.
• The user should be able to mail any report to management.
• Users can be divided into groups and groups can be given separate rights.
• The software should comply with business rules and administrative functions.
• The software is developed keeping downward compatibility intact.

2. Non-Functional Requirements

Requirements, which are not related to functional aspect of software, fall into this category. They are
implicit or expected characteristics of software, which users make assumption of. Non-functional
requirements include:

• Security
• Logging
• Storage
• Configuration
• Performance
• Cost
• Interoperability
• Flexibility
• Disaster recovery
• Accessibility

Requirements are categorized logically as:

Must Have: Software cannot be said operational without them.

Should have: Enhancing the functionality of software.

Could have: Software can still function properly without these requirements.

Wish list: These requirements do not map to any objectives of software.

While developing software, ‘Must have’ requirements must be implemented, ‘Should have’ requirements are a matter of debate and negotiation with stakeholders, whereas ‘Could have’ and ‘Wish list’ items can be kept for software updates.

➢ User Interface requirements

UI is an important part of any software or hardware or hybrid system. A software is widely accepted if
it is:

• easy to operate
• quick in response
• effectively handling operational errors
• providing simple yet consistent user interface

User acceptance largely depends on how users can use the software. The UI is the only way for users to perceive the system. A well-performing software system must also be equipped with an attractive, clear, consistent and responsive user interface; otherwise the functionalities of the software system cannot be used in a convenient way. A system is said to be good if it provides the means to use it efficiently. User interface requirements are briefly mentioned below:

• Content presentation
• Easy Navigation
• Simple interface
• Responsive
• Consistent UI elements
• Feedback mechanism
• Default settings
• Purposeful layout
• Strategic use of color and texture.
• Provide help information
• User centric approach
• Group based view settings

➢ Software System Analyst

A system analyst in an IT organization is a person who analyzes the requirements of the proposed system and ensures that the requirements are conceived and documented properly and correctly. The role of the analyst starts during the Software Analysis Phase of the SDLC. It is the responsibility of the analyst to make sure that the developed software meets the requirements of the client.

System Analysts have the following responsibilities:

• Analyzing and understanding requirements of intended software


• Understanding how the project will contribute in the organization objectives
• Identify sources of requirement
• Validation of requirement
• Develop and implement requirement management plan
• Documentation of business, technical, process and product requirements
• Coordination with clients to prioritize requirements and remove any ambiguity
• Finalizing acceptance criteria with client and other stakeholders

➢ Software Metrics and Measures

Software Measures can be understood as a process of quantifying and symbolizing various attributes
and aspects of software. Software Metrics provide measures for various aspects of software process and
software product.

Software measures are fundamental requirement of software engineering. They not only help to control
the software development process but also aid to keep quality of ultimate product excellent.

According to Tom DeMarco, a software engineer, “You cannot control what you cannot measure.” This saying makes it very clear how important software measures are.

Let us see some software metrics:

Size Metrics: LOC (Lines of Code), mostly calculated in thousands of delivered source code lines,
denoted as KLOC.
Function Point Count is a measure of the functionality provided by the software. The Function Point count defines the size of the functional aspect of the software.

Complexity Metrics: McCabe’s Cyclomatic complexity quantifies the upper bound of the number of
independent paths in a program, which is perceived as complexity of the program or its modules. It is
represented in terms of graph theory concepts by using control flow graph.
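As a brief illustration (this example is not from the course text), cyclomatic complexity can be computed as V(G) = E - N + 2P, where E is the number of edges and N the number of nodes of the control flow graph and P the number of connected components; for a single method it is equivalently the number of decision points plus one. The hypothetical Java method below has two decision points, so its cyclomatic complexity is 3, meaning three independent paths should be tested.

    // Hypothetical example: two decision points (if and else-if), so V(G) = 2 + 1 = 3.
    public class GradeClassifier {
        public static String classify(int score) {
            if (score >= 90) {        // decision point 1
                return "A";
            } else if (score >= 70) { // decision point 2
                return "B";
            }
            return "F";               // remaining path
        }

        public static void main(String[] args) {
            System.out.println(classify(95)); // A
            System.out.println(classify(75)); // B
            System.out.println(classify(40)); // F
        }
    }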

Quality Metrics: Defects, their types and causes, consequence, intensity of severity and their
implications define the quality of product.

The number of defects found in development process and number of defects reported by the client after
the product is installed or delivered at client-end, define quality of product.

Process Metrics: In various phases of SDLC, the methods and tools used, the company standards and
the performance of development are software process metrics.

Resource Metrics: The effort, time and various resources used represent the metrics for resource measurement.

➢ Requirements Modeling: Scenarios, Information, and Analysis Classes (Requirements Analysis)

o Requirements Modeling Approaches

Requirements analysis results in the specification of software’s operational characteristics, indicates


software’s interface with other system elements, and establishes constraints that software must meet.
Requirements analysis allows you to elaborate on basic requirements established during the inception,
elicitation, and negotiation tasks that are part of requirements engineering.

The requirements modeling action results in one or more of the following types of models:
• Scenario-based models of requirements from the point of view of various system “actors”
• Data models that depict the information domain for the problem
• Class-oriented models that represent object-oriented classes (attributes and operations) and the manner in which classes collaborate to achieve system requirements
• Flow-oriented models that represent the functional elements of the system and how they transform data as it moves through the system
• Behavioral models that depict how the software behaves as a consequence of external “events”.

The requirements model as a bridge between the system description and the design model:

The requirements model must achieve three primary objectives:

1. To describe what the customer requires,

2. to establish a basis for the creation of a software design, and

3. to define a set of requirements that can be validated once the software is built.

The analysis model bridges the gap between a system-level description that describes overall system or
business functionality as it is achieved by applying software, hardware, data, human, and other system
elements and a software design that describes the software’s application architecture, user interface, and
component-level structure.

1. Scenario-Based Modeling

Scenario-based elements depict how the user interacts with the system and the specific sequence of
activities that occur as the software is used.
A use case describes a specific usage scenario in straightforward language from the point of view of a defined actor. The following questions must be answered if use cases are to provide value as a requirements modeling tool:

(1) what to write about,
(2) how much to write about it,
(3) how detailed to make the description, and
(4) how to organize the description.

To begin developing a set of use cases, list the functions or activities performed by a specific actor.

❖ Writing a Formal Use Case

• The typical outline for formal use cases can be in the following manner:
• The goal in context identifies the overall scope of the use case.
• The precondition describes what is known to be true before the use case is initiated.
• The trigger identifies the event or condition that “gets the use case started”
• The scenario lists the specific actions that are required by the actor and the appropriate system
responses.
• Exceptions identify the situations uncovered as the preliminary use case is refined. Additional headings may or may not be included and are reasonably self-explanatory.
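As a short hypothetical illustration (not part of the original notes), a formal use case written against this outline for an ATM might look as follows:

Use case: Withdraw Cash
Goal in context: An account holder obtains cash from their account at an ATM.
Precondition: The account holder has a valid card and the ATM has cash available.
Trigger: The account holder inserts the card and selects "Withdraw".
Scenario: 1. The system asks for the PIN. 2. The account holder enters the PIN. 3. The system verifies the PIN and asks for the amount. 4. The account holder enters the amount. 5. The system dispenses the cash and updates the account balance.
Exceptions: The PIN is invalid; the requested amount exceeds the balance; the ATM is out of cash.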

Every modeling notation has limitations, and the use case is no exception. A use case focuses on functional and behavioral requirements and is generally inappropriate for nonfunctional requirements. However, scenario-based modeling is appropriate for a significant majority of all situations that you will encounter as a software engineer. A simple use case diagram is shown in the following figure.
2. Data Modeling Concepts

Data modeling is the process of documenting a complex software system design as an easily understood
diagram, using text and symbols to represent the way data needs to flow. The diagram can be used as a
blueprint for the construction of new software or for re-engineering a legacy application. The most
widely used data Model by the Software engineers is Entity Relationship Diagram (ERD), it addresses
the issues and represents all data objects that are entered, stored, transformed, and produced within an
application.

• Data Objects

A data object is a representation of composite information that must be understood by software. A data
object can be an external entity (e.g., anything that produces or consumes information), a thing (e.g.,
a report or a display), an occurrence (e.g., a telephone call) or event (e.g., an alarm), a role (e.g.,
salesperson), an organizational unit (e.g., accounting department), a place (e.g., a warehouse), or a
structure (e.g., a file).

For example, a person or a car can be viewed as a data object in the sense that either can be defined in
terms of a set of attributes. The description of the data object incorporates the data object and all of its
attributes.

A data object encapsulates data only; there is no reference within a data object to operations that act on the data. Therefore, the data object can be represented as a table, as shown in the following table. The headings in the table reflect the attributes of the object. A tabular representation of data objects is given in the following figure.
• Data Attributes

Data attributes define the properties of a data object and take on one of three different characteristics.
They can be used to (1) name an instance of the data object, (2) describe the instance, or (3) make
reference to another instance in another table.

• Relationships

Data objects are connected to one another in different ways. Consider the two data objects, person and car. These objects can be represented using a simple notation, and the relationships between them are: 1) a person owns a car, 2) a person is insured to drive a car. The relationships between the data objects are shown in the following figure.
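As a minimal sketch (assuming a Java-style object model; the attribute names are illustrative and not taken from the notes), the person and car data objects and the "owns" relationship could be expressed as two classes connected by a reference:

    import java.util.ArrayList;
    import java.util.List;

    // Data object "Car": attributes only, no business operations.
    class Car {
        String make;
        String model;
        String bodyType;
        String id;
    }

    // Data object "Person"; the list of cars realizes the "owns" relationship.
    class Person {
        String name;
        List<Car> ownedCars = new ArrayList<>();

        void owns(Car car) {
            ownedCars.add(car);
        }
    }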

3. Class-Based Modeling

Class-based modeling represents the objects that the system will manipulate, the operations that will be
applied to the objects to effect the manipulation, relationships between the objects, and the
collaborations that occur between the classes that are defined. The elements of a class-based model
include classes and objects, attributes, operations, class responsibility collaborator (CRC) models,
collaboration diagrams, and packages.

• Identifying Analysis Classes

We can begin to identify classes by examining the usage scenarios developed as part of the requirements
model and performing a “grammatical parse” on the use cases developed for the system to be built.

Analysis classes manifest themselves in one of the following ways:

• External entities (e.g., other systems, devices, people) that produce or consume information to
be used by a computer-based system.
• Things (e.g., reports, displays, letters, signals) that are part of the information domain for the
problem.
• Occurrences or events (e.g., a property transfer or the completion of a series of robot
movements) that occur within the context of system operation.
• Roles (e.g., manager, engineer, salesperson) played by people who interact with the system.
• Organizational units (e.g., division, group, team) that are relevant to an application.
• Places (e.g., manufacturing floor or loading dock) that establish the context of the problem and
the overall function of the system.
• Structures (e.g., sensors, four-wheeled vehicles, or computers) that define a class of objects or
related classes of objects.

Attributes describe a class that has been selected for inclusion in the requirements model. The attributes define the class and clarify what is meant by the class in the context of the problem space.

To develop a meaningful set of attributes for an analysis class, you should study each use case and select
those “things” that reasonably “belong” to the class.

Operations define the behavior of an object. Although many different types of operations exist, they
can generally be divided into four broad categories: (1) operations that manipulate data in some way
(e.g., adding, deleting, reformatting, selecting), (2) operations that perform a computation, (3) operations
that inquire about the state of an object, and (4) operations that monitor an object for the occurrence of
a controlling event. There is a Class diagram for the system class in the following figure.
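As a hedged Java sketch (the Sensor class and its members are hypothetical, not taken from the notes), an analysis class can be pictured as a set of attributes plus operations that fall into the four categories listed above:

    // Hypothetical analysis class with attributes and the four operation categories.
    class Sensor {
        private String name;          // attributes that describe the class
        private double lastReading;
        private boolean armed;

        void setName(String newName) {            // (1) manipulates data
            name = newName;
        }
        double averageOf(double a, double b) {    // (2) performs a computation
            return (a + b) / 2.0;
        }
        boolean isArmed() {                       // (3) inquires about the state of the object
            return armed;
        }
        void monitor(double reading) {            // (4) monitors for a controlling event
            lastReading = reading;
            if (reading > 100.0) {
                armed = true;
            }
        }
    }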
CPE 310 SOFTWARE ENGINEERING

Spring Semester-2021

Dr. Nesrin AYDIN ATASOY

Week 6: Modeling with UML


1. What is modeling?

➢ Modeling consists of building an abstraction of reality.


➢ Abstractions are simplifications because:
o They ignore irrelevant details and
o They only represent the relevant details.
➢ Focus only on relevant parts of the problem
➢ What is relevant or irrelevant depends on the purpose of the model.
➢ We need simpler representations for complex systems
o Modeling is a means for dealing with complexity

2. Why model software?

➢ Visualize software before its produced


➢ Code is not easily understandable by developers who did not write it
➢ Document the design decisions & Communicate Ideas
➢ Provide template for guiding the software production

Application Domain (Analysis): The environment in which the system is operating.

Solution Domain (Design, Implementation): The technologies used to build the system; the modeling space of all possible systems.

Both domains contain abstractions that we can use for the construction of the system model.
Example:

3. What is UML(Unified Modeling Language)?

The UML is a language for

➢ visualizing
➢ specifying
➢ constructing
➢ documenting the artifacts of a software-intensive system

General Definition: The UML offers a standard way to write a system's blueprints, including
conceptual things such as business processes and system functions as well as concrete things such as
programming language statements, database schemas, and reusable software components.

Commercial tools: Rational (IBM), Together (Borland), Visual Paradigm

Open Source tools: ArgoUML, UMLet, Umbrello


The figure below shows the UML diagram hierarchy and the positioning of the UML Use Case
Diagram. As you can see, use case diagrams belong to the family of behavioral diagrams.

4. The Origin of UML

The goal of UML is to provide a standard notation that can be used by all object-oriented methods

and to select and integrate the best elements of precursor notations. UML has been designed for a

broad range of applications. Hence, it provides constructs for a broad range of systems and activities

(e.g., distributed systems, analysis, system design and deployment).

UML is a notation that resulted from the unification of:

1. Object Modeling Technique (OMT) [James Rumbaugh 1991] - was best for analysis and data-intensive information systems.

2. Booch [Grady Booch 1994] - was excellent for design and implementation. Grady Booch
had worked extensively with the Ada language, and had been a major player in the
development of Object Oriented techniques for the language. Although the Booch method
was strong, the notation was less well received (lots of cloud shapes dominated his models -
not very tidy)

3. OOSE (Object-Oriented Software Engineering [Ivar Jacobson 1992]) - featured a model


known as Use Cases. Use Cases are a powerful technique for understanding the behaviour of
an entire system (an area where OO has traditionally been weak).
The first thing to notice about the UML is that there are a lot of different diagrams (models) to get

used to. The reason for this is that it is possible to look at a system from many different viewpoints.

A software development will have many stakeholders playing a part.

For Example:
• Analysts
• Designers
• Coders
• Testers
• QA
• The Customer
• Technical Authors

All of these people are interested in different aspects of the system, and each of them requires a different level of detail. For example, a coder needs to understand the design of the system and be able to convert the design to low-level code. By contrast, a technical writer is interested in the
behavior of the system as a whole, and needs to understand how the product functions. The UML
attempts to provide a language so expressive that all stakeholders can benefit from at least one UML
diagram.

4.1. What is a Class Diagram?

The class diagram is a central modeling technique that runs through nearly all object-oriented

methods. This diagram describes the types of objects in the system and various kinds of static

relationships which exist between them.


Relationships

There are three principal kinds of relationships which are important:


1. Association - represents relationships between instances of types (a person works for a company, a company has a number of offices).
2. Inheritance - the most obvious addition to ER diagrams for use in OO. It has an immediate
correspondence to inheritance in OO design.
3. Aggregation - a form of object composition in object-oriented design.

Class Diagram Example
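To connect the three relationship kinds above to code, here is a minimal hedged Java sketch (the Company, Office, Person and Employee names follow the association example in the list; the fields are assumptions): Person-Company is an association, Employee-Person is inheritance (generalization), and Company-Office is an aggregation.

    import java.util.ArrayList;
    import java.util.List;

    class Office {
        String address;
    }

    class Company {
        // Aggregation: a company has a number of offices.
        List<Office> offices = new ArrayList<>();
    }

    class Person {
        String name;
        // Association: a person works for a company.
        Company employer;
    }

    // Inheritance (generalization): an employee is a special kind of person.
    class Employee extends Person {
        double salary;
    }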

4.2. What is Component Diagram?

In the Unified Modeling Language, a component diagram depicts how components are wired together

to form larger components or software systems. It illustrates the architectures of the software

components and the dependencies between them. Those software components include run-time components and executable components, as well as the source code components.


Component Diagram Example

4.3. What is a Deployment Diagram?

The Deployment Diagram helps to model the physical aspect of an Object-Oriented software system.

It is a structure diagram which shows architecture of the system as deployment (distribution) of

software artifacts to deployment targets. Artifacts represent concrete elements in the physical world

that are the result of a development process. It models the run-time configuration in a static view and

visualizes the distribution of artifacts in an application. In most cases, it involves modeling the hardware configurations together with the software components that live on them.

Deployment Diagram Example

4.4. What is an Object Diagram?

An object diagram is a graph of instances, including objects and data values. A static object diagram

is an instance of a class diagram; it shows a snapshot of the detailed state of a system at a point in
time. The difference is that a class diagram represents an abstract model consisting of classes and their

relationships. However, an object diagram represents an instance at a particular moment, which is

concrete in nature. The use of object diagrams is fairly limited, namely to show examples of data

structure.
Class Diagram vs Object Diagram - An Example
Some people may find it difficult to understand the difference between a UML Class Diagram and a UML Object Diagram, as they both consist of named "rectangle blocks" with attributes in them and with linkages in between, which makes the two UML diagrams look similar. Some people may even think they are the same, because in the UML tool they use, the notations for both the Class Diagram and the Object Diagram are placed inside the same diagram editor - the Class Diagram editor.

But in fact, Class Diagram and Object Diagram represent two different aspects of a code base. In this

article, we will provide you with some ideas about these two UML diagrams, what they are, what are

their differences and when to use each of them.

Relationship between Class Diagram and Object Diagram


You create "classes" when you are programming. For example, in an online banking system you may

create classes like 'User', 'Account', 'Transaction', etc. In a classroom management system you may

create classes like 'Teacher', 'Student', 'Assignment', etc.

In each class, there are attributes and operations that represent the characteristic and behavior of the

class. Class Diagram is a UML diagram where you can visualize those classes, along with their

attributes, operations and the inter-relationship.

UML Object Diagram shows how object instances in your system are interacting with each other at a

particular state. It also represents the data values of those objects at that state. In other words, a UML

Object Diagram can be seen as a representation of how classes (drawn in UML Class Diagram) are

utilized at a particular state.

If you are not a fan of those definition stuff, take a look at the following UML diagram examples. I

believe that you will understand their differences in seconds.


Class Diagram Example
The following Class Diagram example represents two classes - User and Attachment. A user can upload multiple attachments, so the two classes are connected with an association, with 0..* as the multiplicity on the Attachment side.

Object Diagram Example

The following Object Diagram example shows how the object instances of the User and Attachment classes "look" at the moment Peter (i.e. the user) is trying to upload two attachments. So there are two Instance Specifications for the two attachment objects to be uploaded.
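A hedged Java sketch of the same example (class names follow the diagrams above; the field names and file names are assumptions made for illustration): the class declarations correspond to the Class Diagram view, while the instances created in main correspond to the Object Diagram snapshot of Peter uploading two attachments.

    import java.util.ArrayList;
    import java.util.List;

    class Attachment {
        String fileName;
        Attachment(String fileName) { this.fileName = fileName; }
    }

    class User {
        String name;
        // 0..* multiplicity on the Attachment side of the association.
        List<Attachment> attachments = new ArrayList<>();
        User(String name) { this.name = name; }
    }

    public class UploadSnapshot {
        public static void main(String[] args) {
            // Object-level view: the user Peter at the moment of uploading two attachments.
            User peter = new User("Peter");
            peter.attachments.add(new Attachment("report.pdf"));
            peter.attachments.add(new Attachment("photo.png"));
            System.out.println(peter.name + " uploads " + peter.attachments.size() + " attachments");
        }
    }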

4.5. What is a Package Diagram?

A package diagram is a UML structure diagram which shows packages and the dependencies between the packages. Model diagrams allow different views of a system to be shown, for example as a multi-layered (aka multi-tiered) application model.


Package Diagram Example
4.6. What is a Use Case Diagram?

A use-case model describes a system's functional requirements in terms of use cases. It is a model of

the system's intended functionality (use cases) and its environment (actors). Use cases enable you to

relate what you need from a system to how the system delivers on those needs.

Think of a use-case model as a menu, much like the menu you'd find in a restaurant. By looking at the

menu, you know what's available to you, the individual dishes as well as their prices. You also know

what kind of cuisine the restaurant serves: Italian, Mexican, Chinese, and so on. By looking at the

menu, you get an overall impression of the dining experience that awaits you in that restaurant. The

menu, in effect, "models" the restaurant's behavior.

Because it is a very powerful planning instrument, the use-case model is generally used in all phases

of the development cycle by all team members.

Use Case Diagram Example

4.7. What is an Activity Diagram?

Activity diagrams are graphical representations of workflows of stepwise activities and actions with

support for choice, iteration and concurrency. They describe the flow of control of the target system, such as exploring complex business rules and operations, and describing use cases and business processes. In the Unified Modeling Language, activity diagrams are intended to model both

computational and organizational processes (i.e. workflows).


Activity Diagram Example

4.8. What is a State Machine Diagram?

A state diagram is a type of diagram used in UML to describe the behavior of systems which is based

on the concept of state diagrams by David Harel. State diagrams depict the permitted states and

transitions as well as the events that effect these transitions. It helps to visualize the entire lifecycle of objects and thus helps to provide a better understanding of state-based systems.


State Machine Diagram Example

4.9. What is a Sequence Diagram?

The Sequence Diagram models the collaboration of objects based on a time sequence. It shows how

the objects interact with others in a particular scenario of a use case. With the advanced visual

modeling capability, you can create a complex sequence diagram in a few clicks. Besides, some modeling tools such as Visual Paradigm can generate a sequence diagram from the flow of events which you have defined in the use case description.


Sequence Diagram Example

4.10. What is Timing Diagram?

Timing Diagram shows the behavior of the object(s) in a given period of time. Timing diagram is a

special form of a sequence diagram. The difference between a timing diagram and a sequence diagram is that the axes are reversed, so that time increases from left to right, and the lifelines are shown in separate compartments arranged vertically.


Timing Diagram Example

5. UML Glossary and Terms


• Abstract Class - A class that will never be instantiated. An instance of this class will never
exist.
• Actor - An object or person that initiates events the system is involved with.
• Activity: A step or action within an Activity Diagram. Represents an action taken by the
system or by an Actor.
• Activity Diagram: A glorified flowchart that shows the steps and decisions and parallel
operations within a process, such as an algorithm or a business process.
• Aggregation - Is a part of another class. Shown with a hollow diamond next to the
containing class in diagrams.
• Artifacts - Documents describing the output of a step in the design process. The description
is graphic, textual, or some combination.
• Association - A connection between two elements of a Model. This might represent a
member variable in code, or the association between a personnel record and the person it
represents, or a relation between two categories of workers, or any similar relationship. By
default, both elements in an Association are equal, and are aware of each other through the
Association. An Association can also be a Navigable Association, meaning that the source
end of the association is aware of the target end, but not vice versa.
• Association Class: A Class that represents and adds information to the Association between
two other Classes.
• Attributes - Characteristics of an object which may be used to reference other objects or
save object state information.
• Base Class: A Class which defines Attributes and Operations that are inherited by a
Subclass via a Generalization relationship.
• Branch: A decision point in an Activity Diagram. Multiple Transitions emerge from the
Branch, each with a Guard Condition. When control reaches the Branch, exactly one Guard
Condition must be true; and control follows the corresponding Transition.
• Class: A category of similar Objects, all described by the same Attributes and Operations
and all assignment-compatible.
• Class Diagram - Shows the system classes and relationships between them.
• Classifier: A UML element that has Attributes and Operations. Specifically, Actors,
Classes, and Interfaces.
• Collaboration: A relation between two Objects in a Communication Diagram, indicating
that Messages can pass back and forth between the Objects.
• Communication Diagram - A diagram that shows how operations are done while
emphasizing the roles of objects.
• Component: A deployable unit of code within the system.
• Component Diagram: A diagram that shows relations between various Components and
Interfaces.
• Concept - A noun or abstract idea to be included in a domain model.
• Construction Phase - The third phase of the Rational Unified Process during which several
iterations of functionality are built into the system under construction. This is where the
main work is done.
• Dependence: A relationship that indicates one Classifier knows the Attributes and
Operations of another Classifier, but isn't directly connected to any instance of the second
Classifier.
• Deployment Diagram: A diagram that shows relations between various Processors.
• Domain -The part of the universe that the system is involved with.
• Elaboration Phase - The second phase of the Rational Unified Process that allows for
additional project planning including the iterations of the construction phase.
• Element: Any item that appears in a Model.
• Encapsulation - Data in objects is private.
• Generalization - Indicates that one class is a subclass of another class (superclass). A hollow arrow points to the superclass.
• Event: In a State Diagram, this represents a signal or event or input that causes the system
to take an action or switch States.
• Final State: In a State Diagram or an Activity Diagram, this indicates a point at which the
diagram completes.
• Fork: A point in an Activity Diagram where multiple parallel control threads begin.
• Generalization: An inheritance relationship, in which a Subclass inherits and adds to the
Attributes and Operations of a Base Class.
• GoF - Gang of Four set of design patterns.
• High Cohesion - A GRASP evaluative pattern which makes sure the class is not too complex and is not doing unrelated functions.
• Low Coupling - A GRASP evaluative pattern which measures how much one class relies on
another class or is connected to another class.
• Inception Phase - The first phase of the Rational Unified Process that deals with the
original conceptualization and beginning of the project.
• Inheritance - Subclasses inherit the attributes or characteristics of their parent (superclass) class. These attributes can be overridden in the subclass.
• Initial State: In a State Diagram or an Activity Diagram, this indicates the point at which
the diagram begins.
• Instance - A class is used like a template to create an object. This object is called an
instance of the class. Any number of instances of the class may be created.
• Interface: A Classifier that defines Attributes and Operations that form a contract for
behavior. A provider Class or Component may elect to Realize an Interface (i.e., implement
its Attributes and Operations). A client Class or Component may then Depend upon the
Interface and thus use the provider without any details of the true Class of the provider.
• Iteration - A mini project section during which some small piece of functionality is added
to the project. Includes the development loop of analysis, design and coding.
• Join: A point in an Activity Diagram where multiple parallel control threads synchronize
and rejoin.
• Member: An Attribute or an Operation within a Classifier.
• Merge: A point in an Activity Diagram where different control paths come together.
• Message - A request from one object to another asking the object receiving the message to
do something. This is basically a call to a method in the receiving object.
• Method - A function or procedure in an object.
• Model - The central UML artifact. Consists of various elements arranged in a hierarchy by
Packages, with relations between elements as well.
• Multiplicity - Shown in a domain model and indicated outside concept boxes, it indicates an object's quantity relationship to quantities of other objects.
• Navigability: Indicates which end of a relationship is aware of the other end. Relationships
can have bidirectional Navigability (each end is aware of the other) or single directional
Navigability (one end is aware of the other, but not vice versa).
• Notation - Graphical document with rules for creating analysis and design methods.
• Note: A text note added to a diagram to explain the diagram in more detail.
• Object: In an Activity Diagram, an object that receives information from Activities
or provides information to Activities. In a Collaboration Diagram or a Sequence Diagram,
an object that participates in the scenario depicted in the diagram. In general: one instance
or example of a given Classifier (Actor, Class, or Interface).
• Package - A group of UML elements that logically should be grouped together.
• Package Diagram: A Class Diagram in which all of the elements are Packages and
Dependencies.
• Pattern - Solutions used to determine responsibility assignment for objects to interact. It is
a name for a successful solution to a well-known common problem.
• Parameter: An argument to an Operation.
• Polymorphism - Same message, different method. Also used as a pattern.
• Private: A Visibility level applied to an Attribute or an Operation, indicating that only code
for the Classifier that contains the member can access the member.
• Processor: In a Deployment Diagram, this represents a computer or other programmable
device where code may be deployed.
• Protected: A Visibility level applied to an Attribute or an Operation, indicating that only
code for the Classifier that contains the member or for its Subclasses can access the
member.
• Public: A Visibility level applied to an Attribute or an Operation, indicating that any code
can access the member.
• Reading Direction Arrow - Indicates the direction of a relationship in a domain model.
• Realization: Indicates that a Component or a Class provides a given Interface.
• Role - Used in a domain model, it is an optional description about the role of an actor.
• Sequence Diagram: A diagram that shows the existence of Objects over time, and the Messages that pass between those Objects over time to carry out some behavior.
• State Chart Diagram: A diagram that shows all possible object states.
• State: In a State Diagram, this represents one state of a system or subsystem: what it is
doing at a point in time, as well as the values of its data.
• State Diagram: A diagram that shows States of a system or subsystem, Transitions between
States, and the Events that cause the Transitions.
• Static: A modifier to an Attribute to indicate that there's only one copy of the Attribute
shared among all instances of the Classifier. A modifier to an Operation to indicate that the
Operation stands on its own and doesn't operate on one specific instance of the Classifier.
• Stereotype: A modifier applied to a Model element indicating something about it which
can't normally be expressed in UML. In essence, Stereotypes allow you to define your own
"dialect" of UML.
• Subclass: A Class which inherits Attributes and Operations that are defined by a Base Class via a Generalization relationship.
• Swimlane: An element of an Activity Diagram that indicates what parts of a system or a
domain perform particular Activities. All Activities within a Swimlane are the responsibility
of the Object, Component, or Actor represented by the Swimlane.
• Time Boxing - Each iteration will have a time limit with specific goals.
• Transition: In an Activity Diagram, represents a flow of control from one Activity or
Branch or Merge or Fork or Join to another. In a State Diagram, represents a change from
one State to another.
• Transition Phase - The last phase of the Rational Unified Process during which users are
trained on using the new system and the system is made available to users.
• UML - Unified Modeling Language utilizes text and graphic documents to enhance the
analysis and design of software projects by allowing more cohesive relationships between
objects.
• Use Case: In a Use Case Diagram, represents an action that the system takes in response to
some request from an Actor.
• Use Case Diagram: A diagram that shows relations between Actors and Use Cases.
• Visibility: A modifier to an Attribute or Operation that indicates what code has access to the
member. Visibility levels include Public, Protected, and Private.
• Workflow - A set of activities that produces some specific result.
CPE 310 SOFTWARE ENGINEERING

Spring Semester-2021

Dr. Nesrin AYDIN ATASOY

Week 7: UML Use Case Diagrams

7.1 Purpose of Use Case Diagram


Use case diagrams are typically developed in the early stage of development
and people often apply use case modeling for the following purposes:

• Specify the context of a system

• Capture the requirements of a system

• Validate a system's architecture

• Drive implementation and generate test cases

• Developed by analysts together with domain experts

7.2 Use Case Diagram at a Glance


• A standard form of use case diagram is defined in the Unified Modeling
Language as shown in the Use Case Diagram example below:
Use case diagrams are considered for high level requirement analysis of a system.
When the requirements of a system are analyzed, the functionalities are captured
in use cases.

We can say that use cases are nothing but the system functionalities written in an
organized manner. The second thing which is relevant to use cases is the actors.
Actors can be defined as something that interacts with the system.

Actors can be a human user, some internal applications, or may be some external
applications. When we are planning to draw a use case diagram, we should have
the following items identified.

• Functionalities to be represented as use case

• Actors

• Relationships among the use cases and actors.

Use case diagrams are drawn to capture the functional requirements of a system.
After identifying the above items, we have to use the following guidelines to
draw an efficient use case diagram

• The name of a use case is very important. The name should be chosen in
such a way so that it can identify the functionalities performed.

• Give a suitable name for actors.

• Show relationships and dependencies clearly in the diagram.

• Do not try to include all types of relationships, as the main purpose of the
diagram is to identify the requirements.

• Use notes whenever required to clarify some important points.

Following is a sample use case diagram representing the order management


system. Hence, if we look into the diagram then we will find three use
cases (Order, SpecialOrder, and NormalOrder) and one actor which is the
customer.

The SpecialOrder and NormalOrder use cases are extended from the Order use case. Hence, they have an extend relationship with it. Another important point is to identify
the system boundary, which is shown in the picture. The actor Customer lies
outside the system as it is an external user of the system.

7.3 How to Identify Actor


Often, people find it easiest to start the requirements elicitation process by
identifying the actors. The following questions can help you identify the actors
of your system (Schneider and Winters - 1998):

• Who uses the system?

• Who installs the system?

• Who starts up the system?

• Who maintains the system?

• Who shuts down the system?


• What other systems use this system?

• Who gets information from this system?

• Who provides information to the system?

• Does anything happen automatically at a preset time?

7.4 How to Identify Use Cases?


Identifying the Use Cases, and then the scenario-based elicitation process carries
on by asking what externally visible, observable value that each actor desires.
The following questions can be asked to identify use cases, once your actors
have been identified (Schneider and Winters - 1998):

• What functions will the actor want from the system?

• Does the system store information? What actors will create, read, update
or delete this information?

• Does the system need to notify an actor about changes in the internal
state?

• Are there any external events the system must know about? What actor
informs the system of those events?
Notation Description Visual Representation

Actor

• Someone interacts with use case


(system function).

• Named by noun.

• Actor plays a role in the business

• Similar to the concept of user, but a


user can play different roles

• For example:

• A prof. can be instructor and


also researcher

• plays 2 roles with two


systems

• Actor triggers use case(s).

• Actor has a responsibility toward


the system (inputs), and Actor has
expectations from the system
(outputs).

Use Case

• System function (process -


automated or manual)

• Named by verb + Noun (or Noun


Phrase).

• i.e. Do something
• Each Actor must be linked to a use
case, while some use cases may not
be linked to actors.

Communication Link

• The participation of an actor in a


use case is shown by connecting an
actor to a use case by a solid link.

• Actors may be connected to use


cases by associations, indicating
that the actor and the use case
communicate with one another
using messages.
Boundary of system

• The system boundary is potentially


the entire system as defined in the
requirements document.

• For large and complex systems,


each module may be the system
boundary.

• For example, for an ERP system for


an organization, each of the
modules such as personnel, payroll,
accounting, etc.

• can form a system boundary for use


cases specific to each of these
business functions.

• The entire system can span all of


these modules depicting the overall
system boundary
7.5 Structuring Use Case Diagram with Relationships
Use cases share different kinds of relationships. Defining the relationship
between two use cases is the decision of the software analysts of the use case
diagram. A relationship between two use cases is basically modeling the
dependency between the two use cases. The reuse of an existing use case by
using different types of relationships reduces the overall effort required in
developing a system. Use case relationships are listed as the following:

Use Case Relationship Visual Representation

Extends

• Indicates that an "Invalid Password" use case may include (subject to the condition specified in the extension) the behavior specified by the base use case "Login Account".

• Depict with a
directed arrow
having a dotted line.
The tip of arrowhead
points to the base
use case and the
child use case is
connected at the
base of the arrow.

• The stereotype "<<extend>>" identifies the relationship as an extend relationship.

Include

• When a use case is


depicted as using the
functionality of
another use case, the
relationship between
the use cases is
named as include or
uses relationship.

• A use case includes


the functionality
described in another
use case as a part of
its business process
flow.

• A uses relationship
from base use case
to child use case
indicates that an
instance of the base
use case will include
the behavior as
specified in the child
use case.
• An include
relationship is
depicted with a
directed arrow
having a dotted line.
The tip of arrowhead
points to the child
use case and the
parent use case
connected at the
base of the arrow.

• The stereotype
"<<include>>"
identifies the
relationship as an
include relationship.

Generalization

• A generalization
relationship is a
parent-child
relationship between
use cases.

• The child use case is


an enhancement of
the parent use case.

• Generalization is
shown as a directed
arrow with a triangle
arrowhead.
• The child use case is
connected at the
base of the arrow.
The tip of the arrow
is connected to the
parent use case.

7.6. Use Case Examples

1. Use Case Example - Association Link


A Use Case diagram illustrates a set of use cases for a system, i.e. the actors and
the relationships between the actors and use cases.

2. Use Case Example - Include Relationship


The include relationship adds additional functionality not specified in the base
use case. The <<Include>> relationship is used to include common behavior
from an included use case into a base use case in order to support the reuse of
common behavior.
3. Use Case Example - Extend Relationship
The extend relationships are important because they show optional functionality
or system behavior. The <<extend>> relationship is used to include optional
behavior from an extending use case in an extended use case. Take a look at the
use case diagram example below. It shows an extend connector and an extension
point "Search".

4. Use Case Example - Generalization Relationship


A generalization relationship means that a child use case inherits the behavior
and meaning of the parent use case. The child may add or override the behavior
of the parent. The figure below provides a use case example by showing two
generalization connectors that connect between the three use cases.

5. Use Case Diagram - Vehicle Sales Systems


The figure below shows a use case diagram example for a vehicle system. As
you can see even a system as big as a vehicle sales system contains not more
than 10 use cases! That's the beauty of use case modeling.

The use case model also shows the use of extend and include. Besides, there are
associations that connect between actors and use cases.
6. Use-Case Examples
7. Simple example on Use case scenarios
CPE 310 SOFTWARE ENGINEERING

Spring Semester-2021

Dr. Nesrin AYDIN ATASOY

Week 8: Software Design

Software design is a mechanism to transform user requirements into a suitable form that helps the programmer in software coding and implementation. It deals with translating the client's requirements, as described in the SRS (Software Requirement Specification) document, into a form that is easily implementable using a programming language.

The software design phase is the first step in the SDLC (Software Development Life Cycle) that moves the concentration from the problem domain to the solution domain. In software design, we consider the system to be a set of components or modules with clearly defined behaviors and boundaries.
8.1. Objectives of Software Design

Following are the purposes of Software design:

1. Correctness: The software design should be correct as per the requirements.
2. Completeness: The design should have all components like data structures, modules, and external interfaces, etc.
3. Efficiency: Resources should be used efficiently by the program.
4. Flexibility: The design should be able to adapt to changing needs.
5. Consistency: There should not be any inconsistency in the design.
6. Maintainability: The design should be simple enough to be easily maintained by other designers.

8.2. Software Design Principles


Software design principles are concerned with providing means to handle the complexity of the
design process effectively. Effectively managing the complexity will not only reduce the effort
needed for design but can also reduce the scope of introducing errors during design.

Following are the principles of Software Design:


1. Problem Partitioning

For a small problem, we can handle the entire problem at once, but for a significant problem the approach is to divide and conquer: divide the problem into smaller pieces so that each piece can be handled separately.

For software design, the goal is to divide the problem into manageable pieces.

Benefits of Problem Partitioning

1. Software is easy to understand


2. Software becomes simple
3. Software is easy to test
4. Software is easy to modify
5. Software is easy to maintain
6. Software is easy to expand

These pieces cannot be entirely independent of each other as they together form the system.
They have to cooperate and communicate to solve the problem. This communication adds
complexity.
2. Abstraction

An abstraction is a tool that enables a designer to consider a component at an abstract level without bothering about the internal details of the implementation. Abstraction can be used for an existing element as well as for the component being designed.

Here, there are two common abstraction mechanisms

1. Functional Abstraction
2. Data Abstraction

Functional Abstraction

i. A module is specified by the function it performs.


ii. The details of the algorithm to accomplish the functions are not visible to the user of the
function.

Functional abstraction forms the basis for Function oriented design approaches.

Data Abstraction

Details of the data elements are not visible to the users of data. Data Abstraction forms the basis
for Object Oriented design approaches.
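A minimal Java sketch of data abstraction (an illustration, not part of the notes): users of the Stack class work only with push and pop, while the internal list that stores the elements is not visible to them.

    import java.util.ArrayList;
    import java.util.List;

    // Data abstraction: callers see only the operations, not the data representation.
    class Stack {
        private final List<Integer> elements = new ArrayList<>();  // hidden detail

        public void push(int value) {
            elements.add(value);
        }

        public int pop() {
            if (elements.isEmpty()) {
                throw new IllegalStateException("stack is empty");
            }
            return elements.remove(elements.size() - 1);
        }
    }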

3. Modularity

Modularity refers to the division of software into separate modules, which are named and addressed separately and are later integrated to obtain the completely functional software. It is the only property that allows a program to be intellectually manageable. Single large programs are difficult to understand and read due to a large number of reference variables, control paths, global variables, etc.

The desirable properties of a modular system are:

o Each module is a well-defined system that can be used with other applications.
o Each module has single specified objectives.
o Modules can be separately compiled and saved in the library.
o Modules should be easier to use than to build.
o Modules are simpler from outside than inside.

Advantages and Disadvantages of Modularity

Advantages of Modularity

There are several advantages of Modularity

o It allows large programs to be written by several or different people


o It encourages the creation of commonly used routines to be placed in the library and
used by other programs.
o It simplifies the overlay procedure of loading a large program into main storage.
o It provides more checkpoints to measure progress.
o It provides a framework for complete testing, more accessible to test
o It produces a well-designed and more readable program.

Disadvantages of Modularity

There are several disadvantages of Modularity

o Execution time may be, but is not certainly, longer
o Storage size is perhaps, but not certainly, increased
o Compilation and loading time may be longer
o Inter-module communication problems may be increased
o More linkage is required, run-time may be longer, more source lines must be written, and more documentation has to be done

Modular Design

Modular design reduces the design complexity and results in easier and faster implementation
by allowing parallel development of various parts of a system.

1. Functional Independence: Functional independence is achieved by developing functions


that perform only one kind of task and do not excessively interact with other modules.
Independence is important because it makes implementation easier and faster.

The independent modules are easier to maintain, test, and reduce error propagation and can be
reused in other programs as well. Thus, functional independence is a good design feature which
ensures software quality.

It is measured using two criteria:

o Cohesion: It measures the relative function strength of a module.


o Coupling: It measures the relative interdependence among modules.

2. Information hiding: The principle of information hiding suggests that modules should be
characterized by the design decisions they hide from all other modules; in other words, a module
should be specified and designed so that the data contained within it is inaccessible to other
modules that have no need for that information.

The use of information hiding as a design criterion for modular systems provides the most
significant benefits when modifications are required during testing and later during software
maintenance. This is because, as most data and procedures are hidden from other parts of the
software, inadvertent errors introduced during modifications are less likely to propagate to
different locations within the software.
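A hypothetical sketch of information hiding at module level (the file and names are invented for illustration): other modules see only the two functions, while the data they maintain is private to the implementation file.

// counter.cpp -- implementation file of a small "counter" module.
// Other modules see only the two functions below (declared in a header);
// the variable holding the count is hidden in an unnamed namespace,
// so no other module can read or modify it directly.
namespace {
    int event_count = 0;   // hidden module state
}

void RecordEvent() {       // visible interface
    ++event_count;
}

int EventCount() {         // visible interface
    return event_count;
}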

4. Strategy of Design

A good system design strategy is to organize the program modules in such a way that they are
easy to develop and, later, easy to change. Structured design methods help developers to deal with
the size and complexity of programs. Analysts generate instructions for the developers about
how code should be composed and how pieces of code should fit together to form a program.

To design a system, there are two possible approaches:

1. Top-down Approach
2. Bottom-up Approach

1. Top-down Approach: This approach starts with the identification of the main components
and then decomposing them into their more detailed sub-components.

2. Bottom-up Approach: A bottom-up approach begins with the lower-level details and moves
up the hierarchy. This approach is suitable in the case of an existing system.
8.3. Coupling and Cohesion

1. Module Coupling

In software engineering, the coupling is the degree of interdependence between software


modules. Two modules that are tightly coupled are strongly dependent on each other. However,
two modules that are loosely coupled are not strongly dependent on each other. Uncoupled
modules have no interdependence at all between them.

The various types of coupling are described below.

A good design is one that has low coupling. Coupling is measured by the number of
relations between modules. That is, coupling increases as the number of calls
between modules increases or as the amount of shared data grows. Thus, it can be said that
a design with high coupling will have more errors.

Types of Module Coupling

1. No Direct Coupling: There is no direct coupling between M1 and M2.

2. Data Coupling: When data of one module is passed to another module, this is called
data coupling.
3. Stamp Coupling: Two modules are stamp coupled if they communicate using
composite data items such as structure, objects, etc. When the module passes non-global
data structure or entire structure to another module, they are said to be stamp coupled.
For example, passing structure variable in C or object in C++ language to a module.
4. Control Coupling: Control coupling exists between two modules if data from one
module is used to direct the order of instruction execution in another.
5. External Coupling: External Coupling arises when two modules share an externally
imposed data format, communication protocols, or device interface. This is related to
communication to external tools and devices.
6. Common Coupling: Two modules are common coupled if they share information
through some global data items.

7. Content Coupling: Content coupling exists between two modules if they share code,
e.g., a branch from one module into another module.
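As an illustrative sketch (the functions and names are assumptions, not from the notes), the first function below is data coupled with its callers, since it communicates only through parameters, while the second pair is common coupled, since both depend on the same global variable:

// Data coupling: modules communicate only through parameters.
double NetPrice(double gross_price, double tax_rate) {
    return gross_price * (1.0 + tax_rate);
}

// Common coupling: both functions depend on the same global variable,
// so a change to g_tax_rate in one place silently affects the other.
double g_tax_rate = 0.18;

double NetPriceCommon(double gross_price) {
    return gross_price * (1.0 + g_tax_rate);   // hidden dependency
}

void UpdateTaxRate(double new_rate) {
    g_tax_rate = new_rate;                     // affects NetPriceCommon
}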
2. Module Cohesion

In computer programming, cohesion refers to the degree to which the elements of a module
belong together. Thus, cohesion measures the strength of the relationships between pieces of
functionality within a given module. For example, in highly cohesive systems, functionality is
strongly related.

Cohesion is an ordinal type of measurement and is generally described as "high cohesion" or


"low cohesion."

Types of Modules Cohesion


1. Functional Cohesion: Functional cohesion is said to exist if the different elements of
a module cooperate to achieve a single function.
2. Sequential Cohesion: A module is said to possess sequential cohesion if its elements
form the components of a sequence, where the output from one
component of the sequence is input to the next.
3. Communicational Cohesion: A module is said to have communicational cohesion if
all tasks of the module refer to or update the same data structure, e.g., the set of functions
defined on an array or a stack.
4. Procedural Cohesion: A module is said to have procedural cohesion if its elements
are all parts of a procedure in which a particular sequence of steps has to
be carried out for achieving a goal, e.g., the algorithm for decoding a message.
5. Temporal Cohesion: When a module includes functions that are associated by the fact
that all of them must be executed within the same time span, the module is said to exhibit
temporal cohesion.
6. Logical Cohesion: A module is said to be logically cohesive if all the elements of the
module perform similar operations, for example error handling, data input and data
output, etc.
7. Coincidental Cohesion: A module is said to have coincidental cohesion if it performs
a set of tasks that are related to each other very loosely, if at all.
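A small sketch contrasting high and low cohesion (the function names are invented for illustration): the first function is functionally cohesive because every statement serves one task, while the "utility" grouping below it is coincidentally cohesive.

#include <cmath>
#include <string>

// Functional cohesion: every statement contributes to a single task,
// computing the distance between two points.
double Distance(double x1, double y1, double x2, double y2) {
    return std::sqrt((x2 - x1) * (x2 - x1) + (y2 - y1) * (y2 - y1));
}

// Coincidental cohesion: unrelated tasks grouped in one "utility" module
// only because no better home was found for them.
namespace misc_utils {
    std::string Trim(const std::string& s);   // string handling
    void PrintReport();                       // output formatting
    int  DaysInMonth(int month, int year);    // calendar arithmetic
}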

3. Differentiate between Coupling and Cohesion

o Coupling is also called Inter-Module Binding, while cohesion is also called Intra-Module Binding.
o Coupling shows the relationships between modules, while cohesion shows the relationships within a module.
o Coupling shows the relative independence between modules, while cohesion shows a module's relative functional strength.
o While designing, you should aim for low coupling (dependency among modules should be low), whereas you should aim for high cohesion (a cohesive component/module focuses on a single function, i.e., single-mindedness, with little interaction with other modules of the system).
o In coupling, modules are linked to other modules, while in cohesion the module focuses on a single thing.

8.4. Function Oriented Design

Function-oriented design is a method of software design in which the system is decomposed into
a set of interacting units or modules, where each unit or module has a clearly defined function.
Thus, the system is designed from a functional viewpoint.

4. Design Notations

Design Notations are primarily meant to be used during the process of design and are used to
represent design or design decisions. For a function-oriented design, the design can be
represented graphically or mathematically by the following:
1. Data Flow Diagram
Data-flow design is concerned with designing a series of functional transformations that convert system
inputs into the required outputs. The design is described as data-flow diagrams. These diagrams show how
data flows through a system and how the output is derived from the input through a series of functional
transformations.

Data-flow diagrams are a useful and intuitive way of describing a system. They are generally
understandable without specialized training, notably if control information is excluded. They show end-to-
end processing. That is the flow of processing from when data enters the system to where it leaves the
system can be traced.

Data-flow design is an integral part of several design methods, and most CASE tools support data-flow
diagram creation. Different ways may use different icons to represent data-flow diagram entities, but their
meanings are similar.

The notation which is used is based on symbols representing functional transformations (processes),
data flows, data stores, and external entities.

As an example of a data-flow description, consider a report generator. The report generator produces a report which describes all of the named entities in a data-flow
diagram. The user inputs the name of the design represented by the diagram. The report generator
then finds all the names used in the data-flow diagram. It looks up a data dictionary and retrieves
information about each name. This is then collated into a report which is output by the system.

2. Data Dictionaries

A data dictionary lists all data elements appearing in the DFD model of a system. The data items listed
include all data flows and the contents of all data stores appearing in the DFDs of the DFD model of a
system.

A data dictionary lists the purpose of all data items and the definition of all composite data elements in
terms of their component data items. For example, a data dictionary entry may state that the
data grossPay consists of the parts regularPay and overtimePay.

grossPay = regularPay + overtimePay

For the smallest units of data elements, the data dictionary lists their name and their type.

A data dictionary plays a significant role in any software development process because of the following
reasons:

o A Data dictionary provides a standard language for all relevant information for use by engineers
working in a project. A consistent vocabulary for data items is essential since, in large projects,
different engineers of the project tend to use different terms to refer to the same data, which
unnecessarily causes confusion.

o The data dictionary provides the analyst with a means to determine the definition of various data
structures in terms of their component elements.

3. Structured Charts
It partitions a system into black boxes. A black box is a system whose functionality is known to the user
without knowledge of its internal design.

Structured Chart is a graphical representation which shows:

o System partitions into modules

o Hierarchy of component modules

o The relation between processing modules

o Interaction between modules

o Information passed between modules

The standard structured chart notation uses boxes for modules, arrows for module invocation, and annotated arrows for the data and control information passed between modules.


4. Pseudo-code
Pseudo code is written closer to a programming language. It may be considered an
augmented programming language, full of comments and descriptions.
Pseudo code avoids variable declarations, but it is written using the constructs of an actual
programming language, such as C, Fortran or Pascal.
Pseudo code contains more programming detail than Structured English. It provides a method
to perform the task, as if a computer were executing the code.
Example
Program to print the Fibonacci series up to n terms.
void function Fibonacci
Get value of n;
Set value of a to 0;
Set value of b to 1;
for (i = 0; i < n; i++)
{
Print a;
Set value of next to a + b;
Set value of a to b;
Set value of b to next;
}

Entity-Relationship Model
The Entity-Relationship model is a type of database model based on the notion of real-world entities
and the relationships among them. We can map a real-world scenario onto the ER database model. The ER
Model creates a set of entities with their attributes, a set of constraints and the relations among
them.
The ER Model is best used for the conceptual design of a database. The ER Model can be represented as
follows:

• Entity - An entity in the ER Model is a real-world thing, which has some properties
called attributes. Every attribute is defined by its corresponding set of values,
called its domain.
For example, consider a school database. Here, a student is an entity. A student has
various attributes such as name, id, age and class.
• Relationship - The logical association among entities is called a relationship.
Relationships are mapped to entities in various ways. Mapping cardinalities define
the number of associations between two entities.
Mapping cardinalities:

o one to one
o one to many
o many to one
o many to many
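Although the ER model is a notation for conceptual database design rather than code, the one-to-many mapping between a class and its students can be sketched in C++ for intuition (the types and fields here are illustrative assumptions):

#include <string>
#include <vector>

// Entity "Student" with its attributes.
struct Student {
    int         id;
    std::string name;
    int         age;
};

// Entity "SchoolClass" related to Student with a one-to-many mapping:
// one class contains many students; each student belongs to one class.
struct SchoolClass {
    std::string          class_name;
    std::vector<Student> students;   // the 1:N relationship
};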

8.5. Object-Oriented Design


In the object-oriented design method, the system is viewed as a collection of objects (i.e., entities). The
state is distributed among the objects, and each object handles its own state data. For example, in a Library
Automation Software, each library representative may be a separate object with its own data and functions to
operate on these data. The tasks defined for one object cannot refer to or change the data of other objects.
Objects have their internal data which represent their state. Similar objects create a class. In other words,
each object is a member of some class. Classes may inherit features from their superclass.

The different terms related to object design are:


1. Objects: All entities involved in the solution design are known as objects. For example, person,
banks, company, and users are considered as objects. Every entity has some attributes associated
with it and has some methods to perform on the attributes.

2. Classes: A class is a generalized description of an object. An object is an instance of a class. A


class defines all the attributes, which an object can have and methods, which represents the
functionality of the object.

3. Messages: Objects communicate by message passing. Messages consist of the identity of the
target object, the name of the requested operation, and any other information needed to perform the
function. Messages are often implemented as procedure or function calls.

4. Abstraction: In object-oriented design, complexity is handled using abstraction. Abstraction is the


removal of the irrelevant and the amplification of the essentials.

5. Encapsulation: Encapsulation is also called an information hiding concept. The data and
operations are linked to a single unit. Encapsulation not only bundles essential information of an
object together but also restricts access to the data and methods from the outside world.

6. Inheritance: OOD allows similar classes to stack up in a hierarchical manner where the lower or
sub-classes can import, implement, and re-use allowed variables and functions from their
immediate superclasses. This property of OOD is called inheritance. This makes it easier to
define a specific class and to create generalized classes from specific ones.

7. Polymorphism: OOD languages provide a mechanism where methods performing similar tasks
but varying in arguments can be assigned the same name. This is known as polymorphism, which
allows a single interface to perform functions for objects of different types. Depending upon how the
service is invoked, the respective portion of the code gets executed.
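A minimal C++ sketch (the class names are invented, not from the notes) showing several of these terms at once: a class with encapsulated state, a subclass that inherits from it, and polymorphism through a virtual method.

#include <iostream>
#include <memory>
#include <string>
#include <utility>
#include <vector>

// Encapsulation: the balance is private and can only be changed through
// the public methods (the object's "messages").
class Account {
public:
    explicit Account(std::string owner) : owner_(std::move(owner)) {}
    virtual ~Account() = default;

    void Deposit(double amount) { balance_ += amount; }
    double Balance() const { return balance_; }

    // Polymorphism: subclasses provide their own interest rules.
    virtual double MonthlyInterest() const { return 0.0; }

protected:
    std::string owner_;
    double balance_ = 0.0;
};

// Inheritance: SavingsAccount reuses Account and specializes it.
class SavingsAccount : public Account {
public:
    SavingsAccount(std::string owner, double rate)
        : Account(std::move(owner)), rate_(rate) {}

    double MonthlyInterest() const override { return Balance() * rate_; }

private:
    double rate_;
};

int main() {
    std::vector<std::unique_ptr<Account>> accounts;
    accounts.push_back(std::make_unique<Account>("Ada"));
    accounts.push_back(std::make_unique<SavingsAccount>("Grace", 0.01));

    for (auto& a : accounts) {
        a->Deposit(1000.0);
        // The same call behaves differently depending on the object's class.
        std::cout << a->MonthlyInterest() << "\n";   // prints 0, then 10
    }
    return 0;
}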

8.6 User Interface Design


The user interface is the visual part of a computer application or operating system through which a user interacts with a
computer or software. It determines how commands are given to the computer or the program and how data
is displayed on the screen.

Types of User Interface


There are two main types of User Interface:

o Text-Based User Interface or Command Line Interface

o Graphical User Interface (GUI)

Text-Based User Interface: This method relies primarily on the keyboard. A typical example of this is
UNIX.

Advantages
o Offers many options and easier customization.

o Typically capable of more powerful tasks.

Disadvantages
o Relies heavily on recall rather than recognition.

o Navigation is often more difficult.

Graphical User Interface (GUI): A GUI relies much more heavily on the mouse. A typical example of this
type of interface is any version of the Windows operating system.

A GUI has the following characteristics:

o Windows: Multiple windows allow different information to be displayed simultaneously on the user's screen.
o Icons: Icons represent different types of information. On some systems, icons represent files; on others, they represent processes.
o Menus: Commands are selected from a menu rather than typed in a command language.
o Pointing: A pointing device such as a mouse is used for selecting choices from a menu or indicating items of interest in a window.
o Graphics: Graphics elements can be mixed with text on the same display.

Advantages
o Less expert knowledge is required to use it.

o Easier to Navigate and can look through folders quickly in a guess and check manner.

o The user may switch quickly from one task to another and can interact with several different
applications.

Disadvantages
o Typically offers fewer options.

o Usually less customizable; it is not easy to make one button serve many different variations.

5. UI Design Principles

Structure: The design should organize the user interface purposefully, in meaningful and useful ways based on
precise, consistent models that are apparent and recognizable to users, putting related things together and
separating unrelated things, differentiating dissimilar things and making similar things resemble one
another. The structure principle is concerned with the overall user interface architecture.
Simplicity: The design should make simple, common tasks easy, communicating clearly and directly in
the user's language, and providing good shortcuts that are meaningfully related to longer procedures.

Visibility: The design should make all required options and materials for a given function visible without
distracting the user with extraneous or redundant data.

Feedback: The design should keep users informed of actions or interpretation, changes of state or
condition, and bugs or exceptions that are relevant and of interest to the user through clear, concise, and
unambiguous language familiar to users.

Tolerance: The design should be flexible and tolerant, decreasing the cost of errors and misuse by allowing
undoing and redoing while also preventing bugs wherever possible by tolerating varied inputs and
sequences and by interpreting all reasonable actions.
CPE 310 SOFTWARE ENGINEERING

Spring Semester-2021

Dr. Nesrin AYDIN ATASOY

Week 8: Coding
The coding is the process of transforming the design of a system into a computer language
format. This coding phase of software development is concerned with translating the
design specification into source code. It is necessary to write source code and internal
documentation so that conformance of the code to its specification can be easily verified.

Coding is done by coders or programmers, who may be people other than the designers.
The goal is not only to reduce the effort and cost of the coding phase, but also to cut the cost of later
stages. The cost of testing and maintenance can be significantly reduced with efficient coding.

➢ Goals of Coding
To translate the design of the system into a computer language format: Coding is the process
of transforming the design of a system into a computer language format which can be executed
by a computer and which performs the tasks specified during the design phase.

To reduce the cost of later phases: The cost of testing and maintenance can be significantly
reduced with efficient coding.

Making the program more readable: The program should be easy to read and understand.
Having readability and understandability as clear objectives of the coding
activity can itself help in producing more maintainable software.

For implementing our design in code, we require a high-level programming language. A
programming language should have the following characteristics:
Readability: A good high-level language will allow programs to be written in ways
that resemble a quasi-English description of the underlying functions. The coding may be done
in an essentially self-documenting way.

Portability: High-level languages, being virtually machine-independent, make it easy to
develop portable software.

Generality: Most high-level languages allow the writing of a vast collection of programs, thus
relieving the programmer of the need to develop into an expert in many diverse languages.

Brevity: The language should have the ability to implement the algorithm with a small amount of
code. Programs written in high-level languages are often significantly shorter than their low-level
equivalents.

Error checking: A programmer is likely to make many errors in the development of a computer
program. Many high-level languages provide extensive error checking both at compile-time and
run-time.

Cost: The ultimate cost of a programming language is a function of many of its characteristics.
Quick translation: It should permit quick translation.

Efficiency: It should allow the creation of efficient object code.

Modularity: It is desirable that programs can be developed in the language as several separately
compiled modules, with the appropriate structure for ensuring self-consistency among these
modules.

Widely available: Language should be widely available, and it should be feasible to provide
translators for all the major machines and all the primary operating systems.

A coding standard lists several rules to be followed during coding, such as the way variables
are to be named, the way the code is to be laid out, error return conventions, etc.

➢ Coding Standards
General coding standards refer to how the developer writes code, so here we will discuss some
essential standards regardless of the programming language being used.

The following are some representative coding standards:

1. Indentation: Proper and consistent indentation is essential in producing easy to read


and maintainable programs. Indentation should be used to:
o Emphasize the body of a control structure such as a loop or a select statement.
o Emphasize the body of a conditional statement
o Emphasize a new scope block
2. Inline comments: Inline comments explaining the functioning of a subroutine or key
aspects of the algorithm shall be used frequently.
3. Rules for limiting the use of globals: These rules define what types of data can be declared
global and what cannot.
4. Structured Programming: Structured (or modular) programming methods shall be
used. "GOTO" statements shall not be used as they lead to "spaghetti" code, which is
hard to read and maintain, except as outlined in the FORTRAN Standards and
Guidelines.
5. Naming conventions for global variables, local variables, and constant identifiers:
A possible naming convention can be that global variable names always begin with a
capital letter, local variable names are made of small letters, and constant names are
always capital letters.
6. Error return conventions and exception handling system: The way different functions in a
program report and handle error conditions should be standard within an
organization. For example, different functions encountering an error condition should
consistently return either a 0 or a 1.
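A short sketch of how such standards might look in practice; the convention shown (capitalized global names, lowercase locals, and a 0/1 error return) is one possible choice for illustration, not a prescribed standard.

#include <cstdio>

int MaxRetries = 3;            // global variable: starts with a capital letter

// Error return convention: 0 on success, 1 on failure.
int read_sensor(int channel, double* value) {   // locals in lower case
    if (channel < 0 || value == nullptr) {
        return 1;              // failure reported consistently
    }
    double reading = 42.0;     // placeholder for real hardware access
    *value = reading;
    return 0;                  // success
}

int main() {
    double v = 0.0;
    if (read_sensor(2, &v) != 0) {
        std::fprintf(stderr, "read_sensor failed on channel 2\n");
        return 1;
    }
    std::printf("sensor value: %.1f\n", v);
    return 0;
}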

➢ Coding Guidelines

General coding guidelines provide the programmer with a set of best practices which can be
used to make programs easier to read and maintain. Most of the examples use the
C language syntax, but the guidelines can be applied to all languages.

The following are some representative coding guidelines recommended by many software
development organizations.
1. Line Length: It is considered a good practice to keep the length of source code lines at or
below 80 characters. Lines longer than this may not be visible properly on some terminals and
tools. Some printers will truncate lines longer than 80 columns.

2. Spacing: The appropriate use of spaces within a line of code can improve readability.

Example:

Bad: cost=price+(price*sales_tax)
fprintf(stdout ,"The total cost is %5.2f\n",cost);

Better: cost = price + ( price * sales_tax )


fprintf (stdout,"The total cost is %5.2f\n",cost);

3. The code should be well-documented: As a rule of thumb, there should be at least one
comment line, on average, for every three source lines.

4. The length of any function should not exceed 10 source lines: A very lengthy function is
generally very difficult to understand, as it probably carries out many different functions. For the
same reason, lengthy functions are likely to have a disproportionately larger number of bugs.
5. Do not use goto statements: Use of goto statements makes a program unstructured and very
tough to understand.

6. Inline Comments: Inline comments promote readability.

7. Error Messages: Error handling is an essential aspect of computer programming. This does
not only include adding the necessary logic to test for and handle errors but also involves
making error messages meaningful.
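To make the last guideline concrete, a small sketch contrasting an unhelpful message with a meaningful one (the file name and function are invented):

#include <cstdio>

// Unhelpful: the user learns nothing about what failed or why.
//   fprintf(stderr, "Error 5\n");

// Meaningful: names the operation, the object, and the likely cause.
void ReportOpenFailure(const char* path) {
    std::fprintf(stderr,
                 "Error: could not open configuration file '%s' "
                 "(check that the file exists and is readable)\n",
                 path);
}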

➢ Programming Style

Programming style refers to the technique used in writing the source code for a computer
program. Most programming styles are designed to help programmers quickly read and
understand the program as well as avoid making errors. (Older programming styles also
focused on conserving screen space.) A good coding style can overcome the many deficiencies
of a first programming language, while poor style can defeat the intent of an excellent language.

The goal of good programming style is to provide understandable, straightforward, elegant


code. The programming style used in a particular program may be derived from the coding
standards or code conventions of a company or other computing organization, as well as the
preferences of the actual programmer.

• Some general rules or guidelines in respect of programming style:


1. Clarity and simplicity of Expression: The program should be designed in such a manner
that the objectives of the program are clear.

2. Naming: In a program, you are required to name modules, processes, variables, and so
on. Care should be taken that the naming style is not cryptic and non-representative.

For Example:

Bad: a = 3.14 * r * r
Better: area_of_circle = 3.14 * radius * radius;

3. Control Constructs: It is desirable that, as much as possible, single-entry, single-exit
constructs be used.

4. Information hiding: The information held in data structures should be hidden from
the rest of the system where possible. Information hiding can decrease the coupling between
modules and make the system more maintainable.

5. Nesting: Deep nesting of loops and conditions makes it hard to relate the static structure of a
program to its dynamic behavior. It also becomes difficult to understand the program logic, so it is
desirable to avoid deep nesting.

6. User-defined types: Make heavy use of user-defined data types like enum, class, structure,
and union. These data types make your program code easy to write and easy to understand.

7. Module size: The module size should be uniform. The size of the module should not be too
big or too small. If the module size is too large, it is not generally functionally cohesive. If the
module size is too small, it leads to unnecessary overheads.

8. Module Interface: A module with a complex interface should be carefully examined.

9. Side-effects: When a module is invoked, it sometimes has the side effect of modifying the
program state. Such side effects should be avoided where possible.
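As a brief sketch of guideline 6 (user-defined types), with invented names: an enum makes the intent of the code visible where a bare integer would not.

// Without a user-defined type, the meaning of the value 2 is unclear:
//   int state = 2;

// With an enum, the code documents itself and the compiler can help
// catch mistakes such as assigning an unrelated integer.
enum class ConnectionState { Disconnected, Connecting, Connected };

ConnectionState state = ConnectionState::Connected;

bool CanSend(ConnectionState s) {
    return s == ConnectionState::Connected;
}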
➢ Structured Programming

In structured programming, we sub-divide the whole program into small modules so that the
program becomes easy to understand.

The purpose of structured programming is to linearize the control flow through a computer program
so that the execution sequence follows the sequence in which the code is written.

The dynamic structure of the program then resembles the static structure of the program. This
enhances the readability, testability, and modifiability of the program.

This linear flow of control can be achieved by restricting the set of allowed constructs
to single-entry, single-exit formats.

➢ Why we use Structured Programming?

We use structured programming because it allows the programmer to understand the program
easily.

If a program consists of thousands of instructions and an error occurs then it is complicated to


find that error in the whole program, but in structured programming, we can easily detect the
error and then go to that location and correct it. This saves a lot of time.

➢ These are the following rules in structured programming:


o Structured Rule One: Code Block

If the entry conditions are correct, but the exit conditions are wrong, the error must be in the
block. This is not true if the execution is allowed to jump into a block. The error might be
anywhere in the program. Debugging under these circumstances is much harder.

Rule 1 of Structured Programming: A code block is structured, as shown in the figure. In


flow-charting terms, a box with a single entry point and a single exit point is structured.
Structured programming is a method of making it evident that the program is correct.
o Structure Rule Two: Sequence

A sequence of blocks is correct if the exit conditions of each block match the entry conditions
of the following block. Execution enters each block at the block's entry point and leaves through
the block's exit point. The whole series can be regarded as a single block, with an entry point
and an exit point.

Rule 2 of Structured Programming: Two or more code blocks in the sequence are structured,
as shown in the figure.
o Structured Rule Three: Alternation

If-then-else is frequently called alternation (because there are alternative options). In structured
programming, each choice is a code block. If alternation is organized as in the flowchart at
right, then there is one entry point (at the top) and one exit point (at the bottom). The structure
should be coded so that if the entry conditions are fulfilled, then the exit conditions are satisfied
(just like a code block).

Rule 3 of Structured Programming: The alternation of two code blocks is structured, as


shown in the figure.

An example of an entry condition for an alternation method is: register $8 includes a signed
integer. The exit condition may be: register $8 includes the absolute value of the signed number.
The branch structure is used to fulfill the exit condition.

o Structured Rule 4: Iteration

Iteration (the while-loop) is organized similarly. It also has one entry point and one exit point. The
entry point has conditions that must be satisfied, and the exit point has requirements that will
be fulfilled. There are no jumps into the structure from external points of the code.
Rule 4 of Structured Programming: The iteration of a code block is structured, as shown in
the figure.

o Structured Rule 5: Nested Structures

In flowcharting terms, any code block can be expanded into any of the structures. Conversely, if there is
a portion of the flowchart that has a single entry point and a single exit point, it can be
summarized as a single code block.

Rule 5 of Structured Programming: A structure (of any size) that has a single entry point and
a single exit point is equivalent to a code block. For example, we are designing a program to go
through a list of signed integers calculating the absolute value of each one. We may (1) first
regard the program as one block, then (2) sketch in the iteration required, and finally (3) put in
the details of the loop body, as shown in the figure.
The other control structures (case, do-until, do-while, and for) are not strictly needed. However,
they are sometimes convenient and are usually regarded as part of structured programming. In
assembly language, they add little convenience.
CPE 310 SOFTWARE ENGINEERING

Spring Semester-2021

Dr. Nesrin AYDIN ATASOY

Week 10: Software Quality

➢ Software Quality Assurance: Methods

• Static Methods: This method is done without running the code. (IEEE Std 1028-
2008 Software Review)

IEEE Std 1028-2008: IEEE Standard for Software Reviews and Audits “A process or meeting
during which a software product, set of software products, or a software process is presented to
project personnel, managers, users, customers, user representatives, auditors or other interested
parties for examination, comment or approval.”

This method’s reviews are listed below:

✓ Management reviews (“yönetim gözden geçirme”)


✓ Technical reviews (“teknik gözden geçirme”)
✓ Inspections (“detaylı inceleme”)
✓ Walkthroughs (“genel gözden geçirme”)
✓ Audits (“denetleme”)

A review takes a product or a process as input and, through human examination, produces
information about the quality of that product or process.
➢ Why review is done?

✓ It provides a systematic evaluation of the product or process from different


perspectives.
✓ Improves project timing and cost.
✓ Supports test effectiveness and reduces cost.
✓ Return on investment is high.
✓ It is a kind of education method.

• Dynamic Methods: This method is done running the code. These:

✓ Unit Test
✓ Integration Test
✓ System Test / Functional Test / Qualification Test
✓ Acceptance Test

➢ What is Quality?

✓ It is the ability to meet needs.
✓ It is suitability for user needs.
✓ It is doing the right things right the first time.
✓ It is fitness for the intended use.

➢ How to Ensure Quality?

We can ensure quality using two methods: Traditional mentality or sophisticated mentality.

1. Traditional mentality: Debugging - “Quality control”

✓ Techniques and activities used to determine whether a product or service meets


defined requirements.

✓ The approach of ensuring quality by checking the products / services themselves


2. Sophisticated mentality: Error prevention - “Quality assurance”

✓ All planned and systematic activities required to secure the defined requirements of a
product or service.

✓ The approach of ensuring quality through the system that creates the product /
service.

➢ What is the quality software?

1. Meets the requirements,


2. Suitable for intended use,
3. Completed on time,
4. Realized within the determined budget limits,
5. Compliant with standards,
6. Maintainable software.

➢ Concept of Quality in Software

1. The quality of the software depends largely on how we develop the software.

2. The software development process defines how we develop the software.

3. So, we have to build quality into the software product during the software
development stages.

4. Therefore, attempting to ensure quality only at the end of development is both difficult
and costly.

➢ Review and Test For Quality in Software

Review and test complement each other, and both are used in the verification and validation
process.
Verification: “Are we developing the product correctly?”

a. The software must comply with the standards.


Validation: “Are we developing the right product ?"

b. The software must fulfill real user requests.

➢ Software Quality Problems

1. Customer side problems:

➢ Failure to meet the requirements


➢ Not easy to understand or use
➢ Maintenance not provided at the desired time
➢ Insufficient training support

2. Software company side Problems:

▪ Delayed or unfinished projects


▪ High cost
▪ Employees' dissatisfaction
▪ Loss of trust in the firm

➢ Why is Quality Necessary in Software?

➢ An experienced engineer generates an error every 7-10 lines.


➢ This corresponds to thousands of errors in a medium-sized project.
➢ Most of the errors need to be corrected during the testing phase.
➢ As the tests take longer, the cost increases and the delivery is delayed.

➢ Software Review Process

➢ Management and technical reviews are made according to the needs of the project.
➢ The status and products of the effectiveness of a process are evaluated with review
activities.
➢ Review results are announced to all affected units.
➢ Corrective actions resulting from reviews are monitored until they are closed.
➢ Risks and problems are identified and recorded.

➢ Software Review Process

o Events and Tasks

➢ Project management reviews


a. The state of project development will be evaluated against project plans,
schedules, standards and guidelines.
b. The output of the assessment should be considered by the associated management
and should serve to:
c. ensure that the activities take place according to the plan,
d. maintain general control of the project by appropriately assigning resources,
e. change the direction of the project or determine the need for alternative planning,
f. evaluate and manage the risks that may adversely affect the success of the
project.

➢ Technical reviews

Technical reviews will be conducted to evaluate software products or services and provide
evidence of:

a. The product or service is complete.


b. The product or service conforms to standards and definitions.
c. Changes to the product or service are made accordingly.
d. The product or service fits the defined calendar.
e. The product or service is ready for the next scheduled event.
f. Development, operation and maintenance of the product or service; It is made
according to the defined plans, schedule, standards and instructions of the
project.

➢ Review Process
Roles
a. Review leader ("review leader")
b. Reviewer ("reviewer")
c. Recorder ("recorder")
d. Author ("author")

Steps
a. Planning ("planning")
b. Opening meeting ("kickoff meeting")
c. Individual review ("individual checking")
d. Collective review ("logging meeting")
e. Correction and follow-up ("rework and follow up")

Review Checklists

➢ It defines the review criteria for each type of document to be reviewed.


➢ Analysis document, design document, code, project plan, quality plan, etc.
➢ Conducting the review on a checklist increases the effectiveness of the review.
➢ ISO 9000 Certification

ISO (the International Organization for Standardization) is a group or consortium of 63 countries
established to plan and foster standardization. ISO declared its 9000 series of standards in
1987. It serves as a reference for the contract between independent parties. The ISO 9000
standard determines the guidelines for maintaining a quality system. The ISO standard mainly
addresses operational methods and organizational methods such as responsibilities, reporting,
etc. ISO 9000 defines a set of guidelines for the production process and is not directly
concerned about the product itself.

Types of ISO 9000 Quality Standards

The ISO 9000 series of standards is based on the assumption that if a proper process is followed
for production, then good quality products are bound to follow automatically. The types of
industries to which the various ISO standards apply are as follows.

1. ISO 9001: This standard applies to the organizations engaged in design, development,
production, and servicing of goods. This is the standard that applies to most software
development organizations.
2. ISO 9002: This standard applies to those organizations which do not design products
but are only involved in production. Examples of this category include
steel and car manufacturing industries that buy product and plant designs from
external sources and are engaged only in manufacturing those products. Therefore,
ISO 9002 does not apply to software development organizations.
3. ISO 9003: This standard applies to organizations that are involved only in the
installation and testing of the products. For example, Gas companies.
➢ How to get ISO 9000 Certification?
An organization that decides to obtain ISO 9000 certification applies to an ISO registrar office for
registration. The process consists of the following stages:

1. Application: Once an organization decided to go for ISO certification, it applies to the


registrar for registration.
2. Pre-Assessment: During this stage, the registrar makes a rough assessment of the
organization.
3. Document review and Adequacy of Audit: During this stage, the registrar reviews
the documents submitted by the organization and suggests improvements.
4. Compliance Audit: During this stage, the registrar checks whether the organization
has complied with the suggestions made during the document review.
5. Registration: The registrar awards the ISO certification after the successful
completion of all the phases.
6. Continued Inspection: The registrar continues to monitor the organization from time to
time.
➢ Software Engineering Institute Capability Maturity Model
(SEICMM)

The Capability Maturity Model (CMM) is a procedure used to develop and refine an
organization's software development process.

The model defines a five-level evolutionary stage of increasingly organized and consistently
more mature processes.

CMM was developed and is promoted by the Software Engineering Institute (SEI), a research
and development center sponsored by the U.S. Department of Defense (DOD).

Capability Maturity Model is used as a benchmark to measure the maturity of an organization's


software process.

Methods of SEICMM

There are two methods of SEICMM:

Capability Evaluation: Capability evaluation provides a way to assess the software process
capability of an organization. The results of capability evaluation indicate the likely contractor
performance if the contractor is awarded a work. Therefore, the results of the software process
capability assessment can be used to select a contractor.

Software Process Assessment: Software process assessment is used by an organization to


improve its process capability. Thus, this type of evaluation is for purely internal use.
SEI CMM categorized software development industries into the following five maturity levels.
The various levels of SEI CMM have been designed so that it is easy for an organization to
slowly build up its quality system starting from scratch.

Level 1: Initial

Ad hoc activities characterize a software development organization at this level. Very few or
no processes are defined and followed. Since software production processes are not defined,
different engineers follow their own processes and, as a result, development efforts become chaotic.
Therefore, this is also called the chaotic level.

Level 2: Repeatable

At this level, the fundamental project management practices like tracking cost and schedule are
established. Size and cost estimation methods, like function point analysis, COCOMO, etc. are
used.

Level 3: Defined
At this level, the methods for both management and development activities are defined and
documented. There is a common organization-wide understanding of operations, roles, and
responsibilities. Though the processes are defined, the process and product qualities are not measured.
ISO 9000 aims at achieving this level.

Level 4: Managed

At this level, the focus is on software metrics. Two kinds of metrics are collected.

Product metrics measure the features of the product being developed, such as its size,
reliability, time complexity, understandability, etc.

Process metrics reflect the effectiveness of the process being used, such as the average defect
correction time, productivity, the average number of defects found per hour of inspection, the
average number of failures detected during testing per LOC, etc. The software process and
product quality are measured, and quantitative quality requirements for the product are met.
Various tools like Pareto charts, fishbone diagrams, etc. are used to measure the product and
process quality. The process metrics are used to analyze whether a project performed satisfactorily.
Thus, the outcome of process measurements is used to evaluate project performance rather than to
improve the process.

Level 5: Optimizing

At this phase, process and product metrics are collected. Process and product measurement data
are evaluated for continuous process improvement.

Key Process Areas (KPA) of a software organization

Except for SEI CMM level 1, each maturity level is characterized by several Key Process Areas
(KPAs) that identify the areas an organization should focus on to improve its software process
to the next level. The focus of each level and the corresponding key process areas are shown in
the figure.
SEI CMM provides a series of key areas on which to focus to take an organization from one
level of maturity to the next. Thus, it provides a method for gradual quality improvement over
various stages. Each step has been carefully designed such that one step enhances the capability
already built up.
CPE 310 SOFTWARE ENGINEERING

Spring Semester-2021

Dr. Nesrin AYDIN ATASOY

Week 11: Software Test

• What is Software Test?

Software testing is the systematic control of the software developed.

• Software Test Historical Development

The Software Test was carried out for debugging when it first appeared. Over time, it began to
be made to verify that the software is working correctly. The criteria for performing software
tests began to become more evident after the 1980s. The tests started to be carried out gradually
within the scope of the entire software development process. Today, besides this, tests are
carried out to prevent errors.

➢ Aims of the Software Test


The main aim of the software test is to identify possible defects before the software is delivered
to the user. We can also divide the software test into two parts: trial and acceptance.

• Trial Tests

These are the pretests performed to understand whether the software is working properly. Trial
tests seek to answer the following questions:

1. Does the software work?

2. If it is not working, the compilation and linking stages are reviewed.

3. If it is running, does the software handle the inputs correctly and produce the correct
output?
4. What is the software's fault tolerance?
5. What is the behavior of the software regarding the use of system resources?
6. Do existing errors cause new errors?
• Acceptance Tests

Acceptance tests question whether the software is working correctly on the target system.
Answers to the following questions are sought.

1. Can the user use the program interface?

2. On the user's platform, does the program perform its functions?

3. Is the program sufficient to meet the wishes of the user?

4. Can the software be customized, can it be configured?

5. How much is the performance of the program affected when the system is under
heavy load?

➢ Software Testing

In order to perform the software test, we need:

• A list of the items to be tested
• Test planning (which tests, and when to perform them)
• A test system (similar to the target system)
• The source code
• Test aids (for measurement, data input-output)

Test scenarios are then applied one by one. The results are collected and recorded; the records are
compared with the expected values and conclusions are drawn.

If no result is obtained, errors are detected and corrected.

When the results are obtained, software with acceptable errors is delivered to the user.
➢ Test Methods

1. Black Box Test

This test is done at the software interface level. These are the tests performed to check
whether the software performs its functions. Input values are provided to the software, the
application is considered as a closed box, and the output is examined directly. If the
software gives the expected output, the test is successful.

2. Transparent Box Test

These are the tests in which the entire internal structure of the software is tested.

3. Designed Based Test

It is the testing of the program by considering the interfaces of the modules as black
boxes. For example, evaluating whether the components in the program's interface produce the
expected results for different input-output values.

4. Code Based Test:

Design-based testing is not sufficient when the internal structure of the modules needs
to be tested. In this case, code-based testing is done. To test the internal structure of the
modules, the source code is searched for errors such as logic errors, coding errors and spelling
errors.

Examples of these errors;

1. Logic errors: An initial value is not assigned to a variable.

2. Coding errors: Exceeding limits when moving data in dynamic memory.

3. Flow path assumption: Assuming an input has a value between 1 and 10, but the input takes
the value 'a'.

4. Spelling errors: Spelling errors, syntax errors.
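For instance, a code-based test or review might catch a logic error of the first kind (the function below is an illustrative snippet, not from the notes):

// Logic error of the kind a code-based test should catch:
// 'sum' is never given an initial value, so the result is undefined.
int SumBuggy(const int* values, int n) {
    int sum;                        // should be: int sum = 0;
    for (int i = 0; i < n; ++i) {
        sum += values[i];
    }
    return sum;
}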

➢ Special System Tests


1. Embedded Systems

Embedded system software tests need to be done with the hardware, as they control the
hardware. For example, testing a printer.

2. Real Time Systems

Since real-time systems are not easy to find, tests are carried out with the possibilities at hand.
This means that if we cannot find the system, we use the like, if we cannot create the conditions,
we use the laboratory, if we cannot carry out the testing stages, we make assumptions.

3. Security Systems

Security systems should work without interruption. Receivers, sensors and other modules located on the
system must be tested to confirm that they are active all the time and that they do not generate
false alarms.

4. Large Packaged Software

The full version of packaged software should be released only after intense testing. In the first place,
versions such as alpha and beta should be released.

5. Database Management Systems

Database systems are important because they provide facilities such as record keeping, querying,
and access to records. Their failures can lead to irrecoverable losses. For this reason, these
systems must be tested with great precision before they are put into use.

➢ Automatic Test Systems

Since software testing is expensive in terms of time and cost, automating the testing process is desirable,
although it is only partially possible.

1. Smart Compilers: They scan the source code, check types and produce machine code.
The checking process can be strict or flexible. Strictly checking compilers generate more reliable
code.

2. Static Analyzers: By examining the source code, they find weak points in its
structure and issue warnings.
3. Simulation Environments: They enable the software to run on a virtual system.

4. Test Software: Tools that provide data or event input to a software unit that needs to be
tested.

5. Environmental Simulators: Specifically, they are simulators that can enter and exit from
the system through simulation, in order to dynamically test embedded, dedicated and real-time
control systems under real operating conditions.

6. Display Software: These are tools that present the measurement and calculation results
on charts. They provide an easy evaluation of the results.

➢ Test Strategies

The major test strategies in system development are the "V Model" and the "W Model". These test models are
formed by combining different tests that address different parts of the system. These tests are
given below:

➢ Trial Tests

1. Unit Test

The smallest separately testable units of computer software are its executable units (functions or
modules). The unit test is the testing of these units in isolation.
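A minimal sketch of a unit test, using only the standard assert macro (the function under test is invented for illustration):

#include <cassert>

// Unit under test: the smallest separately testable piece of the software.
int Add(int a, int b) {
    return a + b;
}

// Unit test: exercises the unit in isolation and checks expected outputs.
int main() {
    assert(Add(2, 3) == 5);
    assert(Add(-1, 1) == 0);
    assert(Add(0, 0) == 0);
    return 0;   // reaching this point means all checks passed
}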

2. Integration Test

Integration testing is done to bring together software units that work smoothly on hardware to
check if they are running smoothly on the system as a whole. For example, unexpected results
are determined based on the operation of one unit affecting the other unit.

a. Top-down Integration

First, the main control unit is tested, and then it is tested together with the units closest to it.

b. Bottom-up Integration

It starts by running and testing atomic units. Lower-level units are combined into clusters.
3. Proficiency Tests

After the software's errors are corrected, proficiency tests check whether the software is sufficient.
Tests of the system requirements are based on the Software Requirements Specification document.
It usually consists of two stages.

a. Verification

It is the verification that the software performs all the functions it needs to perform properly.
For example, if the required output time of a procedure is given as 2 seconds, but that procedure calls 3
different subroutines and the run time of each subroutine is 1 second, the desired output is
produced in 3 seconds; this is an example of a situation that cannot pass the verification test.

b. Validation

Validation is the test of whether the results obtained in the verification phase really hold in practice. The
prepared software is tested on real systems. For example, it may take 2 seconds for a module to
complete the function. This function may be completed in 2.5 seconds when integrated into the
system. This means that our 2 second run time available in theory is actually 2.5 seconds in the
current system.

c. Random Tests (Monkey Test)

Software that has passed all tests is used arbitrarily and without a plan. An attempt is made to
provoke an error. If no error occurs, this is good news.

4. System Test

System concept includes hardware as well as software. Therefore, when it comes to system
testing, the tests performed on computer-based systems for verification and integration
purposes should come to mind. For example, we can consider stages such as controlling
hardware connections, investigating software's possible interface problems, following the data
flow and preparing debugger test designs, preparing mechanisms to report potential errors.

5. Loading Test

The purpose of this test is to measure the data processing capacity by pushing the boundaries
of the system. It also allows us to calculate what can happen in case of overloading and to
prepare measures to control the situation beforehand. Generally, loading tests are performed for
systems with dense data flow. This test can be done in different ways; such as loading the
system with high amount of data, forcing the memory and disk usage of the system and loading
all inputs with high speed data.

6. Stretching (Stress) Test

It is done to determine how software and hardware behave when abnormal conditions are
created. It is also possible to compare it to the loading test. Examples of these abnormal
conditions are as follows :

1. When some of the system hardware crashes, can the software survive the rest?

2. Tests to load the operating system and crash it.

3. To measure the response time of the system to sudden effects in case of overload.

4. Using more memory or processor power than expected.

7. Recovery Test

These are the tests carried out to ensure that the system is able to recover itself and bring it back
to the last working condition against errors that occur in the software and hardware units.
Usually, this recovery is achieved in two different ways.

The first way, a backup software unit works continuously with the main software. When the
main software crashes, the utility software takes over, thus preventing the user from losing data.

The second way is to design fault tolerant software. Namely, the software is designed in
different modules. If any module crashes, the crashed module will be restarted and the software
will be restored to its former healthy state. Do you think it is possible ?

8. Safety Test

Some computerized systems must perform their functions safely. In other words, a job is carried
out or not, there is no middle way.

For example, if the password is correct, connect to the database, if the room temperature
exceeds 27 degrees, such as operate the alarm system. In such systems, testing how the behavior
of the system is changed when a software or hardware defect occurs is a safety test.
9. Success (Performance) Test

It is done to evaluate the performance of the system. For example, how long is the time between
data entry and exit? What is the total capacity of information that our system can process?
Answers to such questions are sought.

Performance tests are sometimes performed with stretching tests and the performance of the
system is measured in case of overloads.

➢ Acceptance Tests

These are tests that inform the designer about whether the system is acceptable to the customer.

1. Production Line Tests

It is a test based on testing the manufactured software with artificial data on defined test
equipment at the manufacturer's own facilities. This test is also called the Factory Sufficiency Test
or Factory Acceptance Test. It is the first acceptance test applied to equipment that is to be
put into mass production.

2. Using Line Tests

This test is also called campus testing. These are the tests performed with real data, under the
conditions of the site, on the hardware where the system will be used. Conditions such as
whether the system is working, the electrical connections are correct, the software installed on the
system can control the peripherals of the system, and the system can communicate with its
peripherals are checked.

3. Trial Tests

In the field of application, tests are made with real data during use to try out possible situations
in real runs. Any extraordinary situation imaginable in these tests should be tried.

4. Alpha and Beta Tests

If the software package is being prepared for a large number of users, official acceptance cannot
be made for each customer individually. The software is released through a trial process. These
processes are the alpha and beta processes.
Alpha Test: The software developer presents the product to the user in a controlled environment.
The user uses the product and conveys their impressions to the developer.

Beta Test: The difference from the alpha test is that there is no obligation to use the product in
an environment controlled by the developer. For example, the customer takes the system,
integrates it into his own system and uses it there. Then he communicates with the developer
and shares his experiences.

➢ Acceptance Criteria
The criteria for system acceptance must be determined in advance, agreed upon and officially
documented. The test scenarios should cover both valid and invalid cases, and a Test
Result Report should be prepared.

Cosmetic (Figural) Errors: Errors in the colors, shapes, fonts, abbreviations and alignments of the
user interface.

Minor Errors: They do not affect the operation of the system and are easy to fix.

Major (Oversized) Errors: Errors that may require some part of the development
process to be redone.

Fatal Errors: Errors that cause the system to malfunction; important functions cannot be
performed.

➢ Test Management
Test management is important for large projects. There should be a test manager in the project,
and a test group of interested and enthusiastic people should be created to assist this
manager. Tasks should be distributed among these people. A system test plan should be
prepared, stating when, how and in what order the tests will be performed. Customers should also
participate in the acceptance tests of the system, monitor them and state their opinions. At the end
of testing, the results are announced and product quality is evaluated by examining them. Cost, quality
and the necessary improvement options are discussed. If an improvement is needed, tasks are
assigned; the improvements are either made before delivery or the customer is told that they will be
included in the next version, and the product is delivered to the customer.
CPE 310 SOFTWARE ENGINEERING

Spring Semester-2021

Dr. Nesrin AYDIN ATASOY

Week 12: Software Maintenance


After computer-based systems are designed, developed and delivered to the user, the
maintenance phase begins. Hardware items are maintained by cleaning them and replacing worn or
faulty parts; sometimes the whole hardware item is replaced with a new one. Software items do not
age or wear out in the same way. Meeting the new demands that emerge over time and eliminating
the errors that are found form the basis of software maintenance. Software maintenance may require
more time and labor than software development.

Software maintenance is not periodic but occurs according to the developing conditions. For
example, adapting a previously prepared software to new computer architectures and operating
systems due to changing technology is an important maintenance task.

➢ Basics of Software Maintenance

Depending on the contract between the customer and the developer, software maintenance is
sometimes not included in the software development process. However, many application
areas today have changing demands. For this reason, even after the software is developed and
put into use, large change and error correction requests may arise.

• Types of Maintenance

Maintenance work covers not only the correction of errors but also the various types of work
that need to be done after delivery of the product.

o Corrective Maintenance

Software testing does not always ensure that all defects are found and removed; there is always
a possibility of errors in running software. Some software defects only appear during use, and the
developer is informed so that they can be eliminated. The work done to investigate and eliminate the
cause of such defects is called corrective maintenance. Defects found and reported during use are
either corrected immediately or, depending on the importance of the error, several corrections are
collected together and released as a new version. This is an activity that the developer must
definitely carry out during the maintenance phase.

o Adaptive Maintenance

Adapting the software to new hardware and operating systems, and upgrading and updating its
functions because of the rapid changes and technological developments in the world of information
processing, is called adaptive maintenance.

Businesses may need to make changes in the way things are run and the methods used over
time. They may want to update the software.

Looking at developments in recent years, we see that the average life of a piece of software is about
10 years, while hardware units remain current for only 1-2 years. In this case, either the software
continues to be used on old technology, or current technology is adopted by keeping portability in
the foreground. The main problems with using outdated technology are the difficulty of supplying
additional hardware and spare parts, and the difficulty of adding new, useful software tools to the
system.

o Perfective Maintenance

After the software has been developed, tested and successfully presented to the user, adding new
functions and making adjustments that increase the performance and efficiency of the existing ones
are perfective maintenance tasks. In this way, new versions of the software are created and made
available to the user.

For example, if a software system with an average database access time of 5 seconds can reduce the
search time to 3 seconds using a new search algorithm, implementing this change is perfective
maintenance. Adding new functions to the software according to the demands of new users also
falls under the scope of perfective maintenance.

o Preventive Maintenance

Measures taken in advance to increase the reliability of the software and to provide a better basis
for future changes fall within the scope of preventive maintenance. For example,
making the design of a module that needs frequent changes more flexible is preventive
maintenance that makes subsequent changes easier. This type of maintenance may be covered
by the developer's long-term maintenance agreement.
• Maintenance Team

In general, software developers do not allocate a dedicated team for maintenance work.
However, a specific team needs to be identified to evaluate and prioritize software problems or
new requirements.

The structure of the maintenance team is important. In this structure, there is at least one
technical advisor who has knowledge of the previously developed system. This person knows the
components and technical features of the system. The user's maintenance requests are forwarded
to a maintenance supervisor. The maintenance supervisor creates a change proposal with the help
of the technical advisor who was involved in that project. This proposal is discussed in the Change
Control Board. If the proposal is not accepted, the user is informed of the situation. If the proposal is
accepted, a responsible person is appointed by the board for that change. The responsible person
makes a work plan and assigns the maintenance personnel. The maintenance personnel create a new
version in coordination with configuration management. After the new version has been tested, it is
delivered to the user by the delivery staff, together with the necessary training.

Instead of the maintenance supervisor, the developer's technical support unit, the product manager,
or the manager of that project can collect the problems and new requests for the software reported
by the user.

In a small-scale development group, the Maintenance Supervisor and the Change Control Board
may be the same person. This task can also be done by the project manager or product manager.
In large projects, this board consists of managers and senior technical staff.

If responsibilities are distributed before the maintenance phase of the project and an
appropriate team structure is in place, the confusion that can occur when a maintenance request
actually arrives is prevented. Defining responsibilities in advance also avoids the problems that
occur when staff working on another development job must suddenly be pulled off it and assigned
to an emergency maintenance task.

➢ Maintenance Steps
When a maintenance task arises for software that is in the maintenance phase, a standard process
should be followed by the developer. The stages of this process are similar to those of a software
development process. The only difference is that the changes are applied to an existing set of
documents and code. The maintenance steps can be summarized as follows.

Requirement Analysis: At this stage, the problem or change is identified and classified.
Requirements for the new arrangements and functions expected in the system are defined.
Accordingly, the existing Software Requirements Specification and Software Test
Identification documents are updated.

Design: The current design is reviewed, new requests are added and the Software Design
Definition document is updated.

Implementation: The new design is reflected in the code and the necessary code changes or
module development are made. If necessary, unit tests are run together with the newly added
tests, and the code is made ready for integration. The software elements are integrated with each
other and then with the hardware.

Test: In addition to the newly added tests, the entire software is retested by repeating a
systematically selected subset of the existing tests. Tests run for this purpose are called
regression tests. Acceptance tests are conducted in front of the customer to demonstrate the
reliability of the overall system.
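
A minimal sketch of the regression idea with Python's unittest: after a fix, the old tests are rerun alongside the test added for the fix to show that existing behavior still holds. The discount function and its rules are purely hypothetical.

```python
# Regression-test sketch: existing tests are kept and rerun after every
# maintenance change, together with the test added for the fix.
import unittest

def discount(price, customer_type):
    """Hypothetical function under maintenance."""
    if customer_type == "student":
        return price * 0.90
    if customer_type == "senior":       # newly fixed branch
        return price * 0.85
    return price

class DiscountRegressionTests(unittest.TestCase):
    # Old tests: must keep passing after the fix (regression tests).
    def test_student_discount_unchanged(self):
        self.assertAlmostEqual(discount(100, "student"), 90.0)

    def test_regular_price_unchanged(self):
        self.assertAlmostEqual(discount(100, "regular"), 100.0)

    # New test added for the corrective change.
    def test_senior_discount_fixed(self):
        self.assertAlmostEqual(discount(100, "senior"), 85.0)

if __name__ == "__main__":
    unittest.main()
```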

Delivery: The result is supplied to the customer as a new version of the software. In large-scale
software, a patch containing only the modified parts is produced together with its installation and
operation guide; the patch is delivered and the manuals are updated.

• Reporting

Generally, requests for a change in the software are reported with a Change Proposal, and
problems are reported with a Software Problem Report. The Change Proposal can be prepared
by the user to explain what kind of change is wanted in the software, or it can be prepared as a
recommendation by the developer's staff.

The Change Proposal prepared to meet the user's new request must contain at least the
following information (a simple record sketch is given after the list):
• System or subsystem name, item name
• Definition of change
• The item, component or unit where the change will be made
• Other items, components or units that may be affected by the change made
• Estimated workforce to spend on making changes
• The priority level of the request
• Number of the request (to be able to track)
• Other explanatory information
• Signatures and dates of the authorities reviewing the proposal
• Decision (created at the end of the review)
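
As mentioned above, the listed fields could be captured in a simple record structure. The sketch below is only an illustration of that information; the field names, types and example values are assumptions, not a prescribed template.

```python
# Sketch: the Change Proposal fields above captured as a simple record.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ChangeProposal:
    request_number: str                  # to be able to track the request
    system_name: str
    item_name: str
    change_description: str
    target_unit: str                     # item/component/unit to be changed
    affected_units: List[str] = field(default_factory=list)
    estimated_effort_days: Optional[float] = None
    priority: str = "normal"
    notes: str = ""
    reviewer_signatures: List[str] = field(default_factory=list)
    decision: Optional[str] = None       # filled in at the end of the review

# Example usage with invented values:
proposal = ChangeProposal(
    request_number="CP-2021-014",
    system_name="Inventory System",
    item_name="Reporting Module",
    change_description="Add a monthly stock summary report",
    target_unit="report_generator",
    affected_units=["database layer"],
    estimated_effort_days=5,
    priority="high",
)
print(proposal.request_number, proposal.priority)
```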

The Software Problem Report used to report any software defect should contain the following
information:

• System or subsystem name, item name


• Clear definition of the problem (precise expressions must be used)
• Input data causing an incorrect state
• Records, if any, created by the reporting system (log data)
• Access information of the person reporting the problem
• Problem Report number (to be able to track)
• Access information for engineers who investigate the problem in more detail
• Signatures and dates of the authorities reviewing the report
• Decision (created at the end of the review)

Sometimes these two documents are combined and a single template is used in the form of a
table in which blanks will be filled. These documents are filled in on a computer or manually.

➢ Ease of Maintenance

One of the important features of a high-quality software product is its ease of maintenance. We can
define this feature as the ease of understanding, correcting, improving and adapting the software.
Technically, it is possible to carry out any change request. However, the important thing is to
do this at the lowest cost, in the shortest time, correctly, and without destroying the software's
qualities. It should not be forgotten that software for which a change has been requested once is
likely to change again in the future. Therefore, the various factors affecting maintenance should be
taken into account, maintenance work should be carried out within a certain quality assurance
framework, and quantitative measurements should be collected for future use.

• Control Factors

The importance of software maintenance is better understood by both the developer and the
customer in the long run. A good understanding of this importance also depends on several
factors that control software maintenance. It is possible to group them as follows.

o According to the development environment

• Presence of software development personnel


• Understandability of the system structure
• Ease of operation of the system
• Using standard computer hardware (main, test, target system)
• Using the standard operating system
• Using standard programming languages
• Use of standard or portable libraries
• Use of standard documentation methods
• Possibility to recreate test cases
• Existence of debugging tools

o According to the staff

• Development experience of the staff on the same subject


• Experience in the development process used
• Work discipline of the staff
• Knowledge of the personnel about the application area
• Finding personnel with experience in software entering the maintenance phase

o According to the customer

• Single or multiple users (for single customer order or mass use)


• Relationship between the customer and the developer throughout the software life
cycle (such as warranty period, technical support agreements, possibility of new
change requests)
• Frequency of changing customer requests
• Frequency with which the customer changes platforms (environments)
• Frequency of changing application area requests

➢ Quality of Maintenance

In order for a software project to have a qualified maintenance phase, the following points
should be given importance at the beginning of the work:

• Maintenance requests should always be handled within the scope of a formal
Change Control Process.
• Maintenance requests should be fulfilled quickly enough.
• Maintenance requests and work should be monitored systematically.
• Maintenance procedures should always be formally defined and managed, and
should not be handled arbitrarily.
• Documentation standards must be complied with during maintenance.
• Appropriate maintenance personnel should be used.

o Quantitative Measurements

Software properties such as ease of maintenance, quality and reliability are very difficult to measure.
However, it is possible to make some quantitative measurements by taking into account the
characteristics of the maintenance procedures. The important and common ones are:

• Time to identify and report the problem


• Delay in administrative transactions
• Time required to identify the change
• Distribution of problem reports by customers
• Distribution of problem reports by error types
• Distribution of problem statements by modules
• Time taken for correction
• Local test time
• Test period at the place of use
• Total time
• Total labor and cost

Almost all of the above can be recorded without difficulty during maintenance work. The
recorded data give managers information about the effectiveness of the methods and tools they use
and support cost estimation.
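
A small sketch of how such recorded maintenance data might be summarized for managers; the record format and all values are invented for illustration.

```python
# Sketch: summarizing recorded maintenance data (invented example records).
from collections import Counter

problem_reports = [
    {"customer": "A", "error_type": "interface", "fix_hours": 4,  "module": "ui"},
    {"customer": "A", "error_type": "logic",     "fix_hours": 12, "module": "billing"},
    {"customer": "B", "error_type": "logic",     "fix_hours": 7,  "module": "billing"},
    {"customer": "C", "error_type": "data",      "fix_hours": 2,  "module": "import"},
]

by_customer = Counter(r["customer"] for r in problem_reports)
by_error_type = Counter(r["error_type"] for r in problem_reports)
by_module = Counter(r["module"] for r in problem_reports)
total_hours = sum(r["fix_hours"] for r in problem_reports)

print("Reports per customer:  ", dict(by_customer))
print("Reports per error type:", dict(by_error_type))
print("Reports per module:    ", dict(by_module))
print(f"Total correction time: {total_hours} hours")
```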

In order for software maintenance works to be evaluated, these data must be collected and
recorded. In particular, this information should be collected in every maintenance work and
stored according to the program or project identifier (name or code number) so that the
maintenance phases of software intended for long-term use can continue in a healthy manner.
Personnel information and experiences gained at the end of the maintenance work should also
be among the things to be recorded.

➢ Maintenance Issues

Knowing the problems encountered during maintenance is useful for being prepared for them.
Generally, the source of the problems is that the developed software lacks maintainability and
was not developed in a disciplined way. We can list these problems as follows:

• If too many versions of the software emerge, maintenance will be more difficult. It
is not even possible to maintain it if the necessary changes are not fully recorded.
• It is often impossible to keep track of the software development process in terms of
time and labor.
• It usually takes a lot of time to understand the code someone else wrote. If the
documentation and explanations in the code are insufficient, serious problems arise.
• Documentation may be inadequate, incomplete, inaccurate or absent. In this case, it
will only be necessary to read and understand the code.
• Staff continuity is a general problem that is likely to be encountered at all times.
o Rules for the Developer

Among the rules that developers aiming to produce high quality software should apply during
the maintenance phase are:

• The status of maintenance requests should be systematically monitored.

• Maintenance requests should always be handled under a formal Change Control
Process.
• Duration and cost estimates for maintenance requests should be made accurately.
• The problems and new requirements reported by the customer regarding the
software should be collected by the technical support unit, the organization manager,
the product manager, or a person or team appointed by the project manager.

➢ Maintenance of Undocumented Software

One problem that is difficult for software development groups to deal with is maintaining older
software that has no records or documentation. For various reasons, it may be necessary to reuse
code written ten or twenty years ago, adapt it to new environments or continue to make changes
to it. However, those who developed this software may no longer be in that group, or even in that
company or organization. A specific methodology may not have been applied during development,
and the documentation that should have been produced may have been done very little or not at all.
In such cases, the possible maintenance approaches are:

• The same code can be modified.
• The code can be ported to other environments.
• Reverse engineering can be applied.
• The source code can be wrapped (taken into a container) so that it can be reused.
• The software can be rewritten in a new language.

• Reverse Engineering

Reverse engineering was first applied mainly in electronic and mechanical engineering. Developers
other than the manufacturer examine and test the hardware of an electronic system or its mechanical
parts to understand how they work and how the product is produced, a form of commercial
espionage. They try to reveal the secrets of production; in this way, they try to obtain a copy of a
product or to develop a new one.

Reverse engineering applied to software is viewed in the same way. The product being
studied may belong to another developer, or it may be software previously produced by the
same developer. In either case, the source code is a mystery because no descriptive documents are
available. Software that was well understood at the time of its development turns into
complete chaos after years, when none of its developers can be found. Reverse engineering applied
in such cases is the process of trying to redescribe the software at a higher level of abstraction
than the source code. This is essentially design recovery. With the help of various tools developed
for this purpose, it is possible to obtain the data, architectural and procedural design from the
software's source code. After that, the necessary maintenance can be done.

In reverse engineering, the aim is to take the source code as input and produce the full design
documentation as output. However, in practice this is not always achievable. The reason is that
the level of abstraction produced by the supporting tools may not be acceptable. Each tool works
according to its own logic and converts the source code into a document; afterwards, the result
should be examined by software engineers and, if necessary, corrected manually. A high level of
abstraction gives maintenance engineers a better understanding of the design, especially for large
software.

Another area where reverse engineering is applied is recovering the design from running
programs for which no source code exists at all. The object code of a running program contains
machine code in binary form. Today, there are tools that interpret this machine code and turn it
into a readable programming language. However, the module, procedure and variable names
may not be meaningful; they are only identifiers generated to make tracking easier. These
identifiers can later be replaced with the desired names.

In both methods, the reverse engineering should be complete. In other words, no part of the code
should be skipped, and the design should be fully covered.
CPE 310 SOFTWARE ENGINEERING
Spring Semester-2021
Dr. Nesrin AYDIN ATASOY
Week 13: Automated Testing

Automated tests use software to perform testing tasks without manual instructions from a tester.

In manual testing, the tester writes the code they want to execute or plans the software path
they want to verify. Automated tests take care of such things on the tester's
behalf. Here is a quick list of automated testing and QA tools that QA analysts should know:

• Selenium
• Cucumber
• Katalon Studio

1. Selenium

Selenium is an open source, flexible library used to automate the testing of Web applications.
Selenium test scripts can be written in different programming languages such as Java, Python,
C# and many more. These test scripts can run on various browsers like Chrome, Safari, Firefox,
Opera and support various platforms like Windows, Mac OS, Linux, Solaris.

Selenium is mainly for automating web applications for testing purposes, but certainly not
limited to that. It allows you to open a browser of your choice and perform tasks like a human
would. For example:

• Clicking the buttons


• Entering forms

• Searching for specific information on web pages

• Extracting required data

It should be noted that data scraping with libraries like Selenium is against most sites' terms of
service. If you pull data too often or maliciously, your IP address may be banned from that web
page.
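
A minimal Selenium WebDriver sketch in Python of the kind of tasks listed above (opening a page, filling a form, clicking a button, extracting data); the URL and element locators are hypothetical, and an installed browser driver (here ChromeDriver) is assumed.

```python
# Minimal Selenium WebDriver sketch: open a page, fill a form, click a button.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()              # assumes ChromeDriver is available
try:
    driver.get("http://localhost:8000/login")           # hypothetical page
    driver.find_element(By.NAME, "username").send_keys("test_user")
    driver.find_element(By.NAME, "password").send_keys("secret")
    driver.find_element(By.ID, "login-button").click()  # hypothetical id
    # Extract some required data from the resulting page
    heading = driver.find_element(By.TAG_NAME, "h1").text
    print("Page heading after login:", heading)
finally:
    driver.quit()
```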

1.1. Selenium Tools

• Selenium IDE

Selenium IDE (Integrated Development Environment) is a Firefox extension. It is one of the
simplest frameworks in the Selenium Suite. It allows us to record and replay scripts. If you want to
create scripts and write more advanced and robust test cases than Selenium IDE supports, you should
use Selenium RC or Selenium WebDriver.

You can download it by clicking the link http://docs.seleniumhq.org/download/.

Features of Selenium IDE are as follows:

• It behaves like a normal user, recording and replaying actions accordingly.

• Can be used to write functional tests.

• It works as a Firefox plug-in.

• Support for many languages (Java, .NET, Python, Ruby, PHP, Perl)

• Thanks to Selenium being open source, it works on many platforms (Windows, Linux, IOS)
without any problems.

• It is often preferred over other test tools (such as UFT and QTP) thanks to its multi-language and
multi-platform support.

• Selenium RC

Selenium RC was developed for writing web application tests in different programming
languages. It is a testing framework that allows a developer or QA engineer to automate UI tests for
web applications against any HTTP website, writing the test cases in any supported programming
language. Selenium RC was the answer for those who wanted a more powerful test suite for their
applications. It follows a client/server model in which client libraries run tests in a
server-controlled browser. Today, Selenium RC is officially deprecated.
• Selenium WebDriver

Selenium WebDriver is a browser automation framework that accepts commands and sends
them to a browser. It is implemented via a browser-specific driver; it communicates directly
with the browser and controls it. Selenium WebDriver supports various programming languages
such as Java, C#, PHP, Perl and JavaScript.
For example, suppose you need to test whether the comment form at the bottom of a page of the
site under test works. First we open the site, and in the next step we scroll down to the part of the
page where the comment form is located. However, Selenium's record-and-play approach cannot
give us this scroll operation as code output. This is where WebDriver's flexible structure comes into
play: it enables us to make our test cases more workable and controllable by using WebDriver
elements.
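
A sketch of the comment-form example above, using WebDriver's execute_script call to perform the scroll; the site URL, element ids and field names are assumptions.

```python
# Sketch: scroll to a comment form at the bottom of a page and submit it.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("http://localhost:8000/article/1")         # hypothetical page
    form = driver.find_element(By.ID, "comment-form")      # hypothetical id
    # WebDriver lets us run JavaScript to scroll the element into view.
    driver.execute_script("arguments[0].scrollIntoView(true);", form)
    form.find_element(By.NAME, "comment").send_keys("Nice article!")
    form.find_element(By.CSS_SELECTOR, "button[type='submit']").click()
    print("Comment form submitted")
finally:
    driver.quit()
```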

• Selenium Grid

Selenium Grid is a tool used with Selenium RC. It is used to run tests on different machines in
parallel with different browsers.

Selenium Grid, which has been and continues to be developed by the Selenium project, runs tests in
parallel on different servers with different browsers. The main purpose is to see test results for
combinations of different operating systems, hardware and devices, to run test processes in parallel
in a distributed environment, and to obtain test results quickly. Running these tests in parallel saves
a serious amount of time.

Testing is done using the Selenium Hub and Node structure.

Hub: Selenium uses a single Hub. This structure acts as a server, hosts many processes, and lets you
test the same code on different platforms and browsers by responding to requests from different
clients.

Node: A structure consisting of one or more clients connected to the Hub is called a Node.

Using Selenium Grid, you can run tests on many Nodes by sending requests from them to a single
Hub.
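
A sketch of running a test through a Grid Hub with Remote WebDriver; the Hub address shown is the common default port and is an assumption here, as is the application URL.

```python
# Sketch: run the same test through a Selenium Grid hub on a remote node.
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

options = Options()                      # requested browser: Chrome
driver = webdriver.Remote(
    command_executor="http://localhost:4444/wd/hub",  # assumed hub address
    options=options,
)
try:
    driver.get("http://localhost:8000/")              # hypothetical app
    print("Title seen through the grid node:", driver.title)
finally:
    driver.quit()
```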
1.2. Web Applications Test Methods

Web application tests can be examined with 8 different methods:

• Functional Test

• Session and Cookie Test

• Database Test

• Interface Test

• Usability Test

• Compatibility Test

• Performance Test

• Security Test


1. Functional Test

Figure 1: Functional Test


Link checks from pages to other pages: It should be ensured that navigation and redirects lead to
the correct page. In this way, it can be verified that there are no dead pages or invalid redirects.

Links on the same page: Links on the page should be checked one by one. E-mail links and
web forms, if any, should not be skipped.
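
A sketch of the link checks described above: collect the links on a page with Selenium and verify that each target responds; the page URL is hypothetical, and e-mail links are only skipped here, not verified.

```python
# Sketch: collect links on a page and check that each target responds.
import urllib.request
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("http://localhost:8000/")                # hypothetical page
    hrefs = [a.get_attribute("href")
             for a in driver.find_elements(By.TAG_NAME, "a")]
    for href in hrefs:
        if not href or href.startswith("mailto:"):
            continue                                     # e-mail links: check separately
        try:
            status = urllib.request.urlopen(href, timeout=5).status
        except Exception:
            status = None
        print(href, "->", status if status else "DEAD LINK")
finally:
    driver.quit()
```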

Homework: Perform Unit Test with Selenium WebDriver.

2. Cucumber

Cucumber is a widely used tool for Behaviour Driven Development because it provides an
easily understandable testing script for system acceptance and automation testing.

BDD (Behavior Driven Development) is a software development approach that was developed from
Test Driven Development (TDD).

BDD includes test case development in the form of simple English statements inside a feature
file, which is written by a person. Test case statements are based on the system's behavior and are
more user-focused.
BDD is written in simple English language statements rather than a typical programming
language, which improves the communication between technical and non-technical teams and
stakeholders.

In other words, "Cucumber is a software tool used by the testers to develop test cases for the
testing of behavior of the software."

Cucumber tool plays a vital role in the development of acceptance test cases for automation
testing. It is mainly used to write acceptance tests for web applications as per the behavior of
their functionalities.

In the Cucumber testing, the test cases are written in a simple English text, which
anybody can understand without any technical knowledge. This simple English
text is called the Gherkin language.

It allows business analysts, developers, testers, etc. to automate functional verification and
validation in an easily readable and understandable format (e.g., plain English).

We can use Cucumber along with Watir, Selenium, Capybara, etc. It supports
many other languages like PHP, .NET, Python, Perl, etc.

The use of the tool consists of two parts: Features and Glue Code. The glue code can also easily
be written in a different programming language. In addition, the following terms are part of the
Cucumber terminology.

• Feature: We define a behavior in the Feature part (e.g., displaying the home
page of the application: as a user, I want to see the home page and be sure
that it has loaded).
• Scenario: In the feature file, we create the scenario that matches our request.
• Given-When-Then: Given defines the precondition, When defines the event,
and Then describes the expected result of the event triggered in the When step.

• And, But: Used to chain additional Given/When/Then steps of the same kind.

Figure 3. Behavior Driven Development (BDD) with Cucumber – Example Scenario

2.1. BDD with Cucumber Test

The tests rely on the ability to easily convert the Feature and Scenario statements of application use
cases, written in almost plain text, into runnable unit tests on Java or other language platforms. No
coding knowledge is required for writing and reading application scenarios and feature files. Thus,
business analysts and domain experts can easily understand the features when they read them to
determine the scope and limits of the tests.

So how does the Behavior driven development method work with Cucumber?
1. The expected behavior of the code to be tested is written in plain text.
2. Several step definitions that will perform the test are written.
3. The code to be tested is written.
4. The test is run.
In terms of test automation, Cucumber itself does not provide the ability to control the browser;
for such needs, it is necessary to use tools such as Selenium WebDriver.
Example:
Test scenarios start with Feature and continue with the name of the scenario to be tested on the
Scenario side. Then, the steps continue to be written using sub-headings and commands such
as Given, When, Then.

Here, as mentioned above, we describe a test application written in Ruby for a sign-up scenario
(Cucumber Feature and Scenario steps). First, we point the test at our application running locally;
then we give the links to be clicked during the test and the URLs of the pages that should be reached
accordingly.
Finally, the e-mail and password validation required for the sign-up process is performed, the
necessary information is entered into the form in our system, and the scenario is tested.
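
Since the Ruby code itself is not reproduced here, below is a minimal sketch of the same Given-When-Then idea in Python using the behave library (a Cucumber-style BDD tool for Python). The feature text is shown as a comment, and the step contents are hypothetical placeholders; a real test would drive the application, e.g. through Selenium WebDriver.

```python
# features/steps/signup_steps.py -- step definitions for a hypothetical
# sign-up scenario, run with the `behave` command.
#
# features/signup.feature (Gherkin, shown here as a comment):
#   Feature: Sign up
#     Scenario: Successful sign-up
#       Given I am on the sign-up page
#       When I submit a valid e-mail and password
#       Then I should see the welcome message

from behave import given, when, then

@given("I am on the sign-up page")
def step_open_signup(context):
    # In a real test this would open the page via Selenium WebDriver.
    context.page = "signup"

@when("I submit a valid e-mail and password")
def step_submit_form(context):
    # Hypothetical form handling; a real step would fill and submit the form.
    context.result = "welcome" if context.page == "signup" else "error"

@then("I should see the welcome message")
def step_check_welcome(context):
    assert context.result == "welcome"
```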

✓ BDD Benefits

Normally, a project management process goes like this:

• Planning
• Design
• Development
• Test
• Delivery

A problem with this method is communicating with the customer in the process from the Planning
phase to the Design phase. After all, you cannot make the client read code. That is why there should
be a BA (Business Analyst) person in between.
With BDD, the order of the process changes slightly:
• Planning
• Design
• Test
• Development
• Delivery

The benefit of BDD with this reordering is that it becomes easier to create BDD scenarios from the
stories written by the BA (Business Analyst) and to build the relationship with the customer through
these scenarios.

If we talk about the Agile method that we saw above, Agile includes the user in the process.
Agile, together with the user, adjusts all its organization and systems according to the customer.
“If you are doing business with Agile methodology and not using BDD for application testing,
you are contradicting yourself. ”

Of course, testing can consist not only of following the behavior as described here, but also of
testing the operation of all the units running in the background, both individually (unit testing)
and together (integration testing).
3. Katalon
Katalon Platform is an automation testing software tool developed by Katalon, Inc. The
software is built on top of the open-source automation frameworks Selenium, Appium with a
specialized IDE interface for web, API, mobile and desktop application testing. Its initial release
for internal use was in January 2015. Its first public release was in September 2016. In 2018,
the software acquired 9% of market penetration for UI test automation, according to The State
of Testing 2018 Report by SmartBear.

Katalon is recognized as a March 2019 and March 2020 Gartner Peer Insights Customers’
Choice for Software Test Automation.

You can test your Web, Mobile and Desktop applications (as of version 7.0), and you can also use it
in the test automation processes of your backend services. Thus, you can manage your testing
processes in a hybrid way on a single platform. You can easily integrate the scripts you have
prepared into your CI/CD processes, so you can automate your software quality processes.

✓ Katalon Features
✓ It is a Java-based application.
✓ Prepared scripts can be run, without writing any additional code,
separately or simultaneously in many browsers such as
Chrome, Firefox, Safari and Edge.
✓ Thanks to the Record&Play feature, the processes can be
prepared easily without having knowledge about script writing.
✓ With Slack integration, real-time feedback and communication
between team members can be provided.
✓ You can enable git integration for source control.
✓ You can start the application by running the run file without
any installation, you can start using it quickly with the
keywords in it. (https://katalon.com/download)
✓ Many of its features are free; paid features have been
introduced in newer versions.

✓ Katalon Studio provides many utilities such as built-in
keywords, custom keywords, object spying and recording, and
code refactoring that simplify your scripting process.

✓ It works with the Page Object Model (POM) design model, which
aims to improve test maintenance and eliminate code duplication.
✓ It uses the selenium library in the background for web automation
and the appium library for mobile automation.
✓ For test data, it provides a data file object that can query data from
external sources such as CSV file, excel file, relational database.
✓ Katalon Studio offers BDD testing capability with files with the
.feature extension.

✓ Katalon Studio uses Grid — TestOps Cloud to run tests entirely in the
cloud and automatically deliver results to Katalon Analytics. Katalon
Analytics is an artificial intelligence supported platform that provides
users with detailed dashboards and reports about test executions.
REFERENCES
2. https://www.mobilhanem.com/web-uygulama-testleri-nelerdir/
3. https://teknoloji.org/selenium-kutuphanesi-nedir-nasil-kullanilir/
4. https://medium.com/@ilkebasalak/selenium-nedir-8c7d908c93e6
5. https://cucumber.io/
6. https://www.javatpoint.com/cucumber-testing
7. http://www.defnesarlioglu.com/cucumber-ile-behaviour-driven-development/
8. https://www.linkedin.com/pulse/cucumber-ile-behaviour-driven-development-bdd-
halil-bozan/?originalSubdomain=tr
